Cuemon 2.1.2013.2041: Cuemon .NET Framework Additions. Simple, intuitive and logical every-day-usage additions to the Microsoft .NET Framework 2.0 SP1 and newer. Follows the same namespace structure already found in the .NET Framework. Install-Package Cuemon -Version 2.1.2013.2041 dotnet add package Cuemon --version 2.1.2013.2041 paket add Cuemon --version 2.1.2013.2041 The NuGet Team does not provide support for this client. Please contact its maintainers for support. Release Notes This is a patch to the 2.1.2013.2040 release which fixes a critical bug triggered by computers having more than one physical CPU. This bug was introduced due to the recent changes in the ManagementUtility class. Some would argue that the current heatwave combined with a sunburn left the QA incapacitated. My apologies for the inconvenience. Dependencies This package has no dependencies.
https://www.nuget.org/packages/Cuemon/2.1.2013.2041
Iterators An iterator is a block of code that supplies all the values to be used in a foreach loop. A class that represents a collection can implement the System.Collections.IEnumerable interface. This interface requires an implementation of the GetEnumerator() method, which returns an IEnumerator interface. The IEnumerator interface has a Current property which contains the current value that was returned by the iterator. It also has a MoveNext() method which moves the Current property to the next item and returns false if there are no more items left. The Reset() method returns the iterator back to the first item. The IEnumerable interface is implemented by the different collection types in the .NET library, including arrays, which is why you can use the foreach loop on them. Suppose we have code like this which reads every element of an array using a foreach loop. int[] numbers = { 1, 2, 3, 4, 5 }; foreach(int n in numbers) { Console.WriteLine(n); } To better understand iterators, let’s translate the above foreach loop into a call to the array’s GetEnumerator() method. int[] numbers = { 1, 2, 3, 4, 5 }; IEnumerator iterator = numbers.GetEnumerator(); while(iterator.MoveNext()) { Console.WriteLine(iterator.Current); } As you can see, we first retrieve the array’s iterator using the GetEnumerator() method, which returns an IEnumerator interface. We then use this iterator in a while loop and call the MoveNext() method. The MoveNext() method retrieves the first element of a collection such as an array and, if it succeeds in retrieving one, returns true. It retrieves the second element on the next call, and so on, until it reaches the end of the array, where it returns false because there is nothing more to retrieve. The value of the retrieved element can be accessed using the IEnumerator.Current property. 
Now that you know how an iterator works and its major contribution when iterating over values from collections, let’s consider creating our own iterators. Creating an iterator requires a yield return statement. A yield return statement is different from an ordinary return statement. One visible difference is the use of the keyword yield before the return keyword. Consider the following example: using System; using System.Collections; namespace IteratorsDemo { class Program { public static IEnumerable GetMessages() { yield return "Message 1"; yield return "Message 2"; yield return "Message 3"; } public static void Main() { foreach (string message in GetMessages()) { Console.WriteLine(message); } } } } Example 1 – A Simple Iterator Message 1 Message 2 Message 3 The GetMessages() method returns an IEnumerable object which in turn contains a definition for a GetEnumerator() method. Each yield return statement hands its value to the variable that holds the current element in a foreach loop. The first yield return statement in the method is returned to the foreach loop inside our Main() method. When the foreach loop asks the GetMessages() method for the next value, the next yield return statement is returned to it. This continues until no more yield return statements are found. You can interrupt the returning of values from the method by using the yield break statement. public static IEnumerable GetMessages() { yield return "Message 1"; yield return "Message 2"; yield break; yield return "Message 3"; } Creating Our Own Iterator Let’s take a look at an example of using an iterator by creating a new class which contains an ArrayList field that will hold some values. We will then create an iterator which will provide a foreach loop with the values from the ArrayList field. 
using System.Collections; using System; namespace IteratorsDemo2 { public class Names : IEnumerable { private ArrayList innerList; public Names(params object[] names) { innerList = new ArrayList(); foreach (object n in names) { innerList.Add(n); } } public IEnumerator GetEnumerator() { foreach (object n in innerList) { yield return n.ToString(); } } } public class Program { public static void Main() { Names nameList = new Names("John", "Mark", "Lawrence", "Michael", "Steven"); foreach (string name in nameList) { Console.WriteLine(name); } } } } Example 2 – User Defined Iterator We created a collection class named Names which will hold a list of names. The definition of the class in line 6 shows that we did not inherit from the CollectionBase class. This is because the CollectionBase class implements the IEnumerable interface and already has an implementation of the GetEnumerator() method. We are creating our collection class from scratch and we will define our own iterator. Our class simply implements the IEnumerable interface, which now requires our class to have an implementation of the GetEnumerator() method. Our own iterator is defined in lines 20-26. Inside it, we iterate through each of the values from the innerList field. Each value is converted to a string and then yield returned to the caller. Lines 35-38 show our own iterator at work. Since the yield returns in our iterator convert each value to a string, we can simply use string as the type of the range variable of the foreach loop. When the next value is retrieved from the iterator via the foreach loop in our Main() method, the foreach loop inside the iterator goes to the next iteration and yield returns the next value from the innerList. Without the iterator, we wouldn’t be able to use a foreach loop on our Names class. Creating our own iterator gives us great control over the behavior of the foreach loop when dealing with our class. 
For example, we can edit our iterator to only return names starting with the letter M. public IEnumerator GetEnumerator() { foreach (object n in innerList) { if (n.ToString().StartsWith("M")) yield return n.ToString(); } } We used the StartsWith() method of the System.String class to determine if a name starts with the letter M. If the name starts with the letter M, then we yield return it. If not, it is ignored and the next name in the innerList is inspected. With our modified GetEnumerator() method, when you use a foreach loop on an instance of the Names class, only names starting with the letter M will be retrieved. Even though you can use the above technique of modifying the GetEnumerator() method, it would be more practical to just define a separate method for retrieving names starting with a specified letter. public IEnumerable GetNamesStartingWith(string letter) { foreach (object n in innerList) { if (n.ToString().StartsWith(letter)) yield return n.ToString(); } } We now have a more flexible iterator that you can use to retrieve names which start with a specified letter or substring. Note that we used IEnumerable instead of IEnumerator. The rule is to use IEnumerable for all iterators except the GetEnumerator() method, which is used by default by the foreach loop. The IEnumerable interface already has a GetEnumerator() method. When you call this iterator, you need to modify our foreach loop in lines 35-38. foreach (string name in nameList.GetNamesStartingWith("M")) { Console.WriteLine(name); } Improving Our Animals Dictionary Class A more practical use of an iterator is applying it to a collection or dictionary class. For example, the dictionary class we made in the last lesson uses DictionaryEntry as the type of the variable that stores each of the elements inside a foreach loop. By using an iterator, we can use Animal as the element type instead and save ourselves a little casting. The following code demonstrates this. 
using System.Collections; using System; namespace IteratorsDemo3 { public class Animal { public string Name { get; set; } public int Age { get; set; } public double Height { get; set; } public Animal(string name, int age, double height) { Name = name; Age = age; Height = height; } } public class Animals : DictionaryBase { public void Add(string key, Animal newAnimal) { Dictionary.Add(key, newAnimal); } public void Remove(string key) { Dictionary.Remove(key); } public Animal this[string key] { get { return (Animal)Dictionary[key]; } set { Dictionary[key] = value; } } public new IEnumerator GetEnumerator() { foreach (object animal in Dictionary.Values) { yield return (Animal)animal; } } } public class Program { public static void Main() { Animals animalDictionary = new Animals(); // Add Animal entries here, as in the previous lesson foreach (Animal animal in animalDictionary) { Console.WriteLine(animal.Name); } } } } Example 3 – Adding an Iterator to a Custom Dictionary Lines 38-44 define an iterator for our Animals dictionary class. The DictionaryBase class already implements the IEnumerable interface which has the GetEnumerator() method that is used for getting values from a collection using a foreach loop. We implement our own GetEnumerator() method. Notice that it has a return type of IEnumerator. The new keyword simply indicates that the program should use this version of GetEnumerator() instead of the one already defined in the DictionaryBase. Inside the method, we use a foreach loop to cast and yield return each of the Animal objects from the Values property. Recall that without an iterator, using a foreach loop for our dictionary class looks like this: foreach (DictionaryEntry animal in animalDictionary) { Console.WriteLine((animal.Value as Animal).Name); } With the new iterator defined in our dictionary class, you can now use the type of each element in a foreach loop without using the DictionaryEntry class. 
foreach (Animal animal in animalDictionary) { Console.WriteLine(animal.Name); } Each iteration of the above foreach loop triggers an iteration of the foreach loop inside our iterator and each loop executes a yield return statement.
https://compitionpoint.com/iterators/
In this post, I will show you how to include React Router in your react project. It's easy to use and it's great for improving the navigation experience.👌🏽 Here's a demo of a simple navbar (and the button in the About page that redirects back to Home): Now let's see how to get started with React Router. Installation - Install react-router-dom - Note: Make sure that you're already working on a create-react-app before adding it to your project npm install react-router-dom Include the Router - Wrap your <App /> component with <BrowserRouter /> - Add each <Route /> with its path and respective component - Wrap <Switch /> around your routes. Switch will start looking for a matching route and the exact attribute will make sure that it matches exactly what we want The <Navbar /> component will take care of the <NavLink />, more on this below. import React from 'react'; import {BrowserRouter, Switch, Route} from 'react-router-dom'; import About from './About'; import Home from './Home'; import Navbar from './Navbar'; function App() { return ( <BrowserRouter> <Navbar /> <Switch> <Route exact path="/" component={Home}/> <Route exact path="/about" component={About}/> </Switch> </BrowserRouter> ); } export default App; Add NavLink <NavLink /> will act as each Navbar link, which uses client-side routing (exclusive to single-page applications) <NavLink /> comes with the activeClassName property, which will allow us to add CSS to active/non-active links import React from 'react'; import {NavLink} from 'react-router-dom' import './App.css'; export default function Navbar() { return ( <div> <NavLink activeClassName="selected" className="not-selected" to="/" exact >HOME</NavLink> <NavLink to="/about" activeClassName="selected" className="not-selected" exact >ABOUT </NavLink> </div> ) } The useHistory hook - What does it do? It provides access to the history prop that you may use to navigate - In other words, useHistory can be used for redirecting which is very convenient! 
import React from 'react'; import {useHistory} from 'react-router-dom'; export default function About() { const history = useHistory() const handleClick = () => { history.push('/') } return ( <div> <h1>ABOUT</h1> <p>THIS IS THE ABOUT PAGE</p> <div> <button onClick={handleClick}> Back to Home </button> </div> </div> ) } And that's it! 😬 Discussion (4) This is awesome. You have summarized all the basics of React Router in a concise manner. It would be great if you could also include URL Parameters nested routes also An amazing piece. Coming from vue background and trying react, I'm keep this article for raining days you know. Thanks Debi Super clear, concise explanation of how to use React Router effectively! Nice work! Lovely writeup Deborah! 👏👏So proud of you ☺️☺️☺️
https://dev.to/deboragaleano/how-to-use-react-router-3em5
Support for finding strings in memory. This namespace provides support for various kinds of strings in specimen memory, including an analysis that searches for strings in specimen memory. A string is a sequence of characters encoded in one of a variety of ways in memory. For instance, NUL-terminated ASCII is a common encoding from C compilers. The characters within the string must all satisfy some valid-character predicate. The terms used in this analysis are based on the Unicode standard, and are defined here in terms of string encoding (translation of a string as printed to a sequence of octets). Although this analysis can encode strings, its main purpose is decoding strings from an octet stream into a sequence of code points. To describe this model correctly one needs more precise terms than "character set" and "character encoding." The key terms of the modern model (octets, code values, code points, character encoding forms, and character encoding schemes) are defined below. Once the code points of a string are encoded as octets, the string as a whole needs some description to demarcate it from surrounding data. ROSE currently supports two styles of demarcation: length-encoded strings and terminated strings. A length-encoded string's code point octets are preceded by octets that encode the string length, usually in terms of the number of code points. Decoding such a string consists of decoding the length and then decoding code points until the required number of code points have been obtained. On the other hand, terminated strings are demarcated from surrounding data by a special code point such as the NUL character for ASCII strings. Decoding a terminated string consists of decoding code points until a terminator is found, then discarding the terminator. This example shows how to find all strings in memory that is readable but not writable using a list of common encodings such as C-style NUL-terminated printable ASCII, zero terminated UTF-16 little-endian, 2-byte little-endian length encoded ASCII, etc. 
The StringFinder analysis is tuned for searching for strings at unknown locations while trying to decode multiple encodings simultaneously. If all you want to do is read a single string from a known location having a known encoding then you're probably better off reading it directly from the MemoryMap. The StringFinder analysis can be used for that, but it's probably overkill. In any case, here's the overkill version to find a 2-byte little endian length-encoded UTF-8 string: The encoders can also be used to decode directly from a stream of octets. For instance, let's say you have a vector of octets that map 1:1 to code values, and then you want to decode the code values as a UTF-8 stream to get some code points. All decoders are implemented as state machines to make it efficient to send the same octets to many decoders without having to rescan/reread from a memory map. The UTF-8 decoder decodes one octet at a time and when it enters the FINAL_STATE or COMPLETED_STATE then a decoded code value can be consumed. One byte in a sequence that encodes a code value. Definition at line 169 of file BinaryString.h. A sequence of octets. Definition at line 170 of file BinaryString.h. One value in a sequence that encodes a code point. Definition at line 171 of file BinaryString.h. A sequence of code values. Definition at line 172 of file BinaryString.h. One character in a coded character set. Definition at line 173 of file BinaryString.h. A sequence of code points, i.e., a string. Definition at line 174 of file BinaryString.h. Decoder state. Negative values are reserved. A decoder must follow these rules when transitioning from one state to another: reset. decodedoes not change the state. decodetransitions to ERROR_STATE. consumetransitions to INITIAL_STATE. All other transitions are user defined. Definition at line 195 of file BinaryString.h. Returns true for COMPLETED_STATE or FINAL_STATE. Initialize the diagnostics facility. This is called by Rose::Diagnostics::initialize. 
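The octet-at-a-time, state-machine style of decoding described above can be sketched in plain C++. This is illustrative only and not the ROSE API: the class and member names here are made up, and a real decoder would also need explicit error states for malformed sequences.

```cpp
#include <cassert>
#include <cstdint>

// Minimal octet-at-a-time UTF-8 decoder sketch (not the ROSE API).
// Feed octets one by one; decode() returns true when a complete code
// point is available in `out` (the analogue of reaching a final state).
class Utf8Decoder {
    uint32_t cp_ = 0;   // code point being assembled
    int remaining_ = 0; // continuation octets still expected
public:
    bool decode(uint8_t octet, uint32_t& out) {
        if (remaining_ == 0) {
            if (octet < 0x80) { out = octet; return true; }        // 1-byte sequence
            if ((octet & 0xE0) == 0xC0) { cp_ = octet & 0x1F; remaining_ = 1; }
            else if ((octet & 0xF0) == 0xE0) { cp_ = octet & 0x0F; remaining_ = 2; }
            else if ((octet & 0xF8) == 0xF0) { cp_ = octet & 0x07; remaining_ = 3; }
            return false;                                          // need more octets
        }
        cp_ = (cp_ << 6) | (octet & 0x3F);                         // fold in continuation bits
        if (--remaining_ == 0) { out = cp_; return true; }
        return false;
    }
};
```

Because each decoder keeps its own small state, the same octet stream can be fed to many such decoders side by side, which is what lets the analysis try multiple encodings in one pass over memory.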
Returns a new no-op character encoding form. Returns a new UTF-8 character encoding form. Returns a new UTF-16 character encoding form. Returns a new basic character encoding scheme. Returns a new basic length encoding scheme. Returns a new printable ASCII predicate. Returns a new predicate that matches all code points. Returns a new length-prefixed string encoder. Returns a new encoder for length-encoded printable ASCII strings. A byte order must be specified for length encodings larger than a single byte. Returns a new encoder for multi-byte length-encoded printable ASCII strings. Returns a new encoder for NUL-terminated printable ASCII strings. Returns a new encoder for multi-byte NUL-terminated printable ASCII strings. Diagnostics specific to string analysis.
http://rosecompiler.org/ROSE_HTML_Reference/namespaceRose_1_1BinaryAnalysis_1_1Strings.html
There's been a lot of talk lately about the DeepZoom technology in Silverlight 2. If you haven't seen it yet you should definitely go to HardRock's memorabilia site. That is a 2-billion-pixel image that you see in that 640x480 segment there. Instead of getting all those pixels down the wire and killing your system, you only get the pixels needed to display the zoom level you are in. To me one of the reasons this is cool is because it introduces a new paradigm of photo sharing. You can have a long shot picture that captures the essence of your album and then have smaller pictures embedded at the right places which can be zoomed into. I wanted to do that for some photos I'd taken when I went rafting with some friends in the Himalayas some time back. Here is the picture of the camp and other pictures embedded in approximate positions of where they were. (Ok I know it sucks, I pretend to have some artistic sense whereas I don't but in my defense I didn't know about deepzoom then or I'd have taken a good long range shot) So how do I go about doing this? First thing is obviously you need Silverlight. Then if you are going to use a dynamic language (and there is no reason to use anything else) then you need to download dynamic silverlight. John Lam has a series of excellent posts that introduce dynamic silverlight. To create the deepzoom images, you would need DeepZoom composer. The DeepZoom composer is fairly straightforward. Import the pics you want in your project and drop them on to the composer. Make sure that you zoom into the main image and drop the embedded images with the right size. Once your images are ready all you need is a simple app.xaml like so: <Canvas x:Class="System.Windows.Controls.Canvas" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <StackPanel> <MultiScaleImage ViewportWidth="1.0" x:Name="msi" /> </StackPanel> </Canvas> Now you need some simple python code that hooks the mouse click/wheel event handlers and zooms in and out appropriately. Scott Hanselman has an excellent sample of that in C#. 
I just wrote python code that does the same thing (of course it's Python so it's simpler and more elegant :)). In the constructor, you need to load the xaml and hook up the events: class DeepZoomScene(object): def __init__(self): self.scene = Application.Current.LoadRootVisual(Canvas(), "app.xaml") self.scene.msi.MouseLeftButtonDown += self.mouseButtonDown self.scene.msi.MouseLeftButtonUp += self.mouseButtonUp self.scene.msi.MouseMove += self.mouseMove self.scene.msi.MouseEnter += self.mouseEnter self.scene.msi.MouseLeave += self.mouseLeave if HtmlPage.IsEnabled: HtmlPage.Window.AttachEvent("DOMMouseScroll", EventHandler[HtmlEventArgs](self.mouseWheel)) HtmlPage.Window.AttachEvent("onmousewheel", EventHandler[HtmlEventArgs](self.mouseWheel)) HtmlPage.Document.AttachEvent("onmousewheel", EventHandler[HtmlEventArgs](self.mouseWheel)) Now it's just a matter of writing the event handlers. For example, here is the scroll wheel handler (thanks to Pete Blois for his C# sample). Notice how using a dynamic language massively reduces the lines of code needed. def mouseWheel(self, sender, args): if self.IsMouseOver: delta = args.EventObject.GetProperty("wheelDelta") if HtmlPage.Window.GetProperty("opera") != None: delta = -delta if delta == None: delta = -(args.EventObject.GetProperty("detail"))/3 if HtmlPage.BrowserInformation.UserAgent.IndexOf("Macintosh") != -1: delta = delta*3 if delta > 0: self.zoom(1.1, self.lastMousePos) else: self.zoom(0.9, self.lastMousePos) (The entire python file is attached). After this, ideally all I'd have to do is run chiron and create the xap. But there seems to be a bug in Silverlight beta1 wherein if I have the deepzoom files in the xap then it refuses to read the file (this doesn't happen for normal jpgs - only with MultiScaleImage). So I xap'ed just the app folder by doing chiron /d:app /z:app.xap and then kept my deepzoom folder alongside the xap. 
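The cross-browser juggling in mouseWheel above boils down to this: Opera reports wheelDelta with the opposite sign, Gecko browsers expose detail (inverted and in thirds) instead of wheelDelta, and Gecko on the Mac scales it again. Here is that normalization as a standalone plain-Python sketch; the function and parameter names are mine, not part of the Silverlight or DLR APIs:

```python
def zoom_factor(wheel_delta=None, detail=None, is_opera=False, is_mac=False):
    """Normalize cross-browser mouse-wheel input to a zoom factor."""
    if wheel_delta is not None:
        # IE/WebKit style: positive wheelDelta means scroll up (zoom in).
        delta = -wheel_delta if is_opera else wheel_delta  # Opera flips the sign
    else:
        # Gecko style: 'detail' is inverted and reported in thirds.
        delta = -detail / 3
        if is_mac:
            delta *= 3  # Mac Gecko reports smaller increments
    return 1.1 if delta > 0 else 0.9  # zoom in or out by 10%

print(zoom_factor(wheel_delta=120))  # scroll up in IE -> 1.1
print(zoom_factor(detail=-3))        # scroll up in Firefox -> 1.1
```

Scrolling up on either browser family ends up as the same "zoom in by 10%" call, which is exactly what the handler passes to self.zoom.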
Note: The code is provided "as is" and all that - aka don't blame me if it blows up. microsoft has withdrawn the DeepZoom composer, all links to it are dead including the multiple references when searching MSDN.com @memals, the link in the post above still seems to work.
https://blogs.msdn.microsoft.com/srivatsn/2008/03/17/deepzoom-in-dynamic-languages/
You can use the following syntax to save a Seaborn plot to a file: import seaborn as sns line_plot = sns.lineplot(x=x, y=y) fig = line_plot.get_figure() fig.savefig('my_lineplot.png') The following examples show how to use this syntax in practice. Example 1: Save Seaborn Plot to PNG File The following code shows how to save a Seaborn plot to a PNG file: import seaborn as sns #set theme style sns.set_style('darkgrid') #define data x = [1, 2, 3, 4, 5, 6] y = [8, 13, 14, 11, 16, 22] #create line plot and save as PNG file line_plot = sns.lineplot(x=x, y=y) fig = line_plot.get_figure() fig.savefig('my_lineplot.png') If we navigate to the location where we saved the file, we can view it: Note that we could also use .jpg, .pdf, or other file extensions to save the plot to a different type of file. Example 2: Save Seaborn Plot to PNG File with Tight Layout By default, Seaborn adds padding around the outside of the figure. To remove this padding, we can use the bbox_inches=’tight’ argument: fig.savefig('my_lineplot.png', bbox_inches='tight') Notice that there is minimal padding around the outside of the plot now. Example 3: Save Seaborn Plot to PNG File with Custom Size You can use the dpi argument to increase the size of the Seaborn plot when saving it to a file: fig.savefig('my_lineplot.png', dpi=100) Notice that this plot is much larger than the previous two. The larger the value you use for dpi, the larger the plot will be. Additional Resources The following tutorials explain how to perform other common plotting functions in Seaborn: How to Create Multiple Seaborn Plots in One Figure How to Adjust the Figure Size of a Seaborn Plot How to Add a Title to Seaborn Plots
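To reason about what the dpi argument from Example 3 will do before saving, remember that the saved image's pixel dimensions are simply the figure size in inches multiplied by dpi (Matplotlib's default figure size is 6.4 by 4.8 inches). A quick sketch of that arithmetic:

```python
def output_pixels(figsize_inches, dpi):
    """Pixel dimensions of a saved figure: inches times dpi."""
    width_in, height_in = figsize_inches
    return (round(width_in * dpi), round(height_in * dpi))

# Default 6.4 x 4.8 inch figure at different dpi values:
print(output_pixels((6.4, 4.8), 100))  # (640, 480)
print(output_pixels((6.4, 4.8), 300))  # (1920, 1440)
```

So fig.savefig('my_lineplot.png', dpi=300) produces an image three times as wide and three times as tall as the dpi=100 version of the same figure.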
https://www.statology.org/save-seaborn-plot/
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question) Introduction In my previous blog post on Code 39 barcodes, I've shown you guys how to generate Code 39 barcodes using managed code. Today, we'll take it another step forward and make barcode generation available through an .ashx ASP.NET 2.0 HTTP handler. About ASP.NET 2.0 Generic Handlers One of the great things in ASP.NET is the concept of "generic handlers". Basically you can look at it as just another kind of object that can process HTTP requests, however outside the scope of a page (which targets 'classic' HTML-based output). You might have heard about HTTP handlers too and basically these are the same except for the fact that you can bind an HTTP handler to any file extension (as long as your IIS configuration permits) whereas a generic handler just lives behind the .ashx extension and is directly supported in Visual Studio 2005 web site projects. In the end all handlers implement System.Web.IHttpHandler. Last but not least, you'll be able to host any ASP.NET HTTP handler directly in IIS 7 too (which I'll blog about later on). Our online Code39 barcode generator First of all create a new web site project in Visual Studio 2005: Next, add a new generic handler called "Code39Generator.ashx" to the project: This will generate the following piece of code: <%@ WebHandler Language="C#" Class="Code39Generator" %> using System; using System.Web; public class Code39Generator : IHttpHandler { public void ProcessRequest (HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World"); } public bool IsReusable { get { return false; } } } Exactly, "Hello World" made it to the built-in code snippets in Visual Studio 2005. Our task is to create a suitable ProcessRequest method that returns an image with a requested barcode. Requested by means of the querystring that is. 
The IsReusable property is used by the ASP.NET runtime to find out whether the instance of our handler can be reused for multiple requests. That's fine for us, as we don't have intentions to turn our handler instance to garbage once a request has passed through the pipeline: public bool IsReusable { get { return true; } } On to the real stuff now. First of all make sure the Code39 and Code39Settings classes (see previous post) are available in the context of the web site project. You can do that by CTRL-C,V-ing the code to a class file or by referencing a class library project. I've chosen to walk the former path as shown below: Make sure to declare the classes as follows: using Back to our .ashx file. First let's fix some namespace imports: Real stuff happens a little further, in the ProcessRequest method which we define like this: The basic idea is simple: just take the generated image from our Code39 class's Paint method and send it back to the client as an image/png type of HTTP response. In order to do so, we'll need the bytes of the image. Note: It would be easier if we could write this: img.Save(context.Response.OutputStream, ImageFormat.Png); However, this causes the following exception to occur: There are lots of forum threads on this issue (just Live Search for it), most of which have to do with permissions when trying to save images on the server's disk. However, I decided to create a code-based workaround as I have some indications permissions might not be the issue in this case (and even if they were, I'd rather have the solution just work in xcopy deployment scenarios). Therefore, here it is, the GetImagesBytes method: This one just works fine. So, press F5 to run the web site project and see the ASP.NET Development Server getting ready to serve you: Right-click the icon and choose "Open in Web Browser" if the system didn't do this already by itself. 
You should see the directory listing now: Click the Code39Generator.ashx file and patch the browser address bar by appending ?code=BART to it (e.g. - the port will most likely be different on your machine). There we go: Stay tuned for even more barcode fun soon! Great work thanks a lot. Just a small issue, when i try to print the page the barcodes appear as solid black boxes. if i import the file into fireworks and set canvas to 'white' it fixes the problem. any ideas? Thanks! I can't directly repro the issue over here (i.e. things print fine). You might want to do some experiments with Bitmap bmp = new Bitmap(w, h, PixelFormat.Format32bppArgb); in Code39.cs and the codec selection in the .ashx file to experiment with different output formats: foreach (ImageCodecInfo e in ImageCodecInfo.GetImageEncoders()) { if (e.MimeType == "image/png") { codec = e; break; } } Hey, Ok so i figured out that firefox doesnt like png files very much. That was the reason it printed black, it works perfectly in IE though. The only thing is in IE it prints with a light blue background. I tried changing the format to "image/jpeg" but it didnt even display the image like that. Any ideas? Thanks!
http://community.bartdesmet.net/blogs/bart/archive/2006/09/19/4450.aspx
I'm trying to make a console app to solve math problems in order to make things go faster. However, there is an error: it says expected primary-expression before "else" and also says expected ':' before "else". Here's the code: #include <cstdlib> #include <iostream> using namespace std; int main(int argc, char *argv[]) { int a, b, c, d, e, f, g, h, i, j; int answer; int result1, result2, result3, result4, result5, result6; int s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11, s12, s13; int binomialformula, herosformula, trianglearea, pythagorean; cout << "Type binomialformula, herosformula, trianglearea, or pythagorean: "; cin >> answer; if (answer == binomialformula) cout << "a="; cin >> a; cout << "b="; cin >> b; cout << "c="; cin >> c; s1 = b * b; s2 = 4 * a * c; s3 = s2; s4 = s1 - s2; s5 = 2 * a; result1 = -b; result2 = s4; result3 = s5; cout << "Answer: ["; cout << result1; cout << " +or- square root of "; cout << result2; cout << "] / "; cout << result3; cout << "......."; else if (answer == herosformula) cout << "side 1 = "; cin >> d; cout << "side 2 = "; cin >> e; cout << "side 3 = "; cin >> f; s6 = d + e + f; s7 = s6 / 2; s8 = s7 - d; s9 = s7 - e; s10 = s7 - f; result4 = s7 * s8 * s9 * s10; cout << "Answer: square root of "; cout << result4; cout << "........"; else if (answer == trianglearea) cout << "base = "; cin >> g; cout << "height = "; cin >> h; s11 = g * h; result5 = s11 / 2; cout << "Area = "; cout << result5; cout << "......."; else if (answer == pythagorean) cout << "leg = "; cin >> i; cout << "hypotnuse = "; cin >> j; s12 = i * i; s13 = j * j; result6 = s13 - s12; cout << "Answer = square root of "; cout << result6; cout << "......."; else cout << "Please try again.........."; system("PAUSE"); return EXIT_SUCCESS; }
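For reference, the compiler error comes from the missing braces: without { }, only the first statement after each if condition is conditional, the cout/cin lines that follow sit outside the if, and by the time the compiler reaches else there is no open if for it to attach to. A second problem is that answer is an int compared against four uninitialized ints, so typing a word like "pythagorean" can never match; a string works. Here is a hedged sketch of the braced structure using two of the formulas (the math is lifted from the post; the function name is mine):

```cpp
#include <cassert>
#include <string>

// Braces group every statement of a branch under its `if`; without them,
// a following `else` has no `if` to pair with ("expected primary-expression
// before 'else'").
double solve(const std::string& answer, double x, double y) {
    double result;
    if (answer == "trianglearea") {
        result = x * y / 2.0;       // base * height / 2
    } else if (answer == "pythagorean") {
        result = y * y - x * x;     // hypotenuse^2 - leg^2 (answer is its square root)
    } else {
        result = -1.0;              // unrecognized keyword: signal "try again"
    }
    return result;
}
```

In the full program, each branch's cout and cin statements would go inside the same pair of braces as its calculations.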
https://www.daniweb.com/programming/software-development/threads/122161/please-help-with-code-expected-primary-expression
With @import at-rules slowly being phased out of the main implementation of Sass (dart-sass) and eventually deprecated, it's time to learn how to use @use rules and the neat features that come along with them. The main implementation of Sass is dart-sass and for the sake of this article, I will be using the dart-sass implementation as it gets new features before any of the other implementations. Before jumping into the details of @use and index files, a word on @import's status: the @import at-rules in Sass are slowly being phased out and will eventually be deprecated in the next few years. The Sass team discourages the continued use of the @import rule. Sass will gradually phase it out over the next few years, and eventually remove it from the language entirely. Prefer the @use rule instead. Only Dart Sass currently supports @use. Users of other implementations must use the @import rule instead. On GitHub, I helped drive the transition from @import to @use at-rules for mdn/mdn-minimalist, which is the base Sass that powers MDN Web Docs. This conversion required me to study the Sass docs for transitioning from @import to @use, and the details in this article are things I think are important when getting started with @use or working towards replacing your @import at-rules. A new way to import Before the @use at-rule was introduced to Sass, we relied on importing things (mixins, variables, stylesheets) with the @import at-rule. Unfortunately, the @import rules introduced many serious issues, which are described in the Sass docs. A few are: - @import makes all variables, mixins, and functions globally accessible. This makes it very difficult for people (or tools) to tell where anything is defined. - Because everything's global, libraries must add prefixes to all their members to avoid naming collisions. - Each stylesheet is executed and its CSS emitted every time it's @imported, which increases compilation time and produces bloated output. 
Due to all of these flaws, the Sass team introduced a brand new and improved way to import your Sass stylesheets (mixins, functions, and variables) with @use. And honestly, it is so darn good. The old @import usage made everything globally accessible; now, with @use, the default namespace is the last component of the URL unless otherwise specified.

Using @import

Before @use was introduced in dart-sass, if we had a directory of partial files containing some mixins like this:

```scss
@mixin screen-reader-only() { ... }

@mixin invert($c) {
  filter: invert(#{$c});
}

@mixin square($x, $y, $c) {
  width: $x;
  height: $y;
  background: $c;
}
```

we would have to import each file by using numerous @import at-rules, and one by one import the partial files into our main file for usage and eventually compilation.

```scss
/* Import the mixins */
@import "./mixins/sr-only";
@import "./mixins/invert";
@import "./mixins/square";

.hidden {
  @include sr-only();
}

.demo {
  @include invert(0.30);
  @include square(20px, 20px, #f06);
}
```

If you only had a few mixins or files to import, this implementation wouldn't be all that bad. But if the mixins directory had 50 or more partial files, then writing each @import line by line would start to be overwhelming in large codebases.

The power of @use at-rules

With the introduction of @use at-rules, we have the ability to use index files. These index files allow us to place @forward rules in an _index.scss file so that when we load the URL of the directory ./mixins/, all of the partial files will be loaded with it, giving us access to an entire directory of Sass files with a single @use at-rule. Below is an example of using @forward to load stylesheets inside an index file.

```scss
@forward "./mixins/sr-only";
@forward "./mixins/invert";
@forward "./mixins/square";
```

Now, instead of writing three separate @import or @use rules to load the URLs of the mixins we need, we simply load the URL representing the directory of partial files with a defined index file.
In our case, the mixins directory can be loaded to make all of the forwarded stylesheets available with a single @use rule. Not only is this usage super clean and maintainable, but it saves quite a bit of time when many partial files need to be loaded.

Note: @use at-rules must be placed at the top of the file, before any other content.

```scss
@use "./mixins/";

.hidden {
  @include mixins.sr-only();
}

.demo {
  @include mixins.invert(0.30);
  @include mixins.square(20px, 20px, #f06);
}
```

Voila! One thing to note is that the “generic” @use at-rule usage will define a default namespace for the loaded content as the final component of the URL. So in our case, the namespace for this @use rule will be mixins. Accessing the loaded members within the directory simply requires you to reference the namespace and use dot notation, like mixins.foo().

Sometimes you don’t want to have a namespace attached to the loaded folder. Sass gives us the flexibility to define a custom namespace with @use "<URL>" as <namespace>;

```scss
@use "./mixins/" as m;

.hidden {
  @include m.sr-only();
}

.demo {
  @include m.invert(0.30);
  @include m.square(20px, 20px, #f06);
}
```

Or completely disregard a namespace by not defining one, with @use "<URL>" as *;. This makes it so you can reference loaded members without a namespace and without using dot notation, just as you would if they were defined in the same file. Only do this if you know there won’t be any naming conflicts with other loaded members.

```scss
@use "./mixins/" as *;

.hidden {
  @include sr-only();
}

.demo {
  @include invert(0.30);
  @include square(20px, 20px, #f06);
}
```

There is more to discuss, but this should cover the basics of @use at-rules. I’m hoping the information and examples in this article can give Sass users a quick walkthrough of @use and index files, while also helping people convert their projects from @import to @use, since Sass will be phasing out @import over the next few years.
https://tannerdolby.netlify.app/writing/using-index-files-in-sass/
6. Gunicorn

6.1. Why Gunicorn?

We now need to replace the Django development server with a Python application server. I will explain later why we need this. For now we need to select which Python application server to use. There are three popular servers: mod_wsgi, uWSGI, and Gunicorn.

mod_wsgi is for Apache only, and I prefer to use a method that can be used with either Apache or nginx. This will make it easier to change the web server, should such a need arise. I also find Gunicorn easier to set up and maintain.

I used uWSGI for a couple of years and was overwhelmed by its features. Many of them duplicate features that already exist in Apache or nginx or other parts of the stack, and thus they are rarely, if ever, needed. Its documentation is a bit chaotic. The developers themselves admit it: “We try to make our best to have good documentation but it is a hard work. Sorry for that.” I recall hitting problems week after week and spending hours to solve them each time.

Gunicorn, on the other hand, does exactly what you want and no more. It is simple and works fine. So I recommend it unless in your particular case there is a compelling reason to use one of the others, and so far I haven’t met any such compelling reason.

6.2. Installing and running Gunicorn

We will install Gunicorn with pip rather than with apt, because the packaged Gunicorn (both in Debian 8 and Ubuntu 16.04) supports only Python 2.
```
/opt/$DJANGO_PROJECT/venv/bin/pip install gunicorn
```

Now run Django with Gunicorn:

```
su $DJANGO_USER
source /opt/$DJANGO_PROJECT/venv/bin/activate
export PYTHONPATH=/etc/opt/$DJANGO_PROJECT:/opt/$DJANGO_PROJECT
export DJANGO_SETTINGS_MODULE=settings
gunicorn $DJANGO_PROJECT.wsgi:application
```

You can also write it as one long command, like this:

```
PYTHONPATH=/etc/opt/$DJANGO_PROJECT:/opt/$DJANGO_PROJECT \
DJANGO_SETTINGS_MODULE=settings \
su $DJANGO_USER -c "/opt/$DJANGO_PROJECT/venv/bin/gunicorn \
$DJANGO_PROJECT.wsgi:application"
```

Either of the two versions above will start Gunicorn, which will be listening at port 8000, like the Django development server did. Visit your server at port 8000, and you should see your Django project in action.

What actually happens here is that gunicorn, a Python program, does something like from $DJANGO_PROJECT.wsgi import application. It uses $DJANGO_PROJECT.wsgi and application because we told it so in the command line. Open the file /opt/$DJANGO_PROJECT/$DJANGO_PROJECT/wsgi.py to see that application is defined there. In fact, application is a Python callable. Now each time Gunicorn receives an HTTP request, it calls application() in a standardized way that is specified by the WSGI specification. The fact that the interface of this function is standardized is what permits you to choose between many different Python application servers such as Gunicorn, uWSGI, or mod_wsgi, and why each of these can interact with many Python application frameworks like Django or Flask.

The reason we aren’t using the Django development server is that it is meant for, well, development. It has some neat features for development, such as that it serves static files, and that it automatically restarts itself whenever the project files change. It is, however, totally inadequate for production; for example, it might leave files or connections open, and it does not support processing many requests at the same time, which you really want.
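Returning to the application callable for a moment: to make the WSGI interface concrete, here is a minimal, framework-free WSGI application. The function body is illustrative, not what Django's generated wsgi.py contains:

```python
# A WSGI application is just a callable that receives the request
# environ and a start_response callback, and returns an iterable
# of bytes making up the response body.
def application(environ, start_response):
    body = b"Hello from WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

If this were saved as hello.py, Gunicorn could serve it with gunicorn hello:application — the same module:callable form used above.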
Gunicorn, on the other hand, does the multi-processing part correctly, leaving to Django only the things that Django can do well. Gunicorn is actually a web server, like Apache and nginx. However, it does only one thing and does it well: it runs Python WSGI-compliant applications. It cannot serve static files, and there are many other features Apache and nginx have that Gunicorn does not. This is why we put Apache or nginx in front of Gunicorn and proxy-pass requests to it. The accurate name for Gunicorn, uWSGI, and mod_wsgi would be “specialized web servers that run Python WSGI-compliant applications”, but this is too long, which is why I’ve been using the vaguer “Python application servers” instead.

Gunicorn has many parameters that can configure its behaviour. Most of them work fine with their default values. Still, we need to modify a few. Let’s run it again, but this time with a few parameters:

```
su $DJANGO_USER
source /opt/$DJANGO_PROJECT/venv/bin/activate
export PYTHONPATH=/etc/opt/$DJANGO_PROJECT:/opt/$DJANGO_PROJECT
export DJANGO_SETTINGS_MODULE=settings
gunicorn --workers=4 \
    --log-file=/var/log/$DJANGO_PROJECT/gunicorn.log \
    --bind=127.0.0.1:8000 --bind=[::1]:8000 \
    $DJANGO_PROJECT.wsgi:application
```

Here is what these parameters mean:

--workers=4
    Gunicorn starts a number of processes called “workers”, and each worker serves one request at a time. To serve five concurrent requests, five workers are needed; if there are more concurrent requests than workers, they will be queued. You probably need two to five workers per processor core. Four workers are a good starting point for a single-core machine. The reason you don’t want to increase this too much is that your Django project’s RAM consumption is approximately proportional to the number of workers, as each worker is effectively a distinct instance of the Django project. If you are short on RAM, you might want to consider decreasing the number of workers.
    If you get many concurrent requests and your CPU is underused (usually meaning your Django projects do a lot of disk/database access) and you can spare the RAM, you can increase the number of workers.

    Tip: Check your CPU and RAM usage. If your server gets busy, the Linux top command will show you useful information about the amount of free RAM, the RAM consumed by your Django project (and other system processes), and the CPU usage for various processes. You can read more about it in “The top command: memory management” and “The top command: CPU usage” below.

--log-file=/var/log/$DJANGO_PROJECT/gunicorn.log
    I believe this is self-explanatory.

--bind=127.0.0.1:8000
    This tells Gunicorn to listen on port 8000 of the local network interface. This is the default, but we specify it here for two reasons:

    - It’s such an important setting that you need to see it to know what you’ve done. Besides, you could be running many applications on the same server, and one could be listening on 8000, another on 8001, and so on. So, for uniformity, always specify this.
    - We specify --bind twice (see below), to also listen on IPv6. The second time would override the default anyway.

--bind=[::1]:8000
    This tells Gunicorn to also listen on port 8000 of the local IPv6 network interface. This must be specified if IPv6 is enabled on the virtual server. If it is not specified, things may or may not work, and the system may be a bit slower even if they work. The reason is that the front-end web server, Apache or nginx, has been told to forward the requests to localhost:8000. It will ask the resolver what “localhost” means. If the system is IPv6-enabled, the resolver will reply with two results: ::1, which is the IPv6 address for localhost, and 127.0.0.1. The web server might then decide to try the IPv6 version first. If Gunicorn has not been configured to listen to that address, then nothing will be listening at port 8000 of ::1, so the connection will be refused.
    The web server will then probably try the IPv4 version, which will work, but it will have made a useless attempt first. I could make some experiments to determine exactly what happens in such cases, and not speak with “maybe” and “probably”, but it doesn’t matter. If your server has IPv6, you must set it up correctly and use this option. If not, you should not use this option.

6.3. Configuring systemd

The only thing that remains is to make Gunicorn start automatically. For this, we will configure it as a service in systemd.

Note: Older systems don’t have systemd. systemd is a relative novelty; it exists only in Debian 8 and later, and Ubuntu 15.04 and later. In older systems you need to start Gunicorn in another way. I recommend supervisor, which you can install with apt install supervisor.

The first program the kernel starts after it boots is systemd. For this reason, the process id of systemd is 1. Enter the command ps 1 and you will probably see that the process with id 1 is /sbin/init, but if you look at it with ls -lh /sbin/init, you will see it’s a symbolic link to systemd. After systemd starts, it has many tasks, one of which is to start and manage the system services. We will tell it that Gunicorn is one of these services by creating the file /etc/systemd/system/$DJANGO_PROJECT.service, containing the configuration for the service.

After creating that file, if you enter service $DJANGO_PROJECT start, it will start Gunicorn. However, it will not start automatically at boot until we tell it to, with systemctl enable $DJANGO_PROJECT.

The [Service] section of the configuration file should be self-explanatory, so I will only explain the other two sections. Systemd doesn’t only manage services; it also manages devices, sockets, swap space, and other stuff. All these are called units; “unit” is, so to speak, the superclass. The [Unit] section contains configuration that is common to all unit types. The only option we need to specify there is Description, which is free text.
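Putting the pieces of this section together, a unit file along these lines should work; note that the description, user, group, and paths below are placeholder values for illustration, not the book's exact file:

```ini
[Unit]
Description=Example Django Project

[Service]
User=djangouser
Group=djangouser
Environment="PYTHONPATH=/etc/opt/django_project:/opt/django_project"
Environment="DJANGO_SETTINGS_MODULE=settings"
ExecStart=/opt/django_project/venv/bin/gunicorn --workers=4 \
    --log-file=/var/log/django_project/gunicorn.log \
    --bind=127.0.0.1:8000 --bind=[::1]:8000 \
    django_project.wsgi:application

[Install]
WantedBy=multi-user.target
```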
The purpose of Description is only to show in the UI of management tools. Although $DJANGO_PROJECT will work as a description, it’s better to use something more verbose. As the systemd documentation says, “Apache2 Web Server” is a good example. Bad examples are “high-performance light-weight HTTP server” (too generic) or “Apache2” (too specific and meaningless for people who do not know Apache).

The [Install] section tells systemd what to do when the service is enabled. The WantedBy option specifies dependencies. If, for example, we wanted to start Gunicorn before nginx, we would specify WantedBy=nginx.service. This is too strict a dependency, so we just specify WantedBy=multi-user.target. A target is a unit type that represents a state of the system. The multi-user target is a state all GNU/Linux systems reach in normal operations. Desktop systems go beyond that to the “graphical” target, which “wants” a multi-user system and adds a graphical login screen to it; but we want Gunicorn to start regardless of whether we have a graphical login screen (we probably don’t, as it is a waste of resources on a server).

As I already said, you tell systemd to automatically start the service at boot (and automatically stop it at system shutdown) in this way:

```
systemctl enable $DJANGO_PROJECT
```

Do you remember that in nginx and Apache you enable a site just by creating a symbolic link to sites-available from sites-enabled? Likewise, systemctl enable does nothing but create a symbolic link. The dependencies we have specified in the [Install] section of the configuration file determine where the symbolic link will be created (sometimes more than one symbolic link is created). After you enable the service, try to restart the server, and check that your Django project has started automatically.

As you may have guessed, you can disable the service like this:

```
systemctl disable $DJANGO_PROJECT
```

This does not make use of the information in the [Install] section; it just removes all symbolic links.

6.4.
More about systemd

While I don’t want to bother you with history, if you don’t read this section you will eventually get confused by the many ways you can manage a service. For example, if you want to tell nginx to reload its configuration, you can do it with any of these commands:

```
systemctl reload nginx
service nginx reload
/etc/init.d/nginx reload
```

Before systemd, the first program that was started by the kernel was init. This was much less smart than systemd and did not know what a “service” is. All init could do was execute programs or scripts. So if we wanted to start a service, we would write a script that started the service, put it in /etc/init.d, and enable it by linking it from /etc/rc2.d. When init brought the system to “runlevel 2”, the equivalent of systemd’s multi-user target, it would execute the scripts in /etc/rc2.d. (Actually it wasn’t init itself that did that, but other programs that init was configured to run, but this doesn’t matter.) What matters is that the way you would start, stop, or restart nginx, or tell it to reload its configuration, or check its running status, was this:

```
/etc/init.d/nginx start
/etc/init.d/nginx stop
/etc/init.d/nginx restart
/etc/init.d/nginx reload
/etc/init.d/nginx status
```

The problem with these commands was that they might not always work correctly, mostly because of environment variables that might have been set, so the service script was introduced around 2005, which, as its documentation says, runs an init script “in as predictable an environment as possible, removing most environment variables and with the current working directory set to /.” So a better alternative for the above commands was:

```
service nginx start
service nginx stop
service nginx restart
service nginx reload
service nginx status
```

The new way of doing these with systemd is the following:

```
systemctl start nginx
systemctl stop nginx
systemctl restart nginx
systemctl reload nginx
systemctl status nginx
```

Both systemctl and service will work the same
with your Gunicorn service, because service is a backwards compatible way to run systemctl. You can’t manage your service with an /etc/init.d script, because we haven’t created any such script (and it would have been very tedious to do so, which is why we preferred to use supervisor before we had systemd). For nginx and Apache, all three ways are available, because most services packaged with the operating system are still managed with init scripts, and systemd has a backwards compatible way of dealing with such scripts. In future versions of Debian and Ubuntu, it is likely that the init scripts will be replaced with systemd configuration files like the one we wrote for Gunicorn, so the /etc/init.d way will cease to exist. Of the remaining two newer ways, I don’t know which is better. service has the benefit that it exists in non-Linux Unix systems, such as FreeBSD, so if you use both GNU/Linux and FreeBSD you can use the same command in both. The systemctl version may be more consistent with other systemd commands, like the ones for enabling and disabling services. Use whichever you like.

6.5. The top command: memory management

If your server gets busy and you wonder whether its RAM and CPU are enough, the Linux top command is a useful tool. Execute it simply by entering top. You can exit top by pressing q on the keyboard. When you execute top you will see an image similar to Fig. 6.1.

Fig. 6.1 The top command

Let’s examine available RAM first, which in Fig. 6.1 is indicated in the red box. The output of top is designed so that it fits in an 80-character wide terminal. For the RAM, the five values (total, used, free, buffers, and cached) can’t fit on the line that is labeled “KiB Mem”, so the last one has been moved to the line below; that is, the “cached Mem” indication belongs in “KiB Mem” and not in “KiB Swap”. The “total” amount of RAM is simply the total amount of RAM; it is as much as you asked your virtual server to have.
The “used” plus the “free” equals the total. Linux does heavy caching, which I explain below, so the “used” should be close to the total, and the “free” should be close to zero. Since RAM is much faster than the disk, Linux caches information from the disk in RAM. It does so in a variety of ways:

- If you open a file, read it, close it, then you open it again and read it again, the second time it will be much faster; this is because Linux has cached the contents of the file in RAM.
- Whenever you write a file, you are likely to read it again, so Linux caches it.
- In order to speed up disk writing, Linux doesn’t actually write to the disk when your program says f.write(data), not even when you close the file, not even when your program ends. It keeps the data in the cache and writes it later, attempting to optimize disk head movement. This is why some data may be lost when the system is powered off instead of properly shut down.

The part of RAM that is used for Linux’s disk cache is what top shows as “buffers” and “cached”. Buffers is also a kind of cache, so it is the sum of “buffers” and “cached” that matters (the difference between “buffers” and “cached” doesn’t really matter unless you are a kernel developer). “Buffers” is usually negligible, so it’s enough to only look at “cached”. Linux doesn’t want your RAM sitting down doing nothing, so if there is RAM available, it will use it for caching. Give it more RAM and it will cache more.

If your server has a substantial amount of RAM labeled “free”, it may mean that you have so much RAM that Linux can’t fill it even with its disk cache. This probably means the machine is larger than it needs to be, so it’s a waste of resources. If, on the other hand, the cache is very small, this may mean that the system is short on RAM. On a healthy system, the cache should be 20–50% of RAM.

Since we are talking about RAM, let’s also examine the amount of RAM used by processes.
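Before moving on to per-process figures, the 20–50% guideline above can be sketched as a quick mechanical check. The figures below are made up for illustration, in KiB, the unit top uses:

```python
def cache_health(total_kib, buffers_kib, cached_kib):
    """Apply the 20-50% disk-cache rule of thumb to top's memory figures."""
    cache_fraction = (buffers_kib + cached_kib) / total_kib
    if cache_fraction < 0.20:
        return cache_fraction, "cache small - possibly short on RAM"
    if cache_fraction > 0.50:
        return cache_fraction, "more cache than typical - plenty of RAM"
    return cache_fraction, "healthy"

# Made-up figures in KiB
fraction, verdict = cache_health(total_kib=1016860, buffers_kib=0,
                                 cached_kib=350000)
print(f"{fraction:.0%}: {verdict}")  # 34%: healthy
```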
By default top sorts processes by CPU usage, but you can type M (Shift + m) to sort by memory usage (you can go back to sorting by CPU usage by typing P). The RAM used by each process is indicated by the “RES” column in KiB and the “%MEM” column as a percentage. There are two related columns: “VIRT”, for virtual memory, and “SHR”, for shared memory.

First of all, you need to forget the Microsoft terminology. Windows calls “virtual memory” what everyone else calls “swap space”; and what everyone else calls “virtual memory” is a very different thing from swap space. In order to better understand what virtual memory is, let’s see it with this C program (it doesn’t matter if you don’t speak C):

```c
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>

int main()
{
    char c;
    void *p;

    /* Allocate 2 GB of memory */
    p = malloc(2L * 1024 * 1024 * 1024);
    if (!p) {
        fprintf(stderr, "Can't allocate memory: %s\n", strerror(errno));
        exit(1);
    }

    /* Do nothing until the user presses Enter */
    fputs("Press Enter to continue...", stderr);
    while ((c = fgetc(stdin)) != EOF && c != '\n')
        ;

    /* Free memory and exit */
    free(p);
    exit(0);
}
```

When I run this program on my laptop, and while it is waiting for me to press Enter, this is what top shows about it:

```
  PID ...    VIRT   RES   SHR S  %CPU %MEM ... COMMAND
13687 ... 2101236   688   612 S   0.0  0.0 ... virtdemo
```

It indicates 2 GB VIRT, but actually uses less than 1 MB of RAM, while swap usage is still at zero. Overall, running the program has had a negligible effect on the system. The reason is that the malloc function has only allocated virtual memory; “virtual” as in “not real”. The operating system has provided 2 GB of virtual address space to the program, but the program has not used any of that. If the program had used some of this virtual memory (i.e. if it had written to it), the operating system would have automatically allocated some RAM and would have mapped the used virtual address space to the real address space in the RAM.
So virtual memory is neither swap nor swap plus RAM; it’s virtual. The operating system maps only the used part of the process’s virtual memory space to something real; usually RAM, sometimes swap. Many programs allocate much more virtual memory than they actually use. For this reason, the VIRT column of top is not really useful. The RES column, which stands for “resident”, indicates the part of RAM actually used.

The SHR column indicates how much memory the program potentially shares with other processes. Usually all of that memory is included in the RES column. For example, in Fig. 6.1, there are four apache2 processes, which I show again here:

```
  PID ...   VIRT   RES   SHR S  %CPU %MEM ... COMMAND
23268 ... 458772 37752 26820 S   0.2  3.7 ... apache2
16481 ... 461176 55132 41840 S   0.1  5.4 ... apache2
23237 ... 455604 14884  9032 S   0.1  1.5 ... apache2
23374 ... 459716 38876 27296 S   0.1  3.8 ... apache2
```

It is unlikely that the total amount of RAM used by these four processes is the sum of the RES column (about 140 MB); it is more likely that something like 9 MB is shared among all of them, which would bring the total to about 110 MB. Maybe even less. They might also be sharing something (such as system libraries) with non-apache processes. It is not really possible to know how much of the memory marked as shared is actually being shared, and by how many processes, but it is something you need to take into account in order to explain why the total memory usage on your system is less than the sum of the resident memory for all processes.

Let’s now talk about swap. Swap is disk space used for temporarily writing (swapping) RAM. Linux uses it in two cases. The first one is if a program has actually used some RAM but has left it unused for a long time. If a process has written something to RAM but has not read it back for several hours, it means the RAM is being wasted.
Linux doesn’t like that, so it may save that part of RAM to the disk (to the swap space), which will free up the RAM for something more useful (such as caching). This is the case in Fig. 6.1. The system is far from low on memory, and yet it has used a considerable amount of swap space. The only explanation is that some processes have had unused data in RAM for too long. When one of these processes eventually attempts to use swapped memory, the operating system will move it from the swap space back to the RAM (if there’s not enough free RAM, it will swap something else or discard some of its cache).

The second case in which Linux will use swap is if it’s low on memory. This is a bad thing to happen and will greatly slow down the system, sometimes to a grinding halt. You can understand that this is the case from the fact that swap usage will be considerable while at the same time the free and cached RAM will be very low. Sometimes you will be unable to even run top when this happens.

Whereas in Windows the swap space (confusingly called “virtual memory”) is a file, on Linux it is usually a disk partition. You can find out where swap is stored on your system by examining the contents of the file /proc/swaps, for example by executing cat /proc/swaps. (The “files” inside the /proc directory aren’t real; they are created by the kernel and they do not exist on the disk. cat prints the contents of files, similar to less, but does not paginate.)

6.6. The top command: CPU usage

The third line of top has eight numbers which add up to 100%. They are user, system, nice, idle, waiting, hardware interrupts, software interrupts, and steal, and indicate where the CPU spent its time in the last three seconds:

- us (user) and sy (system) indicate how much of its time the processor was running programs in user mode and in kernel mode.
  Most code runs in user mode; but when a process asks the Linux kernel to do something (allocate memory, access the disk, network, or other device, start another process, etc.), the kernel switches to kernel mode, which means it has some privileges that user mode doesn’t have. (For example, kernel mode has access to all RAM and can modify the mapping between the processes’ virtual memory and RAM/swap; whereas user mode simply has access to the virtual address space and doesn’t know what happens behind the scenes.)

- ni (nice) indicates how much of its time the processor was running with a positive “niceness” value. If many processes need the CPU at the same time, a “nice” process has lower priority. The “niceness” is a number up to 19. A process with a “niceness” of 19 will practically only run when the CPU would otherwise be idle. For example, the GNOME desktop environment’s Desktop Search finds stuff in your files, and it does so very fast because it uses indexes. These indexes are updated in the background by the “tracker” process, which runs with a “niceness” of 19 in order not to make the rest of the system slower. Processes may also run with a negative niceness (down to -20), which means they have higher priority. In the list of processes, the NI column indicates the “niceness”. Most processes have the default zero niceness, and it is unlikely you will ever need to know more about all that.
- id (idle) and wa (waiting) indicate how much time the CPU was sitting down doing nothing. “Waiting” is a special case of idle; it means that while the CPU was idle there was at least one process waiting for disk I/O. A high value of “waiting” indicates heavy disk usage.
- The meaning of time spent in hi (hardware interrupts) and si (software interrupts) is very technical. If this is non-negligible, it indicates heavy I/O (such as disk or network).
- st (steal) is for virtual machines.
  When nonzero, it indicates that for that amount of time the virtual machine needed to run something on the (virtual) CPU, but it had to wait because the real CPU was unavailable, either because it was doing something else (e.g. servicing another virtual machine on the same host) or because of reaching the CPU usage quota.

If the machine has more than one CPU or core, the “%Cpu(s)” line of top shows data collectively for all CPUs; but you can press 1 to toggle between that and showing information for each individual CPU. In the processes list, the %CPU column indicates the amount of time the CPU was working for that process, either in user mode or in kernel mode (when kernel code is running, most of the time it is in order to service a process, so this time is accounted for in the process). The %CPU column can add up to more than 100% if you have more than one core; for four cores it can add up to 400%, and so on.

Finally, let’s discuss the CPU load. When your system is doing nothing, the CPU load is zero. If there is one process using the CPU, the load is one. If there is one process using the CPU and another process that wants to run and is queued for the CPU to become available, the load is two. The three numbers in the orange box in Fig. 6.1 are the load average in the last one, five, and 15 minutes. The load average should generally be less than the number of CPU cores, and preferably under 0.7 times the number of cores. It’s OK if it spikes sometimes, so the load average for the last minute can occasionally go over the number of cores, but the 5- or 15-minute average should stay low. For more information about the load average, there’s an excellent blog post by Andre Lewis, “Understanding Linux CPU Load - when should you be worried?”

6.7. Chapter summary

- Install gunicorn in your virtualenv.
- Create the file /etc/systemd/system/$DJANGO_PROJECT.service with the configuration for the service.
- Enable the service with systemctl enable $DJANGO_PROJECT, and start/stop/restart it or get its status with systemctl $COMMAND $DJANGO_PROJECT, where $COMMAND is start, stop, restart or status.
https://djangodeployment.readthedocs.io/en/latest/06-gunicorn.html
Next.js is a React framework that allows programmers to create fast, static websites. It enables us to create hybrid apps with both server-rendered and statically generated pages. There are also built-in integrations such as ESLint and TypeScript. Its growing popularity has led to it being widely used in web application development. At ThoughtLabs Belgium we are also using this framework for one of our clients; we love it and happily recommend Next.js to our clients.

What is GraphQL?

GraphQL is an API query language introduced by Facebook in 2012 that allows us to consume an API in a different way than the usual REST API. It’s made to make APIs more adaptable and quick. Instead of sending a GET call, we gather data with GraphQL. The GraphQL endpoint accepts a “query”, which describes the data you wish to grab.

```
country {
  "name": "Belgium",
  "capital": "Brussels",
  "currency": "EUR",
  "population": "200000"
}
```

Instead of fetching the whole object above, you can submit a request for only the name and currency.

What is Apollo GraphQL and how does it work?

Apollo GraphQL is a GraphQL implementation that allows you to declare your queries within the UI components that require them, and then have those components automatically updated when query results arrive or change. When using Apollo Client, the useQuery Hook encapsulates the functionality for getting data, tracking loading and error conditions, and updating your UI. This encapsulation makes it simple to integrate query results into your components.

Getting Started

We’ll make a Next.js web application that displays a list of countries and their data in this tutorial. We have a few choices for obtaining data in Next.js, but we’ll select the static generation approach (SSG). The getStaticProps method in Next.js allows us to collect data at build time, enabling us to create a totally static app that serves pages whose data has already been rendered to the browser.
In this tutorial, we'll use the GraphQL API to construct a Next.js web application that displays a list of countries and their data. First, install the necessary packages for the app.

npx create-next-app country-list-with-graphql
npm install graphql
npm install @apollo/client

Start your development server when the packages have been installed.

cd country-list-with-graphql
npm run dev

Adding the Apollo client to the Next.js web application

Clean up the code in your 'pages/index.js' file, which should now look like this:

export default function Home() {
  return <div></div>;
}

Then import the Apollo client that was previously installed and define a new function beneath our component.

import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

export default function Home() {
  return <div></div>;
}

export async function getStaticProps() {}

We'll get props to use in our component with the getStaticProps method. We're now ready to use the GraphQL API to get data. Here, we'll use the Apollo Client to interact with the Countries GraphQL server, making our API request from the Next.js getStaticProps method, which allows us to dynamically construct props for our page. Apollo uses InMemoryCache to save the results of GraphQL queries in a cache, and gql is the recommended way to pass queries to Apollo Client.

Now we must create a new Apollo Client instance within getStaticProps as follows:

const client = new ApolloClient({
  uri: "",
  cache: new InMemoryCache(),
});

Then we can make a query:

const { data } = await client.query({
  query: gql`
    query {
      countries {
        name
        native
        capital
        currency
        continent {
          name
        }
        phone
        languages {
          name
        }
      }
    }
  `,
});

This creates a new GraphQL query within the gql tag, then uses client.query to make a new query request, and lastly, data is destructured from the results, which is where the information we require is stored.
We describe what we want from the API within the query: countries returns a list of all countries, and we then specify which fields we want for each country. We must now return the data from getStaticProps and console log it in our component in order to view what the data looks like in the console. This works because getStaticProps runs during the build process.

return {
  props: {
    data: data.countries,
  },
};

To view what our data looks like, we can now console log the results in the component.

export default function Home(results) {
  console.log(results);
  return <div></div>;
}

This is how your console should appear.

Adding the countries data

We have the data for the countries, which we obtained by using the Apollo Client to make a request to the countries GraphQL server. Create a directory called components and a file called countries.js at the root of our project, and set it up like this:

export default function Countries() {
  return <div></div>;
}

Return to the pages/index.js directory, import the Countries component, and render it with the results passed in as props.

<section>
  <div>
    <div>
      <h1>Countries Data</h1>
    </div>
    <Countries countries={results} />
  </div>
</section>

Return to the components/countries.js directory and add the code below.

export default function Countries({ countries }) {
  return (
    <section>
      <div>
        {countries.data.map((country, index) => (
          <div key={index}>
            <h3>{country.name}</h3>
            <h5>
              Capital: <span>{country.capital}</span>
            </h5>
            <p>
              Continent: <span>{country.continent.name}</span>
            </p>
            <p>
              Languages:{" "}
              {country.languages.map((item, index) => (
                <span key={index}>{item.name},</span>
              ))}
            </p>
            <p>
              Currency: <span>{country.currency}</span>
            </p>
            <p>
              Phone: <span>+{country.phone}</span>
            </p>
          </div>
        ))}
      </div>
    </section>
  );
}

The countries list is now complete, and we can view all of the data we requested. Additional fields can be added to the GraphQL query.
Check out the documentation, then head to the playground to try out some queries of your own.
https://thoughtlabs.be/blogs/using-apollo-graphql-to-fetch-data-in-next-js/
Using the Google API with Socialite.

As you progress through this post, it is assumed you have Laravel and Laravel Socialite installed. If you haven’t done that, please refer to the Socialite documentation on GitHub.

Create an Application in the Google API Console

Because our app will be using Google for authentication and as a data resource, you must create an app in the Google API Console. Look for the “Create Project” link in the submenu at the top of the page to get started.

Once you have an app created in the Google API Console, you’ll need to create or locate three pieces of authentication information: a Google server key, a client ID, and an app secret. Your app secret will be provided when you create your app in the Google API Console. The server key and client ID can both be found under the “Credentials” link in the sidebar of the Google API Console. If you don’t see the server key or client ID listed on the Credentials page, you’ll need to create them using the blue “Create Credentials” button.

Once you have all three pieces of authentication information, add them, along with your app redirect URL, to your Laravel .env file.

GOOGLE_SERVER_KEY=AIzaSyC_g8Uj5GGAqnPZaZAmlVMkUj0DXOVw0Z8
GOOGLE_CLIENT_ID=53500906325-ocfb3qbl0inpb249gnuir4988kn3ef52.apps.googleusercontent.com
GOOGLE_APP_SECRET=YnceM3Bdn6JpboaFgc27B3Im
GOOGLE_REDIRECT=

Install the Google API PHP Client

The next requirement for this project is to add the Google API PHP Client to your Laravel project. Just use Composer to install the Google API PHP Client.

composer require google/apiclient:^2.0

After running this command, reference the Google API PHP Client in your auth/LoginController.php file. You’ll also want to reference any Google service you want to use from the Google API PHP Client. In this example, we’re going to use Google’s People API to query a list of a Google user’s contacts. To do so, you’ll need to reference Google_Service_People in your auth/LoginController.php file as well.
<?php

namespace App\Http\Controllers\Auth;

use Socialite;
use Google_Client;
use Google_Service_People;

Declare API Scopes

As part of the Socialite installation process, you added two methods to your auth/LoginController.php file: redirectToProvider() and handleProviderCallback(). Make sure you declare your API scopes in the redirectToProvider() method. In this example, we’ll be querying a Google user’s contacts using the API, so pass Google_Service_People::CONTACTS_READONLY to the scopes method on the Socialite object.

public function redirectToProvider()
{
    return Socialite::driver('google')
        ->scopes(['openid', 'profile', 'email', Google_Service_People::CONTACTS_READONLY])
        ->redirect();
}

Enable the API Endpoint

Anytime you want to use a scope in the Google API, you need to enable the corresponding API service in the Google API Console. Return to the Google API Console and click “Library” in the side menu. The Google People API does not show in the list of popular API endpoints, so you’ll need to search for it using the provided search bar. Enable Google’s People API for your app.

Use the Socialite Token for the Google API PHP Client

Laravel Socialite and the Google API PHP Client have small differences in their data structure requirements. The token stored and provided by Socialite doesn’t match the data type the Google API PHP Client expects: Socialite provides an object, but the Google client expects a JSON array. In the handleProviderCallback() method in your auth/LoginController.php, you’ll need to create the array for the Google_Client using the token, refreshToken, and expiresIn properties of the Socialite object, as seen below in the $google_client_token variable (array).
You can then JSON encode that array for use with the Google_Client::setAccessToken method.

public function handleProviderCallback()
{
    $user = Socialite::driver('google')->user();

    $google_client_token = [
        'access_token' => $user->token,
        'refresh_token' => $user->refreshToken,
        'expires_in' => $user->expiresIn,
    ];

    $client = new Google_Client();
    $client->setAccessToken(json_encode($google_client_token));
}

After you’ve set the access token for the Google_Client library, you can query data from the API endpoints you’ve enabled and added to the scope.

public function handleProviderCallback()
{
    $user = Socialite::driver('google')->user();

    $google_client_token = [
        'access_token' => $user->token,
        'refresh_token' => $user->refreshToken,
        'expires_in' => $user->expiresIn,
    ];

    $client = new Google_Client();
    $client->setAccessToken(json_encode($google_client_token));

    $service = new Google_Service_People($client);
    $optParams = array('requestMask.includeField' => 'person.phone_numbers,person.names,person.email_addresses');
    $results = $service->people_connections->listPeopleConnections('people/me', $optParams);

    dd($results);
}

Important Note About Google’s People API

Google’s People API documentation seems to suggest that email addresses come back as part of a default query, but that doesn’t seem to be true. To resolve this, you need to add requestMask.includeField as a parameter in the request.

Refresh Tokens

Socialite should handle a token refresh (if it is provided by the service) if an access token expires. If the token has expired, you’ll make a new request using Socialite, then pass the new access token to the Google API PHP Client in the same way demonstrated above.

Try It Out

Assuming you’re using php artisan serve to serve your site, you can visit the login route on your development server to try it out! You should be prompted to log into your Google Account and let your Google Application access your account information and contact list. After clicking “Allow,” you should see a list of contacts from your Google account in the dd() output.

Where Next?

The code above is a basic example. You’ll want to store the Socialite access token in your DB or in a session variable as part of a practical application for users. That will get you closer to implementing this feature in an advanced way.
https://laravel-news.com/google-api-socialite
A representation of time zone information.

Syntax

#include <prtime.h>

typedef struct PRTimeParameters {
    PRInt32 tp_gmt_offset;
    PRInt32 tp_dst_offset;
} PRTimeParameters;

Description

Each geographic location has a standard time zone, and if Daylight Saving Time (DST) is practiced, a daylight time zone. The PRTimeParameters structure represents the local time zone information in terms of the offset (in seconds) from GMT. The overall offset is broken into two components:

tp_gmt_offset - The offset of the local standard time from GMT.

tp_dst_offset - If Daylight Saving Time (DST) is in effect, the DST adjustment from the local standard time. This is most commonly 1 hour, but may also be 30 minutes or some other amount. If DST is not in effect, the tp_dst_offset component is 0.

For example, the US Pacific Time Zone has both a standard time zone (Pacific Standard Time, or PST) and a daylight time zone (Pacific Daylight Time, or PDT).

- In PST, the local time is 8 hours behind GMT, so tp_gmt_offset is -28800 seconds. tp_dst_offset is 0, indicating that daylight saving time is not in effect.
- In PDT, the clock is turned forward by one hour, so the local time is 7 hours behind GMT. This is broken down as -8 + 1 hours, so tp_gmt_offset is -28800 seconds, and tp_dst_offset is 3600 seconds.

A second example is Japan, which is 9 hours ahead of GMT. Japan does not use daylight saving time, so the only time zone is Japan Standard Time (JST). In JST tp_gmt_offset is 32400 seconds, and tp_dst_offset is 0.
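The way the two offsets combine can be checked with a short sketch (Python here rather than C, for brevity; the parameter names mirror the struct members and the values come from the PST/PDT/JST examples above):

```python
# Sketch of how the two PRTimeParameters components combine.
# Parameter names mirror the NSPR struct fields; values come from the
# PST/PDT/JST examples in the text.

def total_offset_hours(tp_gmt_offset, tp_dst_offset):
    """Overall offset of local time from GMT, in hours."""
    return (tp_gmt_offset + tp_dst_offset) / 3600

print(total_offset_hours(-28800, 0))     # PST: -8.0 (UTC-8)
print(total_offset_hours(-28800, 3600))  # PDT: -7.0 (UTC-7)
print(total_offset_hours(32400, 0))      # JST: 9.0  (UTC+9)
```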
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PRTimeParameters
Intro

Due to a lot of hard work and co-operative thinking, with notable contributions from Eric Scheid and Sascha Carlin, we have (I think) rough consensus on the list of dates we could possibly have in Atom. It seems unlikely that all of these can find a home in Atom. Sam Ruby also usefully pointed out that in the case that we don't get all the kinds of dates any one application needs, doing an extension should be easy since Dublin Core has lots of different dates and a handy namespace to put them in. The goal here is to get a feeling for how many of the possibilities are likely to have enough backing to have a realistic chance of getting in.

Filling Out the Survey

Here are the kinds of dates Atom could have, in alphabetical order of name. All dates are constrained to RFC3339 syntax as closely as possible. For the purposes of this survey, please do not argue about the definitions of the dates. I have gone back and reviewed the mailing list and I'm confident that these definitions are close enough to a largely-shared view as to be useful.

Created: the date the entry was originally created
Dateline: the date the publisher wishes visibly to assign to the entry (note RFC3339 problems on this one)
Issued: the date the entry was first made available
Modified: the most recent date on which any aspect of the entry's content changed
Updated: the most recent date on which the publisher wishes to draw attention to the entry's having changed

To help maximize the information yield, let's sort ourselves into three roles. When you fill out the survey, consider categorizing yourself:

Readers: people who write software that reads syndication feeds
Publishers: people who generate Atom feeds.
Others: Other levels of interest

Here are the possible choices in the survey:

-2: I won't be able to use Atom if this is included
-1: Atom would suffer from having this included
0: I don't care.
+1: Atom would benefit from having this included
+1R: Atom would benefit from having this included, and it should be REQUIRED in every Entry
+2: I won't be able to use Atom if this isn't available
+2R: I won't be able to use Atom if this isn't available, and it should be REQUIRED in every Entry

Please Add Yourself to the Survey Below This Line

[RogerBenningfield]: That was tough, because I had to -1 some things as a Publisher that would have been better described as -1R, or "Atom would suffer from having this as a required element". While I personally think four core date elements probably amounts to two too many from a view-source POV, I'm not actually opposed to any of them as long as they're optional.

[ArveBersvendsen]: While I have answered the survey, I would like to state that I think this survey is premature and incomplete as long as we don't agree on the specific meaning of some of the dates - we really need to have consensus on "What does X mean" before we can actually form an opinion on X.

[SamRuby]: I would like my vote to be interpreted as "-1 if optional and core". Optional core elements don't promote interoperability. Update: I acknowledge comments below by GrahamParks and RogerBenningfield that clear precedence rules may mitigate this concern.

[SaschaCarlin]: Sam, sorry, I don't get what your vote means then

[GrahamParks]: I don't think optional core elements cause problems if there are clear rules about what to do when one is missing (e.g. no dateline => dateline = issued). Required core elements force people to pollute fields they can't otherwise fill.

[RogerBenningfield]: I agree with Sam and Graham simultaneously. The only out between those two extremes (a lot of OCEs that confuse people, or a lot of RCEs full of bad data) is to strip down the number of core elements to two (issued and modified) and broaden their definitions to be as widely applicable as possible. That means throwing out the subtle bits, leaving them for extensions.
Something was issued at a given time, and something was modified at a given time. That's it. No other implications or qualifications.

[AsbjornUlsberg]: I agree with ArveBersvendsen in his above comment.

[KenMacLeod]: I agree with SamRuby, ArveBersvendsen, and AsbjornUlsberg, this survey is seriously skewed by using "loaded" terms with lots of implicit meaning that is not captured by the "local" definitions provided with them.

[RobertSayre]: I was happy to discover I don't care about any of these. I think some of them are crufty, but not really damaging.

[BobWyman]: As a reader of feeds, our primary concern is to detect how the feed generator wants entries to be displayed (i.e. DateLine) and when they want to call our attention to a change (i.e. Updated). We would like to be able to detect and track *any* change but that is less important than being able to communicate the writer's intent.

[WalterUnderwood]: My "Publisher" perspective is "can I make an Atom feed from the data in our search index?" My "Reader" perspective is "is this useful to the crawler?" As a publisher, if created, issued, or modified were required, they would be -2 for us. The index does not have that information. I assumed that dateline is optional, because it is author-supplied.

[HenriSivonen]: The system I have implemented Atom 0.3 feeds for does not record the creation date. Of course, it is not that I could not use Atom if that was required, but I could not use it without putting some other date (issued) there.
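As a side note, the RFC3339 constraint mentioned at the top of the survey is straightforward to satisfy with modern standard libraries. A minimal Python illustration (the example date is made up):

```python
# Producing an RFC 3339 timestamp, as the survey requires for all
# candidate Atom dates. The date itself is made up for illustration.
from datetime import datetime, timezone

issued = datetime(2004, 7, 15, 12, 30, 0, tzinfo=timezone.utc)
print(issued.isoformat())  # 2004-07-15T12:30:00+00:00
```

The output is both valid ISO 8601 and valid RFC 3339, since it carries an explicit UTC offset.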
http://www.intertwingly.net/wiki/pie/DateSurvey?action=highlight&value=RobertSayre
10 May 2011 15:00 [Source: ICIS news]

LONDON (ICIS)--DuPont will not raise its Danish kroner (DKr) 700 ($135, €94.50)/share bid for Danisco, CEO Ellen Kullman said.

On 29 April, Wilmington, Delaware-headquartered DuPont raised its bid from DKr665/share first offered in January, and extended the tender offer – for the third time – to 13 May.

“To be clear, we will not raise our price or further amend or extend our offer,” Kullman said, adding the terms represent DuPont’s “best and final offer”.

Kullman made the announcement as the deadline approaches to ensure “there is no confusion in the market regarding our offer or our intentions”.

She added that “all major Danish institutional investors” support DuPont’s increased offer. DuPont did not disclose the percentage of Danisco shares tendered to date. For the deal to close, a minimum of 80% of outstanding Danisco shares must be tendered. At close of business on 29 April, Danisco shareholders had tendered about 48% of the outstanding shares to DuPont.

($1 = DKr5.2)
($1 = €0.70)
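The quoted per-share conversions can be cross-checked with the exchange rates listed at the bottom of the article ($1 = DKr5.2, $1 = €0.70):

```python
# Cross-check of the share-price conversions quoted in the article,
# using the exchange rates listed at its end ($1 = DKr5.2, $1 = EUR0.70).
bid_dkr = 700
usd = round(bid_dkr / 5.2)   # about $135/share
eur = round(usd * 0.70, 2)   # about EUR 94.50/share
print(usd, eur)  # 135 94.5
```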
http://www.icis.com/Articles/2011/05/10/9458635/dupont-will-not-raise-bid-for-danisco-again-ceo-kullman.html
Scatter pictures with Google Charts

In a recent post on his blog Matt Cutts asks:

I almost wanted to call this post “Stupid Google Tricks” :-) What fun diagrams can you imagine making with the Google Charts Service?

Here’s a stupid trick: you can use the Python Imaging Library to convert a picture into a URL which Google charts will render as the original picture. Here’s the original picture:

here’s the version served up by Google charts:

here’s the code:

import Image
import string

def scatter_pixels(img_file):
    """Return the URL of a scatter plot of the supplied image

    The image will be rendered square and black on white.
    Adapt the code if you want something else.
    """
    # Use simple chart encoding. To make things really simple
    # use a square image where each X or Y position corresponds
    # to a single encode value.
    simple = string.uppercase + string.lowercase + string.digits
    rsimple = simple[::-1]  # Google charts Y reverses PIL Y
    w = len(simple)
    W = w * 3
    img = Image.open(img_file).resize((w, w)).convert("1")
    pels = img.load()
    black_pels = [(x, y) for x in range(w) for y in range(w)
                  if pels[x, y] == 0]
    xs = "".join(simple[x] for x, _ in black_pels)
    ys = "".join(rsimple[y] for _, y in black_pels)
    sqside = 3.0
    return (
        "?"
        "cht=s&"                            # Draw a scatter graph
        "chd=s:%(xs)s,%(ys)s&"              # using simple encoding and
        "chm=s,000000,1,2.0,%(sqside)r,0&"  # square black markers
        "chs=%(W)rx%(W)r"                   # at this size.
        ) % locals()

and here’s the url it generates:

…&chs=186x186

Smallprint. Google charts may return a 400 error for an image with a long URL (meaning lots of black pixels in this case). The upper limit on URL length doesn’t seem to be documented but a quick trawl through topics on the google charts group suggests others have bumped into it too.

Connoisseurs of whacky pictures should pay CSS Homer Simpson a visit.
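For reference, the 62-character "simple encoding" alphabet the script relies on can be reproduced and checked in a few lines. This sketch targets Python 3 (the post's code is Python 2, so `string.ascii_uppercase`/`string.ascii_lowercase` stand in for the old `string.uppercase`/`string.lowercase`):

```python
# The "simple encoding" alphabet used by the Google Charts URL above:
# A-Z, a-z, 0-9, giving 62 positions per axis (Python 3 spelling of the
# Python 2 string.uppercase + string.lowercase + string.digits).
import string

simple = string.ascii_uppercase + string.ascii_lowercase + string.digits

def encode(value):
    """Map an integer in 0..61 to its simple-encoding character."""
    return simple[value]

print(len(simple))            # 62 positions per axis
print(encode(0), encode(61))  # A 9
```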
http://wordaligned.org/articles/scatter-pictures-with-google-charts
SYNOPSIS

nama [options] [project_name]

DESCRIPTION

Nama performs multitrack recording, effects processing, editing, mixing, mastering, live performance and general-purpose audio processing, using the Ecasound realtime audio engine.

Audio Functionality

Audio projects may be developed using tracks, buses, effects, sends, inserts, marks, regions, fades, sequences and edits. Each track may contain one or more WAV files, which may be recorded or imported. Effects processing by LADSPA, LV2 and Ecasound plugins may be performed in realtime, or cached (e.g. frozen) to a file. The user may toggle between cached and dynamic processing for a track. Audio regions may be altered, duplicated, time-shifted or replaced. Nama supports MIDI functionality via midish.

Presets and templates

To facilitate reuse, a track's plugins and inserts can be stored as an effect chain. Effect profiles and project templates provide templating for groups of tracks and entire projects, respectively.

Audio framework

Audio IO is via JACK or ALSA. Soundcard IO is normally routed to JACK with transparent fallback to ALSA.

Persistence

Project data parameters related to audio configuration are serialized as JSON and tracked using Git when available. The entire project history is retained and may be managed using branches and tags. Nama supports Ladish Level 1 session handling.

User interfaces

Nama has a fully featured terminal command prompt, a Tk GUI, and experimental OSC and remote-command modes. The command prompt accepts Nama commands, Ecasound interactive-mode commands, shell commands and perl code. It has command history and autocompletion. The help system provides documentation and keyword search covering Nama commands and effects. The hotkey mode provides a convenient way to select, view, and modify effect parameters. By default, Nama displays a graphic user interface while the command processor runs in a terminal window.
OPTIONS

- --gui, -g - Start Nama in GUI mode
- --text, -t - Start Nama in text mode
- --config, -f - Specify configuration file (default: ~/.namarc)
- --project-root, -d - Specify project root directory
- --create-project, -c - Create project if it doesn't exist
- --net-eci, -n - Use Ecasound's Net-ECI interface
- --libecasoundc, -l - Use Ecasound's libecasoundc interface
- --save-alsa, -a - Save/restore alsa state with project data
- --help, -h - This help display

Debugging options:

- --no-static-effects-data, -s - Don't load effects data
- --no-state, -m - Don't load project state
- --no-static-effects-cache, -e - Bypass effects data cache
- --regenerate-effects-cache, -r - Regenerate the effects data cache
- --no-reconfigure-engine, -R - Don't automatically configure engine
- --debugging-output, -D - Emit debugging information
- --fake-jack, -J - Simulate JACK environment
- --fake-alsa, -A - Simulate ALSA environment
- --no-ecasound, -E - Don't spawn Ecasound process
- --execute-command, -X - Supply a command to execute

CONTROLLING NAMA/ECASOUND

The Ecasound audio engine is configured through use of chain setups that specify the signal processing network. Nama serves as an intermediary, taking high-level commands from the user, generating appropriate chain setups for user tasks such as recording, playback, mixing, etc., and running the audio engine.

Configuration Commands

Configuration commands affect future runs of the audio engine. For example, rec, play, mon and off determine whether the current track will get its audio stream from an external (e.g. live) source, whether an existing WAV file will be played back, and whether a new WAV file will be recorded. Nama responds to these commands by reconfiguring the engine and displaying the updated track status. See 'man ::ChainSetup' for details on how the chain setup is created.
Realtime Commands

Once a chain setup is loaded and the engine is launched, commands can be issued to control the realtime behavior of the audio processing engine. These commands include transport "start" and "stop", and playback repositioning commands such as "forward", "rewind" and "setpos". Effects may be added, modified or removed while the engine is running.

Configuration

General configuration of sound devices and program options is performed by editing the .namarc file. On Nama's first run, a default version of .namarc is usually placed in the user's home directory.

Tk GRAPHICAL UI

Invoked by default if Tk is installed, this interface provides a subset of Nama's functionality on two windows:

Main Window

The top section has buttons for creating, loading and saving projects, adding tracks, and adding effects to tracks. In short, for setup. Below are buttons for controlling the transport (start, stop and friends) and for setting marks. The GUI project name bar and time display change color to indicate whether the upcoming operation will include live recording (red), mixdown (yellow) or playback (green).

Effects Window

The effects window provides sliders for each effect parameter of each track. Parameter range, defaults, and log/linear scaling hints are automatically detected. Text-entry widgets are used to enter parameter values for plugins without hinted ranges. Any parameter label can be clicked to add a parameter controller.

Terminal Window

The command prompt is available in the terminal window during GUI operation. Text commands are used to access Nama's more advanced functions.

TEXT USER INTERFACE

Press the Enter key if necessary to get the command prompt, which will look something like this:

- "nama sax ('h' for help)>"

In this instance, 'sax' is the current track.
When using buses, the bus is indicated before the track:

- "nama Strings/violin ('h' for help)>"

At the prompt, you can enter Nama and Ecasound commands, Perl code preceded by "eval" or shell code preceded by "!". Multiple commands on a single line are allowed if delimited by semicolons. Usually the lines are split on semicolons and the parts are executed sequentially; however, if the line begins with "eval" or "!" the entire line (up to double semicolons ';;' if present) will be given to the corresponding interpreter. You can access command history using up-arrow/down-arrow.

Type "help" for general help, "help command" for help with "command", and "help foo" for help with commands containing the string "foo". "help_effect foo bar" lists all plugins/presets/controllers containing both foo and bar. Tab-completion is provided for Nama commands, Ecasound-iam commands, plugin/preset/controller names, and project names. Many effects have abbreviations, such as 'afx' for 'add_effect'.

TRACKS

Each track has a descriptive name (e.g. vocal) and an integer track-number assigned when the track is created. New user tracks initially belong to the Main bus. Track output signals are usually mixed and pass through the Master track on the way to the soundcard for monitoring. The following sections describe track attributes and their effects.

Width

Specifying 'mono' means the track has one input channel, which will be recorded as a mono WAV file. Mono track signals are automatically duplicated to stereo and a pan effect is provided. Specifying 'stereo' for a track means that two channels of audio input will be recorded as an interleaved stereo WAV file. You can also use a 'stereo' declaration to avoid the automatic channel copy usually applied to single-channel sources. Specifying N channels for a track ('set width N') means N successive input channels will be recorded as an N-channel interleaved WAV file.
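The interleaved multichannel WAV files described above can be illustrated with Python's standard-library wave module. This is a sketch only; the file name and parameter values are made up for the demo:

```python
# Writing and reading back a small interleaved 2-channel WAV file,
# like the multichannel files Nama records. File name and parameters
# are made up for the demo; the payload is 100 frames of silence.
import wave

frames = 100
with wave.open("demo_stereo.wav", "wb") as w:
    w.setnchannels(2)   # stereo: samples interleave L, R, L, R, ...
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * 2 * frames)

with wave.open("demo_stereo.wav", "rb") as w:
    print(w.getnchannels(), w.getnframes())  # 2 100
```

One frame holds one sample per channel, so an N-channel track at the same sample rate simply stores N interleaved samples per frame.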
REC/PLAY/MON/OFF

Each track, including Master and Mixdown, has its own REC/MON/PLAY/OFF setting. The MON setting means that the track source is connected to the track input, and the track output is supplied for monitoring by the Main bus and other submixes if any. The REC setting prepares the track to record a WAV file. The PLAY setting enqueues an audio file for playback from disk as the track source. REC and PLAY settings also create the monitoring routes associated with MON status. The OFF setting tells Nama to remove the track from the audio network. OFF status also results when no audio source is available. A track with no recorded WAV files will show OFF status, even if set to PLAY.

Bus setting

Buses can force the status of their member tracks to OFF. Nama provides MON and OFF settings for buses. OFF (set by "bus_off") removes all member tracks from the chain setup; MON (set by "bus_mon") restores them. The mixplay command sets the Mixdown track to PLAY and the Main bus to OFF.

Version Numbers

Multiple WAV files (``takes'') can be recorded for each track. These are distinguished by a version number that increments with each recording run, i.e. sax_1.wav, sax_2.wav, etc. All WAV files recorded in the same run have the same version numbers. The version numbers of files for playback can be selected at the bus or track level. By setting the bus version to 5, you can play back version 5 of several tracks at once. Version 5 could signify the fifth take of a song, or the fifth song of a live recording session. The track version setting, if present, overrides the bus setting. Setting the track version to zero restores control of the version number to the bus. The Main bus version setting does not currently propagate to other buses.

Marks

Marks in Nama are similar to those in other audio editing software. One limitation is that mark positions are relative to the beginning of an Ecasound chain setup.
If your project involves a single track, and you will be shortening the stream by setting a region to play, set any marks you need after defining the region.

Regions

The "region" command allows you to define endpoints for a portion of an audio file. You can then use the "shift" command to move the region to the desired time position. Each track can have one region definition. To create multiple regions, the "new_region" command takes a pair of marks to create a read-only copy of the current track with the specified region definition. You can control this region as you would any other track, applying effects, adjusting volume, etc. Currently, regions are not clipped out of their host track. This feature may be implemented in future.

Using Tracks from Other Projects

The "link_track" command clones a read-only track from another track, which may belong to a different project.

Effects

Each track gets volume and pan effects by default. New effects added using "add_effect" are applied before the pan and volume controls. You can position effects anywhere you choose using "insert_effect" or "position_effect".

Fades

Nama allows you to place fades on any track. Fades are defined by mark position and duration. An additional volume operator, -eadb, is applied to each track to host the envelope controller that implements fades.

Sends and Inserts

The "send" command can route a track's post-fader output to a soundcard channel or JACK client in addition to the normal mixer input. Nama currently allows one aux send per track. The "add_insert" command configures a pre- or post-fader send-and-return to soundcard channels or JACK clients. Wet and dry signal paths are provided, with a default setting of 100% wet. Each track can have one pre-fader and one post-fader insert.

Bunches

A bunch is just a list of track names. Bunch names are used with the keyword "for" to apply one or more commands to several tracks at once. A bunch can be created with the "new_bunch" command.
Any bus name can also be treated as a bunch. Finally, several system-defined bunches are available:

- rec, mon, off - All tracks with the corresponding setting in the current bus

Buses

Sub Buses

Buses enable multiple tracks to be routed through a single mix track before feeding the Main mixer bus (or possibly, another bus.) The following commands create a bus and assign three tracks to it. The mix track takes the name of the bus and is stereo by default.

# create a bus named Strings with a same-named mix track
add_bus Strings

# create tracks for the bus
add_tracks violin cello bass

# move the tracks from the Main bus (default) to the Strings bus
for violin cello bass; move_to_bus Strings

# use the mix track to control bus output volume
Strings vol - 10

Submixes

Submixes are a type of bus used to provide instrument monitors, or to send the outputs from multiple user tracks to an external program such as jconverter.

ROUTING

General Notes

While Nama can address tracks by either name or track number, the chain setups use the track number exclusively. The Master track (mixer output control) is always chain 1; the Mixdown track is always chain 2. In single-engine mode, Nama uses Ecasound loop devices where necessary to connect two tracks, or to allow one track to have multiple inputs or outputs. Each loop device adds one buffer, which increases latency. In dual-engine mode, JACK ports are used for interconnections instead of loop devices.

Flow Diagrams

The following diagrams apply to Nama's single-engine mode. (The same topology is used in dual-engine mode.) Let's examine the signal flow from track 3, the first available user track. Assume track 3 is named ``sax''. We will divide the signal flow into track and mixer sections. Parentheses show the track number/name. The stereo outputs of each user track terminate at Master_in, a loop device at the mixer input.
Track, REC status

Sound device   --+---(3)----> Master_in
/JACK client     |
                 +---(R3)---> sax_1.wav

REC status indicates that the source of the signal is the soundcard or JACK client. The input signal will be written directly to a file except in the special preview and doodle modes, or if "rec_disable" is issued.

Track, MON status

sax_1.wav ------(3)----> Master_in

Mixer, with mixdown enabled

In the second part of the flow graph, the mixed signal is delivered to an output device through the Master chain, which can host effects. Usually the Master track provides final control before audio output or mixdown.

Master_in --(1)--> Master_out --+--------> Sound device
                                |
                                +-->(2)--> Mixdown_1.wav

Mastering Mode

In mastering mode (invoked by "master_on" and released by "master_off") the following network receives the Master track signal as input and provides an output to the soundcard or WAV file.

                     +- Low -+
                     |       |
Master_in --- Eq ----+- Mid -+--- Boost -> soundcard/wav_out
                     |       |
                     +- High +

The Eq track hosts an equalizer. The Low, Mid and High tracks each apply a bandpass filter, a compressor and a spatialiser. The Boost track applies gain and a limiter. These effects and their default parameters are defined in the configuration file .namarc.

Mixdown

The "mixdown" command configures Nama for mixdown. The Mixdown track is set to REC (equivalent to "Mixdown rec") and the audio monitoring output is turned off (equivalent to "Master off"). Mixdown proceeds after you start the transport.

As a convenience, Mixdown_nn.wav will be symlinked to <branch_name_nn.wav> in the project directory. (If git is disabled or not available, <project_name_nn.wav> is used instead.) Corresponding encoded files are created if the ``mixdown_encodings'' option is set. Acceptable values are a space-separated list. The default is ``mixdown_encodings: ogg mp3''.
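The space-separated ``mixdown_encodings'' value can be thought of as a list of encoder jobs to run on the finished mix. The sketch below shows one way such a list might be expanded into encoder command lines. It is an illustration only: the encoder choices (oggenc, lame) and the command templates are assumptions, not Nama's actual invocations.

```python
# Sketch: expand a space-separated ``mixdown_encodings'' value into
# encoder command lines for a finished mix. Hypothetical encoder
# templates -- Nama's real invocations may differ.
ENCODERS = {
    "ogg": ["oggenc", "-o", "{stem}.ogg", "{stem}.wav"],
    "mp3": ["lame", "{stem}.wav", "{stem}.mp3"],
}

def encode_commands(encodings: str, stem: str):
    """Return one command line (argv list) per requested encoding."""
    commands = []
    for fmt in encodings.split():
        template = ENCODERS.get(fmt)
        if template is None:
            continue  # quietly skip formats we have no encoder for
        commands.append([part.format(stem=stem) for part in template])
    return commands

print(encode_commands("ogg mp3", "Mixdown_1"))
```

With the default setting, this yields one oggenc and one lame command for Mixdown_1.wav.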
The Preview and Doodle Modes, and the Eager Setting

These non-recording modes, invoked by the "preview" and "doodle" commands, tweak the routing rules for special purposes. Preview mode disables recording of WAV files to disk. Doodle mode disables PLAY inputs while excluding any tracks with the same source as a currently routed track. The "arm" command releases both preview and doodle modes.

The eager setting causes the engine to start immediately following a reconfiguration.

These modes are unnecessary in Nama's dual-engine mode.

Saving Projects

The "save" command is the usual way to preserve your work. When you type "save", settings related to the state of the project are saved in the file State.json in the current project directory. State.json is tracked by git.

"save" updates several other data files as well: Aux.json, in the current project directory, contains data that is part of the project (such as command history, track comments, and current operating modes) but with no direct effect on the project audio. global_effect_chains.json, in the project root directory, contains system- and user-defined effect chains.

Save without Git

"save somename.json" will save project state to a file of that name. Similarly, "get somename.json" will load the corresponding file. The .json suffix may be omitted if ``use_git: 0'' is set in .namarc.

Save with Git

When git is installed, Nama uses it to store snapshots of every step in the history of your project. While you can continue using the same "save" and "get" with snapshots, the underlying version control gives them huge advantages over files: (1) they can sprout branches, (2) they retain their history and (3) they are never overwritten.

When you type "save initial-mix", the latest snapshot is tagged with the name ``initial-mix'', which you can recall later with the command "get initial-mix".
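The named-snapshot model above (``save NAME'' tags the latest state, ``get NAME'' recalls it) can be modeled in a few lines. This toy store is a conceptual analogy only, with hypothetical names; Nama's real snapshots are git commits and tags, not an in-memory list.

```python
# Toy model of named snapshots: save() appends a state (like a commit)
# and optionally tags it; get() recalls a tagged state.
# Conceptual analogy to Nama's git-backed snapshots, not its code.
class SnapshotStore:
    def __init__(self):
        self.history = []   # ordered states, like commits
        self.tags = {}      # name -> index into history

    def save(self, state, name=None):
        self.history.append(state)
        if name:
            self.tags[name] = len(self.history) - 1

    def get(self, name):
        return self.history[self.tags[name]]

store = SnapshotStore()
store.save({"vol": 100}, "initial-mix")
store.save({"vol": 80}, "quiet-mix")
print(store.get("initial-mix"))  # {'vol': 100}
```

Because tags point into an immutable history, earlier snapshots are never overwritten by later saves, which is the key advantage the text describes.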
You can include a comment with the snapshot:

"save initial-mix "sounds good enough to send to the front office""

Nama lets you create new branches, starting at any snapshot. To start a new branch called compressed-mix starting at a snapshot called initial-mix you would say:

"new_branch compressed-mix initial-mix"

If you want to go back to working on the master branch, use "branch master". You can also issue native git commands at the Nama prompt.

Git history example

All projects begin on the ``master'' branch. Because this is the default branch, it is not displayed in the prompt. Otherwise ``master'' is not special in any way. In the graphs below, the letters indicate named snapshots.

create test-project
... save a
... save b
... save c

---a---b---c (master)

get a
... save d
... save e
... save f

     d---e---f (a-branch)
    /
---a---b---c (master)

Now, you want to go back to try something different at ``c'':

get c
... save g

     d---e---f (a-branch)
    /
---a---b---c (master)
            \
             g (c-branch CURRENT HEAD)

You could also go back to master, and restart from there:

get master
... save h
... save i

     d---e---f (a-branch)
    /
---a---b---c---h---i (master CURRENT HEAD)
            \
             g (c-branch)

While the merging of branches may be possible, the function has not been tested.

Exiting

When you type "quit", Nama will automatically save your work to State.json. If you don't want this behavior, use Ctrl-C to exit Nama.

Jack ports list file

Use "source filename.ports" to ask Nama to connect multiple JACK ports listed in a file filename.ports to the input port(s) of that track. If the track is stereo, ports from the list are alternately connected to left and right channels.

Track edits

An edit consists of audio clips and data structures associated with a particular track and version. The edit replaces part of the original WAV file, allowing you to fix wrong notes, or substitute one phrase for another. Each track can host multiple edits.
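The alternating left/right assignment described under ``Jack ports list file'' can be sketched as a simple round-robin over the track's input channels. This is an illustration of the pairing rule only (one port name per line assumed for the file format), not Nama's implementation.

```python
# Sketch: pair JACK ports from a .ports file with a stereo track's
# input channels, alternating left/right as the text describes.
# Illustrative only -- not Nama's actual code.

def assign_ports(port_names, channel_count=2):
    """Map each listed port to a track input channel, round-robin."""
    assignments = []
    for i, port in enumerate(port_names):
        channel = i % channel_count + 1   # 1 = left, 2 = right for stereo
        assignments.append((port, channel))
    return assignments

# e.g. a Hydrogen drumkit exposing one JACK port per voice
ports = ["hydrogen:out_1", "hydrogen:out_2", "hydrogen:out_3", "hydrogen:out_4"]
for port, channel in assign_ports(ports):
    print(f"{port} -> track input channel {channel}")
```

With a mono track (channel_count=1) every listed port would land on the single input channel.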
Edits are non-destructive; they are achieved by using Ecasound's ability to crossfade and sequence.

Select the track to be edited and the correct version. Before creating the edit, you will need to create three marks:

- play start point
- rec start point
- rec end point

The edit will replace the audio between the rec start and rec end points. There are two ways to set these points.

set_edit_points command

Position the playback head a few seconds before the edit. Enter the set_edit_points command. This will start the engine. Hit the P key three times to designate the playback start, punch-in and punch-out positions.

Specify points individually

Position the playback head at the position you want playback for the edit to start. Enter the set_play_start_mark command. Use the same procedure to set the rec start and rec end positions using the set_rec_start_mark and set_rec_end_mark commands.

Provide marks as arguments to new_edit (not implemented)

Type new_edit play_start_mark rec_start_mark rec_end_mark.

Create the edit

Enter the new_edit command to create the necessary tracks and data structures. Use preview_edit to confirm the edit positions. The engine will run and you will hear the host track with the target region removed. Playback will be restricted to the edit region. You may use preview_out to hear the clip to be removed. Use list_marks to see the edit marks and modify_mark to nudge them into perfect position.

Once you are satisfied with the mark positions, you are ready to record your edit. Enter start_edit. Playback will begin at the first mark. The replacement clip will be recorded from the source specified in the original track. Each start_edit command will record an additional version on the edit track. redo_edit will delete (destructively) the most recent audio clip and begin recording anew.

You may specify another range for editing and use the editing procedure again as many times as you like. Edits may not overlap.
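The rec start and rec end marks above partition the host track's timeline into host/replacement/host segments. The sketch below illustrates that partition conceptually; Nama actually realizes it with Ecasound crossfades and sequencing rather than hard splices, and these function names are hypothetical.

```python
# Sketch: given the edit marks, compute the playback segments an edit
# produces -- host audio, then the replacement clip, then host audio.
# Conceptual only; Nama crossfades at the boundaries instead of
# splicing hard edges.

def edit_segments(track_end, rec_start, rec_end):
    """Return (source, from_sec, to_sec) triples covering the track."""
    assert 0 <= rec_start < rec_end <= track_end, "edit must lie inside the track"
    return [
        ("host", 0.0, rec_start),      # original audio before the edit
        ("edit", rec_start, rec_end),  # replacement clip
        ("host", rec_end, track_end),  # original audio after the edit
    ]

print(edit_segments(180.0, 60.0, 72.5))
```

Because only the middle segment comes from the edit track, the original WAV file is never modified, which is what makes edits non-destructive.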
Merging edits

merge_edits will recursively merge all edits applied to the current track and version, creating a new version. I recommend that you merge edits when you are satisfied with the results, to protect your edits against an accidental change in mark, region or version settings.

restore_edits acts on a merged version of the current track, selecting the prior unmerged version with all edits and region definitions in ``live'' form. You may continue to create new edits.

TO BE IMPLEMENTED

list_edits will label the edits by index and time.
end_edit_mode will restore normal playback mode.
destroy_edit

Behind the scenes, the host track becomes the mix track to a bus. Sources for the bus are the original audio track, and zero or more edits, each represented by one track object.

REMOTE CONTROL

You can now send commands from a remote process, and also get information back. Understand that this code opens a remote execution hole. In .namarc you need something like:

remote_control_port: 57000

Then Nama will set up a listener for remote commands. The usual return value will be a single newline. However, if you send an 'eval' command followed by perl code, the return value will be the result of the perl code executed, with a newline appended. If the result is a list, the items will be joined by spaces into a single string. If the result is an object or data structure, it will be returned in a serialized form. For example, if you send this string:

eval $this_track->name

The return value will be the name of the current track.

TEXT COMMANDS

Help commands

help (h) - Display help on Nama commands.
- help [ <integer:help_topic_index> | <string:help_topic_name> | <string:command_name> ]

help marks  # display the help category marks and all commands containing marks
help 6      # display help on the effects category
help mfx    # display help on modify_effect - shortcut mfx

help_effect (hfx he) - Display detailed help on LADSPA or LV2 effects.
- help_effect <string:label> | <integer:unique_id>

help_effect 1970  # display help on Fons Adriaensen's parametric EQ (LADSPA)
help_effect etd   # prints a short message to consult the Ecasound manpage,
                  # where the etd chain operator is documented.
hfx lv2-vocProc   # display detailed help on the LV2 VocProc effect

find_effect (ffx fe) - Display one-line help for effects matching the search string(s).
- find_effect <string:keyword1> [ <string:keyword2>... ]

find_effect compressor  # List all effects containing ``compressor'' in their name or parameters
fe feedback             # List all effects matching ``feedback''
                        # (for example a delay with a feedback parameter)

General commands

exit (quit q) - Exit Nama, saving settings (the current project).
- exit

memoize - Enable WAV directory caching, so Nama won't have to scan the entire project folder for new files after every run. (default)
- memoize

unmemoize - Disable WAV directory caching.
- unmemoize

Transport commands

stop (s) - Stop the transport. Stops the engine when recording or playing back.
- stop

start (t) - Start the transport rolling.
- start

rec    # prepare the current track to be recorded.
start  # Start the engine/transport rolling (play now!)
stop   # Stop the engine, cleanup, prepare to review

getpos (gp) - Get the current playhead position (in seconds).
- getpos

start  # Start the engine.
gp     # Get the current position of the playhead. Where am I?

setpos (sp) - Set the current playhead position (in seconds).
- setpos <float:position_seconds>

setpos 65.5

forward (fw) - Move the playback position forwards (in seconds).
- forward <float:increment_seconds>

fw 23.7

rewind (rw) - Move the playback position backwards (in seconds).
- rewind <float:decrement_seconds>

rewind 6.5

to_start (beg) - Set the playback head to the start. A synonym for setpos 0.
- to_start

to_end (end) - Set the playback head to the end minus 10 seconds.
- to_end

ecasound_start - Ecasound-only start. Nama will not monitor the transport. For diagnostic use.
- ecasound_start

ecasound_stop - Ecasound-only stop. Nama will not monitor the transport. For diagnostic use.
- ecasound_stop

restart_ecasound - Restart the Ecasound process. May help if Ecasound has crashed or is behaving oddly.
- restart_ecasound

preview (song) - Enter the preview mode. Configure Nama for playback and passthru of live inputs without recording (for mic test, rehearsal, etc.)
- preview

rec      # Set the current track to record from its source.
preview  # Enter the preview mode.
start    # Playback begins. You can play live, adjust effects,
         # forward, rewind, etc.
stop     # Stop the engine/transport.
arm      # Restore to normal recording/playback mode.

doodle (live) - Enter doodle mode. Passthru of live inputs without recording. No playback. Intended for rehearsing and adjusting effects.
- doodle

doodle  # Switch into doodle mode.
start   # Start the engine/transport running. (fool around)
stop    # Stop the engine.
arm     # Return to normal mode, allowing play and record to disk

Mix commands

mixdown (mxd) - Enter mixdown mode for subsequent engine runs. You will record a new mix each time you use the start command until you leave the mixdown mode using ``mixoff''.
- mixdown

mixdown  # Enter mixdown mode
start    # Start the transport. The mix will be recorded by the
         # Mixdown track. The engine will run until the
         # longest track ends. (After mixdown Nama places
         # a symlink to the WAV file and possibly ogg/mp3
         # encoded versions in the project directory.)
mixoff   # Return to the normal mode.

mixplay (mxp) - Enter Mixdown play mode, setting user tracks to OFF and only playing the Mixdown track. Use ``mixoff'' to leave this mode.
- mixplay

mixplay  # Enter the Mixdown play mode.
start    # Play the Mixdown track.
stop     # Stop playback.
mixoff   # Return to normal mode.

mixoff (mxo) - Leave the mixdown or mixplay mode. Sets the Mixdown track to OFF, user tracks to MON.
- mixoff

automix - Normalize track volume levels and fix DC-offsets, then mixdown.
- automix

master_on (mr) - Turn on the mastering mode, adding tracks Eq, Low, Mid, High and Boost, if necessary. The mastering setup allows for one EQ, a three-band multiband compression and a final boosting stage. Use ``master_off'' to leave the mastering mode.
- master_on

mr     # Turn on master mode.
start  # Start the playback.
       # Now you can adjust the Boost or global EQ.
stop   # Stop the engine.

master_off (mro) - Leave mastering mode. The mastering network is disabled.
- master_off

Track commands

add_track (add new) - Create a new audio track.
- add_track <string:name>

add_track clarinet  # create a mono track called clarinet with input
                    # from soundcard channel 1.

add_tracks - Create one or more new tracks in one go.
- add_tracks <string:name1> [ <string:name2>... ]

add_tracks violin viola contra_bass

add_midi_track (amt) - Create a new MIDI track.
- add_midi_track <string:name>

link_track (link) - Create a read-only track that uses audio files from another track.
- link_track [<string:project_name>] <string:track_name> <string:link_name>

link my_song_part1 Mixdown part_1
    # Create a read-only track ``part_1'' in the current project
    # using files from track ``Mixdown'' in project ``my_song_part1''.

# link_track compressed_piano piano
    # Create a read-only track ``compressed_piano'' using files from
    # track ``piano''. This is one way to provide wet and dry
    # (processed and unprocessed) versions of the same source.
    # Another way would be to use inserts.

import_audio (import) - Import a sound file (wav, ogg, mp3, etc.) to the current track, resampling it if necessary. The imported file is set as the current version.
- import_audio <string:full_path_to_file> [ <integer:frequency> ]

import /home/samples/bells.flac
    # import the file bells.flac to the current track
import /home/music/song.mp3 44100
    # import song.mp3, specifying the frequency

set_track - Directly set current track parameters (use with care!).
- set_track <string:track_field> <value>

record (rec) - Set the current track to record its source. Creates the monitoring route if necessary. Recording to disk will begin on the next engine start. Use the ``mon'' or ``off'' commands to disable recording.
- record

rec    # Set the current track to record.
start  # A new version (take) will be written to disk,
       # creating a file such as sax_1.wav. Other tracks
       # may be recording or playing back as well.
stop   # Stop the recording/playback, automatically enter playback mode

play - Set the current track to play back the currently selected version. Creates the monitoring route if necessary. The selected audio file will play the next time the engine starts.
- play

mon - Create a monitoring route for the current track at the next opportunity.
- mon

off - Remove the monitoring route for the current track and all track I/O at the next opportunity. You can re-include it using the ``mon'', ``play'' or ``rec'' commands.
- off

source (src r) - Set the current track's input (source), for example to a soundcard channel or JACK client name.
- source <integer:soundcard_channel> | <string:jack_client_name> | <string:jack_port_name> <string:jack_ports_list> | <string:track_name> | <string:loop_id> | 'jack' | 'null'

source 3
    # Take input from soundcard channel 3 (3/4 if track is stereo)

source null
    # Track's input is silence. This is useful when an effect such
    # as a metronome or signal generator provides a source.

source LinuxSampler
    # Record input from the JACK client named LinuxSampler.

source synth:output_3
    # record from the JACK client synth, using the
    # port output_3 (see the jackd and jack_lsp manpages
    # for more information).

source jack
    # This leaves the track input exposed as JACK ports
    # such as Nama:sax_in_1 for manual connection.

source kit.ports
    # The JACK ports listed in the file kit.ports (if it exists)
    # will be connected to the track input.
    # Ports are listed pairwise in the .ports files for stereo tracks.
    # This is convenient for use with the Hydrogen drumkit,
    # whose outputs use one JACK port per voice.

send (aux) - Set an aux send for the current track. Remove sends with remove_send.
- send <integer:soundcard_channel> | <string:jack_client_name> | <string:loop_id>

send 3           # Send the track output to soundcard channel 3.
send jconvolver  # Send the track output to the jconvolver JACK client.

remove_send (nosend noaux) - Remove the aux send from the current track.
- remove_send

stereo - Configure the current track to record two channels of audio.
- stereo

mono - Configure the current track to record one channel of audio.
- mono

set_version (version ver) - Select a WAV file, by version number, for current track playback. (Overrides a bus-level version setting.)
- set_version <integer:version_number>

piano      # Select the piano track.
version 2  # Select the second recorded version
sh         # Display information about the current track

destroy_current_wav - Remove the currently selected recording version from the current track after confirming user intent. This DESTRUCTIVE command removes the underlying audio file from your disk. Use with caution.
- destroy_current_wav

list_versions (lver) - List WAV versions of the current track. This will print the version numbers.
- list_versions

list_versions  # May print something like: 1 2 5 7 9
               # The other versions might have been deleted earlier by you.

vol (v) - Change or show the current track's volume.
- vol [ [ + | - | / | * ] <float:volume> ]

vol * 1.5  # Multiply the current volume by 1.5
vol 75     # Set the current volume to 75
           # Depending on your namarc configuration, this means
           # either 75% of full volume (-ea) or 75 dB (-eadb).
vol - 5.7  # Decrease current volume by 5.7 (percent or dB)
vol        # Display the volume setting of the current track.

mute (c cut) - Mute the current track by reducing the volume parameter. Use ``unmute'' to restore the former volume level.
- mute

unmute (nomute C uncut) - Restore the previous volume level.
It can be used after mute or solo.
- unmute

unity - Set the current track's volume to unity. This will change the volume to the default value (100% or 0 dB).
- unity

vol 55  # Set volume to 55
unity   # Set volume to the unity value.
vol     # Display the current volume (should be 100 or 0,
        # depending on your settings in namarc.)

solo (sl) - Mute all tracks but the current track or the tracks or bunches specified. You can reverse this with nosolo.
- solo [ <string:track_name_1> | <string:bunch_name_1> ] [ <string:track_name_2> | <string:bunch_name_2> ] ...

solo                   # Mute all tracks but the current track.
nosolo                 # Unmute all tracks, restoring prior state.
solo piano bass Drums  # Mute everything but piano, bass and Drums.

nosolo (nsl) - Unmute all tracks which have been muted by a solo command. Tracks that had been muted before the solo command stay muted.
- nosolo

all - Unmute all tracks that are currently muted.
- all

piano   # Select track piano
mute    # Mute the piano track.
sax     # Select the track sax.
solo    # Mute other tracks
nosolo  # Unmute other tracks (piano is still muted)
all     # all tracks play

pan (p) - Change or display the current panning position of the current track. Panning moves the audio in the stereo panorama between right and left. Position is given in percent: 0 is hard left, 100 is hard right, and 50 is dead centre.
- pan [ <float:pan_position_in_percent> ]

pan 75  # Pan the track to a position between centre and hard right
p 50    # Move the current track to the centre.
pan     # Show the current position of the track in the stereo panorama.

pan_right (pr) - Pan the current track hard right. This is a synonym for pan 100. Can be reversed with pan_back.
- pan_right

pan_left (pl) - Pan the current track hard left. This is a synonym for pan 0. Can be reversed with pan_back.
- pan_left

pan_center (pc) - Pan the current track to the centre. This is a synonym for pan 50. Can be reversed with pan_back.
- pan_center

pan_back (pb) - Restore the current track's pan position prior to pan_left, pan_right or pan_center commands.
- pan_back

show_tracks (lt show) - Show a list of tracks, including their index number, volume, pan position, recording status and source.
- show_tracks

show_tracks_all (sha showa) - Like show_tracks, but includes hidden tracks as well. Useful for debugging.
- show_tracks_all

show_bus_tracks (ltb showb) - Show a list of tracks in the current bus.
- show_bus_tracks

show_track (sh -fart) - Display full information about the current track: index, recording status, effects and controllers, inserts, the selected WAV version, and signal width (channel count).
- show_track

Setup commands

show_mode (shm) - Display the current record/playback mode. This will indicate the mode (doodle, preview, etc.) and possible record/playback settings.
- show_mode

Track commands

show_track_latency (shl) - Display the latency information for the current track.
- show_track_latency

Diagnostics commands

show_latency_all (shla) - Dump all latency data.
- show_latency_all

Track commands

set_region (srg) - Specify a playback region for the current track using marks. Can be reversed with remove_region.
- set_region <string:start_mark_name> <string:end_mark_name>

sax             # Select ``sax'' as the current track.
setpos 2.5      # Move the playhead to 2.5 seconds.
mark sax_start  # Create a mark
sp 120.5        # Move playhead to 120.5 seconds.
mark sax_end    # Create another mark
set_region sax_start sax_end
                # Play only the audio from 2.5 to 120.5 seconds.

add_region - Make a copy of the current track using the supplied region definition. The original track is untouched.
- add_region <string:start_mark_name> | <float:start_time> <string:end_mark_name> | <float:end_time> [ <string:region_name> ]

sax  # Select ``sax'' as the current track.
add_region sax_start 66.7 trimmed_sax
     # Create ``trimmed_sax'', a copy of ``sax'' with a region defined
     # from mark ``sax_start'' to 66.7 seconds.
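As the add_region syntax shows, a region endpoint may be given either as a mark name or as a time in seconds. A toy resolver illustrating that dual argument form (hypothetical helper, not Nama's internals, which are written in Perl):

```python
# Sketch: resolve a region endpoint that may be a mark name or a
# time in seconds, as accepted by set_region/add_region.
# Toy illustration only.

def resolve_endpoint(arg, marks):
    """Return a position in seconds from a mark name or numeric string."""
    if arg in marks:       # a named mark wins
        return marks[arg]
    return float(arg)      # otherwise interpret the argument as seconds

marks = {"sax_start": 2.5, "sax_end": 120.5}
start = resolve_endpoint("sax_start", marks)  # from a mark
end = resolve_endpoint("66.7", marks)         # from a literal time
print(start, end)  # region from 2.5 s to 66.7 s
```

This mirrors the add_region example above, where one endpoint is the mark sax_start and the other is the literal time 66.7.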
remove_region (rrg) - Remove the region definition from the current track. Removes the current track if it is an auxiliary track.
- remove_region

shift_track (shift playat pat) - Choose an initial delay before playing a track or region. Can be reversed by unshift_track.
- shift_track <string:start_mark_name> | <integer:start_mark_index> | <float:start_seconds>

piano      # Select ``piano'' as current track.
shift 6.7  # Move the start of the track to 6.7 seconds.

unshift_track (unshift) - Restore the playback start time of a track or region to 0.
- unshift_track

modifiers (mods mod) - Add/show modifiers for the current track (man ecasound for details). This provides direct control over Ecasound track modifiers. It is not needed for normal work.
- modifiers [ Audio file sequencing parameters ]

modifiers select 5 15.2
    # Apply Ecasound's select modifier to the current track.
    # The usual way to accomplish this is with a region definition.

nomodifiers (nomods nomod) - Remove modifiers from the current track.
- nomodifiers

normalize (ecanormalize) - Apply ecanormalize to the current track version. This will raise the gain/volume of the current track as far as possible without clipping and store it that way on disk. Note: this will permanently change the file.
- normalize

fixdc (ecafixdc) - Fix the DC-offset of the current track using ecafixdc. Note: this will permanently change the file.
- fixdc

autofix_tracks (autofix) - Apply ecafixdc and ecanormalize to all current versions of all tracks set to playback (MON).
- autofix_tracks

remove_track - Remove the current track with its effects, inserts, etc. Audio files are unchanged.
- remove_track

Bus commands

bus_mon (bmon) - Set the current bus mix_track to monitor (the default behaviour).
- bus_mon

bus_off (boff) - Set the current bus mix_track to OFF. Can be reversed with bus_rec or bus_mon.
- bus_off

Group commands

bus_version (bver gver) - Set the default monitoring version for tracks in the current bus.
- bus_version

add_bunch (abn) - Create a new bunch of tracks.
- add_bunch <string:bunch_name> [ <string:track_name_1> | <integer:track_index_1> ] ...

add_bunch strings violin cello bass
    # Create a bunch ``strings'' with tracks violin, cello and bass.
for strings; mute
    # Mute all tracks in the strings bunch.
for strings; vol * 0.8
    # Lower the volume of all tracks in bunch ``strings'' by
    # a factor of 0.8.

list_bunches (lbn) - Display a list of all bunches and their tracks.
- list_bunches

remove_bunch (rbn) - Remove the specified bunches. This does not remove the tracks, only the grouping.
- remove_bunch <string:bunch_name> [ <string:bunch_name> ] ...

add_to_bunch (atbn) - Add track(s) to an existing bunch.
- add_to_bunch <string:bunch_name> <string:track1> [ <string:track2> ] ...

add_to_bunch woodwind oboe sax flute

Project commands

commit (ci) - Commit Nama's current state.
- commit <string:message>

tag - Git tag the current branch HEAD commit.
- tag <string:tag_name> [<string:message>]

branch (br) - Change to the named branch.
- branch <string:branch_name>

list_branches (lb lbr) - List branches.
- list_branches

new_branch (nbr) - Create a new branch.
- new_branch <string:new_branch_name> [<string:existing_branch_name>]

save_state (keep save) - Save the project settings as a file or git snapshot.
- save_state [ <string:settings_target> [ <string:message> ] ]

get_state (get recall retrieve) - Retrieve project settings from a file or snapshot.
- get_state <string:settings_target>

list_projects (lp) - List all projects. This will list all Nama projects, which are stored in the Nama project root directory.
- list_projects

new_project (create) - Create or open a new empty Nama project.
- new_project <string:new_project_name>

create jam

load_project (load) - Load an existing project. This will load the project from the default project state file. If you wish to load a project state saved to a user-specific file, load the project and then use get_state.
- load_project <string:existing_project_name>

load my_old_song

project_name (project name) - Display the name of the current project.
- project_name

new_project_template (npt) - Make a project template based on the current project. This will include tracks and busses.
- new_project_template <string:template_name> [ <string:template_description> ]

new_project_template my_band_setup ``tracks and busses for bass, drums and me''

use_project_template (upt apt) - Use a template to create tracks in a newly created, empty project.
- use_project_template <string:template_name>

apt my_band_setup  # Will add all the tracks for your basic band setup.

list_project_templates (lpt) - List all project templates.
- list_project_templates

destroy_project_template - Remove one or more project templates.
- destroy_project_template <string:template_name1> [ <string:template_name2> ] ...

Setup commands

generate (gen) - Generate an Ecasound chain setup for audio processing manually. Mainly useful for diagnostics and debugging.
- generate

arm - Generate and connect a setup to record or play back. If you are in doodle or preview mode, this will bring you back to normal mode.
- arm

arm_start (arms) - Generate and connect the setup and then start. This means that you can directly record or listen to your tracks.
- arm_start

connect (con) - Connect the setup, so everything is ready to run. If using JACK, this means that Nama will connect to all the necessary JACK ports.
- connect

disconnect (dcon) - Disconnect the setup. If running with JACK, this will disconnect from all JACK ports.
- disconnect

show_chain_setup (chains) - Show the underlying Ecasound chain setup for the current working condition. Mainly useful for diagnostics and debugging.
- show_chain_setup

loop (l) - Loop the playback between two points.
Can be stopped with loop_disable.
- loop <string:start_mark_name> | <integer:start_mark_index> | <float:start_time_in_secs> <string:end_mark_name> | <integer:end_mark_index> | <float:end_time_in_secs>

loop 1.5 10.0        # Loop between 1.5 and 10.0 seconds.
loop 1 5             # Loop between marks with indices 1 and 5, see list_marks.
loop sax_start 12.6  # Loop between mark sax_start and 12.6 seconds.

noloop (nl) - Disable looping.
- noloop

Effect commands

add_controller (acl) - Add a controller to an effect (current effect, by default). Controllers can be modified by using mfx and removed using rfx.
- add_controller [ <string:operator_id> ] <string:effect_code> [ <float:param1> <float:param2> ] ...

add_effect etd 100 2 2 50 50
    # Add a stereo delay of 100ms.
    # The delay will get the effect ID E.
    # Now we want to slowly change the delay to 200ms.
acl E klg 1 100 200 2 0 100 15 200
    # Change the delay time linearly (klg)

add_effect (afx) - Add an effect.
- add_effect [ ( before <fx_alias> | first | last ) ] <fx_alias> [ <float:param1> <float:param2>... ]

``before'', ``first'' and ``last'' can be abbreviated ``b'', ``f'' and ``l'', respectively.

We want to add the decimator effect (a LADSPA plugin).

help_effect decimator
    # Print help about its parameters/controls.
    # We see two input controls: bitrate and samplerate
afx decimator 12 22050
    # prints ``Added GO (Decimator)''
    # We have added the decimator with 12 bits and a sample rate of 22050Hz.
    # GO is the effect ID, which you will need to modify it.

add_effect_last (afxl) - Same as add_effect last.
- add_effect_last <fx_alias> [ <float:param1> <float:param2>... ]

add_effect_first (afxf) - Same as add_effect first.
- add_effect_first <fx_alias> [ <float:param1> <float:param2>... ]

add_effect_before (afxb) - Same as add_effect before.
- add_effect_before <fx_alias> <fx_alias> [ <float:param1> <float:param2>... ]

modify_effect (mfx) - Modify effect parameter(s).
- modify_effect [ <fx_alias> ] <integer:parameter> [ + | - | * | / ] <float:value>

fx_alias can be: a position, effect ID, nickname or effect code.

To change the roomsize of our reverb effect to 62:

lfx          # We see that reverb has unique effect ID AF and roomsize is the
             # first parameter.
mfx AF 1 62

mfx AF,BG 1 75    # Change the first parameter of both AF and BG to 75.
mfx CE 6,10 -3    # Change parameters 6 and 10 of effect CE to -3
mfx D 4 + 10      # Increase the fourth parameter of effect D by 10.
mfx A,B,C 3,6 * 5
                  # Adjust parameters 3 and 6 of effects A, B and C by a factor of 5.

remove_effect (rfx) - Remove effects. They don't have to be on the current track.
- remove_effect <fx_alias1> [ <fx_alias2> ] ...

position_effect (pfx) - Position an effect before another effect (use 'ZZZ' for end).
- position_effect <string:id_to_move> <string:position_id>

position_effect G F  # Move effect with unique ID G before F.

show_effect (sfx) - Show information about an effect. The default is to print information on the current effect.
- show_effect [ <string:effect_id1> ] [ <string:effect_id2> ] ...

sfx    # Print name, unique ID and parameters/controls of the current effect.
sfx H  # Print out all information about effect with unique ID H.

dump_effect (dfx) - Dump all data of the current effect object.
- dump_effect

list_effects (lfx) - Print a short list of all effects on the current track, only including unique ID and effect name.
- list_effects

General commands

hotkeys (hk) - Use this command to set the hotkey mode. (You may also use the hash symbol '#'.) Hit the Escape key to return to command mode.
- hotkeys

hotkeys_always (hka) - Activate hotkeys mode after each command.
- hotkeys_always

hotkeys_off (hko) - Disable hotkeys always mode.
- hotkeys_off

hotkeys_list (hkl lhk) - List hotkey bindings.
- hotkeys_list

Effect commands

add_insert (ain) - Add an external send/return insert to the current track.
- add_insert
External: ( pre | post ) <string:send_id> [ <string:return_id> ]
Local wet/dry: local
add_insert pre jconvolver # Add a prefader insert. The current track signal is sent to jconvolver and returned to the vol/pan controls.
add_insert post jconvolver csound # Send the current track postfader signal (after vol/pan controls) to jconvolver, getting the return from csound.
guitar # Select the guitar track
ain local # Create a local insert
guitar-1-wet # Select the wet arm
afx G2reverb 50 5.0 0.6 0.5 0 -16 -20 # add a reverb
afx etc 6 100 45 2.5 # add a chorus effect on the reverbed signal
guitar # Change back to the main guitar track
wet 25 # Set the balance between wet/dry track to 25% wet, 75% dry.

set_insert_wetness (wet) - Set wet/dry balance of the insert for the current track. The balance is given in percent, 0 meaning dry and 100 wet signal only.
- set_insert_wetness [ pre | post ] <n_wetness>
wet pre 50 # Set the prefader insert to be balanced 50/50 wet/dry.
wet 100 # Simpler if there's only one insert

remove_insert (rin) - Remove an insert from the current track.
- remove_insert [ pre | post ]
rin # If there is only one insert on the current track, remove it.
remove_insert post # Remove the postfader insert from the current track.

ctrl_register (crg) - List all Ecasound controllers. Controllers include linear controllers and oscillators.
- ctrl_register

preset_register (prg) - List all Ecasound effect presets. See the Ecasound manpage for more detail on effect_presets.
- preset_register

ladspa_register (lrg) - List all LADSPA plugins that Ecasound/Nama can find.
- ladspa_register

Mark commands

list_marks (lmk lm) - List all marks with index, name and their respective positions in time.
- list_marks

to_mark (tmk tom) - Move the playhead to the named mark or mark index.
- to_mark <string:mark_name> | <integer:mark_index>
to_mark sax_start # Jump to the position marked by sax_start.
tmk 2 # Move to the mark with the index 2.
add_mark (mark amk k) - Drop a new mark at the current playback position. This will fail if a mark is already placed at that exact position.
- add_mark [ <string:mark_id> ]
mark start # Create a mark named ``start'' at the current position.

remove_mark (rmk) - Remove a mark
- remove_mark [ <string:mark_name> | <integer:mark_index> ]
remove_mark start # remove the mark named start
rmk 16 # Remove the mark with the index 16.
rmk # Remove the current mark

next_mark (nmk) - Move the playhead to the next mark.
- next_mark

previous_mark (pmk) - Move the playhead to the previous mark.
- previous_mark

name_mark - Give a name to the current mark.
- name_mark <string:mark_name>

modify_mark (move_mark mmk) - Change the position (time) of the current mark.
- modify_mark [ + | - ] <float:seconds>
move_mark + 2.3 # Move the current mark 2.3 seconds forward from its current position.
mmk 16.8 # Move the current mark to 16.8 seconds, no matter where it is now.

Diagnostics commands

engine_status (egs) - Display the Ecasound audio processing engine status.
- engine_status

dump_track (dump) - Dump current track data.
- dump_track

dump_group (dumpg) - Dump group settings for user tracks.
- dump_group

dump_all (dumpa) - Dump most internal state data.
- dump_all

dump_io - Show chain inputs and outputs.
- dump_io

Help commands

list_history (lh) - List the command history. Every project stores its own command history.
- list_history

Bus commands

add_submix_cooked - Create a submix using all tracks in bus ``Main''
- add_submix_cooked <string:name> <destination>
add_submix_cooked front_of_house 7 # send a custom mix named ``front_of_house''
# to soundcard channels 7/8

add_submix_raw (asr) - Add a submix using tracks in Main bus (unprocessed signals, lower latency)
- add_submix_raw <string:name> <destination>
asr Reverb jconv # Add a raw send bus called Reverb, with its output going to JACK client jconv.

add_bus (abs) - Add a sub bus. This is a bus, as known from other DAWs.
The default output goes to a mix track and that is routed to the mixer (the Master track). All busses begin with a capital letter!
- add_bus <string:name> [ <string:track_name> | <string:jack_client> | <integer:soundcard_channel> ]
abs Brass # Add a bus, ``Brass'', routed to the Main bus (e.g. mixer)
abs special csound # Add a bus, ``special'', routed to JACK client ``csound''

update_submix (usm) - Include tracks added since the submix was created.
- update_submix <string:name>
update_submix Reverb

remove_bus - Remove a bus or submix
- remove_bus <string:bus_name>

list_buses (lbs) - List buses and their parameters (TODO).
- list_buses

set_bus (sbs) - Set bus parameters. This command is intended for advanced users.
- set_bus <string:busname> <key> <value>

Effect commands

overwrite_effect_chain (oec) - Create a new effect chain, overwriting an existing one of the same name.
- overwrite_effect_chain
Same as for new_effect_chain

new_effect_chain (nec) - Create an effect chain, a named list of effects with all parameter settings. Useful for storing effect setups for particular instruments.
- new_effect_chain <string:name> [ <effect_id_1> <effect_id_2>... ]
new_effect_chain my_piano # Create a new effect chain, ``my_piano'', storing all
# effects and their settings from the current track
# except the fader (vol/pan) settings.
nec my_guitar A C F G H # Create a new effect chain, ``my_guitar'',
# storing the effects with IDs A, C, F, G, H and
# their respective settings.

delete_effect_chain (dec destroy_effect_chain) - Delete an effect chain definition. Does not affect the project state. This command is not reversible by undo.
- delete_effect_chain <string:effect_chain_name>

find_effect_chains (fec) - Dump effect chains, matching key/value pairs if provided
- find_effect_chains [ <string:key_1> <string:value_1> ] ...
fec # List all effect chains with their effects.

find_user_effect_chains (fuec) - List all *user* created effect chains, matching key/value pairs, if provided.
- find_user_effect_chains [ <string:key_1> <string:value_1> ] ...

bypass_effects (bypass bfx) - Bypass effects on the current track. With no parameters, defaults to bypassing the current effect.
- bypass_effects [ <string:effect_id_1> <string:effect_id_2>... | 'all' ]
bypass all # Bypass all effects on the current track, except vol and pan.
bypass AF # Only bypass the effect with the unique ID AF.

bring_back_effects (restore_effects bbfx) - Restore effects. If no parameter is given, the default is to restore the current effect.
- bring_back_effects [ <string:effect_id_1> <string:effect_id_2> ... | 'all' ]
bbfx # Restore the current effect.
restore_effect AF # Restore the effect with the unique ID AF.
bring_back_effects all # Restore all effects.

new_effect_profile (nep) - Create a new effect profile. An effect profile is a named group of effect chains for multiple tracks. Useful for storing a basic template of standard effects for a group of instruments, like a drum kit.
- new_effect_profile <string:bunch_name> [ <string:effect_profile_name> ]
add_bunch Drums snare toms kick # Create a bunch called Drums.
nep Drums my_drum_effects # Create an effect profile called my_drum_effects.

apply_effect_profile (aep) - Apply an effect profile. This will add all the effects in it to the list of tracks stored in the effect profile. Note: You must give the tracks the same names as in the original project where you created the effect profile.
- apply_effect_profile <string:effect_profile_name>

destroy_effect_profile - Delete an effect profile. This will delete the effect profile definition from your disk. All projects which use this effect profile will NOT be affected.
- destroy_effect_profile <string:effect_profile_name>

list_effect_profiles (lep) - List all effect profiles.
- list_effect_profiles

show_effect_profiles (sepr) - List effect profile.
- show_effect_profiles

full_effect_profiles (fep) - Dump effect profile data structure.
- full_effect_profiles

Track commands

cache_track (cache ct bounce freeze) - Cache the current track. Same as freezing or bouncing. This is useful for larger projects or low-power CPUs, since effects do not have to be recomputed for subsequent engine runs. Cache_track stores the effects-processed output of the current track as a new version (WAV file) which becomes the current version. The current effects, inserts and region definition are removed and stored. To go back to the original track state, use the uncache_track command. The show_track display appends a ``c'' to version numbers created by cache_track (and therefore reversible by uncache).
- cache_track [ <float:additional_processing_time> ]
cache 10 # Cache the current track and append 10 seconds of extra time.

Effect commands

uncache_track (uncache unc) - Select the uncached track version. This restores effects, but not inserts.
- uncache_track

General commands

do_script (do) - Execute Nama commands from a file in the main project's directory or in the Nama project root directory. A script is a list of Nama commands, just as you would type them at the Nama prompt.
- do_script <string:filename>
do prepare_my_drums # Execute the script prepare_my_drums.

scan - Re-read the project's .wav directory. Mainly useful for troubleshooting.
- scan

Effect commands

add_fade (afd fade) - Add a fade-in or fade-out to the current track.
- add_fade ( in | out ) marks/times (see examples)
fade in mark1 # Fade in, starting at mark1 and using the
# default fade time of 0.5 seconds.
fade out mark2 2 # Fade out over 2 seconds, starting at mark2.
fade out 2 mark2 # Fade out over 2 seconds, ending at mark2.
fade in mark1 mark2 # Fade in starting at mark1, ending at mark2.

remove_fade (rfd) - Remove a fade from the current track.
- remove_fade <integer:fade_index_1> [ <integer:fade_index_2> ] ...
list_fade # Print a list of all fades and their tracks.
rfd 2 # Remove the fade with the index (n) 2.

list_fade (lfd) - List all fades.
- list_fade

Track commands

add_comment (comment ac) - Add a comment to the current track (replacing any previous comment). A comment may be a short description, notes on instrument settings, etc.
- add_comment <string:comment>
ac ``Guitar, treble on 50%''

remove_comment (rc) - Remove a comment from the current track.
- remove_comment

show_comment (sc) - Show the comment for the current track.
- show_comment

show_comments (sca) - Show all track comments.
- show_comments

add_version_comment (avc) - Add a version comment (replacing any previous user comment). This will add a comment for the current version of the current track.
- add_version_comment <string:comment>
avc ``The good take with the clear 6/8''

remove_version_comment (rvc) - Remove version comment(s) from the current track.
- remove_version_comment

show_version_comment (svc) - Show version comment(s) of the current track.
- show_version_comment

show_version_comments_all (svca) - Show all version comments for the current track.
- show_version_comments_all

set_system_version_comment (ssvc) - Set a system version comment. Useful for testing and diagnostics.
- set_system_version_comment <string:comment>

Midi commands

midish_command (m) - Send the command text to the midish MIDI sequencer. Midish must be installed and enabled in namarc. See the midish manpage and full online documentation for more.
- midish_command <string:command_text>
m tracknew my_midi_track

midish_mode_on (mmo) - All user commands are sent to midish until midish mode is exited.
- midish_mode_on

midish_mode_off (mmx) - Exit midish mode, restore default Nama command mode, no midish sync
- midish_mode_off

midish_mode_off_ready_to_play (mmxrp) - Exit midish mode, sync midish start (p) with Ecasound
- midish_mode_off_ready_to_play

midish_mode_off_ready_to_record (mmxrr) - Exit midish mode, sync midish start (r) with Ecasound
- midish_mode_off_ready_to_record

Edit commands

new_edit (ned) - Create an edit for the current track and version.
- new_edit

set_edit_points (sep) - Mark play-start, record-start and record-end positions for the current edit.
- set_edit_points

list_edits (led) - List all edits for current track and version.
- list_edits

select_edit (sed) - Select an edit to modify or delete. After selection it is the current edit.
- select_edit <integer:edit_index>

end_edit_mode (eem) - Switch back to normal playback/record mode. The track will play full length again. Edits are managed via a sub-bus.
- end_edit_mode

destroy_edit - Remove an edit and all associated audio files. If no parameter is given, the default is to destroy the current edit. Note: The data will be lost permanently. Use with care!
- destroy_edit [ <integer:edit_index> ]

preview_edit_in (pei) - Play the track region without the edit segment.
- preview_edit_in

preview_edit_out (peo) - Play the removed edit segment.
- preview_edit_out

play_edit (ped) - Play a completed edit.
- play_edit

record_edit (red) - Record an audio file for the current edit.
- record_edit

edit_track (et) - Set the edit track as the current track.
- edit_track

host_track_alias (hta) - Set the host track alias as the current track.
- host_track_alias

host_track (ht) - Set the host track (edit sub-bus mix track) as the current track.
- host_track

version_mix_track (vmt) - Set the version mix track as the current track.
- version_mix_track

play_start_mark (psm) - Select (and move to) the play start mark of the current edit.
- play_start_mark

rec_start_mark (rsm) - Select (and move to) the rec start mark of the current edit.
- rec_start_mark

rec_end_mark (rem) - Select (and move to) the rec end mark of the current edit.
- rec_end_mark

set_play_start_mark (spsm) - Set play_start_mark to the current playback position.
- set_play_start_mark

set_rec_start_mark (srsm) - Set rec_start_mark to the current playback position.
- set_rec_start_mark

set_rec_end_mark (srem) - Set rec_end_mark to the current playback position.
- set_rec_end_mark

disable_edits (ded) - Turn off the edits for the current track and play back the original. This will exclude the edit sub-bus.
- disable_edits

merge_edits (med) - Mix edits and original into a new host track. This will write a new audio file to disk, and the host track will get a new version for it.
- merge_edits

Track commands

explode_track - Make the current track into a sub bus, with one track for each version.
- explode_track

move_to_bus (mtb) - Move the current track to another bus. A new track is always in the Main bus, so to reverse this action use move_to_bus Main.
- move_to_bus <string:bus_name>
asub Drums # Create a new sub bus, called Drums.
snare # Make snare the current track.
mtb Drums # Move the snare track into the sub bus Drums.

promote_version_to_track (pvt) - Create a read-only track using the specified version of the current track.
- promote_version_to_track <integer:version_number>

General commands

read_user_customizations (ruc) - Re-read the user customizations file 'custom.pl'.
- read_user_customizations

Setup commands

limit_run_time (lr) - Stop recording after the last audio file finishes playing. Can be turned off with limit_run_time_off.
- limit_run_time [ <float:additional_seconds> ]

limit_run_time_off (lro) - Disable the recording stop timer.
- limit_run_time_off

offset_run (ofr) - Record/play from a mark, rather than from the start, i.e. 0.0 seconds.
- offset_run <string:mark_name>

offset_run_off (ofro) - Turn back to starting from 0.
- offset_run_off

General commands

view_waveform (wview) - Launch mhwaveedit to view/edit the waveform of the current track and version. This requires starting Nama in a graphical terminal, like xterm or gterm, or from GNOME via Alt+F2.
- view_waveform

edit_waveform (wedit) - Launch audacity to view/edit the waveform of the current track and version. This requires starting Nama in a graphical terminal like xterm or gterm, or from GNOME via Alt+F2.
- edit_waveform

Setup commands

rerecord (rerec) - Record as before. This sets to record all the tracks that were recorded just before you listened back.
- rerecord
for piano guitar;rec # Set piano and guitar track to record.
# Do your recording and listening.
# You want to record another version of both piano and guitar:
rerec # Sets piano and guitar to record again.

Track commands

analyze_level (anl) - Print Ecasound amplitude analysis for the current track. This will show the highest volume and statistics.
- analyze_level

General commands

for - Execute command(s) for several tracks.
- for <string:track_name_1> [ <string:track_name_2> ] ... ; <string:commands>
for piano guitar; vol - 3; pan 75 # reduce volume and pan right
for snare kick toms cymbals; mtb Drums # move tracks to bus Drums

Project commands

git - Execute a git command in the project directory
- git <string:command_name> [arguments]

Track commands

edit_rec_setup_hook (ersh) - Edit the REC hook script for the current track
- edit_rec_setup_hook

edit_rec_cleanup_hook (erch) - Edit the REC cleanup hook script for the current track
- edit_rec_cleanup_hook

remove_fader_effect (rffx) - Remove vol, pan or fader on the current track
- remove_fader_effect vol | pan | fader

rename_track - Rename a track and its WAV files
- rename_track <string:old_track> <string:new_track>

Sequence commands

new_sequence (nsq) - Define a new sequence
- new_sequence <string:name> <track1, track2,...>

select_sequence (slsq) - Select named sequence as current sequence
- select_sequence

list_sequences (lsq) - List all user sequences
- list_sequences

show_sequence (ssq) - Display clips making up the current sequence
- show_sequence

append_to_sequence (asq) - Append items to sequence
- append_to_sequence [<string:name1>,...]
asq chorus # append chorus track to current sequence
asq # append current track to current sequence

insert_in_sequence (isq) - Insert items into sequence before index i
- insert_in_sequence <string:name1> [<string:name2>,...]
<integer:index>

remove_from_sequence (rsq) - Remove items from sequence
- remove_from_sequence <integer:index1> [<integer:index2>,...]

delete_sequence (dsq) - Delete entire sequence
- delete_sequence <string:sequence>

add_spacer (asp) - Add a spacer to the current sequence, in specified position, or appending (if no position is given)
- add_spacer <float:duration> [<integer:position>]

convert_to_sequence (csq) - Convert the current track to a sequence
- convert_to_sequence

merge_sequence (msq) - Cache and MON the current sequence mix track, disable the sequence
- merge_sequence

snip - Create a sequence from the current track by removing the region(s) defined by mark pair(s). Not supported if the current track is already a sequence.
- snip <mark_pair1> [<mark_pair2>...]
snip cut1-start cut1-end cut2-start cut2-end
This removes the cut1 and cut2 regions from the current track by creating a sequence.

compose (compose_sequence compose_into_sequence) - Compose a new sequence using the region(s) of the named track defined by mark pair(s). If the sequence of that name exists, append the regions to that sequence (compose_into_sequence).
- compose <string:sequence_name> <string:trackname> <mark_pair1> [<mark_pair2>...]
compose speeches conference-audio speaker1-start speaker1-end speaker2-start speaker2-end
This creates a ``speeches'' sequence with two clips for speaker1 and speaker2.

General commands

undo - Roll back last commit (use ``git log'' to see specific commands). Note: redo is not supported yet
- undo

redo - Restore the last undone commit (TODO)
- redo

show_head_commit (show_head last_command last) - Show the last commit, which undo will roll back. A commit may contain multiple commands. The last_* aliases are meaningful when autosave: undo is set.
In that case each commit contains only a single command.
- show_head_commit

Mode commands

eager - Set eager mode
- eager on | off

Engine commands

new_engine (neg) - Start a named Ecasound engine, or bind to an existing engine
- new_engine <string:engine_name> <integer:port>

select_engine (seg) - Select an Ecasound engine (advanced users only!)
- select_engine <string:engine_name>

Track commands

set_track_engine_group (steg) - Set the current track's engine affiliation
- set_track_engine_group <string:engine_name>

Bus commands

set_bus_engine_group (sbeg) - Set the current bus's engine affiliation
- set_bus_engine_group <string:engine_name>

select_submix (ssm) - Set the target for the trim command
- select_submix <string:submix_name>

trim_submix (trim tsm) - Control a submix fader
- trim_submix
# reduce vol of current track in in_ear_monitor by 3dB
select_submix in_ear_monitor
trim vol - 3

Effect commands

nickname_effect (nfx nick) - Add a nickname to the current effect (and create an alias)
- nickname_effect <lower_case_string:nickname>
add_track guitar
afx Plate
nick reverb # current effect gets name ``reverb1''
mfx reverb1 1 0.05 # modify first reverb effect on current track
mfx reverb 1 2 # works, because current track has one effect named ``reverb''
afx reverb # add another Plate effect, gets name ``reverb2''
rfx reverb # Error, multiple reverb effects are present on this
# track. Please use a numerical suffix.
mfx reverb2 1 3 # modify second reverb effect
rfx reverb1 # removes reverb1
ifx reverb2 reverb # insert another reverb effect (reverb3) before reverb2
rfx reverb3 # remove reverb3
rfx reverb # removes reverb2, as it is the sole remaining reverb effect

delete_nickname_definition (dnd) - Delete a nickname definition. Previously named effects keep their names.
- delete_nickname_definition
afx Plate # add Plate effect
nick reverb # name it ``reverb'', and create a nickname for Plate
dnd reverb # removes nickname definition
afx reverb # error

remove_nickname (rnick) - Remove the ``name'' attribute of the current effect
- remove_nickname
afx Plate
nick reverb
mfx reverb 1 3
rnick
mfx reverb 1 3 # Error: effect named ``reverb'' not found on current track

list_nickname_definitions (lnd) - List defined nicknames
- list_nickname_definitions

set_effect_name (sen) - Set a nickname only (without creating an alias)
- set_effect_name <string:name>

set_effect_surname (ses) - Set an effect surname
- set_effect_surname <string:surname>

remove_effect_name (ren) - Remove current effect name
- remove_effect_name

remove_effect_surname (res) - Remove current effect surname
- remove_effect_surname

Track commands

select_track - Set a particular track as the current, or default, track against which track-related commands are executed.
- select_track <string:track-name> | <integer:track-number>

REALTIME OPERATION

Nama selects realtime or nonrealtime parameters based on the realtime_profile, ecasound_buffersize and ecasound_globals fields in .namarc. You can optionally specify the buffersizes as a multiple of the JACK period size. Note that for best realtime operation under JACK you will have to configure jackd appropriately as well. The realtime and auto profiles are useful when using Nama/Ecasound for live fx processing or live monitoring. The realtime profile sets a small buffersize and other low-latency settings whenever a soundcard or JACK client is connected. The nonrealtime profile uses a bigger buffer, providing extended margins for stable operation. It is suitable for post-processing, or for recording without live monitoring responsibilities. The auto profile defaults to nonrealtime settings. It switches to realtime, low-latency settings when a track has a live input.
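The profile fields above live in .namarc, which is a YAML file. A fragment might look like the following; the key names come from this section, but the exact layout and values here are assumptions, so compare against the .namarc that Nama generates on first run:

```yaml
# Illustrative .namarc fragment -- layout and values are assumptions
realtime_profile: auto          # auto | realtime | nonrealtime
ecasound_buffersize:
  realtime:
    default: 256                # frames; may also be a multiple of the JACK period
  nonrealtime:
    default: 1024
ecasound_globals:
  realtime: "-z:db,100000 -z:nointbuf"
  nonrealtime: "-z:nodb -z:intbuf"
```

The -z options are standard Ecasound buffering switches; see the Ecasound manpage for their meaning.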
DIAGNOSTICS

On any change in setup, the GUI display updates and the "show_tracks" command is executed automatically, showing what to expect the next time the engine is started. You can use the "chains" command to verify the Ecasound chain setup. (The Ecasound command "cs-save-as mysetup.ecs" will additionally store all engine data, effects as well as routing.) The "dump" command displays data for the current track. The "dumpall" command shows all state that would be saved. This is the same output that is written to the State.yml file when you issue the "save" command.

BUGS AND LIMITATIONS

No latency compensation across signal paths is provided at present. This feature is under development.

SECURITY CONCERNS

If you are using Nama with the NetECI interface (i.e. if Audio::Ecasound is not installed) you should block TCP port 2868 if your computer is exposed to the Internet.

INSTALLATION

The following commands, available on Unix-like systems with Perl installed, will pull in Nama and other Perl libraries required for text mode operation: "cpanm Audio::Nama" -or- "PERL_MM_USE_DEFAULT=1 cpan Audio::Nama" To use the GUI, you will need to install Tk: "cpanm Tk" You may want to install Audio::Ecasound if you prefer not to run Ecasound in server mode: "cpanm Audio::Ecasound" You can pull the source code as follows: "git clone git://github.com/bolangi/nama.git" Consult the BUILD file for build instructions.

SUPPORT

The Nama mailing list is a suitable forum for questions regarding Nama installation, usage, bugs, feature requests, etc. For questions and discussion related to Ecasound

PATCHES

The modules that make up this application are the preprocessed output from several source files. Patches against these source files are preferred.

AUTHOR

Joel Roth, <[email protected]>

CONTRIBUTORS

Alex Stone Brett McCoy Dubphil F. Silvain ++ Joy Bausch Julien Claassen ++ Kevin Utter Lars Bjørndal Philippe Schelté Philipp Überbacher Raphaël Mouneyres ++ Rusty Perez S.
Massy ++ This is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License, Version 3.
I would like to create a function which can take either 1 or 2 arguments. Currently, I have a function which takes exactly 2 arguments from the command line:

    def test(self, countName, optionalArg):
        if countName == "lowest":
            # something
        if optionalArg == "furthest":
            # something
        else:
            # something else

    if __name__ == '__main__':
        countName = sys.argv[1]
        optionalArg = sys.argv[2]
        temp = len(sys.argv)
        for i in xrange(1, temp):
            sys.argv.pop()

I would then run:

    python filename.py lowest furthest

Using this means that passing the second arg is a must. If I try to run my script by passing just one arg, it encounters an error (as expected). My question is, how do you create an optional argument, which could either be passed or not, depending on the situation? For example:

    python filename.py lowest

In this situation, I expect the program to perform the "#something else" branch, as nothing was passed and it is different from "furthest". Please do not write the code for me, I am here to learn :)

This is explained in the FineManual(tm): note that in Python, the expression defining the default value for an optional argument is eval'd only once, when the def statement is executed (which is at first import for a top-level function), which can lead to unexpected behaviours (cf "Least Astonishment" in Python: The Mutable Default Argument). Also, the "default value" has to be an expression, not a statement, so you cannot do any error handling there.

With regard to your case of trying to use sys.argv[2] as a default value, it's wrong for at least two reasons: it will raise an IndexError whenever len(sys.argv) < 3, and it ties your function to sys.argv, so you cannot reuse it in a different context.

The right solution here is to handle all user input (sys.argv or whatever) in the "entry point" code (the __main__ section) - your function should know nothing about where the argument values came from (sys.argv, an HTTP request, a text file or whatever).
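The "eval'd only once" behaviour mentioned above is worth seeing in action. The sketch below is illustrative (the names are made up): it shows why a mutable default value is dangerous, and the usual None-sentinel fix.

```python
def append_item(item, bucket=[]):
    # BAD: the [] is created once, when "def" executes,
    # and the very same list object is shared by every call.
    bucket.append(item)
    return bucket

first = append_item(1)   # returns [1] at this point
second = append_item(2)
# first and second are the SAME list object, now [1, 2]

def append_item_fixed(item, bucket=None):
    # GOOD: None is a sentinel; a fresh list is built on each call.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket
```

The sentinel version behaves the way most people expect: each call without an explicit bucket gets its own list.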
So to make a long story short: use either a hardcoded value (if it makes sense) or a "sentinel" value (None is a good candidate) as the default value for your optional argument, and do all the user input parsing in the __main__ section (or even better in a main() function called from the __main__ section, so you don't pollute the module's namespace with irrelevant variables):

    def func(arg, optarg=None):
        # code here

    def main(*args):
        # parse args
        # call func with the right args

    if __name__ == "__main__":
        import sys
        main(*sys.argv)

A very, VERY ugly way is to use exceptions to check whether the parameter is defined:

    import sys

    if __name__ == '__main__':
        arg = sys.argv[1]
        try:
            optionalArg = sys.argv[2]
        except IndexError:
            optionalArg = ""
        else:
            print "sure, it was defined."

I advise you not to use it, because you should never be in a situation where you don't know whether a variable is defined or not; there are better ways to handle this. But in some cases (not yours) it can be useful; I add it only for this reason.

Variant:

    def somefunc(*args, **kwargs):
        if 'optional_arg' in kwargs:
            print kwargs['optional_arg']
http://www.devsplanet.com/question/35265234
CC-MAIN-2017-04
refinedweb
641
61.77
Simple, Cheap and Multiplatform Robotic Arm - Powered by Viper

Build a simple robotic arm with Viper, using a Nucleo board, two servo motors and a cheap laser pointer. You can move the arm by sending it coordinates expressed in degrees. For communication with the Nucleo board you can use a serial port and a simple Viper protocol. Follow these simple steps.

Materials

1 x Viper IDE (free)
2 x TowerPro SG90 9G Servo Motor ($9.00)
1 x cheap laser pen ($5.00)
1 x 3D printed structure ($4.00)

Step 1: What Is Viper?

Viper is a Python development suite that supports fast design and integration with sensors and cloud services. Viper runs on all the 32-bit ARM Pro and DIY microcontrollers, like Arduino, UDOO, Particle and ST Nucleo. Viper started with a great Kickstarter campaign. Viper comes with the Viper IDE, a multi-platform and browser-based development environment with cloud sync and board management features.

Step 2: Download and Install the Viper IDE

Go to the Viper website and download the IDE. The Viper app for Android is also available on Google Play. Download the Windows, Linux and iOS installers from the Viper download page (security messages can be shown on Windows; please accept and go ahead, this issue will be solved soon). Install it and launch it through the Viper shortcut. Create a Viper user through the dedicated button. Check your email and verify your new account by clicking on the provided link. Once the account has been verified, the Viper IDE automatically logs you into the Viper cloud (the first time you create a user account an IDE restart can be required; if you have sign-in issues, please restart the IDE). To make the board usable, you need to Viperize it. Viperization is the process of installing the Viper VM on a board. The process can be launched by clicking on the dedicated button available on the Viper IDE top bar.
Some boards require being put in upload mode; the IDE will guide you through the entire process. Don't forget to install your board's USB drivers, downloading them from:

Arduino DUE driver page
ST Nucleo F401RE driver and firmware page
UDOO board serial driver download page

Step 3: The Viper IDE

Viper is integrated with a dedicated development environment. The Viper IDE is a browser-based development environment that runs on Windows, Linux and Mac.

Board Management Toolbar

Through the Viper IDE Boards Management Bar, all the connected devices can be managed. The Boards Management Bar can be used for Viperizing the connected boards by uploading the Viper Virtual Machine onto them. Moreover, all the Viper-supported boards are also listed as Virtual Boards when not connected, allowing code verification for specific platforms without requiring a physical connection. When launched, the Viper IDE starts a local Python server that controls the connected peripherals, showing all the available boards on the Boards Management Bar.

Step 4: Add the Servo Motor Library

After installing the Viper app and registering with the Viper community, you can install the Servo Motor Library. Libraries are pre-prepared code that simplifies the use of specific hardware or particular functions. You can add the library from the package menu: search for "Servo Motor Library" and add it to the Viper IDE. You can then use this library in your project with a single line at the top of your code:

from servo import servo

Step 5: Viperize the Nucleo and Upload the Code

Open the Viper IDE, then plug in the ST Nucleo. In the top menu choose ST Nucleo F401RE. Now do the "Viperization" of the board by pressing the Viper button. The Viper button is near the shield menu; its symbol is the V of Viper. This operation is fully reversible: in other words, you upload a new firmware to the board.
OK, now you have a Viper board! To create a new project, click the + (new project) button in the Projects menu. Then copy the code from the code file and paste it into the window. The Viper IDE saves the project automatically. As a member of the Viper community you also have cloud space for your projects, so you can reload this project from the Viper cloud whenever you want, from any other computer in the world. Verify your project by clicking the verify button. If all is OK, you can upload the code to the board by pressing the upload button.

Step 6: Serial Console and System Messages
The Viper IDE also includes a System Messages Panel on which all the compiler, debugger and server messages are reported. Moreover, a Serial Terminal Console is integrated in the right bar. The Viper IDE supports multiple Serial Terminal Consoles opened in parallel; the output of various projects can be monitored from the same panel by switching between the open tabs. When a Serial Terminal Console is opened, it is automatically configured to open the serial port assigned to the selected board (the USB serial port assigned by the OS). The IDE console baud rate is set by default to different values depending on the selected board; the baud rate for a board is displayed during bytecode upload. This console should be considered an integrated debugging output to be used with the Viper default print function. To read a board serial port configured with a different baud rate, an external serial terminal like PuTTY should be used. External terminals can conflict with the Viper Serial Terminal Console, so close the Serial Terminal Console before launching any of them.

Step 7: Connect the Wires
The wiring is very simple. Just connect the Nucleo's power source to the servo motors, and connect the data ports for the X and Y servo motors. The Nucleo ports used for servo motor control are D10 and D11.
In the picture you can see a little handmade shield that multiplies the Nucleo's power pins; you can also use a breadboard for this. See the pictures for the installation. Pay attention during this procedure: you can burn the servo motors or the Nucleo board. OK, now all the wires are connected!

Step 8: Print the Pieces and Assemble the Structure
For the structure of the robotic arm, print the plastic piece that joins the two servo motors. Download the structure from Thingiverse: or have it printed through 3D Hubs, a community of 3D printers: choose a printer in your area and print the piece. The printing cost is very cheap, because the piece is very small and simple. For the printing material I prefer ABS or ColorFabb XT filament for this job; PLA is not the first choice for this kind of piece, because here the piece is functional and the plastic is subject to vibrations that can break the structure.

Step 9: Use the Viper IDE for Communication
From the Viper app you can connect to the board over a serial connection. After opening the "serial" window, type the coordinates on the command line and press Enter to send the numbers to the board. Express the coordinates in degrees, from 0 to 180. If you send 0, the motor goes to the 0 position and the servo circuit is switched off. Wait for the start signal: All servo ON. Then write the number of degrees, from 0 to 180.

Step 10: Final Result
Now you have a robotic arm! In this case the arm moves a laser pen, but you can use it for any kind of operation you can imagine. Plug the board into the computer, open the serial port, and enter the coordinates for the X and Y axes, each between 0 and 180 degrees. The arm moves the laser to the coordinates you have chosen. Remember: if you send 0, the motor goes to the 0 position and the servo circuit is switched off.
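The article sends the coordinates by hand from the IDE's serial window and never shows a host-side sender, so here is a minimal sketch of one in plain Python. The validation rule (integers 0 to 180, with 0 parking the servo) comes from the steps above; the "x,y" line format and the pyserial call at the bottom are assumptions for illustration, not the article's actual protocol.

```python
def make_command(x_deg, y_deg):
    """Validate a pair of servo angles and build one line to send.

    Angles must be integers in 0..180, as in the article; sending 0
    parks the servo and switches its circuit off.  The "x,y" line
    format itself is an assumption, since the article does not show
    the exact wire protocol.
    """
    for name, value in (("x", x_deg), ("y", y_deg)):
        if not 0 <= value <= 180:
            raise ValueError("%s angle %r is outside 0..180 degrees" % (name, value))
    return "%d,%d\n" % (x_deg, y_deg)

# Sending over the Nucleo's serial port could then look like this
# (pyserial; the port name is machine-specific, so it is not run here):
#   import serial
#   with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
#       port.write(make_command(90, 45).encode("ascii"))
```

Keeping the range check on the host side means a typo in the serial window cannot drive a servo past its mechanical limits.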
http://www.instructables.com/id/Simple-Cheap-and-Multiplatform-Robotic-Arm-Powered/
Sep 25, 2008 06:02 PM
hi, i am inserting data from xml into a sql 2005 table. now before reading the xml file i want to validate it against the schema. if it validates, i want to open it and do my operation; else i want to log the error. any suggestions... i am able to insert the xml data into the database table (with openxml) but i need the validation part. any suggestion

Sep 25, 2008 07:36 PM
Hi Csharp22, you need to do something like this:

/// <summary>
/// Validates an xml file against an xml schema.
/// </summary>
public class XmlValidator
{
    private bool _IsValid = true;

    /// <summary>
    /// Creates a new instance of XmlValidator.
    /// </summary>
    /// <param name="xml">Path to xml file.</param>
    /// <param name="xsd">Path to xml schema file.</param>
    public XmlValidator(string xml, string xsd)
    {
        System.Xml.XmlDocument xmlDoc = new System.Xml.XmlDocument();
        xmlDoc.Load(xml);
        using (System.IO.FileStream fsStr = new System.IO.FileStream(xsd, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read))
        {
            xmlDoc.Schemas.Add(System.Xml.Schema.XmlSchema.Read(fsStr, this.Validator));
        }
        xmlDoc.Validate(this.Validator);
    }

    /// <summary>
    /// Indicates if the xml file has a valid schema.
    /// </summary>
    public bool IsValid
    {
        get { return this._IsValid; }
    }

    private void Validator(object sender, System.Xml.Schema.ValidationEventArgs e)
    {
        // Check severity (if you need warnings to be treated as errors, change this)
        if (e.Severity == System.Xml.Schema.XmlSeverityType.Error)
        {
            this._IsValid = false;
        }
    }
}

Bye, Miguel.

Sep 25, 2008 09:43 PM
Oks, the only way is to build an assembly with Visual Studio 2005/2008 and embed it in Sql Server 2005, then map the assembly procedures to stored procedures in order to use them inside a Sql Server stored procedure. Do you have any experience in Visual Studio (C# or Visual Basic.NET)?
Bye, Miguel.
Once you have installed it you have to create a new Poject (Class Library) in C# language. When you do those steps please post a new message and I guide you step by step to do it. Bye, Miguel. Sep 25, 2008 10:21 PM|LINK Oks, you are quick enough like Fernando Alonso [:P]. The next step is to add a new class to the project (right click in the solution explorer on the project name and add new file, class). Name it XmlValidator.cs. How dou you named the project? 27 replies Last post Oct 08, 2008 07:20 PM by Csharp22
http://forums.asp.net/p/1325687/2646528.aspx/1?Re+xml+file+validation+with+xsd
Welcome to trotter, a Dart library that simplifies working with structures commonly encountered in combinatorics such as combinations and permutations. Trotter gives the developer access to pseudo-lists that "contain" all arrangements of objects taken from a specified set. For example, the following programme creates a pseudo-list "containing" all the 3-permutations of the first five letters and reports some information.

import "package:trotter/trotter.dart";

void main() {
  var perms3 = new Permutations(3, "abcde".split(""));
  print("There are ${perms3.length} 3-permutations of the objects in ${perms3.elements}.");
  print("The first 3-permutation is ${perms3[0]}.");
  print("The first three 3-permutations are: ${perms3.range(0, 3)}.");
}

There are 60 3-permutations of the objects in [a, b, c, d, e]. The first 3-permutation is [a, b, c]. The first three 3-permutations are: [[a, b, c], [a, c, b], [c, a, b]].

The classes defined in trotter technically provide a mapping between integers and the structures contained within a pseudo-list; they do not store the structures in memory. This allows us to work with pseudo-lists "containing" very large numbers of arrangements with very little overhead. For example, consider the following programme that works with a very large list of permutations.

import "package:trotter/trotter.dart";

void main() {
  var perms10 = new Permutations(10, "abcdefghijklmno".split(""));
  print("There are ${perms10.length} 10-permutations of the first 15 letters.");
  print("The 10,000,000,000th permutation 'stored' in perms10 is ${perms10[9999999999]}.");
}

There are 10897286400 10-permutations of the first 15 letters. The 10,000,000,000th permutation 'stored' in perms10 is [m, k, j, d, e, g, f, i, c, n].

Trotter contains four classes for working with some items taken from a list. Their distinguishing properties can be summarised in the following table. All of these classes can be used similarly to the way Permutations was used in the examples above.
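The index-to-arrangement mapping that makes these pseudo-lists cheap can be sketched in a few lines. This is plain Python for illustration, not trotter's Dart internals, and it uses lexicographic order, whereas trotter enumerates permutations in the Steinhaus-Johnson-Trotter order visible in the output above; the point is only that element n is computed from n rather than stored.

```python
from math import factorial

def nth_permutation(items, n):
    """Return permutation number `n` of `items` in lexicographic order.

    Nothing is stored: the arrangement is decoded directly from `n`
    via the factorial number system, which is what makes pseudo-lists
    with billions of entries cheap to index.
    """
    pool = list(items)
    result = []
    for i in range(len(pool), 0, -1):
        block = factorial(i - 1)      # permutations sharing each leading element
        index, n = divmod(n, block)   # which leading element; remainder recurses
        result.append(pool.pop(index))
    return result

# e.g. nth_permutation("abc", 5) -> ['c', 'b', 'a'] (last of the 3! = 6)
```

The same decode-from-index idea extends to combinations and the other arrangement types.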
Further, a class Subsets exists to create a pseudo-list of all the subsets of objects stored in a list. For example, the following programme creates a pseudo-list containing all the subsets (combinations of any size) created from the first five letters.

import "package:trotter/trotter.dart";

void main() {
  var subs = new Subsets("abcde".split(""));
  print("There are ${subs.length} subsets of the objects in ${subs.elements}.");
  print("The first subset is the empty set: ${subs[0]}.");
  print("The tenth subset in subs contains the elements ${subs[9]}.");
}

There are 32 subsets of the objects in [a, b, c, d, e]. The first subset is the empty set: []. The tenth subset in subs contains the elements [a, d].

Changelog 0.5.0: first Dart release, with support for the classes described above.

Add this to your package's pubspec.yaml file:

dependencies:
  trotter: "^0.5.0"

You can install packages from the command line with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:trotter/trotter.dart';
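The Subsets mapping shown above is even simpler than the permutation one: the index works as a bit mask, which is consistent with the output shown (index 9 is binary 01001, selecting 'a' and 'd'). A plain-Python sketch of the idea, not trotter's actual Dart code:

```python
def subset_at(index, elements):
    """Return the subset encoded by `index`, read as a bit mask:
    bit i set means elements[i] is included."""
    if not 0 <= index < 2 ** len(elements):
        raise IndexError("index out of range")
    return [e for i, e in enumerate(elements) if index >> i & 1]

letters = list("abcde")
empty = subset_at(0, letters)   # the empty set, as in subs[0] above
tenth = subset_at(9, letters)   # 9 = binary 01001 -> ['a', 'd'], as in subs[9]
```

With 5 elements there are 2**5 = 32 valid indices, matching the reported length of 32.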
https://pub.dartlang.org/packages/trotter/versions/0.5.0
24 October 2013 15:54 [Source: ICIS news]
LONDON (ICIS)--Farabi Petrochemicals has awarded a contract for a concept study of its planned Jazan project in Saudi Arabia. The Jazan project would, if realised, include a world-scale linear alkyl benzene (LAB) plant and a range of units producing specialty chemicals derived from diesel feedstock. Foster Wheeler said it would also recommend which technologies the project should use for the production of low-aromatics solvent and for the treatment of heavy fuel oil, and that it would develop a capital and operating cost estimate. The company expects to complete the study in the first quarter of 2014. Capacity details were not disclosed. Farabi is a producer of normal paraffin and
http://www.icis.com/Articles/2013/10/24/9718649/saudi-farabi-petchem-awards-contract-for-jazan-concept-study.html
>>>>> oops shouldn't have gone to developers... You are looking for matplotlib.axes.Subplot.set_position set_position(self, pos) method of matplotlib.axes.Subplot instance Set the axes position with pos = [left, bottom, width, height] in relative 0,1 coords ACCEPTS: len(4) sequence of floats axes_top.set_position([left, bottom, width, height]) etc. On Friday 31 March 2006 08:38, Andrew B. Young wrote: > > > > > ------------------------------------------------------- > This SF.Net email is sponsored by xPML, a groundbreaking scripting language > that extends applications into web and mobile media. Attend the live > webcast and join the prime developer group breaking into this new coding > territory! > > _______________________________________________ > Matplotlib-users mailing list > Matplotlib-users@... > -- Malte Marquarding - Scientific Computing Group Australia Telescope National Facility Radiophysics Laboratory, PO Box 76, Epping, Australia, 1710 Phone: (+61) 2 9372 4485 (work) (+61) 421 805 164 (mobile) >>>>> "Caleb" == Caleb Hattingh <caleb.hattingh@...> writes: Caleb> Hi John I posted the following item on comp.lang.python, Caleb> but actually you're exactly who I was looking for, and I Caleb> could your address off one of your responses to another's Caleb> question (I didn't know you read news). By the way, Caleb> Matplotlib is one of the best python addons I have ---no Caleb> more printing Excel graphs to postscript files :) Thanks! Caleb> I tried several Google searches to no avail. I read Caleb> through pretty much most of the online docs at the Caleb> matplotlib sourceforge site, but didn't find what I was Caleb> looking for. I went through the axis.py and ticker.py code Caleb> today, trying to find out how to set the number of points Caleb> (ticks) on an axis in Matplotlib.
Caleb> I know that something like >>>> xticks(arange(5)) Caleb> will make the x-axis have the values specified, but Caleb> Matplotlib appears to have a very nice way of setting axis Caleb> ticks with human-friendly values that round in just the Caleb> right way for a given set of data. I still want that Caleb> functionality, but I want to set how many ticks are on a Caleb> given axis. Caleb> It seems that the locater() classes are where I should Caleb> look, and there seem to be some defaults in ticker.py: Caleb> class AutoLocator(MaxNLocator): def __init__(self): Caleb> MaxNLocator.__init__(self, nbins=9, steps=[1, 2, 5, 10]) Caleb> I don't understand what this means :) Caleb> I would prefer not to hack this directly in the matplotlib Caleb> code. How can I change the number of ticks on an axis Caleb> programmatically without messing with the other ticklabel Caleb> functionality? Caleb Caleb> You probably know exactly what I need to do? Yes, you will want to use a MaxNLocator. Note that the MaxNLocator sets the maximum number of *intervals* so the max number of ticks will be the max number of intervals plus one. You could probably adapt this code to make an ExactNLocator. If you do, please send it our way. from matplotlib.ticker import MaxNLocator from pylab import figure, show, nx fig = figure() ax = fig.add_subplot(111) ax.plot(nx.mlab.rand(1000)) ax.xaxis.set_major_locator(MaxNLocator(4)) show() Also, please post questions to the matplotlib-users mailing list rather than to me directly, as there are plenty of experts there (unlike on c.l.python) who can help you. Glad you're enjoying matplotlib! JDH Yes indeed, I upgraded from 0.87.1 to 0.87.2 and now it works perfectly. Thanks! On Thursday 30 March 2006 19:02, Theodore R Drain wrote: > Nicolas, > I'm guessing you're using an older version. I believe this has been fixed > in the current repository.
> > Ted
> ps: Anytime you have a problem, it would help a lot if you could supply all

--
-------------------------------------------------------------
Nicolas Dubuit (+33 4 42 25 40 25)
CEA-Cadarache bat.513 / 13100 ST PAUL LEZ DURANCE CEDEX
-------------------------------------------------------------

Nicolas, I'm guessing you're using an older version. I believe this has been fixed in the current repository. Ted
ps: Anytime you have a problem, it would help a lot if you could supply all

>>>>> "Sarah" == Sarah Mount <mount.sarah@...> writes: Sarah> Hi all, maybe this is a daft question, but is there a Sarah> simple way of drawing several histograms on top of one Sarah> another on the same axes? Sarah> Right now I can't even figure out how to change the bar Sarah> colour in a script. I've looked through the code in axes.py Sarah> and this looks like it *ought* to be right, but isn't: Sarah> ax1 = pylab.axes(...) n, bins, patches = ax1.hist(data, Sarah> color='r') Sarah> Any ideas would be very much appreciated ... Ahh yes, the docs could be a little clearer.
They say kwargs are used to update the properties of the hist bars. To make sense of this you would need to know that the bars are matplotlib.patch.Rectangle objects, and know what properties could be set on them. We have a goal of making the documentation thorough with respect to kwargs. In the meantime, scroll through the rectangle class documentation at for insight. Here is a hint about how to find these kinds of things out yourself: Fire up an interactive python shell with support for matplotlib (see). I use ipython () in pylab mode. Make a histogram and use the setp functionality to inspect the properties of the patches returned by hist. patches are 2D objects like polygons, circles and rectangles.

johnh@...:~> ipython -pylab
/home/titan/johnh/local/lib/python2.3/site-packages/matplotlib/__init__.py:892:
Python 2.3.4 (#12, Jul 2 2004, 09:48:10)
Type "copyright", "credits" or "license" for more information.
IPython 0.7.2.svn

In [1]: n,bins,patches = hist(randn(1000), 20)

In [2]: setp(patches)
alpha: float
animated: [True | False]
antialiased or aa: [True | False]
bounds: (left, bottom, width, height)
clip_box: a matplotlib.transform.Bbox instance
clip_on: [True | False]
edgecolor or ec: any matplotlib color - see help(colors)
facecolor or fc: any matplotlib color - see help(colors)
figure: a matplotlib.figure.Figure instance
fill: [True | False]
hatch: unknown
height: float
label: any string
linewidth or lw: float
lod: [True | False]
transform: a matplotlib.transform transformation instance
visible: [True | False]
width: float
x: float
y: float
zorder: any number

Scrolling through this list, you see things like edgecolor and facecolor. You can pass these as kwargs to hist, or use setp:

In [3]: setp(patches, edgecolor='g', facecolor='b', linewidth=2);

Hope this helps, JDH
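For readers puzzled by the n, bins, patches triple unpacked from hist in this thread: n is the per-bin counts and bins the bin edges, while patches are the Rectangle objects actually drawn. The first two can be mimicked with a plain-Python sketch (equal-width binning for illustration, not matplotlib's implementation):

```python
def histogram(data, nbins):
    """Equal-width binning: return (counts, edges), like hist's n and bins."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / nbins or 1.0              # avoid zero width for constant data
    counts = [0] * nbins
    for x in data:
        i = min(int((x - lo) / width), nbins - 1)  # clamp x == hi into the last bin
        counts[i] += 1
    edges = [lo + i * width for i in range(nbins + 1)]
    return counts, edges

# e.g. histogram([0, 1, 2, 3], 2) -> ([2, 2], [0.0, 1.5, 3.0])
```

Note there are nbins + 1 edges for nbins counts, which is why the returned lists differ in length by one.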
Hi there, It seems that figsize is not taken into account when using the QtAgg backend, neither with rc, nor as an argument, i.e. figure(figsize=(2,3)). Does anyone have a workaround for this? Thanks Nicolas >>>>> "Ralf" == Ralf Gommers <r.gommers@...> writes: Ralf> Hi everyone, Guess no-one has a trick yet to put in an axis Ralf> break. My question is now, should this not be on the goals Ralf> list at least? Ralf> If the developers think it is a good idea to implement this, Ralf> I would like to have a go at it. As I'm not all that Ralf> familiar with the matplotlib internals it would be great if Ralf> someone has any ideas on how to go about it. This is not possible and is not easy. Currently, we don't even have independent control of the lines that surround the white axes box (the left and right y-axis lines and the upper and lower x-axis lines). It is a long-standing wish to be able to control these independently of the axes box, and this would be a good place for you to start. See axis.py and axes.py. JDH >>>>> "Steve" == Steve Schmerler <elcorto@...> writes: Steve> Hi I discovered that when I plot many (e.g. 20) data sets Steve> in one plot and request a normal boxed legend the Steve> whitespace between the lower and upper box bounds and the Steve> first and last legend entries increases with the number of Steve> data sets (i.e. legend entries). Is there a way to make Steve> this whitespace offset independent from the length of the Steve> legend? Not currently, though it is a good idea. Right now you can control the legend "pad" which is the fractional whitespace inside the legend border. So you should be able to tweak this parameter in the final output to get something that looks about right, but having this in points rather than in relative coords makes more sense ultimately. JDH >>>>> "James" == James Boyle <boyle5@...> writes: James> The effects are observed from within a single python session.
James> Out of curiosity - Is there any way of clearing the cache James> within a session ? OK, I think I see what is going on. text.Text is calling FigureCanvas.draw_text with a font_manager.FontProperties instance. backend_ps is using the __hash__ method of the FontProperties class to create a cache mapping the font property to the ttf font found in the RendererPS._get_font_ttf method. def _get_font_ttf(self, prop): key = hash(prop) font = _fontd.get(key) if font is None: fname = fontManager.findfont(prop) font = FT2Font(str(fname)) _fontd[key] = font if fname not in _type42: _type42.append(fname) font.clear() size = prop.get_size_in_points() font.set_size(size, 72.0) return font the hash(prop) call as noted above calls the __hash__ method of the FontProperties def __hash__(self): return hash( ( tuple(self.__family), self.__style, self.__variant, self.__weight, self.__stretch, self.__size, self.__parent_size, self.fname)) My first guess w/o looking further was that the "family" entry is 'sans-serif' but not the actual font list (eg Lucida versus Bitstream) and this is the source of your woes. Basically, we need to make the hash method smarter to take account of the actual family list. You might insert some debug print statements into the font_manager class to see what this tuple being passed to hash actually is. On second glance, the family seems to be set properly if family is None: family = rcParams['font.'+rcParams['font.family']] if rcParams['font.family]' is 'sans-serif', then family should be rcParams['font.sans-serif'] which is what we want since this is your new list. But this is only on the family=None branch, so we need to find out if a) it is working like it should on this branch and b) what is being passed if family is not None. You asked about clearing the cache: import matplotlib.backends.backend_ps as ps ps._fontd = {} Let us know what you find out... JDH The effects are observed from with a single python session. 
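The stale-cache failure mode John describes can be reproduced in miniature with plain Python. The class and function names below are illustrative stand-ins, not matplotlib's: the point is only that once the hash key omits a property that affects the result, two different configurations collide on one cache entry.

```python
class FontProps:
    """Toy stand-in for a font-properties object.  The bug: __hash__
    ignores `family`, just as the real hash ignored the resolved list."""

    def __init__(self, family, size):
        self.family = family
        self.size = size

    def __hash__(self):
        return hash(self.size)          # incomplete key: family left out

    def __eq__(self, other):
        return hash(self) == hash(other)


_font_cache = {}

def find_font(prop):
    """Resolve a font, caching by hash(prop), like the _fontd dict above."""
    key = hash(prop)
    if key not in _font_cache:
        _font_cache[key] = "resolved: " + prop.family[0]
    return _font_cache[key]

first = find_font(FontProps(["Lucida Grande"], 12.0))
# A different family with the same size hits the same cache slot,
# so the second caller silently gets the first caller's font:
stale = find_font(FontProps(["Bitstream Vera Sans"], 12.0))
```

Adding the family list to the hash key (or clearing the cache, as below) removes the collision.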
Out of curiosity - Is there any way of clearing the cache within a session ? Thanks --Jim On Mar 29, 2006, at 1:27 PM, John Hunter wrote: >>>>>> > In gnuplot, as one plots multiple lines on the same graph, the default behavior is that gnuplot automatically selects linestyle, colors and markers. In matplotlib, the default behavior (if I don't specify format etc) is to draw every line in solid style, no markers, blue color. I much prefer gnuplot's default behavior -- is there some way of configuring matplotlib to do the same? Thanks, Diwaker -- Web/Blog/Gallery: >>>>> Thanks for your help. Encouraged by your results I tried some more experiments. There appears to be some hysteresis in matplotlib fonts ( at least my installation). Unsaid(!!!) in my message was that I initially ran the PS plot with the default fonts and generated a defective PS file, and then I tried to change the font properties. After running the rc(font, **fontDict) command I still generated a defective file but a different size - so something changed. However, if I just run the rc(font, **fontDict) command sequence first without ever trying the defaults fonts all goes well and I get a fine PS file. Evidently, some aspects of the old font are retained. Thanks again, --Jim On Mar 29, 2006, at 8:54 AM, jswhit@... wrote: > > On Wed, 29 Mar 2006 10:34:08 -0500, "Darren Dale" <dd55@...> > said: >> On Tuesday 28 March 2006 19:05, you wrote: >>> plotlibrc setting: >>> font.sans-serif : Lucida Grande, Verdana, Geneva, Lucida, >>> Bitstream >>> Vera Sans, Arial, Helvetica, Avant Garde, sans-serif >>> I get a postscript file that I cannot view. >>> BUT if I change the matplotlibrc file to: >>> font.sans-serif : Bitstream Vera Sans >>> All goes well and the PS file is fine. This has been discussed on the >>> list previously as an OS X font issue. >>> >>> My idea was to use the following code to set the font.sans-serif >>> dynamically. 
>>> However, it does not seem to work in that the ps file is not usable >>> as >>> if Lucida Grande was still the font.sans-serif setting. >>> There might well be something very obvious - From the font manager >>> code I surmised that the 'sans-serif' entry was a list but I could be >>> mistaken: >>> >>> import matplotlib >>> matplotlib.use('PS') >>> from matplotlib import pylab >>> import Numeric >>> N = Numeric >>> PL = pylab >>> x = N.arrayrange(100.) >>> y = N.arrayrange(100.) >>> fontDict = {'family':'sans-serif', >>> 'style': 'normal', >>> 'variant':'normal', >>> 'weight': 'medium', >>> 'stretch':'normal', >>> 'size': 12.0, >>> 'sans-serif':['Bitstream Vera Sans']} >>> PL.rc('font',**fontDict) >>> PL.plot(x,y**2) >>> PL.savefig('crap') >>> PL.clf() >> >> Your second script works fine for me. I was able to switch the font in >> the >> postscript file, between Bitstream Vera Sans and Arial, by modifying >> your >> fontDict. I'm using svn mpl on linux, but I dont think anything has >> changed >> since 0.87.2 that would effect the results. >> >> Are there any Mac users with a free moment to run his script? >> >> Darren >> > > Darren and Jim: Works for me on 10.4. -Jeff > >>>>> "Imara" == Imara Jarrett <imara28@...> writes: Imara> Hi there, I am using a 'for loop' to generate multiple Imara> scatter plots from my data. So far, so good. Imara> I would like to have consistent xticks and yticks for each Imara> scatter plot. However, it seems that when I define xticks Imara> and yticks (remains the same for each scatter plot), I get Imara> different xticks and yticks for each scatterplot depending Imara> on the data. Imara> How can I make the xticks and yticks consistent over ALL my Imara> scatterplots, regardless of the data used to generate them? 
something like:

xvals = 1,2,3
yvals = 4,5,6
for data in mydata:
    fig = figure(1)
    ax = fig.add_subplot(111)
    ax.scatter(blah, blah, data)
    ax.set_xticks(xvals)
    ax.set_yticks(yvals)
    fig.savefig('myfig')
    close(1)

> John Hunter <jdhunter@...> writes: >> why would you ask for ticks midnight and noon, and then hide half of >> them. Why not just ask for them at noon On Wed, 29 Mar 2006, Jouni K Seppanen apparently wrote: > Because I want the tick lines at midnight and labels at noon. > If I add an extra show() to the script, it works... there is something > going on that I don't understand: I do not think this is so unusual for time series data. Example: year end marked with tick marks, but year label set at June, between the tick marks. Cheers, Alan Isaac >>>>> "Jouni" == Jouni K Seppanen <jks@...> writes: Jouni> John Hunter <jdhunter@...> writes: >> why would you ask for ticks midnight and noon, and then hide >> half of them. Why not just ask for them at noon Jouni> Because I want the tick lines at midnight and labels at Jouni> noon. I see -- one hack would be to use major and minor ticks. Make the major ticks at midnight and the minor ticks at noon. Make the major ticklabels invisible and the minor tick labels visible. Should work. One way of making the major ticks invisible is to use a NullFormatter for the major formatter. Jouni> If I add an extra show() to the script, it works... there Jouni> is something going on that I don't understand: What you may be seeing with the double show is a side effect of matplotlib not creating all the ticks it needs until it is drawn, and when it creates extra ticks it uses the first tick, the protoTick from axis.py, to determine the properties of the new ticks. JDH Charlie Moad wrote: > Typically I use OSX's python, but I don't want to start that debate.
I'm trying to support the new Universal Build (and the 2.4.1 Framework Build before that), so I wanted to know if I'm duplicating effort. > New users should probably be > pointed to the prebuilt 2.4 framework on pythonmac. Yup! > I haven't been keeping up with the intel status though. There is a new universal build of 2.4.3 We're all hoping it will become the "standard" build for OS-X >= 10.3.9, for both PPC and Intel. However, it can't really be that until we can get all the critical packages built for it. wxPython is a big hang up now, and I consider matplotlib critical too. > Are eggs accepted by pythonmac yet or they still using mpkg's exclusively? We'd like to put eggs on there too. ideally, we'd make a little launcher app that would fire up and run easy_install to install them when double clicked. That's been talked about, but not yet done. In the meantime, something is better than nothing! > Cocoa-agg works and has been there for the last few releases. Will it build by default on OS-X? > Apple's > python interface to quartz doesn't have much of the text support > included. Darn. And it's proprietary isn't it? > I think it is much better to focus on the Agg backend and > use it in gui toolkits. That is a good plan, unless we go to using All-Cairo Doing both seems way too redundant. I'm investigating Cairo vs. Agg for another project. These are my quick thoughts: Cairo Pluses: In theory: native, hardware accelerated back-ends for various platforms PDF and PS support: That's very nice, then we'd really have only one back-end! (what about Tex?) It's being used for GTK2 and part of Mozilla, so it should see a lot of activity and testing. Cairo Minuses: It doesn't look like there's much activity on Windows, but that's gotten better with the Mozilla folks getting involved. I don't know about OS-X Agg pluses: It works now! It has fabulous anti-aliasing (I haven't compared to Cairo yet) It's smaller and simpler. Anyone else have some thoughts? @...
http://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=200603
Wikiversity talk:Policies

Original Policies

I propose that Wikiversity start by adopting existing Wikipedia policy. The Wikiversity community can then discuss the creation of new policies that will diverge from Wikipedia's policies. New Wikiversity policies should reflect the education-oriented mission of Wikiversity as defined in the Wikiversity project proposal that was approved by the Wikimedia Foundation Board of Trustees. --JWSchmidt 13:44, 15 August 2006 (UTC) - For me, this seems too much. Wikipedia has tons of policies, the vast majority of which are specific to its needs. I think we should only begin with the most basic of Wikimedia policies, Civility, and then develop our own. NPOV, for example, doesn't even fit with what I believe education to be all about - education, for me, is more along the lines of "What do you think? Do you agree/disagree?". Cormaggio 13:50, 15 August 2006 (UTC) - We know that Wikipedia policy provides a viable way to run a wiki project. I suggest using Wikipedia policy as a temporary set of rules; Wikipedia policy would provide stability while the Wikiversity community creates new policy. As soon as the Wikiversity community creates its own rules, the Wikipedia rules would no longer be in effect. --JWSchmidt 13:58, 15 August 2006 (UTC) - Yes, but why bother adopting them all to begin with? There are so many which are obviously irrelevant - though I say civility is a basic one - and the rest we can work out as we go along. No projects are forced to adopt any principles but the core Wikimedia values (and NPOV, like I've said, is one that I think should go for a start). Cormaggio 14:10, 15 August 2006 (UTC) - Wikipedia policy could provide a starting point. The approach would be "easy come, easy go". It is easy to adopt Wikipedia policy as a default for when no Wikiversity policy exists. When new Wikiversity policy is constructed by the community it would automatically displace the default Wikipedia policies.
In other words, it would be easy for the Wikiversity community to decide which Wikipedia policies are not relevant, either explicitly or implicitly just by ignoring them. Some Wikipedia policies could be modified to suit Wikiversity needs rather than having to start each Wikiversity policy from zero. --JWSchmidt 14:43, 15 August 2006 (UTC) - Take into account that there are lots of Wikipedias, not just one. And they have different policies each. I think there shouldn't be that automatic Wikipedia policy. You can add a line stating Wikipedia general policies should be followed but no more. Common sense does the rest, and there's usually an intention to follow it. But don't set that on the policy, please. It only helps (vandal?, troll?) people saying, this hidden wikipedia policy says so, and it's stupid to apply this here, but we must obey, so you all are wrong. Platonides 14:51, 15 August 2006 (UTC) It sounds like we need a list of proposed Wikiversity policies. --JWSchmidt 15:07, 15 August 2006 (UTC) - Indeed, here's what I think would be a good start: - Observe copyrights (IMO very important so the project doesn't end up getting shut down after all this work because Wikimedia gets sued by some textbook publisher) - Civility, Wikiquette, no personal attacks, and the like (This is especially important in a quasi-academic setting. I would imagine that some of the material covered here might be pretty controversial so it's additionally important not to resort to personal attacks) - Be bold (In the beginning stages and with few people involved, initiative is often times more important than that all agree to a plan of action. Things that go in the wrong direction can be corrected fairly easily with the technology we have available) - Thoughts? What else would be good? -- sebmol ? 15:22, 15 August 2006 (UTC) - Added: Diversity (With that I mean, that in a learning environment, it's more important to have a diversity in material than a basis in scientific verifiability. 
Teaching literature or poetry isn't really science, and is inherently POV, but it is obviously important nonetheless. So we should create an environment where people feel comfortable creating and discussing controversial materials as well, even if science doesn't back them up.)

- I think there should be a modified NPOV policy. I think NPOV should apply to the Wikiversity namespace of Wikiversity. An NPOV policy would help guide the meta discussion that will take place in the Wikiversity namespace: "where there are or have been conflicting views, these should be presented fairly, but not asserted" --JWSchmidt 15:38, 15 August 2006 (UTC)

I think that the original "5 pillars" should perhaps be copied over and used in theory, but I have to agree with Cormogo that the policies that exist here should be ones that the users who need and use Wikiversity come up with. There are some policies (such as What Wikiversity is not and others) that should be copied over from Meta. I'll start them as proposed policies to begin with, and certainly the original research guidelines need to be discussed as well. --Robert Horning 16:13, 15 August 2006 (UTC)

- Open discussions for setting policies are a great way to acquire initial ideas; however, polls are not conducive to gathering the best language for a sound policy. George Washington, Thomas Jefferson and John Adams appointed themselves dictators and designers of the US Constitution before they later delegated the refinements (amendments) over to the American people. Wikiversity, in my opinion, should appoint a very small committee to draw up basic policy. This committee should be in total agreement with Wikiversity's Mission Statement. These policies should reflect the intent and character of Wikiversity and not the whims of the world community. Wikiversity can tweak the policies of any great education entity to suit its own intent and personality. A time saver. You will never please everyone.
In summary: be the dictator for now, appoint your own policy-making team, and tweak it as you go to reflect the Mission Statement. Everyone with a rearend has an opinion. This is mine. User: TigreNoir

- Honesty/integrity should be one policy. This is notable because, unlike on Wikipedia, here one may say things which are tentative. Yet one must not intentionally mislead; one must not claim something that he cannot support. As one learns more, she may adjust her statements. But one must always let others know the status. Confucius: 「知之為知之,不知為不知,是知也」 ("To say you know when you know, and to say you do not know when you do not: that is knowledge.") --Hillgentleman 09:10, 23 October 2006 (UTC)

GFDL and copyright

If this has been discussed before, please forgive me. But I was just thinking about the GFDL and copyrights. Suppose a class on Wikiversity requires the students to write papers (original work of their own). Will those papers be posted on Wikiversity, and if so, are they automatically GFDL, as are contributions to Wikipedia? I imagine that many people will not want their hard work so easily given away. Some people might think it's great, or not care, but if down the road we have people writing 20+ page final papers, will they really want them distributed on the internet? I am just brainstorming here. --Fang Aili 15:59, 15 August 2006 (UTC)

- I think that in general, we should use GFDL or cc-by-sa. I personally would prefer the latter because it doesn't come with all the complications of the GFDL and was created for content such as ours. As to papers, at the university I attended, the university took over the rights for all papers submitted by students so they could use them for publications, contests, future courses, etc. I therefore don't think that would really be a problem here either. What we can do, however, is have those pages excluded from search engines so they won't be distributed as easily. -- sebmol ?
16:30, 15 August 2006 (UTC)

"all the complications of the GFDL" <-- please list them
"created for content such as ours" <-- what is different from the GFDL?

We should keep Wikiversity under the GFDL because nearly every project is. Some things (particularly on Wikibooks) will have to be copied to here, so it'd be easier to keep things uniformly licensed. The only exception is Wikinews, and consequently, things can't be copied from other projects to Wikinews because of license incompatibility. Messedrocker 01:19, 16 August 2006 (UTC)

Growing and maintaining the commons is precisely Wikiversity's primary mission. We happen to focus on learning where several other Wikimedia projects focus on high-quality reference works. The GFDL has been proven in the past and is a well-known copyleft mechanism. Further, there is a separate Foundation (GNU Foundation) dedicated to maintaining and protecting it as a viable copyleft mechanism. If a learner has a valuable paper, it certainly should not be published here until their organization is set up to exploit the valuable innovation or has a large lead to market. We could change this in the future perhaps, as Wikiversity grows and has an advanced grid-like capability to mix and match specific teams and sponsors with all applicable necessities such as secrecy and access management. For now I think we would be wise to keep things as simple as possible until we have the size, experience and dedicated resources to manage a complicated mix. In short, wiklars should not publish valuable proprietary information on Wikiversity. The submittal form currently makes this crystal clear for any who read it. There is nothing to stop a local learner from publishing under copyright at another web site and then referring to the material with a link embedded in local FDL'ed overview materials.
Mirwin 07:58, 18 August 2006 (UTC)

Original Research Guidelines

I know this needs to be turned into a whole different page for ongoing discussion, but as Anthere made mention in her announcement on Foundation-l (see for details), the original research provisions of Wikiversity are going to be something the board of trustees will look at very carefully. This is also something that needs to turn into a formal policy discussion at this point. --Robert Horning 16:13, 15 August 2006 (UTC)

- From what I've heard, part of the reason why Wikinews started up is additionally to function as a reliable source for Wikipedia. See, original research is allowed on Wikinews, but the condition is that you must supply evidence and a way to verify it. One of the "students" could do some investigating involving Wikiversity resources, and publish a thesis based on it. The "student" should subsequently, however, provide verifiable proof on the discussion page. Once everything is checked over and peer reviewed, that page could function as a usable source for Wikipedia. Just my thoughts, though. Messedrocker 01:23, 16 August 2006 (UTC)

Voting is evil

This is not the appropriate way to start policies.--69.111.161.61 01:57, 19 August 2006 (UTC)

- Right now, the votes allow us to judge which policies need discussion. --JWSchmidt 01:59, 19 August 2006 (UTC)
- The advantage of this poll is that it's a good way to have a rudimentary set of policies quickly implemented, considering that some of these are a given (like Wikiversity:Be bold). Additionally, this still allows discussion, and if people agree with a point made by a certain user, they may change their votes accordingly. Voting in itself is not evil, but voting and not allowing discussion is. Messedrocker 02:02, 19 August 2006 (UTC)
- This is a very bad start, frankly. It is not sensible to try to set rules a priori, before the problems we may or may not face are even particularly well understood.
Voting has always been the best way to get the worst outcome in a wiki. This is a grand way to kill the project and get it off on the wrong foot from the very beginning. Wikis are about wisdom, not control, about freedom in a spirit of kindness, not about rules. Trust yourselves, trust each other, love each other, and work together in a spirit of helpful togetherness and we might just do something spectacular here. Go down a path of rules-making from day one, and you will kill the very spirit that makes a wiki run.--69.111.161.61 02:05, 19 August 2006 (UTC)

- If we have no standards, then we have nothing to strive for. And if the rules fail us, we invoke WV:IAR and decide that the rules are no good and should be changed. There is discussion, not only on Wikiversity:Policies, but on Wikiversity:Colloquium and the mailing list. The poll, based on what JWSchmidt says, is not a binding decision, but helps show which are a given and which are a bit controversial. Messedrocker 02:16, 19 August 2006 (UTC)

Aghast!!!

This whole page leaves me aghast! To be complete it needs only one further policy: "All students shall seek permission before leaving class to go to the washroom." Those who have proposed these rules seem more fit to run a borstal than a university. To me it is not a matter of whether I think this policy good, and that one bad. It is a completely wrongheaded approach totally unsuited to any kind of free and open education. In some instances it even seems that certain Wikipedians are taking this as an opportunity to have their most disliked policies rendered inapplicable in Wikiversity. It establishes the presumption that by virtue of being there at the beginning a certain cabal has earned the right to dictate policy that may be difficult to alter when it must be applied to real-world situations. If we really believe that we are taking a new and imaginative approach to education, let's act in a manner consistent with that.
Policy should follow practice; it should not dictate it. I am not so naïve as to believe that we will never need policy, but I at least recognize that policies must evolve as and when they are needed. The voting should be completely removed from the page, and it is a great temptation to act unilaterally to do this. I will instead add an additional section petitioning that the voting process that has been undertaken be viewed as fundamentally flawed. In doing so I would ask that all who have already "voted" remove all their votes without regard to the specific policy in question.

Wikipedia enumerates five pillars, of which one, "Wikipedia is an encyclopedia", is by its nature not applicable to the sister projects. This is the pillar of identity, which each project must develop in its own way over the course of its own development. The other four — neutrality, freedom, civility and initiative — could be considered broadly applicable. That does not by itself imply that the elaborative comments at w:Wikipedia:Five pillars are also pertinent to Wikiversity. The elaboration in this project will come over time. "Free Content" does not imply that we are necessarily bound by GFDL and only GFDL, and Neutral Point Of View does not imply that there can be no mutually conflicting views or discourse on the path that leads to an idea's centre of gravity. Eclecticology 17:37, 22 August 2006 (UTC)

- You're not the only one that sees it like this. There's a reason I removed all the discussion from the page and moved it to the respective talk pages. I'm not happy at all at the bureaucratization that was attempted here in the beginning. The policies that were adopted are the ones you were talking about btw. -- sebmol ? 17:42, 22 August 2006 (UTC)
- Let me second the comment above. Don't worry, this project is still in its infancy, and there are other editors who share your concerns.
--HappyCamper 17:45, 22 August 2006 (UTC)

- Actually, I should elaborate: I'm quite worried myself actually, but with so much activity going on right now, one can only do so much, and trust that the Magic of the Wiki will prevail. I hope that at minimum, there will be those special nooks and crannies around Wikiversity which would emulate the "ideal" that many of us are thinking about. --HappyCamper 17:51, 22 August 2006 (UTC)
- The level of activity is indeed a problem because of the opportunities it provides for those people who just love to make rules. It is trite to say that great vigilance is required to keep that process from getting out of hand, but some of us would prefer engaging in more fertile pursuits. Eclecticology 19:22, 22 August 2006 (UTC)
- I agree that in some sense, Wikiversity will create much of its "identity" in future years, but the Wikiversity community did not start from scratch on August 15, 2006. The development of policy at Wikiversity is being guided by the contents of an approved project proposal. Wikiversity is not a university. Voting is useful as a quick way to help judge consensus; sensible people do not view voting as something to get excited about. --JWSchmidt 18:02, 22 August 2006 (UTC)
- That in itself is probably a source of contention that we should recognize. As for what to do about it, well, I'm not sure anyone really knows precisely what that would be. For myself, I don't think this point would be something to worry about necessarily. --HappyCamper 18:13, 22 August 2006 (UTC)
- Voting is evil. Since voting is "not ... something to get excited about" we can avoid using it. While there are times when a poll can be used to gain a sense of opinion, the premature institution of a vote has the effect of excluding any middle ground that might be discovered in a reasonable discussion.
- In my view Wikiversity is a university in the finest and broadest sense of that term.
This has nothing to do with granting degrees, a practice which I would personally oppose. Any prehistoric discussions that took place before August 15 can only be viewed as arriving at a provisional consensus. If someone wants to maintain a policy that was developed there, it still needs to be reviewed, and will stand or fall on its own merits. Eclecticology 19:22, 22 August 2006 (UTC)

- No policies were included in the Wikiversity project proposal. The approved project proposal defines a plan for this project that was accepted by the Board of Trustees. The approved proposal explicitly describes the components of the original proposal that were rejected by the board (see: Wikiversity:Original proposal). The Board of Trustees had good reasons for rejecting some elements of the original proposal. The approved proposal is an agreement between the Wikiversity community and the Foundation. If the Wikiversity community works towards the goals that were proposed and approved, then the community will have the support of the Foundation. All Wikiversity participants need to recognize that efforts to do an end-run around the expressed wishes of the Board can be damaging to the success of the project and also risk termination of Wikiversity. --JWSchmidt 20:55, 22 August 2006 (UTC)
- The points raised in the reasons for the original rejection are limited enough. Nobody is seriously suggesting granting degrees, and there would already have been sufficient development in the idea of e-courses to not be too concerned about that. The rejection in the resolution was only three lines long, so I don't see the point of making it more elaborate than it is. Nobody is even talking about an end-run around foundation policy. Eclecticology 21:17, 22 August 2006 (UTC)

Common voting page

While I agree with sebmol that the voting needed to be taken off of this page, I disagree that the voting should all be done on the discussion pages for each policy.
The main concern is for something to happen like the Wikiversity:Privacy policy that was moved to official status when obviously there wasn't even a vote on the topic. Or to see somebody "railroad" a policy into enforcement before there could be a community consensus on the subject. I simply don't have the hours of the day to devote to Wikiversity 24/7/365 and keep up on all of the discussions and be able to monitor each and every policy page. I will try to add my voice to the discussions when I have an opinion, but I am very concerned about these "stealth policies" being created and then significant custodian actions happening as a result of these policies going to enforced status. I say this because I've seen it happen on other projects.

All I'm suggesting is that we create a common "vote" page that lists all of the policies that are currently up for discussion. Perhaps also to establish some kind of criteria for when policy is going to be going before the community for formal approval, as there have been a number of policies put up for a vote that were clearly in need of major overhaul, with substantial changes being made after the voting started. This is not good for Wikiversity. It is also nice to do a quick glance at the list of policies up for review and see which ones you still need to get involved with, if you haven't expressed a positive or negative opinion on the policy. I suggest this so that we can have even more participation in forming community consensus, as holding votes on individual policy discussion pages seems to serve as a deterrent, especially if the voting gets lost somewhere in the middle of the discussion page or there are multiple voting sections from previous attempts to get it approved. --Robert Horning 23:28, 23 August 2006 (UTC)

Categories of Other?

Anyone who knows which category the "Other" policies are supposed to be in, let me know.
(The preceding unsigned comment was added by TimNelson (talk • contribs) 06:29, 30 August 2006.)

Update

I just updated the lists to reflect which proposed policies have been adopted. Please check to see if I missed anything.--mikeu 18:56, 8 March 2007 (UTC)

- Note: apparently the voting on Cite sources was against and the voting on Verifiability was split. See [1]. Can't see why you did this, Mike, other than reasons of academic enlightenment. --McCormack 13:02, 27 March 2008 (UTC)
- More likely it was a mistake on my part. I did ask for others to proofread to see if I got anything wrong. I actually discovered the cite sources in the wrong cat this morning and started to fix it before seeing your note. It looks like JWS made this change to Verifiability. --mikeu talk 17:03, 27 March 2008 (UTC)

Stop at Secondary Education

Introduction

May I suggest that, for the time being at least, all teaching and class/lecture-based courses constructed and carried out on Wikiversity be aimed no higher than Secondary, and/or be only skills based (e.g., computer programming, etc.).

Reasons (a list of a few; if anyone wishes to add more, please add them)

- Subjects higher than Secondary contain materials which stretch the human imagination, and in some areas, go beyond. Courses can no longer be based on a person's intuition, and formalisation of terms and definitions is a must to carry on in some areas (e.g. Mathematics). --Fattony 4001 20:53, 30 March 2007 (UTC)
- People may start to teach ideas which have not been Peer Reviewed or that might not have been accepted by the intellectual community. --Fattony 4001 20:53, 30 March 2007 (UTC)
- Secondary Education (and those below) are well regulated by Governments and Professional Bodies, and specifications exist that courses and classes can be based on.
--Fattony 4001 20:53, 30 March 2007 (UTC)

- It is not sufficient to know things at Post-Secondary level, but to also understand them, so that if a student were to ask a teacher a question such as "Where did Mathematics originate?", the teacher gives a better answer than "Ancient Greece". The not-so-short answer is here. --Fattony 4001 20:53, 30 March 2007 (UTC)
- A lot of material at Post-Secondary level that is not skills based is very specialised, and very few people would be interested in it. For example, I could run a course on Set Theory without the Axiom of Choice. This is very difficult conceptually, much of the subject matter has not been discussed or verified, and there are only a handful of specialists in the world that could talk to you about it. Another even better example concerns the Axiom of Constructibility. There exist only two people in the world who talk about ordinal Turing machines (one a PhD student at the University of Bristol, and one residing in Germany). This means that out of about six and a half billion people, only two have the knowledge or understanding enabling them to teach subject matter based on this idea, and such ideas would only interest Pure Mathematicians and Philosophers. --Fattony 4001 20:53, 30 March 2007 (UTC)

Objections (Please list objections here)

It seems silly to me to attempt to tell volunteers what level of pedagogical materials meeting the standards of the community they can or cannot engage in exchanging on Wikiversity. While no doubt some volunteers will show up willing to attempt to regulate others, other volunteers have no obligation to pay any extra special attention to them or their list of proscribed materials. Peer review or disclosure requirements or accurate labeling are different from saying no advanced materials. Clearly perpetual motion schemes will eventually attract volunteers demanding fair disclosure and labeling.
user:mirwin

Final Thoughts

I would not be opposed to Discussion Groups or Seminars, nor to creating material for those currently in education above Secondary; however, teaching must be postponed until a clear set of guidelines about teaching Post-Secondary subject matter has been formulated and set in stone. --Fattony 4001 20:52, 30 March 2007 (UTC)

List of official policies

As of June 2008, the list of official policies on this page was mainly put together by User:Mu301 in March 2007, probably on the basis of him looking at the tags on the top of the policies, rather than at any votes. As the tags on the top of policies may have been changed (by other users, at earlier stages) without consensus or voting, some policies crept into the official list through this backdoor route. As it has been argued that lack of objection may be seen as agreement with this, I am registering my objection to this process. Proposed policies remain proposed until properly agreed upon. --McCormack 08:35, 26 June 2008 (UTC)

Policy Discussion

With all of these Proposed Policies, how long do some of them take? Most of them have been around since 2006 and it's now 2008; shouldn't they have already been approved by now, or are they still being proposed and altered? Dark Mage 18:32, 27 August 2008 (UTC)

- I agree. I would like to see policies developed in learning projects before they are approved. They should be put through a period of "take this out, add this" and then put to a vote. Once approved, there should be a measure of their effectiveness. All approved policies should be reviewed as part of the same learning project. The learning project should strip the policies to the bare bones of what Wikimedia wants, analyse them and then put their future to a vote. We also need a procedure for every policy, so we have no repeat of the nonsense that has been occurring here the last month, and a time-scale for voting and repeated analysis. That's my complete view on things.
Two years is a long time. Donek (talk) - Go raibh mile maith agaibh 22:18, 27 August 2008 (UTC)

- I sympathise with both of you. However, the problem with the original policy discussion in August 2006 (and why it was boycotted by some) was that the community was too small to produce meaningful decisions. Today the members of the active community are mostly different, but not any larger. Opinions among current editors are also more varied than you might realise. I'm not sure a large policy discussion will get anywhere at the current time, unfortunately. --McCormack 04:41, 28 August 2008 (UTC)
http://en.wikiversity.org/wiki/Wikiversity_talk:Policies
COMMERCIAL CONSTRUCTION & RENOVATION
May/June 2019 • Vol. 18, No. 3

On the cover: Paradise found. How Crystal Springs Resort remains Jersey's favorite place to hang. Chris Mulvihill, CMO, Crystal Springs Resort.
Exclusive inside: 5 successful ways to achieve sustainable construction • How blockchain technology can impact your business • When déjà vu strikes with your commercial GC • See our annual Lighting & GC lists.

Advertisement (UHC Construction Services): Committed to offering premium services while helping our clients meet their supplier diversity initiatives. UHC Construction Services is proud to announce National Certification by the Women's Business Enterprise National Council (WBENC). Retail | Restaurant | Hospitality | Medical | Financial. uhccorp.com • 866-931-0118 • info@uhccorp.com. Visit us at the 10th Annual 2020 CCR Summit and tell us about one of our ads for a special gift. Photo: ©Holly Baumann Photography.

FEATURES
26 Paradise found: How Crystal Springs Resort remains Jersey's favorite place to hang
166 Great Chemistry: R&D lab unites sustainable products and lean construction
78 Light of day: Intelligence in emergency lighting improves building safety
170 Eliminating the middleman: How blockchain technology can impact your business
88 Storytelling in adaptive reuse: Inside KETV-7's Burlington Station
174 The Green Wave: 5 successful ways to achieve sustainable construction
148 Don't I know you? When déjà vu strikes with your commercial general contractor

SPECIAL COVERAGE
Industry Events
18 CCRP – Atlanta, GA
22 CCRP – Minneapolis, MN

INDUSTRY SEGMENTS
36 General Contracting
64 Lighting

DEPARTMENTS
6 Editor's Note
12 Industry News
188 Commercial Construction & Renovation Data
190 Ad Index
192 Publisher's Note

SPECIAL SECTION
Commercial Kitchens
129 Fast. Affordable. Healthy.
The Just Salad way continues to be a leader in fresh food options
140 Waterfront Evolution: How the House of Que is making New Jersey love Texas-style barbeque

Healthcare
152 Gaining a foothold: Flooring renovation reinforces Chicago area VA Hospital's outreach to its patients

Multi-Housing
158 Cold as ice: Minnesota townhome complex deals blow to winter conditions

Federal Construction
162 With honor: Army Corps shares love of preservation for Ulysses Grant's family

Craft Brand and Marketing
180 Keeping bees: Inside the story of the Catskill Provisions brand

EDITOR'S NOTE
by Michael J. Pallerino

Despite its modest foray into physical stores, online furniture retailer Wayfair still generates nearly all of its $7 billion or so in annual revenue from internet purchases. The Boston-based brand had researched the brick and mortar approach in the past, including a series of pop-up stores in the New England and New Jersey areas during last year's holiday season. After opening an outlet store in Kentucky recently, it announced plans to open four more pop-up shops later this summer.

So why a retail store in the Natick Mall in Boston? Executives say that the 3,700-square-foot, "front of the house" store will include service and home-design experts who offer consultations to customers. Shoppers also will be able to buy an assortment of home decor products and place orders for home deliveries.

Wayfair is not alone in the pursuit of brick and mortar locations. Online retailers like Allbirds, Amazon, Bonobos, Casper, Glossier and Warby Parker have all jumped into the fray. For brands that started out in the e-commerce game, brick-and-mortar retail is just too promising to pass up.
It is not about putting other retailers out of business, forcing acquisitions or raising visibility as much as it is about having the ability to turn online data into insight, thereby creating a seamless and convenient shopping experience across all channels. And that is a pretty valuable asset to have in your corner these days. Being able to collect and analyze data enables a brand to refine its strategy, including testing new markets with pop-up stores or seeing just what its customers really want in any given area.

Perhaps no brand is ready to take advantage of this strategy better than Amazon. With four Amazon Go stores in the United States today, rumor has it that the brand plans to open thousands of new locations over the next few years. The Amazon strategy, which is revolutionizing the retail scene with strategies like its "Just Walk Out" approach, also is making a hard play for the grocery market. Retailers like Walmart and Costco are now having to deal with Amazon's tech approach to buying groceries. Have mercy.

Is any of this fair? With new stores popping up and more concrete strategies being rolled out all of the time, it is great for business. In a time when technology is forcing us to think creatively every day, it was just a matter of time before the online brands got physical, so to speak. Just how good it is for your business is worth watching.

Advertisement (DAVACO): Integrated Solutions That Deliver Quality and Speed To Market. Program Management > Supply Chain > Installation > ClearThread® Reporting. Your resource for multi-unit initiatives to transform spaces.
• Fixture installation and merchandising program in 230 retail stores in U.S. and Canada. Completed in 2 weeks.
• Technology deployment in 2,500 restaurants, with ongoing installation in 500+ locations monthly.
• Sustainable lighting initiative involving 55 hotels and 6,275 guestrooms in 11 weeks. 7,000 surveys completed and 80,000+ photos captured.
Let our team of 1,600 professionals help your brand maximize the customer experience. We provide a single point of contact for all your remodel, rollout and retrofit needs, including:
• Program management
• Site survey and data collection
• Fixture, graphic and equipment installations
• Merchandising
• Ongoing support and maintenance
• Digital technologies
• Supply chain
• Special initiatives
877.7DAVACO • ©2019 DAVACO

RESTAURANTS
GREGG LOLLIS, Sr. Director, Design Development, Chick-fil-A
BOB WITKEN, Director of Construction & Development, Uncle Julio's Corp.
DAVID SHOTWELL, Construction Manager, Flynn Restaurant Group
ISYOL E. CABRERA, Director, Design and Construction, Carvel
DEMETRIA PETERSON, Construction Manager II, Checkers & Rally's Drive In Restaurants
DAVID THOMPSON, Director of Construction, Which Wich® Superior Sandwiches

HOSPITALITY
SAMUEL D. BUCKINGHAM, RS CMCA AMS, President & Co-Founder, Evergreen Financial Partners LLC
JEFF ROARK, Principal/Partner, Little
RICK TAKACH, President and CEO, Vesta Hospitality
TOMMY LINSTROTH, Principal, Trident Sustainability Group
LU SACHARSKI, Vice President of Operations and Project Management, Interserv Hospitality
JOHN LAPINS, VP of Design & Construction, Auro Hotels
JOE THOMAS, Vice President Engineering, Loews Hotels
NUNZIO DESANTIS, Executive VP & Director of Hospitality, HKS
STEVE JONES, International Director, JLL
ROBERT RAUCH, CEO, RAR Hospitality; Faculty Assoc., Arizona State University
PUNIT R. SHAH, President, Liberty Group of Companies
JOHN COOPER, Senior Vice President Development, RB Hotel Development
GARY RALL, Vice President of Design and Development, Holiday Inn Club Vacations
JIM SHEUCHENKO, President, Property Management Advisors LLC

GENERAL CONTRACTOR
MATT SCHIMENTI, President, Schimenti Construction

DEVELOPMENT/PROJECT MANAGEMENT
KAY BARRETT, NCIDQ, CDP, Senior Vice President, Cushman & Wakefield
MIKE KRAUS, Principal, Kraus-Manning

ARCHITECTS/ENGINEERS
STEVEN R. OLSON, AIA, President, CESO, Inc.
CHRIS VARNEY, Principal, Executive Vice President, EMG

ADA CONSULTANT
GINA NODA, President, Connect Source Consulting Group, LLC
BRAD GASKINS, Principal, The McIntosh Group

ACADEMIA
DR. MARK LEE LEVINE, Professor, Burns School/Daniels College, University of Denver

Advertisement: "Your Vision, Our Expertise" • Construction Manager • General Contractor. Thirty-nine years of professional and quality construction management services. Specializing in: Tenant Fit-Outs, Ground Up, Retail, Big Box, Remodels, Renovation, Hospitality, Fitness. Licensed Contractor in all 50 States. 101 East Town Street, Suite 401, Columbus, OH 43215 • 614.235.0057

INDUSTRY NEWS
Around the Industry

Hospitality
Taco Bell: Taco Bell has unveiled a limited-time hotel in Palm Springs named "The Bell: A Taco Bell Hotel and Resort," which the brand describes as a "Tacoasis in the desert" that will provide a brand-themed experience for visitors.
Nobu Hotel: The Nobu Hotel London Portman Square will open in 2020, replacing the Radisson Blu in London. The Nobu hotel brand arrived in the city two years ago with the Nobu Hotel London Shoreditch.
Sister Hotels: Sister projects Moxy Louisville Downtown and Hotel Distil in Kentucky are on track to open in the fall. The Moxy property will have 110 guest rooms, while Hotel Distil will have 205 keys.
Country Inn & Suites: Radisson Hotel Group promises "accelerated growth" for the chain's Country Inn & Suites brand, with special attention paid to California and Texas. The upper-midscale, select-service brand could triple or quadruple its presence in the United States alone.
AC Hotel NYC: Marriott's modular-designed AC Hotel going up in New York City will be the tallest prefabricated hotel in the world.
Dream Hotel Group: Dream Hotel Group plans to add 20 properties to its portfolio in the next three years. The company has 19 hotels under four brands.
TWA Hotel
The TWA Hotel at New York's John F. Kennedy International Airport will launch an infinity pool with runway views. The pool will be open year-round and heated up to 100 degrees Fahrenheit in winter.

Marriott's Autograph Collection
Marriott plans to add a dozen Autograph Collection properties in Europe this year. Schloss Lieser in the Moselle region of Germany, the Shelbourne in Dublin and the Academia of Athens are among the new properties coming in 2019.

Rosewood Hotels
Rosewood Hotels has 21 properties in its development pipeline.

21c Museum Hotels/MGallery
Paris-based Accor has added 21c Museum Hotels to the MGallery Hotel Collection, simultaneously bringing the MGallery brand into North America. The 21c Museum Hotels/MGallery brand will combine art with boutique hotels and chef-driven restaurants in 26 countries.

Hotel Hendricks
Luxury Hotel Hendricks will open on Fifth Avenue and West 38th Street in New York City. The event space will feature dramatic views of the Empire State Building just four blocks to the south.

InterContinental
InterContinental is planning to create a second design lab in Atlanta.

Restaurants

Restaurant Brands International
Restaurant Brands International (RBI), the parent of Burger King, Tim Hortons and Popeyes Louisiana Kitchen, plans to grow to more than 40,000 global locations in the next decade.

Sprouts Farmers Market
Sprouts Farmers Market is entering new markets this year, with plans to open more than half of its future stores in new territory. The grocer has also introduced prototype stores centered around customer experience improvements in the meat, deli, seafood and bakery departments.

Meijer
Meijer is expanding its presence in Ohio with three new supercenters. The new additions mark the chain's first move into Northeast Ohio.

Domino's Pizza
Domino's Pizza aims to add 10,000 new restaurants around the world, bringing its global total to 25,000 units.
7-Eleven
7-Eleven's new Dallas-area convenience store is the first of six test stores the company plans to open around the United States.

Wegmans
Wegmans is entering New York City for the first time with a 74,000-square-foot store in the Brooklyn Navy Yard. The supermarket is expected to open in the fall and will be one of three new locations to open this year.

Whole Foods
Whole Foods Market is testing a bodega-style store format in New York City's Chelsea neighborhood. The small-format store, called Whole Foods Market Daily Shop, emphasizes grab-and-go shopping, with a focus on local favorites such as bagels, breads and a coffee bar.

Bloomin' Brands
Bloomin' Brands will open the first U.S. location of its Aussie Grill by Outback fast-casual sandwich concept in Tampa, Florida. The concept debuted earlier this year outside the United States as part of the Outback Steakhouse parent's international growth plan.

Godiva
Chocolate brand Godiva opened the first of 2,000 planned cafes in New York City. Unlike the brand's 800 existing stores, the cafes will feature a full menu of sandwiches, coffees and treats, including a croissant-waffle hybrid called the croiffle.

Lidl
Lidl has opened Lidl Express, a small-format store in Arlington, Virginia. The store, about 1,000 square feet, offers many grab-and-go and fresh items.

Taylor Gourmet
The new owner of Taylor Gourmet plans to reopen at least five of the restaurants that closed abruptly when the Washington, D.C.-area chain filed for Chapter 7 bankruptcy last fall.

Restaurants (continued)

Cava Group
Cava Group has plans to grow both its 80-unit fast-casual Cava Grill concept and Zoe's Kitchen. Cava Grill also aims to expand distribution of its packaged dips and spreads to Whole Foods Market stores around the United States this year.

Cub Foods
Cub Foods is opening its first urban concept store in Minneapolis.
The 46,000-square-foot location, the company's smallest to date, will put a premium on speed with grab-and-go options like a popcorn shop, burrito bar, juice bar and sushi bar.

The Little Beet
The Little Beet's recent opening of a restaurant in Miami marks the chain's first foray outside its core markets of New York City and Washington, D.C. The chain is studying new markets and tailoring the concept to those consumers as it plans to open 15 more locations by the end of next year.

Capriotti's Sandwich Shop
Capriotti's Sandwich Shop is in growth mode with plans to grow to 500 franchises by 2025. The chain has been revamping stores as it grows, trimming the footprint and moving refrigerators and ovens to the front to highlight its signature slow-roasted turkey.

Starbucks
Starbucks has grown to 30,000 stores worldwide with the opening of its newest Reserve store in Shenzhen, China. The Seattle-based coffee giant opened its first China location 20 years ago and now has 3,700 units in the country.

Rise Biscuits
Fast-casual chain Rise Biscuits Donuts has changed its name to Rise Southern Biscuits & Righteous Chicken. The change signals a shift away from sweet doughnuts to a more savory menu as the 15-unit chain ramps up franchise growth plans.

Retail

Bath & Body Works
Bath & Body Works will renovate 175 stores, open 46 new ones and shutter 24 locations this year.

IKEA
IKEA opened its first U.S. IKEA Planning Studio in New York City, a smaller-format urban concept where shoppers will be able to browse products and place orders for home delivery. The Manhattan location will give shoppers the option of making consultation appointments with designers.

Nordstrom
Nordstrom will open two outpost locations in New York City this fall, in addition to its planned seven-story full-line department store. The merchandise-free small-format stores in the West Village and Upper East Side will be pickup and return spots for online purchases, and will offer styling and tailoring services.
Crate & Barrel
Crate & Barrel will open a CB2 store near a namesake location in the Knox Street shopping district of Dallas. The home goods retailer launched the CB2 format in 1999 as a less-expensive alternative for young city dwellers.

Wayfair
Online retailer Wayfair plans to open its first full-service store in Natick, Massachusetts this fall. It also plans four pop-up shops this summer.

Hy-Vee/HealthMarket
Hy-Vee will open its second HealthMarket specialty store in Sun Prairie, Wisconsin. The HealthMarket concept includes an abundance of fresh items plus pharmacy, health clinic and fitness studio services.

Sneakersnstuff
Sneakersnstuff has launched a 3,500-square-foot store in Venice, California, its second location in the United States.

Red Wing
Red Wing has opened its first Manhattan store, with plans to grow by 1,000 new locations through 2024. The brand, known for its work boots, operates 700 international stores.

Nine West
Nine West has emerged from Chapter 11 bankruptcy reorganization after nearly a year with a new name and a plan to firm up partnerships for future growth. The newly named Premier Brands Group Holdings includes brands such as Anne Klein and Gloria Vanderbilt.

J. Crew
J. Crew Group is expanding its casual sibling brand, Madewell, with four new stores that have already opened this year, including a location at Hudson Yards in New York City. Madewell is scheduled to open six more locations by next February.

Barneys
Barneys New York will open its only New Jersey location at the 3-million-square-foot American Dream mall under construction at the Meadowlands. The two-story New Jersey flagship will include a Freds at Barneys New York restaurant.

Dollar General
Dollar General will open 975 new locations and remodel 1,000 existing stores this year, with self-checkouts and improvements to sections, including health and beauty. The 15,300-store retailer also will expand its fresh food and home goods offerings.
Five Below
Eclectic low-priced retailer Five Below will open up to 150 new locations and end the year with about 900 U.S. stores.

Walmart
Walmart will invest $11 billion in its stores this year, with plans to remodel 500 locations for expanding digital commerce and improving the supply chain. Remodels will include a long list of changes, including adding self-checkout kiosks, re-merchandising grocery and electronics sections and adding consultation rooms in the pharmacy departments.

Toys R Us
Tru Kids Brands, the new owner of Toys R Us, will open a few 10,000-square-foot stores in the United States in time for the holidays this year. The retailer expects 70 more international locations to open in 2019.

Sears
Sears will debut its first three small-format Sears Home & Life stores in Alaska, Kansas and Louisiana. The retailer will offer appliances, mattresses, tools and home services.

INDUSTRY NEWS

Return of the cafeteria
Who doesn't love an old-fashioned diner? If today's consumer continues to have a say, it looks like the cafeteria-style restaurant is making a bit of a comeback. According to a report by Datassential, 21 percent of consumers love them, 34 percent like them and 33 percent are indifferent or neutral. And when it comes to walking through the front door, the survey found that 72 percent want cafeteria concepts to have a salad bar, 67 percent want healthy foods, 56 percent want more global flavors and 43 percent want to see plant-focused foods.

They said it...
"We believe the main thing we need to do is invest in people—and better people."
— H.E.
Butt Grocery President Craig Boyan, on why technology should be used to create jobs and improve the lives of both customers and employees

"It is equally important for us that our guests have amazing dining experiences as it is for us to engage with local artists, crafters and brewers, too."
— Joe Jackson, VP of F&B at Marcus Hotels & Resorts, on how the brand is prioritizing the food and beverage experience as part of its overall stay at each of its properties

The numbers game

25,312: The number of fast-casual concepts operating in the United States as of last fall, according to NPD Group. According to the National Restaurant Association, quickservice chains still command about three-quarters of total restaurant traffic, and sales at quickservice and fast casuals combined are on track to grow 3.2 percent to $246.7 billion.

30: The percentage increase of grocery stores in 2018, making it one of the strongest sectors in retail, according to a report by JLL. The data shows that there were twice as many store openings as closings last year, the study found.

13,573: The number of projects in various stages of development and construction in the global hotel construction pipeline, according to Lodging Econometrics. The United States accounted for more than two-fifths of the global pipeline with 5,530 projects, the study found.

In Memoriam: Arthur "Art/Artie" Jay Benson

Room(s) to grow
U.S. hotel construction pipeline numbers up in April

Powered by the upper upscale segment, hotel rooms in the development phase in the United States increased 9.9 percent year-over-year in April (203,890 rooms), according to STR. The report showed that a majority of the construction activity continues to be focused in the upper midscale and upscale segments, while upper upscale projects represented the largest percentage increase in activity year-over-year.
Born on Sept. 3, 1939 in the Bronx, New York, there was not much that Arthur Jay Benson did not accomplish. "Art" or "Artie," as his friends knew him, passed away recently in his home in Burnsville, North Carolina at age 79.

A Captain in the United States Air Force, Art was stationed all over the world, including stints in Cape Cod, Sweetwater, Texas, Thule, Greenland and Keflavik, Iceland. He was a graduate of Martin Van Buren High School and Adelphi University, where he earned his bachelor's degree. In 1966, he partnered with his father and went on to become president and CEO of SureAir Ltd. in Stamford, Connecticut, where he pioneered the industry of national heating and air conditioning service management, which became the model of today's national facility management. He retired in 2001 to spend his days with his wife, Sandra.

Art lived all over the country, including taking up residence in Queens and Pound Ridge, New York, Port Aransas, Texas, Bonita Springs, Florida, and later in Burnsville. A huge baseball fan (Yankees and Mets) and golfer, Art was a beautiful singer and musician, amazing photographer, crossword puzzle enthusiast, voracious reader and history buff, and writer. In 2011, he published a book chronicling his late wife Sandy's two-year battle with cancer.

He is survived by his children, Matthew, Alan (Meredith), Mark (Hallie) and Leah (Matthew), grandchildren Justin, Sophia, Jack, Luke, Ben, Grace and Nicholas, and cousins Audrey (Ken) Michaels and Daryle (Dick) Prager. The Commercial Construction & Renovation family would like to extend its sympathy to Art's family. He will truly be missed by all of us.

The areas include: upper midscale, 67,495 rooms (+4.5 percent); upscale, 61,347 rooms (+4.9 percent); and upper upscale, 24,543 rooms (+12.2 percent). Here's a look at the five markets that reported more than 6,000 rooms under construction:
New York: 13,976 rooms (11.3 percent)
Las Vegas: 8,435 rooms (5.1 percent)
Orlando: 7,310 rooms (5.7 percent)
Dallas: 6,438 rooms (7.1 percent)
Los Angeles/Long Beach: 6,113 rooms (5.8 percent)

The Green Wave
New Buildings Institute releases 2019 zero energy buildings count report

580. It's an important number to remember. According to the "2019 Getting to Zero Project List" by the New Buildings Institute (NBI), that's how many buildings use only as much energy as is produced through clean, renewable resources over the course of a year. If you are keeping score, that is a 10-fold increase since NBI started tracking buildings in 2012. Emerging buildings are those that have a stated goal of achieving zero energy, but do not yet have 12 months of energy use and production data to share or have not yet hit the zero energy performance target. Just how important will getting to zero be? Data by Grand View Research project $78.8 billion of growth in the global net-zero-energy building market by 2025.

Bullseye
CCRP hits the axe range in Atlanta for networking gig

Is that an axe? We know what you're thinking, but don't knock the exercise until you give it a try. That was the line of thinking for attendees of the Commercial Construction & Renovation People (CCRP) networking event at Bury the Hatchet in Atlanta, a bar made for people who like to, you guessed it, throw axes. The backdrop once again was the perfect spot for a night of networking and industry conversation. If you're looking for different options to expand your list of contacts, reach out to Kristen Corson at 770-990-7702 or via email at kristenc@ccr-people.com.

See you in Minneapolis, MN: May 9th, 2019

REGISTERED COMPANIES: Aaron's Inc.
CMC, Aviation Institute of Maintenance, Coast 2 Coast, The Beam Team, Boyd Gaming, Equipment Management Group, Coastal Mississippi, Continental Restaurant, Federal Heath, Feed Restaurant, Interstate Signcrafters, ProCoat Products, Jim N' Nicks Bar-B-Q, Quality Equipment Management, JLL, Jones Sign, Storefloors, GPD Group, L2M, Floor & Décor, Focus Brands/Carvel, Dunham's Sports, Celestial Meetings, Elro Signs, HTC Flooring, Lakeview Construction, Verizon Wireless, Chain Store Maintenance, Entouch, ICON, Mitsubishi/Jet Towels, Window Film Depot

Thank you to our CCRP Atlanta, GA sponsors:

The Beam Team: Tim Hill, Vice President, 1350 Bluegrass Lakes Pkwy, Alpharetta, GA 30004, (630) 816-0631, timhill@thebeamteam.com

JLL: Ken Demske, Senior Vice President, Retail Multi-Site Project Management, 3344 Peachtree Road NE, Atlanta, GA 30326, (404) 964-8901, ken.demske@am.jll.com

Quality Equipment Management: Laurie Pysher, Customer Relations Supervisor, 3630 North Parkway, Cumming, GA 30040, (678) 867-6575, laurie.pysher@qemanagement.com

Jones Sign Co.: Ron Hunter, Vice President Sales, 503 South 301, Tampa, FL 33169, (727) 809-1251, rhunter@jonssign.com

1. Julie Starzynski, Floor & Décor; Julia Versteegh, Storefloors; Patricia Parajon, Equipment Management Group
2. Ashleigh Peppers, JLL; John Stallman, Lakeview Construction
3. Steve Winston, The Beam Team; Laurie Pysher, QEM
4. Brody Corson, Aviation Institute of Maintenance; David Corson, CCR
5. Chris Caldwell, Consultant; Dan Eberhardt, The Beam Team

INDUSTRY NEWS
INDUSTRY EVENTS • CCRP

1. Kevin Fleming, The Beam Team; Lori O'Brien, ICON
2.
Ari Covacevich, Coastal Mississippi; Marlo Yarbrough, Boyd Gaming
3. Lisa Schwartz, ProCoat Products; Tim West, Coast 2 Coast; Jeff Mahler, L2M
4. John Palmer, Dunham Sports; Chris Heba, Feed Restaurant; Joe Talley, Continental Restaurant
5. Jim Rieckel, Entouch; John Catanese & Laura Riendneau with Chain Store Maintenance; Jace Barrera, The Beam Team
6. ATL Axe Throwing Champions: Chris Caldwell, Consultant; Julia Versteegh, Storefloors
7. Kevin Kilgore, Jim 'N Nick's BBQ; Marilyn Brennan, Interstate Signcrafters; George Farrelley, Mens Wearhouse
8. Brody Corson, Aviation Institute of Maintenance; Larry Schwartz, HTC Flooring
9. Scott Kerman, Jet Towel/Mitsubishi; Frank Rhodes, Elro Signs; Ian Bannister, Window Film Depot
10. Ron Hunter, Jones Sign; Jimmy Johnson & David Physer with QEM
11. Nick Trimmer, Equipment Management Group; Matt Smith, Federal Heath

INDUSTRY NEWS
INDUSTRY EVENTS • CCRP

BBQ, football and barbarians
CCRP's Minneapolis networking event goes all medieval

Who else can combine BBQ, football and a touch of barbarians? Did you say Commercial Construction & Renovation People (CCRP)? The networking event, sponsored by RCA, took a little Nordic turn with a tour of U.S. Bank Stadium, home of the Minnesota Vikings, and then some of the Twin Cities' best BBQ at Erik The Red Nordic BBQ & Barbarian Bar. If you're going to network, why not go for something different. To add this type of event to your to-do list, reach out to Kristen Corson at 770-990-7702 or via email at kristenc@ccr-people.com.
See you in Philadelphia, PA: June 13th, 2019

REGISTERED COMPANIES: Abra Auto Body & Glass, ACME Enterprises Inc, Aluma Spec, ANP Lighting, ArcVision, Ardex Americas, Assa Abloy, Bishop Fixtures, Bosco Development, CBRE, Chain Store Maintenance, Command Center, Commercial Contractors Inc, Davis & Associates, Diehl & Partners LLC, Elder-Jones Inc, EMG Corp, ESI - Engineered Structures, ICON, IDQ, JLL, Jones Sign, L&S Lighting Corp, L2M Architects, Lakeview Construction Inc., Life Time, National Contractors Inc., nParellel, Permit.com, Retail Construction Services, Serigraphics, Sterling Systems, Target Corp, The Beam Team, UHC Corp, UNFI Store Design Services, Verizon Wireless, Wallace Engineers

Thank you to our CCRP Minneapolis, MN sponsors:

Retail Contractors Association: Carol Montoya, CAE, Executive Director, 2800 Eisenhower Ave, Suite 210, Alexandria, VA 22314, (703) 683-5637, Fax: (703) 683-0018, carol@retailcontractors.org

Serigraphics: Adam Halverson, President, 2401 Nevada Avenue North, Minneapolis, MN 55427, (763) 270-3311, adamh@serigraphicssigns.com

1. Mark Palmquist, CBRE; Leslie Burton, UHC Corp
2. Zach Hanson and Maurissa McNellis with National Contractors Inc
3. Jeff Mahler, L2M; Michael Papec, Engineered Structures Inc
4. Anthony Johnson, Davis & Associates; Jerry Fisher, ANP Lighting
5. David Corson, CCR; Jan MacKenzie, ASSA ABLOY; MK Nelson, Bishop Fixtures
6. Dave deNeui, Abra Auto Body & Glass; Tim Hill, The Beam Team; Jay Heid, Abra Auto Body & Glass
7.
Bob Schmidt, Bosco Development; Vaun Podlogar, State Permits; John Stallman, Lakeview Construction

INDUSTRY NEWS
INDUSTRY EVENTS • CCRP

1. Steve Hirtz, Mike Waich & Jon Jasper with Jones Sign
2. Joe McMahon, Retail Construction Services; Ross Stecklein, Retail Construction Services; David Fritz, National Contractors; Derrick Diedrick, National Contractors Inc
3. Justin Parish, Engineered Structures Inc; Kelly O'Brien, Serigraphics; Janine Buettner, ArcVision
4. Bill Marcato, Wallace Engineering; Jeff Seba, EMG Corp; Win Rice, Wallace Engineering
5. Adam Halverson, Serigraphics; Mike Klein, Life Time
6. Ken Sharkey, Commercial Contractors Inc; Sharon & Steve Bachman with Retail Construction Services; Sandy Sharkey with Commercial Contractors Inc
7. Seth Wellnitz, Command Center; Brian Perkkio, Elder-Jones; Dwight Enget, Command Center; Justin Elder, Elder-Jones
Paradise found
How Crystal Springs Resort remains Jersey's favorite place to hang
By Eric Balinski

"This is New Jersey?!?!" This is something that Chris Mulvihill never tires of hearing from astonished guests the first time they experience Crystal. Commercial Construction & Renovation sat down with Chris Mulvihill, CMO at Crystal Springs Resort, to get his take on where New Jersey's favorite paradise is heading and what places it at the top of so many must-visit lists.

Give us a snapshot of the Crystal Springs brand.
Crystal Springs Resort is many things to many people. We were originally known as a golf resort and are actually named after one of our six golf courses (incidentally one of the toughest in the country: No. 36 in the United States, according to Golf Digest). With the addition of two hotels, two spas, three pool complexes, a sports club and wellness center, mountaintop lake and nature center, 10 restaurants and a world-class culinary program, it is now more accurate to refer to Crystal Springs Resort as the Northeast's largest golf, spa and culinary resort.

Tell us what makes the Crystal Springs brand so unique.
Wow, where do I start? We are best in class in so many areas of operation: New Jersey's No. 1 rated public golf course, one of the state's only AAA 4 Diamond hotels, New Jersey's most highly decorated restaurant, a Wine Enthusiast Hall of Fame wine cellar, home to the NJ Wine & Food Festival, and you know what's better than a luxury spa? Two of them. But all this being said, for my answer, I'll go with location, location, location. I often tell people, maybe there are 100 spectacular resorts in the world, and one day, if you have time, you should go and see them all. But guess what?
There is only one resort in the world that you can drive to within an hour of the George Washington Bridge from New York City, and that's us. Of course you can fly to some other great places, but in the time it would take you to get through airport security, you could be up at Crystal Springs with a cold drink in your hand by the pool. So for the 20 million or so people who live in the greater New York metro market who I care about, that is what really makes us unique.

What are today's guests looking for?
Authentic experiences with a connection to nature. With more people opting to live in the city and spending more time tied to their phones and tablets, there is a growing demand for meaningful recreational experiences that allow guests to unplug and reconnect with nature. Just offering nature-themed experiences is not adequate. Today's consumers are somewhat jaded and wary of marketers with superficial "back-to-nature" offerings comparable to producers of so-called "free range" eggs produced in poultry houses with 10,000 hens that all share a single door to a postage-stamp yard. Fortunately, our resort is surrounded by thousands of acres of woodlands and farms with stunning mountain and valley views. You cannot fake that or make that up. In addition to being only a few miles from the Appalachian Trail, we have our own nature trails on property as well as a pristine mountaintop lake and nature center. We also work with several local farms and foragers to source fresh local ingredients for our menus, and we work with multiple local partners to arrange farm tours and immersive local agriculture experiences.

What type of guests are you targeting?
We have a very diverse set of offerings and audiences, which helps us maximize our occupancy year-round. Roughly half of our business is "Groups," comprised of wedding parties, corporate offsite meetings, golf outings and family reunions.
The other half is "Leisure" business, which is comprised of family vacations, couples retreats, girlfriend getaways, golfer vacations and ski trips. Our targeting varies quite a bit depending upon the time of year and what our occupancy looks like in the upcoming months. The great thing about doing my job in 2019 is that digital advertising now allows us to target these audiences on a pinpointed basis and to present ourselves differently to each audience, depending upon which aspects of the resort are the most important to them. For example, instead of running an ad for a "ski and stay" promotion in the newspaper or on the radio, where we will spend money to promote to non-skiers, we can advertise online only to skiers, or for that matter, skiers who have shown a history of taking time off mid-week if that happens to be the area we need to promote. Plus, when those people respond to our ads, we can bring them through a section of the website highlighting our outdoor heated snow pools and après ski drink specials.

What are the demands these customers place on the company?
In my experience, it is critical that marketing and operations work very closely to make sure the picture painted by marketing translates into the experience that is delivered to the guest.
Word spreads fast on social media and via online reviews, so doing right by your guests pays off in referrals and repeat visits. It is not enough to just provide a clean room and timely service. Guests want the experience that was promised, so the amenities and activities delivered need to support that. It's the little extra things that make all the difference.

How does your geographic location figure into your marketing and operations?
The worst and best thing about our location is that we are in New Jersey. I will not be politically correct here. Nobody from Chicago or California wants to go to New Jersey for vacation. When they hear New Jersey, they picture Tony Soprano, oil refineries and the Jersey Turnpike. As a marketer, I decided a long time ago that I will not die charging up the mountain trying to convince the world that New Jersey (or at least our corner of it) is actually a very beautiful place. The other side of that coin, the shiny side, is that we have the country's largest and most affluent population center in our backyard, so what do I care what people in California think? This actually gives us a very powerful elevator pitch to millions of people: "Experience (insert X here) at NYC's Closest Resort." Whether X is goat yoga or wine cellar tours, the message of proximity and convenience is always the same.

What do you see as the difference between being an independent like Crystal Springs and a company operating under the flag of a major brand?
On one hand, we do not have the resources of a Marriott or Hyatt, but we certainly have more scale than most independents. And we can be much more innovative and nimbler than the typical lumbering large operator. Since all of our assets are in one market, we can really cater to the nuances of the market, whereas the management at a chain hotel may have little latitude in adjusting the chain's offerings to their local audience.
This is really a competitive advantage for us when it comes to offering authentic experiences. We partner with local farms, orchards, vineyards, foragers and other producers to provide on-property and off-property programming in ways I just cannot see a chain hotel ever delivering.

What trends are you seeing today?
Wellness retreats, interactive nature experiences, interactive agricultural experiences and interactive culinary experiences.

What is the secret to creating a "must visit" resort in today's competitive landscape?
I'm going to have to go back to my earlier answer: location. Today's consumers crave authentic experiences in a beautiful setting, but they also do not have a lot of time. So our secret is that nobody has an assembly of amenities anywhere close to what we provide in such a bucolic setting. And we're only one hour from the George Washington Bridge. New Yorkers will still take their week in Nantucket or Sun Valley, but when they want to get away for two days, we are their go-to spot.

How does the upcoming design/renovation project cater to what your guests are looking for?
We are about to renovate the rooms in Grand Cascades Lodge, the larger of our two hotels. The hotel was built 10 years ago and is one of the few AAA 4 Diamond properties in New Jersey. We need to keep the rooms in alignment with the high expectations we are setting for our guests elsewhere throughout the property. A big factor driving visits to Crystal Springs is the beautiful setting in which we reside. Our guests come to escape the city and to reconnect with nature. We want to bring that experience and feeling to them in the rooms. What you will see in the renovation is a fresh and vibrant look with a lot of stone and wood that connects the rooms to the surrounding outdoor mountain and valley views.
For this project, we have retained INC Design, which just did a fantastic job for us on the renovation of our flagship restaurant, Restaurant Latour.

What's the biggest issue today related to the resort business?
One of the biggest challenges that a multi-faceted resort like ours faces is the need to be different things to different people, based on the wide array of programming we are offering at any particular time to fill in gaps and maximize our occupancy. This is not a new issue, but rather a timeless one in the resort business. What is new is the ever-increasing ability of digital marketing to address this challenge. In the past, if you ran a print or radio ad, you could target your audiences and segment your advertising based on the publication or radio station that you chose, but everyone in your audience would see or hear the same ad. What if there is no strong local golf publication or radio station dedicated to golfers? Today, not only can we pinpoint our advertising based on whether you are a golfer, skier, wine lover, spa enthusiast, etc., but we can also position messaging and depict families or couples or groups of friends, depending on other demographic information.

Talk about sustainability. What are you doing in this area?
We use paper straws and of course we recycle, but I expect most any responsible resort would do as much. What I am excited to tell you is that we are on track to go online later this year with a solar field that will provide Grand Cascades Lodge and The Crystal Springs Clubhouse with the majority of their power needs. We are also involved in a number of other initiatives, including the development of a new habitat for bees, butterflies and birds on Black Bear, one of our six golf courses. We did this in partnership with Jersey Central Power & Light (JCP&L).
We are members of the New Jersey Audubon Corporate Stewardship Council, which emphasizes voluntary environmental stewardship, sustainability, conservation partnerships and public education. And sometimes being good to the environment can be good to your bottom line. We are also in the process of replacing substantially all of our lighting with LED to reduce consumption and are looking to add additional EV charging stations. Many golfers like the idea of charging their EVs while they play, and we want to accommodate them.

What do you see as some of your biggest opportunities moving ahead? As we continue to build awareness in the marketplace and as our occupancy increases, our biggest opportunity is corporate midweek business. While we have always had the advantage of a beautiful setting and proximity to NYC relative to other resorts, we are going to the next level by offering content and experiences for our corporate groups that they are not able to get elsewhere. Presently, we have over 60 highly unique meeting add-on group experiences in the categories of team building and wellness, such as goat yoga, meet the beekeeper, and guided hikes to hunt for edible mushrooms.

CCR One-on-one with... Chris Mulvihill, CMO, Crystal Springs Resort

What's the most rewarding part of your job? I love to see people at the resort having a good time. I will often visit the pools, restaurants or golf course posing as a guest and strike up conversations to find out how our guests heard about us and what it was that they saw or heard that compelled them to visit. That is really fun for me. I also never get tired of hearing out-of-state guests' reactions when they arrive for a wedding and are blown away when they see how nice it is here.

What was the best advice you ever received?
To "starve the mediocre and feed the superstars." I actually got this advice in another line of business in relation to sales, but it applies to marketing. It is fairly common knowledge that in any field of sales, 20 percent of the salespeople typically generate 80 percent of the results. In advertising, the same is true. Some campaigns work very well and can really move the needle, but only if you have the resources to double or triple down on them when you find they are working. The advice is to rob as much time and resources from other things to support your best-performing campaigns.

What's the best thing a client ever said to you? I enjoy reading comments from people on Facebook telling their friends that they think their phones must be tapped since Crystal Springs keeps showing up in their news feed after they were just talking about us with a friend. While they are not necessarily addressing this to me, it tells me that my marketing is well targeted.

Name the three strongest traits any leader should have. Vision, enthusiasm and humility.

What is the true key to success for any manager? Hire good people and treat them well.

Besides Crystal Springs, what is your favorite vacation spot or resort, and why? Chatham, Cape Cod. I've been going there for 20 years. If you enjoy the beach/shore, you really can't beat it. It's right on the elbow of the Cape, and within a half-mile radius you can be on Nantucket Sound, on the Atlantic, in one of multiple harbors, bays or ponds.

How do you like to spend your down time? I have a great wife and four great kids, ranging in age from 11 to 17. I really enjoy my time with them. My wife and I both come from large families and enjoy spending time with our extended families. Probably my most relaxing time is digging littleneck clams and cooking them for friends and family to enjoy.
SPECIAL REPORT GENERAL CONTRACTING

Report spotlights industry's leading GC firms

If you're looking for the industry's leading general contractors (GCs), you are in the right spot. Our annual listing provides all of the information you need to find the right company in the retail, restaurant, hospitality and other commercial sectors. The listings include the contact information and contact person at each company. If your firm didn't make the list, contact publisher David Corson at davidc@ccr-mag.com. For a digital version, visit us online at.

RETAIL
The Whiting-Turner Contracting Company.... $435,000,000.00
O'Neil Industries, Inc.................... $263,000,000.00
Schimenti Construction Company............ $224,000,000.00
ESI, Engineered Structures................ $197,753,268.00
Horizon Retail Construction, Inc.......... $157,900,000.00
MYCON General Contractors, Inc............ $141,000,000.00
Rockford Construction..................... $130,189,000.00
Hoar Construction......................... $127,065,000.00
Gray...................................... $94,500,000.00
Pepper Construction Group................. $81,300,000.00

RESTAURANT
Icon...................................... $58,800,000.00
Gray...................................... $44,000,000.00
The Whiting-Turner Contracting Company.... $37,000,000.00
Lendlease................................. $26,587,229.00
Marco Contractors, Inc.................... $24,000,000.00
Fortney & Weygandt, Inc................... $22,300,000.00
Prairie Contractors, Inc.................. $21,519,800.00
Beam Team Construction.................... $20,000,000.00
Wolverine Building Group.................. $20,000,000.00
Lakeview Construction..................... $17,900,000.00

HOSPITALITY
The Whiting-Turner Contracting Company.... $248,000,000.00
Lendlease................................. $163,549,159.00
Digney York Associates.................... $94,000,000.00
EBCO General Contractor, LTD.............. $86,672,211.00
O'Neil Industries, Inc.................... $85,000,000.00
Pepper Construction Group................. $83,900,000.00
MYCON General Contractors, Inc............ $49,000,000.00
IDC Construction, LLC..................... $40,000,000.00
Broadway Construction Group............... $28,294,089.00
Donnelly Construction..................... $25,000,000.00

HEALTHCARE
The Whiting-Turner Contracting Company.... $1,150,000,000.00
O'Neil Industries, Inc.................... $313,000,000.00
Lendlease................................. $297,555,471.00
Pepper Construction Group................. $181,400,000.00
Hoar Construction......................... $122,068,000.00
S.M. Wilson & Co.......................... $43,587,806.00
EBCO General Contractor, LTD.............. $36,387,376.00
DonahueFavret Contractors, Inc............ $32,000,000.00
Rockford Construction..................... $20,178,000.00
Horizon Retail Construction, Inc.......... $13,100,000.00

MULTI-HOUSING
Lendlease................................. $2,505,211,046.00
The Whiting-Turner Contracting Company.... $398,000,000.00
Hoar Construction......................... $305,470,000.00
O'Neil Industries, Inc.................... $120,000,000.00
Pepper Construction Group................. $101,700,000.00
Wolverine Building Group.................. $95,000,000.00
Rockford Construction..................... $72,899,000.00
Broadway Construction Group............... $72,471,355.00
ESI, Engineered Structures................ $60,680,000.00
DonahueFavret Contractors, Inc............ $22,000,000.00

TOTAL BILLINGS (Top Ten Totals)
The Whiting-Turner Contracting Company.... $8,445,000,000.00
Lendlease................................. $3,907,690,717.00
Pepper Construction Group................. $1,260,000,000.00
O'Neil Industries, Inc.................... $1,254,000,000.00
Gray...................................... $1,095,762,203.00
Hoar Construction......................... $869,047,000.00
Rockford Construction..................... $402,247,000.00
ESI, Engineered Structures................ $401,750,530.00
Schimenti Construction Company............ $275,000,000.00
MYCON General Contractors, Inc............ $240,000,000.00

Acme Enterprises Inc. Jeff Lomber, President 15751 Martin Road Roseville, MI 48066 (586) 771-4800 info@acme-enterprises.com Year Established: N/A
Bogart Construction, Inc.
Anderson & Rodgers Commercial James Spataro, General Manager 170 Prosperous Pl. Lexington, KY 40509 (859) 309-3021 info@andersonandrodgers.com Year Established: 2013 No.
of Employees: 10 Retail: $2,000,000.00 Restaurants: $1,000,000.00 Hospitality: $1,000,000.00 Healthcare: $4,000,000.00 Multi-Family: $1,000,000.00 Federal: $500,000.00 Other: $4,000,000.00 Total: $13,500,000.00 Completed Projects as of 12/31/18: 22 Square Footage: Retail: 30,000 Hospitality: 15,000 Restaurants: 5,000 Federal: 5,000 Healthcare: 20,000 Multi-Family: 10,000 Other: 301,000 Total: 386,000 Specialize In: Healthcare, Government, Hotels, Restaurants, Multi-Family
Danny Stone, Dir. of Business Development 9980 Irvine Center Dr., #200 Irvine, CA 92618 (949) 453-1400 Fax: (949) 453-1414 dstone@bogartconstruction.com Year Established: 1991 No. of Employees: 60 Retail: $62,000,000.00 Restaurants: $5,000,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $5,000,000.00 Total: $72,000,000.00 Completed Projects as of 12/31/18: 60 Square Footage: Retail: 650 Hospitality: N/A Restaurants: 30 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 10 Total: 690 Specialize In: Big-Box/Department, Groceries, Specialty Stores, Shopping Centers, Restaurants
Boss Facility Services, Inc. Keith Keingstein, President 1 Roebling Cart Ronkonkoma, NY 11779 (632) 361-7430 info@bossfacilityservices.com Year Established: 2001 No. of Employees: 65 Retail: $15,600,000.00 Restaurants: $400,000.00 Hospitality: N/A Healthcare: $300,000.00 Multi-Family: N/A Federal: N/A Other: $400,000.00 Total: $20,300,000.00 Completed Projects as of 12/31/18: 14,840 Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Big-Box/Department, Healthcare, Specialty Stores, Restaurants, Education
Broadway Construction Group
Beam Team Construction Tim Hill, VP 1350 Bluegrass Lakes Pkwy. Alpharetta, GA 30004 (630) 816-0631 timhill@thebeamteam.com Year Established: N/A No.
of Employees: 600 Retail: $45,000,000.00 Restaurants: $20,000,000.00 Hospitality: $20,000,000.00 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $85,000,000.00 Completed Projects as of 12/31/18: 8,000 Square Footage: Retail: 120,000 Hospitality: 10,000 Restaurants: 10,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 10,000 Total: 150,000 Specialize In: Groceries, Drug Stores, Hotels, Restaurants
Joseph Aiello, COO 140 Broadway, 41st Floor New York, NY 10005 (212) 834-4688 jaiello@broadwaycg.com Year Established: 2013 No. of Employees: 51 Retail: N/A Restaurants: N/A Hospitality: $28,294,089.00 Healthcare: N/A Multi-Family: $72,471,355.00 Federal: N/A Other: N/A Total: $100,765,444.00 Completed Projects as of 12/31/18: 16 Square Footage: Retail: N/A Hospitality: 175,000 Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: 1,472,000 Other: N/A Total: 1,647,000 Specialize In: Hotels, Multi-Family
Buildrite Construction Corp. Bryan Alexander, President 600 Chastain Rd., Suite 326 Kennesaw, GA 30144 (770) 971-0787 Fax: (770) 973-3373 bryan@buildriteconstruction.com Year Established: 1982 No. of Employees: N/A Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $26,913,048.00 Completed Projects as of 12/31/18: 288 Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Specialty Stores, Shopping Centers, Restaurants, Education, Retail
Command Center, Inc. Dwight Enget, Corp. Business Development 3609 S Wadsworth Blvd. Lakewood, CO 80235 (480) 390-8484 dwight.enget@commandonline.com Year Established: 2006 No. of Employees: 300 (temp labor)
Commonwealth Building, Inc. Chris Fontaine, President
CDO Group Vinny Catullo, Director of Business Development 333 Harrison St.
Oak Park, IL 60304 (908) 627-1778 vinnyc@cdogroup.com Year Established: 1998 No. of Employees: 50 Specialize In: Hospitality, Health and Wellness
CKP Construction Todd Barbour, President 1616 S Kentucky, Suite C325 Amarillo, TX 79102 (806) 420-0696 tbarbour@ckpconstruction.com Year Established: 2016 No. of Employees: 18 Retail: $2,500,000.00 Restaurants: $12,700,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $11,000,000.00 Total: $26,200,000.00 Completed Projects as of 12/31/18: 43 Square Footage: Retail: 12,455 Hospitality: N/A Restaurants: 72,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 68,000 Total: 152,455 Specialize In: Big-Box/Department, Restaurants, Medical Offices, Athletic Facilities
265 Willard St. Quincy, MA 02169 (617) 770-0050 Fax: (617) 472-4734 cfontaine@combuild.com Year Established: 1979 No. of Employees: 35 Retail: $25,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $3,000,000.00 Total: $28,000,000.00 Completed Projects as of 12/31/18: 43 Square Footage: Retail: 600,000 Hospitality: N/A Restaurants: 8,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 50,000 Total: 658,000 Specialize In: Big-Box/Department, Drug Stores, Healthcare, Specialty Stores, Shopping Centers, Restaurants, Education, Special Projects & Maintenance
Construction Advantage Mike Rothholtz, President 1112 Hibbard Rd. Wilmette, IL 60091 (847) 853-9300 constructadvantage@sbcglobal.net Year Established: 1998 No.
of Employees: Varies Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: N/A Completed Projects as of 12/31/18: Varies Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Drug Stores, Healthcare, Specialty Stores, Restaurants
Construction One Don Skorupski, Business Development 101 E Town St., Suite 401 Columbus, OH 43215 (614) 235-0057 Fax: (614) 237-6769 dskorupski@constructionone.com Year Established: 1980 No. of Employees: 80 Retail: $40,000,000.00 Restaurants: $5,000,000.00 Hospitality: $10,000,000.00 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $55,000,000.00 Completed Projects as of 12/31/18: 102 Square Footage: Retail: 630,000 Hospitality: 340,000 Restaurants: 110,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 1,080,000 Specialize In: Big-Box/Department, Groceries, Healthcare, Specialty Stores, Hotels, Restaurants
DAVACO Paul Hamer, EVP 4050 Valley View Ln., Suite 150 Irving, TX 75038 (877) 7DAVACO info@davacoinc.com Year Established: 1990 No. of Employees: 1,600 Specialize In: Specialty Stores, Shopping Centers, Hotels, Restaurants
De Jager Construction Inc. Dan De Jager, President 75 60th St.
Core States Group Natalie Rodriguez, Marketing Manager 201 South Maple Ave., Suite 300 Ambler, PA 19002 (813) 391-8755 nrodriguez@core-states.com Year Established: 1999 No.
of Employees: 350 Retail: $11,198,137.00 Restaurants: $6,248,102.00 Hospitality: $214,527.00 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $27,310,178.00 Total: $44,970,944.00 Completed Projects as of 12/31/18: 587 Square Footage: Retail: 139,308 Hospitality: 2,861 Restaurants: 49,984 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 216,000 Total: 408,153 Specialize In: Big-Box/Department, Groceries, Drug Stores, Specialty Stores, Hotels, Restaurants
Wyoming, MI 49548 (616) 530-0060 Fax: (616) 530-8619 dj1@dejagerci.com Year Established: 1970 No. of Employees: 40 Retail: $23,947,260.00 Restaurants: $712,740.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $24,660,000.00 Completed Projects as of 12/31/18: 61 Square Footage: Retail: 564,712 Hospitality: N/A Restaurants: 5,900 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 570,612 Specialize In: Big-Box/Department, Specialty Stores, Shopping Centers, Restaurants
Dalo Construction, Inc. Belden Bowman, Treasurer 2812 US RT 40 Tipp City, OH 45371 (937) 898-0953 Ext. 106 Fax: (937) 898-0974 belden_bowman@daloinc.com Year Established: 1974 No. of Employees: 33 Retail: $40,000,000.00 Restaurants: $5,000,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $45,000,000.00 Completed Projects as of 12/31/18: 47 Square Footage: Retail: 1,085,760 Hospitality: N/A Restaurants: 40,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 1,125,760 Specialize In: Groceries, Shopping Centers, Restaurants
Desco Professional Builders Inc. & Millwork Mfg. Robert Anderson, President 290 Somers Rd. Ellington, CT 06029 (860) 870-7070 Fax: (860) 870-1074 banderson@descopro.com Year Established: 1983 No. of Employees: 48 Retail: Yes Restaurants: Yes Hospitality: N/A Healthcare: Yes Multi-Family: N/A Federal: N/A Other: N/A Total: $25,000,000.00 Completed Projects as of 12/31/18: 56
Square Footage: Retail: 309,189 Hospitality: N/A Restaurants: 22,530 Federal: N/A Healthcare: 14,000 Multi-Family: N/A Other: 190,639 Total: 536,358 Specialize In: Healthcare, Casinos, Government, Specialty Stores, Shopping Centers, Restaurants, Education, Offices, Millwork Manufacturing for all of the above
DeWees Construction Inc. Allen Galloway, Sr. VP 35 N Baldwin P.O. Box 681 Bargersville, IN 46106 (317) 709-5135 Fax: (317) 422-5142 allen@deweesconstruction.com Year Established: 1994 No. of Employees: 15 Retail: $1,998,000.00 Restaurants: $2,695,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $710,000.00 Total: $5,403,000.00 Completed Projects as of 12/31/18: 13 Square Footage: Retail: 8,600 Hospitality: N/A Restaurants: 12,400 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 3,000 Total: 24,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Casinos, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Multi-Family
DLP Construction Co. Inc. Lynn Kaden, Dir. of Business Development 5935 Shiloh Rd. E, Suite 100 Alpharetta, GA 30005 (770) 887-3573 Fax: (770) 887-2357 lkaden@dlpconstruction.com Year Established: 1996 No.
of Employees: 43 Retail: $31,732,382.00 Restaurants: $1,200,000.00 Hospitality: N/A Healthcare: $5,700,000.00 Multi-Family: N/A Federal: N/A Other: N/A Total: $38,632,382.00 Completed Projects as of 12/31/18: 91 Square Footage: Retail: 2,130,000 Hospitality: N/A Restaurants: 15,000 Federal: N/A Healthcare: 95,000 Multi-Family: N/A Other: N/A Total: 2,240,000 Specialize In: Big-Box/Department, Specialty Stores, Shopping Centers, Restaurants
Diamond Contractors Lori Perry, Owner 4224 NE Port Dr. Lees Summit, MO 64064 (816) 650-9200 Fax: (816) 650-9279 loriperry@diamondcontractors.com Year Established: 1994 No. of Employees: 50 Retail: $16,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $16,000,000.00 Completed Projects as of 12/31/18: 354 Square Footage: Retail: 2,490,187 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 2,490,187 Specialize In: Groceries, Drug Stores, Specialty Stores, Shopping Centers, Restaurants, Retail Tenant Finish
Digney York Associates Deanne Kuzmic, Dir. of Business Development 1919 Gallows Rd. Vienna, VA 22182 (703) 790-5281 dkuzmic@digneyyork.com Year Established: 1985 No. of Employees: 45 Retail: N/A Restaurants: N/A Hospitality: $94,000,000.00 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $94,000,000.00 Completed Projects as of 12/31/18: 23 Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Hotels
DonahueFavret Contractors, Inc. Bryan Hodnett, Dir. of Business Development 3030 E Causeway Approach Mandeville, LA 70448 (800) 626-4431 Fax: (985) 626-3572 dfcinfo@donahuefavret.com Year Established: 1979 No.
of Employees: 47 Retail: $4,000,000.00 Restaurants: N/A Hospitality: $1,000,000.00 Healthcare: $32,000,000.00 Multi-Family: $22,000,000.00 Federal: N/A Other: $28,000,000.00 Total: $87,000,000.00 Completed Projects as of 12/31/18: 16 Square Footage: Retail: 30,000 Hospitality: 11,160 Restaurants: N/A Federal: N/A Healthcare: 160,266 Multi-Family: N/A Other: 299,503 Total: 500,929 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Multi-Family, Office Buildings, Tenant Improvements, Faith-Based
Donnelly Construction Doug Berry, Sr. PM 577 Route 23 S Wayne, NJ 07470 (973) 672-1800 Ext. 100 Fax: (973) 677-1824 dberry@donnellyind.com Year Established: 1977 No. of Employees: 100 Retail: $5,000,000.00 Restaurants: $5,000,000.00 Hospitality: $25,000,000.00 Healthcare: $5,000,000.00 Multi-Family: N/A Federal: N/A Other: N/A Total: $40,000,000.00 Completed Projects as of 12/31/18: 100 Square Footage: Retail: 15,000 Hospitality: 100,000 Restaurants: 15,000 Federal: N/A Healthcare: 15,000 Multi-Family: N/A Other: N/A Total: 145,000 Specialize In: Big-Box/Department, Healthcare, Government, Specialty Stores, Hotels, Country Clubs
Encore Construction Inc. Joe McCafferty, President
DWM Comprehensive Facility Solutions Bennett Van Wert, National Sales Manager 2 Northway Ln. Latham, NY 12110 (888) 396-9111 bvanwert@dwminc.com Year Established: 1997 No.
of Employees: 75 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $30,200,000.00 Completed Projects as of 12/31/18: 8,867 Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Big-Box/Department, Groceries, Healthcare, Specialty Stores, Restaurants, Education
2014 Renard Ct., Suite J Annapolis, MD 21401 (443) 214-5379 Fax: (410) 573-5070 joe@encoreconstruction.net Year Established: 2003 No. of Employees: N/A Specialize In: Drug Stores, Healthcare, Specialty Stores, Restaurants, Facade Renovations, LL Turnover
E.C. Provini Co., Inc. Joseph Lembo, President 1 Bethany Rd., Unit 24 Hazlet, NJ 07730 (732) 739-8884 Fax: (732) 739-8886 jlembo@ecprovini.com Year Established: 1986 No. of Employees: 30 Retail: $36,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $36,000,000.00 Completed Projects as of 12/31/18: 125 Square Footage: Retail: 560,000 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 560,000 Specialize In: Big-Box/Department, Specialty Stores, Shopping Centers
EBCO General Contractor, LTD. William A. Egger, VP, Corporate Development 804 E 1st St. Cameron, TX 76520 (254) 697-8516 Fax: (254) 697-8656 william.egger@ebcogc.com Year Established: 1986 No. of Employees: 75 Retail: $5,133,431.00 Restaurants: $15,395,162.00 Hospitality: $86,672,211.00 Healthcare: $36,387,376.00 Multi-Family: N/A Federal: N/A Other: $1,465,241.00 Total: $145,053,421.00 Completed Projects as of 12/31/18: 37 Square Footage: Retail: 27,750 Hospitality: 541,700 Restaurants: 87,970 Federal: N/A Healthcare: 103,950 Multi-Family: N/A Other: 7,350 Total: 768,720 Specialize In: Healthcare, Hotels, Restaurants
ESI, Engineered Structures Mike Magill, VP of Business Development & Marketing 3330 E.
Louise Dr., Suite 300 Meridian, ID 83642 (208) 362-3040 brandankirby@esiconstruction.com Year Established: 1973 No. of Employees: 500 Retail: $197,753,268.00 Restaurants: $12,971,739.00 Hospitality: N/A Healthcare: N/A Multi-Family: $60,680,000.00 Federal: N/A Other: N/A Total: $401,750,530.00 Completed Projects as of 12/31/18: 48 Square Footage: Retail: 11,808,473 Hospitality: N/A Restaurants: 67,260 Federal: N/A Healthcare: N/A Multi-Family: 2,124,675 Other: N/A Total: 17,028,277 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Government, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Multi-Family, Public Works, Industrial/Manufacturing, Commercial Office, Tenant Improvement, Mission Critical
FCP Services James Loukusa, CEO 3185 Terminal Dr. Eagan, MN 55121 (651) 789-0790 jloukusa@fcpservices.com Year Established: 1990 Specialize In: Government, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Multi-Family
Federal Heath Steve Abrams, Director of Specialty Contracting 2300 State Hwy. 121 Euless, TX 76039 (262) 636-0040 sabrams@federalheath.com Year Established: 1901 No. of Employees: 692 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $22,000,000.00 Completed Projects as of 12/31/18: N/A Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Big-Box/Department, Groceries, Drug Stores, Specialty Stores, Restaurants, Convenience Stores, Petroleum
FRONTIER Building Jazmine Woods, Business Development 1801 SW 3rd Ave., Suite 500 Miami, FL 33129 (305) 692-9992 Fax: (305) 749-8673 jwoods@frontierbuilding.com Year Established: 2002 No.
of Employees: 60
Flynn Construction Jennifer Kilgore, VP, Sales & Marketing 600 Penn Ave. Pittsburgh, PA 15221 (412) 342-0555 Fax: (412) 243-7925 jkilgore@flynn-construction.com Year Established: 1989 No. of Employees: 50 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $55,000,000.00 Completed Projects as of 12/31/18: 70 Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Government, Specialty Stores, Hotels, Restaurants, Multi-Family
Go Green Construction Anthony Wincko, Vice President 3471 Babcock Blvd., Suite 205 Pittsburgh, PA 15237 (412) 367-5870 Fax: (412) 367-5871 anthony@ggc-pgh.com Year Established: 2009 No. of Employees: 34 Retail: $29,150,229.00 Restaurants: $27,067.00 Hospitality: N/A Healthcare: $1,285,399.00 Multi-Family: N/A Federal: N/A Other: N/A Total: $30,462,695.00 Completed Projects as of 12/31/18: 97 Square Footage: Retail: 325,727 Hospitality: N/A Restaurants: 3,487 Federal: N/A Healthcare: 7,355 Multi-Family: N/A Other: N/A Total: 336,569 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Specialty Stores, Shopping Centers, Hotels
Gray
Fortney & Weygandt, Inc. Mitch Lapin, President 31269 Bradley Rd. North Olmsted, OH 44070 (440) 716-4000 Fax: (440) 716-4010 mlapin@fortneyweygandt.com Year Established: 1978 No.
of Employees: 109 Retail: $69,215,000.00 Restaurants: $22,300,000.00 Hospitality: $7,479,100.00 Healthcare: $775,000.00 Multi-Family: $5,540,000.00 Federal: N/A Other: $1,095,000.00 Total: $106,404,100.00 Completed Projects as of 12/31/18: N/A Square Footage: Retail: 432,150 Hospitality: 55,165 Restaurants: 185,000 Federal: N/A Healthcare: 3,000 Multi-Family: 38,032 Other: 5,000 Total: 718,347 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Specialty Stores, Shopping Centers, Hotels, Restaurants, Multi-Family, Senior Living, Commercial Office
Eric Berg, Chief Operating Officer, West Region 421 E Cerritos Ave. Anaheim, CA 92805 (714) 491-1317 Fax: (714) 333-9700 eberg@gray.com Year Established: 1960 No. of Employees: 922 Retail: $94,500,000.00 Restaurants: $44,000,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $957,262,203.00 Total: $1,095,762,203.00 Completed Projects as of 12/31/18: 478 Square Footage: Retail: 3,600,000 Hospitality: N/A Restaurants: 226,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 2,900,000 Total: 6,726,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education
Hirsch Construction Corp
Hanna Design Group Jeffrey Sabaj, Director of Business Development 650 E Algonquin Rd. Schaumburg, IL 60173 (847) 719-0373 jsabaj@hannadesigngroup.com Year Established: 1993 No.
of Employees: N/A Specialize In: Healthcare, Specialty Stores, Shopping Centers, Restaurants
Adam Hirsch, President 222 Rosewood Dr., 5th Floor Danvers, MA 01923 (978) 762-8744 Fax: (978) 762-8455 ahirsch@hirschcorp.com Year Established: 1983 No. of Employees: N/A Retail: $30,000,000.00 Restaurants: $10,000,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $40,000,000.00 Completed Projects as of 12/31/18: 60 Square Footage: Retail: 180,000 Hospitality: N/A Restaurants: 60,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 240,000 Specialize In: Big-Box/Department, Specialty Stores, Restaurants, Airport Work, Spas
Harmon Construction, Inc. Ardell Mitchell, Vice President 621 S State St. North Vernon, IN 47265 (812) 346-2048 Fax: (812) 346-2054 ardell.mitchell@harmonconstruction.com Year Established: 1955 No. of Employees: 85 Retail: $3,000,000.00 Restaurants: $2,500,000.00 Hospitality: $23,000,000.00 Healthcare: $4,500,000.00 Multi-Family: N/A Federal: N/A Other: $4,000,000.00 Total: $37,000,000.00 Completed Projects as of 12/31/18: 120 Square Footage: Retail: 22,500 Hospitality: 150,000 Restaurants: 15,000 Federal: N/A Healthcare: 12,000 Multi-Family: N/A Other: 28,000 Total: 227,500 Specialize In: Healthcare, Casinos, Restaurants
Hoar Construction Tiffany Fessler, Communications Manager Two Metroplex Dr., Suite 400 Birmingham, AL 35209 (205) 803-2121 Fax: (205) 423-2323 info@hoar.com Year Established: 1940 No.
of Employees: 684 Retail: $127,065,000.00 Restaurants: $2,569,000.00 Hospitality: $22,693,000.00 Healthcare: $122,068,000.00 Multi-Family: $305,470,000.00 Federal: $1,295,000.00 Other: $287,887,000.00 Total: $869,047,000.00 Completed Projects as of 12/31/18: 40 Square Footage: Retail: 1,940,091 Hospitality: 412,123 Restaurants: 7,000 Federal: 425,300 Healthcare: 1,433,156 Multi-Family: 6,387,936 Other: 2,903,029 Total: 13,508,635 Specialize In: Big-Box/Department, Groceries, Healthcare, Government, Shopping Centers, Hotels, Education, Multi-Family
Healy Construction Services, Inc. James T. Healy, Director of Construction 14000 S Keeler Ave. Crestwood, IL 60418 (708) 396-0440 Fax: (708) 396-0412 jth@healyconstructionservices.com Year Established: 1988 No. of Employees: N/A Specialize In: Government, Specialty Stores, Shopping Centers, Hotels, Restaurants
Horizon Retail Construction, Inc. Stefanie Andersen, Marketing Manager 9999 E Exploration Ct. Sturtevant, WI 53177 (262) 638-6000 Fax: (262) 638-6015 sales@horizonretail.com Year Established: 1993 No. of Employees: 337 Retail: $157,900,000.00 Restaurants: $15,400,000.00 Hospitality: N/A Healthcare: $13,100,000.00 Multi-Family: N/A Federal: N/A Other: $25,300,000.00 Total: $211,700,000.00 Completed Projects as of 12/31/18: 1,633 Square Footage: Retail: 3,781,878 Hospitality: N/A Restaurants: 177,517 Federal: N/A Healthcare: 162,680 Multi-Family: N/A Other: 621,515 Total: 4,743,590 Specialize In: Big-Box/Department, Drug Stores, Healthcare, Specialty Stores, Restaurants, Financial Institutions, Airport Concessions, Entertainment
SPECIAL REPORT GENERAL CONTRACTING Hunter Building Corp. Peter Ferri, President 14609 Kimberly Ln., Suite A Houston, TX 77079 (281) 377-6550 Fax: (281) 377-8600 pferri@hunterbuilding.com Year Established: 2007 No. of Employees: 15 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $13,500,000.00 Completed Projects as of 12/31/18: 32 Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 940,000 Specialize In: Big-Box/Department, Specialty Stores, Shopping Centers, Restaurants, Commercial-Office Kingsmen Projects Stephen Hekman, Vice President, US 3525 Hyland Ave. Costa Mesa, CA 92626 (619) 719-8950 stephen@kingsment-usa.com Year Established: 1973 No. of Employees: 1,900 Retail: $5,000,000.00 Restaurants: $1,200,000.00 Hospitality: $500,000 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $6,700,000 Completed Projects as of 12/31/18: 500+ Square Footage: Retail: 70,000, Hospitality: 10,000, Restaurants: 20,000, Federal: N/A, Healthcare: N/A, Multi-Family: N/A, Other: N/A, Total: 100,000 Specialize In: Specialty Stores, Shopping Centers, Hotels, Restaurants Icon Kevin Hughes, EVP Sales & Marketing 1701 Golf Rd., I-900 Rolling Meadows, IL 60008 (877) 740-4266 khughes@iconid.com Year Established: 1931 No.
of Employees: 400 Retail: $10,200,000.00 Restaurants: $58,800,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $1,000,000.00 Total: $70,000,000.00 Completed Projects as of 12/31/18: 3,975 Square Footage: Retail: 4,500,000 Hospitality: N/A Restaurants: 50,000,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 500,000 Total: 55,000,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Specialty Stores, Restaurants IDC Construction, LLC Blake Williams, VP Development 1000 Churchill Ct. Woodstock, GA 30188 (678) 213-1110 Fax: (678) 213-1109 bwilliams@idcconstruction.com Year Established: 1999 No. of Employees: 25 Retail: N/A Restaurants: N/A Hospitality: $40,000,000.00 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $40,000,000.00 Completed Projects as of 12/31/18: 10 Square Footage: Retail: N/A Hospitality: 36,000,000 Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 36,000,000 Specialize In: Hotels Knoebel Construction, Inc. Susan Bowen, Dir. of Business Development 18333 Wings Corporate Dr. Chesterfield, MO 63005 (636) 326-4100 Ext. 240 Fax: (636) 326-4101 sbowen@knoebelcon.com Year Established: 1981 No. of Employees: 68 Retail: $56,642,804.00 Restaurants: $12,333,785.00 Hospitality: N/A Healthcare: N/A Multi-Family: $3,522,077.00 Federal: N/A Other: $7,658,944.00 Total: $80,157,610.00 Completed Projects as of 12/31/18: 66 Square Footage: Retail: 754,170 Hospitality: N/A Restaurants: 38,200 Federal: N/A Healthcare: N/A Multi-Family: 10,710 Other: 14,220 Total: 817,300 Specialize In: Big-Box/Department, Groceries, Healthcare, Specialty Stores, Shopping Centers, Restaurants, Multi-Family Lakeview Construction John Stallman, Marketing Director 10505 Corp. Dr. Pleasant Prairie, WI 53158 (262) 857-3336 Ext. 241 Year Established: 1993 No.
of Employees: 115 Retail: $76,500,000.00 Restaurants: $17,900,000.00 Hospitality: N/A Healthcare: $3,500,000.00 Multi-Family: N/A Federal: N/A Other: N/A Total: $97,900,000.00 Completed Projects as of 12/31/18: N/A Square Footage: Retail: 2,000,000 Hospitality: N/A Restaurants: 500,000 Federal: N/A Healthcare: 40,000 Multi-Family: N/A Other: N/A Total: 2,540,000 Specialize In: Big-Box/Department, Healthcare, Specialty Stores, Shopping Centers, Restaurants, Retail LCS Facility Group Joe Fairley, Vice President 36 Cottage St. Poughkeepsie, NY 12601 (845) 485-7000 Fax: (845) 485-7052 joseph.fairley@lcsfacilitygroup.com Year Established: 2001, No. of Employees: 450 Retail: N/A Restaurants: N/A Hospitality: $12,000,000.00 Healthcare: $10,000,000.00 Multi-Family: N/A Federal: N/A Other: $3,000,000.00 Total: $25,000,000.00 Completed Projects as of 12/31/18: 55 Square Footage: Retail: N/A Hospitality: 1,000,000 Restaurants: N/A Federal: N/A Healthcare: 2,000,000 Multi-Family: N/A Other: 1,000,000 Total: 4,000,000 Specialize In: Healthcare, Government, Hotels, Restaurants, Education, Multi-Family, Commercial Office, Warehouse, Industrial, Manufacturing MC Construction Management Jim McClymonds, President 38012 N Linda Dr. Cave Creek, AZ 85331 (480) 367-8600 Ext. 107 Fax: (480) 367-8625 jmcclymonds@mcbuilders.net Year Established: 2001, No. of Employees: 25 Retail: $21,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: $2,000,000.00 Multi-Family: N/A Federal: N/A Other: N/A Total: $23,000,000.00 Completed Projects as of 12/31/18: 48 Square Footage: Retail: 250,000 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: 25,000 Multi-Family: N/A Other: N/A Total: 275,000 Specialize In: Big-Box/Department, Healthcare, Specialty Stores, Shopping Centers MYCON General Contractors, Inc.
Lendlease Andrew Council, General Manager, Construction 200 Park Ave. New York, NY 10166 (212) 592-6800 Fax: (212) 592-6988 americas@lendlease.com Year Established: 1917, No. of Employees: 1,730 Retail: $6,599,331.00 Restaurants: $26,587,229.00 Hospitality: $163,549,159.00 Healthcare: $297,555,471.00 Multi-Family: $2,505,211,046.00 Federal: N/A Other: $908,188,481.00 Total: $3,907,690,717.00 Completed Projects as of 12/31/18: 142 Square Footage: Retail: 117,800 Hospitality: 120,000 Restaurants: 217,500 Federal: N/A Healthcare: 51,000 Multi-Family: 5,386,971 Other: 2,291,495 Total: 8,184,786 Specialize In: Healthcare, Hotels, Education, Multi-Family Dana Walters, Vice President, Business Development 17311 Dallas Pkwy., Suite 300 Dallas, TX 75248 (972) 529-2444 dwalters@mycon.com Year Established: 1987 No. of Employees: 152 Retail: $141,000,000.00 Restaurants: N/A Hospitality: $49,000,000.00 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $50,000,000.00 Total: $240,000,000.00 Completed Projects as of 12/31/18: 87 Square Footage: Retail: 1,800,000 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 3,000,000 Total: 4,800,000 Specialize In: Big-Box/Department, Groceries, Healthcare, Government, Specialty Stores, Shopping Centers, Hotels Marco Contractors, Inc. Samra R Savioz, National Director of Business Development 100 Commonwealth Dr. Warrendale, PA 15086 (724) 814-4547 ssavioz@marcocontractors.com Year Established: 1941, No.
of Employees: 225 Retail: 63,000,000 Restaurants: 24,000,000 Hospitality: 6,000,000 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: 93,000,000 Completed Projects as of 12/31/18: 13,250 Square Footage: Retail: N/A, Hospitality: N/A, Restaurants: N/A, Federal: N/A, Healthcare: N/A, Multi-Family: N/A, Other: N/A, Total: N/A Specialize In: Big-Box/Department, Drug Stores, Specialty Stores, Shopping Centers, Restaurants N-STORE Services Kevin Zigrang, Director of Business Development 160 Chesterfield Industrial Blvd. Chesterfield, MO 63005 (636) 778-0448 kevin@gnhservices.com Year Established: 1983 No. of Employees: 77 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: N/A Completed Projects as of 12/31/18: 519 Square Footage: Retail: 1,900,000 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: 1,875,000 Multi-Family: N/A Other: 1,225,000 Total: 5,000,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Specialty Stores, Shopping Centers, Restaurants National Contractors, Inc. Michael Dudley, Vice President 2500 Orchard Lane Excelsior, MN 55331 (952) 881-6123 Fax: (952) 881-6321 mdudley@ncigc.com Year Established: 1990 No. of Employees: 28 Pepper Construction Group O'Neil Industries, Inc. Dean Arnold, Retired Vice President-Consultant 1245 W Washington Blvd. Chicago, IL 60607 (773) 755-1611 darnold@weoneil.com Year Established: 1925, No.
of Employees: 474 Retail: $263,000,000.00 Restaurants: $6,000,000.00 Hospitality: $85,000,000.00 Healthcare: $313,000,000.00 Multi-Family: $120,000,000.00 Federal: N/A Other: $467,000,000.00 Total: $1,254,000,000.00 Completed Projects as of 12/31/18: 178 Square Footage: Retail: 930,000 Hospitality: 94,000 Restaurants: 7,000 Federal: N/A Healthcare: 250,000 Multi-Family: 57,000 Other: 662,000 Total: 2,000,000 Specialize In: Healthcare, Casinos, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Multi-Family, Office, Residential, Manufacturing, Transportation, Power P&C Construction, Inc. Nic Cornelison, Vice President 2500 E 18th St. Chattanooga, TN 37404 (423) 493-0051 Fax: (423) 493-0058 nic@pc-const.com Year Established: 1993, No. of Employees: 72 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: N/A Completed Projects as of 12/31/18: 294 Square Footage: Retail: 635,000 Hospitality: N/A Restaurants: 62,000 Federal: 30,000 Healthcare: 17,000 Multi-Family: 125,000 Other: 230,000 Total: 1,099,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Government, Specialty Stores, Shopping Centers, Restaurants, Education, Multi-Family, Office J. Scott Pepper, Vice President 643 N. Orleans Street Chicago, IL 60654 (312) 266-4700 info@pepperconstruction.com Year Established: 1927, No.
of Employees: 1,070 Retail: $81,300,000 Restaurants: N/A Hospitality: $83,900,000 Healthcare: $181,400,000 Multi-Family: $101,700,000 Federal: N/A Other: $811,700,000 Total: $1,260,000,000 Completed Projects as of 12/31/18: 470 Square Footage: Retail: 871,000 Hospitality: 937,000 Restaurants: N/A Federal: N/A Healthcare: 771,000 Multi-Family: 1,065,000 Other: 8,500,000 Total: 12,144,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Casinos, Government, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Multi-Family, Commercial Office, Data Centers, Industrial / Manufacturing, Institutional, Entertainment Poettker Construction Company Danielle Bergmann, Director of Marketing 380 S Germantown Rd. Breese, IL 62230 (618) 526-7213 Fax: (618) 526-7654 dbergmann@poettkerconstruction.com Year Established: 1980, No. of Employees: 135 Retail: $30,503,403.00 Restaurants: N/A Hospitality: N/A Healthcare: $3,236,392.00 Multi-Family: N/A Federal: $57,412,716.00 Other: $36,290,254.00 Total: $127,442,765.00 Completed Projects as of 12/31/18: 28 Square Footage: Retail: 1,494,966 Hospitality: N/A Restaurants: N/A Federal: 158,720 Healthcare: N/A Multi-Family: N/A Other: 46,525 Total: 1,700,211 Specialize In: Big-Box/Department, Groceries, Healthcare, Government, Shopping Centers, Hotels, Education, Commercial & Corporate (Office), Industrial/Warehouse/Distribution, Recreational Prairie Contractors, Inc. Peter Hegarty, President 9318 Gulfstream Rd., Unit C Frankfort, IL 60423 (815) 469-1904 Fax: (815) 469-5436 phegarty@prairie-us.com Year Established: 2003 No. 
of Employees: 25 Retail: $4,847,200.00 Restaurants: $21,519,800.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $26,367,000.00 Completed Projects as of 12/31/18: 69 Square Footage: Retail: 28,000 Hospitality: N/A Restaurants: 120,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 148,000 Specialize In: Specialty Stores, Restaurants Prime Retail Services, Inc. Jeff Terry, Director of Business Development 3617 Southland Dr. Flowery Branch, GA 30542 (866) 504-3511 Fax: (866) 584-3605 jterry@primeretailservices.com Year Established: 2003 No. of Employees: 600+ Retail: $32,000,000.00 Restaurants: $6,000,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: $1,000,000 Other: N/A Total: $39,000,000.00 Completed Projects as of 12/31/18: N/A Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Government, Hotels, Restaurants, Education, Banking and Financial Facilities Retail Construction Services, Inc. Ross Stecklein, Director of Business Development 11343 39th St. N Lake Elmo, MN 55042 (651) 704-9000 Fax: (651) 704-9100 rstecklein@retailconstruction.com Year Established: 1984 Specialize In: Specialty Stores, Shopping Centers, Hotels, Restaurants, Other PTS Contracting Alan Briskman, Principal 75 Virginia Rd. White Plains, NY 10603 (914) 290-4166 alan@ptscontracting.com Year Established: 2013 No.
of Employees: 8 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: $11,000,000.00 Multi-Family: $8,000,000.00 Federal: N/A Other: N/A Total: $19,000,000.00 Completed Projects as of 12/31/18: 14 Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: 40,000 Multi-Family: 33,000 Other: N/A Total: 73,000 Specialize In: Healthcare, Specialty Stores, Multi-Family R.E. Crawford Construction, LLC Susan Courter, Director of Business Development 6650 Professional Pkwy. W, #100 Sarasota, FL 34240 (941) 907-0010 Fax: (941) 907-0030 scourter@recrawford.com Year Established: 2005, No. of Employees: 44 Retail: $27,100,000.00 Restaurants: $4,800,000.00 Hospitality: N/A Healthcare: $1,150,000.00 Multi-Family: N/A Federal: N/A Other: $2,700,000.00 Total: $35,750,000.00 Completed Projects as of 12/31/18: 74 Square Footage: Retail: 198,800 Hospitality: N/A Restaurants: 25,600 Federal: N/A Healthcare: 13,500 Multi-Family: N/A Other: 11,000 Total: 235,400 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Government, Specialty Stores, Shopping Centers, Restaurants Rockerz Inc. Robert Smith, Dir. Business/National Accounts 100 Commonwealth Warrendale, PA 15086 (724) 612-6520 rsmith@rockerzinc.com Year Established: 2004, No. of Employees: 55 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $10,500,000 Completed Projects as of 12/31/18: 300+ Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 4,500,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Casinos, Government, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Multi-Family, Other Rockford Construction Jennifer Boezwinkle, Executive Vice President 601 First St. NW Grand Rapids, MI 49503 (616) 285-6933 jboezwinkle@rockfordconstruction.com Year Established: 1987 No.
of Employees: N/A Retail: $130,189,000.00 Restaurants: N/A Hospitality: N/A Healthcare: $20,178,000.00 Multi-Family: $72,899,000.00 Federal: N/A Other: $178,981,000.00 Total: $402,247,000.00 Completed Projects as of 12/31/18: 369 Square Footage: Retail: 2,822,550 Hospitality: 185,296 Restaurants: 98,541 Federal: N/A Healthcare: 221,905 Multi-Family: 1,627,912 Other: 2,821,422 Total: 7,777,626 Specialize In: Big-Box/Department, Groceries, Healthcare, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Multi-Family, Industrial and Manufacturing Royal Services Jamie Leeper, Director of Business Development 19175 Metcalf Ave. Overland Park, KS 66221 (913) 387-3436 jleeper@royalsolves.com Year Established: 1993 No. of Employees: 48 Retail: $20,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $2,000,000.00 Total: $22,000,000 Completed Projects as of 12/31/18: 10,000 Square Footage: Retail: N/A, Hospitality: N/A, Restaurants: N/A, Federal: N/A, Healthcare: N/A, Multi-Family: N/A, Other: N/A, Total: N/A Specialize In: Specialty Stores RT Stevens Construction, Inc. Troy Stevens, President 420 McKinley St., Suite 111-313 Corona, CA 92879 (951) 280-9361 Fax: (951) 549-9360 tstevens@rtstevens.com Year Established: 1988 No. of Employees: N/A Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: N/A Completed Projects as of 12/31/18: 34 Square Footage: Retail: 109,242 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Healthcare, Government, Specialty Stores, Shopping Centers, Restaurants S.M. Wilson & Co. Coleen Olson, Executive Assistant 2185 Hampton Ave. St. Louis, MO 63139 (314) 645-9595 Fax: (314) 645-1700 coleen.olson@smwilson.com Year Established: 1921, No.
of Employees: 124 Retail: $46,618,130.00 Restaurants: N/A Hospitality: N/A Healthcare: $43,587,806.00 Multi-Family: $20,020,502.00 Federal: $3,341,379.00 Other: $71,240,842.00 Total: $184,808,659.00 Completed Projects as of 12/31/18: 23 Square Footage: Retail: 1,815,203 Hospitality: N/A Restaurants: N/A Federal: 371,000 Healthcare: 884,261 Multi-Family: 310,000 Other: 1,898,468 Total: 5,278,932 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Casinos, Government, Specialty Stores, Shopping Centers, Hotels, Education, Multi-Family, Industrial Sachse Construction Miha Pusta, Business Development 1528 Woodward Ave., Suite 600 Detroit, MI 48226 (313) 481-8263 Fax: (313) 481-8250 mpusta@sachse.net Year Established: 1991, No. of Employees: 165 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: N/A Completed Projects as of 12/31/18: 185 Square Footage: Retail: 16,000 Hospitality: 95,000 Restaurants: 48,600 Federal: N/A Healthcare: 26,000 Multi-Family: 219,000 Other: 178,000 Total: 1,056,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Casinos, Shopping Centers, Hotels, Restaurants, Education, Multi-Family S.L. Hayden Construction Inc. Steve Hayden, President 3015 S Burleson Blvd. Burleson, TX 76028 (817) 783-7900 Fax: (817) 783-7902 shayden@hcichicago.com Year Established: 1940, No.
of Employees: 40 Retail: $21,000,000.00 Restaurants: $14,270,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $35,270,000.00 Completed Projects as of 12/31/18: 43 Square Footage: Retail: 12,352 Hospitality: N/A Restaurants: 64,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 76,352 Specialize In: Big-Box/Department, Shopping Centers, Hotels, Restaurants, Facility Maintenance SAJO Inc. Rocco Raco, Director of Marketing & Business Development 1320 Graham Mont-Royal, QC H3P 3C8 Canada (877) 901-7256 Fax: (514) 385-1863 rocco@sajo.com Year Established: 1977 No. of Employees: 170 Specialize In: Specialty Stores, Retail Schimenti Construction Company Joseph Rotondo, Executive Vice President 650 Danbury Rd. Norwalk, CT 06877 (914) 244-9100 Fax: (914) 244-9104 marketing@schimenti.com Year Established: 1994 No. of Employees: 205 Retail: $224,000,000.00 Restaurants: $16,000,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $35,000,000.00 Total: $275,000,000.00 Completed Projects as of 12/31/18: 160 Square Footage: Retail: 1,040,000 Hospitality: N/A Restaurants: 120,000 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 340,000 Total: 1,500,000 Specialize In: Big-Box/Department, Groceries, Specialty Stores, Restaurants Sharpe Contractors, LLC Brian Mulligan, Vice President 425 Buford Hwy. NW, Suite 204 Suwanee, GA 30024 (678) 765-8680 bmulligan@sharpegc.com Year Established: N/A No. of Employees: 15 Retail: N/A Restaurants: N/A Hospitality: $17,352,275.00 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $17,352,275.00 Completed Projects as of 12/31/18: 3 Square Footage: Retail: 17,000 Hospitality: 54,000 Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 71,000 Specialize In: Specialty Stores, Shopping Centers, Hotels, Restaurants Solex Contracting Inc.
Jerry Allen, President 42146 Remington Ave. Temecula, CA 92590 (951) 308-1706 Fax: (951) 308-1856 jerry@solexcontracting.com Year Established: 2005 No. of Employees: 95 Retail: $30,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $10,000,000.00 Total: $40,000,000.00 Completed Projects as of 12/31/18: 150 Square Footage: Retail: 450,000 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 150,000 Total: 600,000 Specialize In: Big-Box/Department, Specialty Stores, Restaurants, Tower Erection/Telecommunications Scott Contracting, LLC Johnny Wilkins, Director of Business Development 702 Old Peachtree Road NW, Suite 100 Suwanee, GA 30024 (770) 274-0534 johnny.wilkins@scott-contracting.com Year Established: 2003 No. of Employees: 48 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: $12,000,000.00 Multi-Family: N/A Federal: N/A Other: $25,000,000.00 Total: $37,000,000 Completed Projects as of 12/31/18: 120+ Square Footage: Retail: N/A, Hospitality: N/A, Restaurants: N/A, Federal: N/A, Healthcare: N/A, Multi-Family: N/A, Other: N/A, Total: N/A Specialize In: Healthcare, Office Interiors Shames Construction Carolyn Shames, President/CEO 5826 Brisa St. Livermore, CA 94550 (925) 606-3000 Fax: (925) 606-3003 cshames@shames.com Year Established: 1987 No. of Employees: 54 Retail: $77,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $77,000,000.00 Completed Projects as of 12/31/18: 20 Square Footage: Retail: 2,700,000 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 2,700,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Specialty Stores, Shopping Centers, Restaurants SOS Retail Services Eli Lessing, Director of Business Development 201 Rosa Helm Way Franklin, TN 37067 (615) 550-4343 elessing@sos-retailservices.com Year Established: 2009 No.
of Employees: Spiegelglass Construction Company Tim Spiegelglass, Owner 18 Worthington Access Dr. Maryland Heights, MO 63043 (314) 569-2300 Fax: (314) 569-0788 tim@spiegelglass-gc.com Year Established: 1904 No. of Employees: 15 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: Completed Projects as of 12/31/18: N/A Square Footage: Retail: N/A, Hospitality: N/A, Restaurants: N/A, Federal: N/A, Healthcare: N/A, Multi-Family: N/A, Other: N/A, Total: N/A Specialize In: Specialty Stores, Restaurants, Tenant Improvement Retail Taylor Bros. Construction Co., Inc. Jeff Chandler, Vice President 4555 Middle Rd. Columbus, IN 47203 (812) 379-9547 Fax: (812) 372-4759 jeff.chandler@tbcci.com Year Established: 1933 No. of Employees: 250 Retail: $70,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: $5,000,000.00 Multi-Family: N/A Federal: $1,000,000.00 Other: $11,000,000.00 Total: $87,000,000.00 Completed Projects as of 12/31/18: 575 Square Footage: Retail: 5,000,000 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: 250,000 Multi-Family: N/A Other: 500,000 Total: 5,750,000 Specialize In: Big-Box/Department, Healthcare, Casinos, Specialty Stores TDS Construction, Inc. Christi Bock, VP of Operations 4239 63rd St. W Bradenton, FL 34209 (941) 795-6100 Fax: (941) 795-6101 christi.bock@tdsconstruction.com Year Established: 1987, No.
of Employees: 63 Retail: $52,900,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $52,900,000.00 Completed Projects as of 12/31/18: 49 Square Footage: Retail: 3,464,751 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 3,464,751 Specialize In: Big-Box/Department, Groceries, Specialty Stores, Shopping Centers TRICON Construction Rich Carlucci, Vice President 3433 Marshall Ln. Bensalem, PA 19020 (267) 223-1060 Fax: (215) 633-8363 r.carlucci@tricon-construction.com Retail: $7,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $7,000,000.00 Completed Projects as of 12/31/18: 50 Square Footage: Retail: 555,000 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 555,000 Specialize In: Big-Box/Department, Drug Stores, Casinos, Specialty Stores Timberwolff Construction, Inc. Mike Wolff, President 1659 Arrow Rte. Upland, CA 91786 (909) 949-0380 Fax: (909) 949-8500 mike@timberwolff.com Year Established: 1989, No. of Employees: 50 Retail: N/A Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: N/A Completed Projects as of 12/31/18: 160 Square Footage: Retail: N/A Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: N/A Specialize In: Big-Box/Department, Healthcare, Specialty Stores, Shopping Centers, Restaurants, Commercial Dental/Medical Offices UHC Construction Services Leslie Burton, Director of Business Development 154 E Aurora Rd., #155 Northfield, OH 44067 (216) 544-7588 lburton@uhccorp.com Year Established: 2006 No.
of Employees: N/A Retail: $25,000,000.00 Restaurants: $11,000,000.00 Hospitality: $1,000,000.00 Healthcare: $1,000,000.00 Multi-Family: N/A Federal: N/A Other: N/A Total: $38,000,000.00 Completed Projects as of 12/31/18: 412 Square Footage: Retail: 4,125,000 Hospitality: 8,000 Restaurants: 2,112,000 Federal: N/A Healthcare: 14,000 Multi-Family: N/A Other: N/A Total: 6,259,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Specialty Stores, Hotels, Restaurants, Financial/Banking Triad Construction Donna Coneley, Vice President of Development 2206 O'Day Rd. Pearland, TX 77581 (281) 485-4700 Fax: (281) 485-7722 d.coneley@triadrc.com Year Established: 2008, No. of Employees: 65 Retail: $5,842,710.00 Restaurants: N/A Hospitality: $750,000.00 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $41,445,761.00 Total: $48,038,471.00 Completed Projects as of 12/31/18: 32 Square Footage: Retail: 48,689 Hospitality: 5,769 Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 436,271 Total: 490,729 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Government, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Environmentally Controlled Storage Facilities VENATOR Contracting Group, LLC Suzette Novak, Director of Business Development 44930 Vic Wertz Dr. Clinton Township, MI 48036 (586) 229-2428 Fax: (586) 229-2428 suzette@venatorcontracting.com Year Established: 2010 No. of Employees: 12 Retail: $5,000,000.00 Restaurants: $3,500,000.00 Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: $587,000.00 Total: $9,087,000.00 Completed Projects as of 12/31/18: 23 Square Footage: Retail: 134,900 Hospitality: N/A Restaurants: 34,500 Federal: N/A Healthcare: N/A Multi-Family: N/A Other: 10,000 Total: 179,400 Specialize In: Big-Box/Department, Specialty Stores, Restaurants, Salons, Community Centers Warwick Construction, Inc.
Walt Watzinger, Vice President 365 FM 1959 Houston, TX 77034 (832) 448-7000 Fax: (832) 448-3000 walt@warwickconstruction.com Year Established: 1999, No. of Employees: 80 Retail: $75,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $75,000,000.00 Completed Projects as of 12/31/18: 124 Square Footage: Retail: N/A, Hospitality: N/A, Restaurants: N/A, Federal: N/A, Healthcare: N/A, Multi-Family: N/A, Other: N/A, Total: N/A Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Specialty Stores, Shopping Centers, Restaurants Winkel Construction, Inc. Rick Winkel, C.E.O. 1919 W Main St. Inverness, FL 34452 (352) 860-0500 Fax: (352) 860-0700 rickw@winkel-construction.com Year Established: 1972, Weekes Construction, Inc. Hunter Weekes, Vice President 237 Rhett St. Greenville, SC 29601 (864) 233-0061 Fax: (864) 235-9971 hweekes@weekesconstruction.com Year Established: 1975, No. of Employees: 32 Retail: $36,000,000.00 Restaurants: N/A Hospitality: N/A Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $36,000,000.00 Completed Projects as of 12/31/18: 1975 Square Footage: Retail: 599,212 Hospitality: N/A Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 599,212 Specialize In: N/A The Whiting-Turner Contracting Company Bob Minutoli, Jr., Division Vice President 135 W Central Blvd., Suite 840 Orlando, FL 32801 (407) 370-4500 bob.minutoli@whiting-turner.com Year Established: 1909, No.
of Employees: 3,867 Retail: $435,000,000.00 Restaurants: $37,000,000.00 Hospitality: $248,000,000.00 Healthcare: $1,150,000,000.00 Multi-Family: $398,000,000.00 Federal: $321,000,000.00 Other: $5,856,000,000.00 Total: $8,445,000,000.00 Completed Projects as of 12/31/18: 350+ Square Footage: Retail: 7,098,881 Hospitality: 3,628,337 Restaurants: 193,270 Federal: 1,958,599 Healthcare: 24,553,753 Multi-Family: 22,026,000 Other: N/A Total: 59,458,840 Specialize In: Big-Box/Department, Groceries, Drug Stores, Healthcare, Casinos, Government, Specialty Stores, Shopping Centers, Hotels, Restaurants, Education, Multi-Family, E-Commerce, Data Center, Warehouse & Distribution, Theme Parks, Sports Facilities Wolverine Building Group Mike Houseman, President/North America 4045 Barden SE Grand Rapids, MI 49512 (616) 299-4381 Fax: (616) 949-6211 mhouseman@wolvgroup.com Year Established: N/A, No. of Employees: N/A Retail: $20,000,000.00 Restaurants: $20,000,000.00 Hospitality: $15,000,000.00 Healthcare: $5,000,000.00 Multi-Family: $95,000,000.00 Federal: N/A Other: $40,000,000.00 Total: $195,000,000.00 Completed Projects as of 12/31/18: N/A Square Footage: Retail: 700,000 Hospitality: 10,000 Restaurants: 50,000 Federal: N/A Healthcare: 10,000 Multi-Family: 180,000 Other: N/A Total: 950,000 Specialize In: Big-Box/Department, Groceries, Drug Stores, Specialty Stores, Shopping Centers, Hotels, Restaurants, Multi-Family Zerr Enterprises, Inc. Mike Zerr, President 1545 S Acoma St. Denver, CO 80223 (303) 758-7776 Fax: (303) 758-7770 mike.zerr@zerrenterprises.com Year Established: 1998, No.
of Employees: 16 Retail: N/A Restaurants: N/A Hospitality: $18,000,000.00 Healthcare: N/A Multi-Family: N/A Federal: N/A Other: N/A Total: $18,000,000.00 Completed Projects as of 12/31/18: 28 Square Footage: Retail: N/A Hospitality: 880,000 Restaurants: N/A Federal: N/A Healthcare: N/A Multi-Family: N/A Other: N/A Total: 880,000 Specialize In: Hotels, Restaurants SPECIAL REPORT LIGHTING Lighting manufacturers on display in annual listing It is all about lighting these days. To help you keep up with the industry's top vendors, our annual listing gives you the contact person and contact information you need to get started. To see how to get listed in the next report, email publisher David Corson at davidc@ccr-mag.com. For a digital version, visit us online at. Above All Lighting Inc. Ying Su, Marketing Manager 1501 Industrial Way N Toms River, NJ 08755 (866) 222-8866 yingsu@abovealllighting.com Lighting Product Type: Light Bulbs, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial Acuity Brands Inc. Monica Weglicki-Sanchez, Sr. Manager Analyst & Media Relations, Thought Leadership One Lithonia Way Conyers, GA 30012 (470) 413-2340 monica.sanchez@acuitybrands.com Lighting Product Type: Connected LED Luminaires by Lithonia Markets Served: Retail, Healthcare, Corporate, Education, Shopping Malls, Commercial Acclaim Lighting, LLC Michael Giardina, Product Manager 6122 S Eastern Ave.
Los Angeles, CA 90040 (323) 213-4626 info@acclaimlighting.com Lighting Product Type: Accent Lighting, Solid State Lighting Fixtures, LED Linear Indoor, LED Linear Outdoor, Exterior/Outdoor Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial AFC Cable Lindsay Lyons, Marketing Communications Manager 960 Flaherty Dr. New Bedford, MA 02745 (508) 985-1240 llyons@atkore.com Lighting Product Type: Armored Cable, Metal Clad Cable, Flexible and Liquidtight Conduit Markets Served: Retail, Hospitality, Healthcare, Restaurants, Education, Shopping Malls, Commercial, Multi-Family ACS Uni-Fab Lindsay Lyons, Marketing Communications Manager 960 Flaherty Dr. New Bedford, MA 02745 (508) 985-1240 llyons@atkore.com Lighting Product Type: Highbay Lighting, Modular Lighting, Pre-Fabricated Assemblies, Raised Floor Systems, and Underfloor Systems Markets Served: Hospitality, Healthcare, Corporate, Education, Multi-Family, Telecommunications, Office Space, Casinos, and Hotels Altman Lighting Julie Smith, General Manager 57 Alexander St. Yonkers, NY 10701 (914) 476-7987 marketing@altmanlighting.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, LED Linear Indoor, Recessed Lighting, Track Lighting, Exterior/Outdoor Lighting, Commercial Lighting, Entertainment and Theatrical Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Commercial, Museum, Art Gallery, Performance Arts Venues, Houses of Worship American Permalight, Inc.
Marina Batzke, General Manager 237th St., #101 Torrance, CA 90505 (310) 891-0924 info@americanpermalight.com Lighting Product Type: Emergency: Egress Path Marking Lighting Markets Served: Hospitality, Healthcare, Corporate, Education, Commercial

Amerlux, LLC
Bill Plageman, Vice President of Marketing and Product Development 178 Bauer Rd. Oakland, NJ 07436 (973) 882-5010 bplageman@amerlux.com Lighting Product Type: Accent Lighting, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, Recessed Lighting, Track Lighting, Shelving Lighting, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

ANP Lighting
Ron Foster, Owner 9044 Del Mar Ave. Montclair, CA 91763 (909) 239-3855 rpfoster@anplighting.com Lighting Product Type: Exterior/Outdoor Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Auroralight
Jason McCulloch, Director of Sales and Marketing 2742 Loker Ave. W Carlsbad, CA 92010 (877) 942-1179 Fax: (760) 931-2916 sales@auroralight.com Lighting Product Type: Accent Lighting, Solid State Lighting Fixtures, LED Linear Outdoor, Wall Sconces, Exterior/Outdoor Lighting, Landscape Lighting, Underwater Markets Served: Hospitality, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family, Residential

Autec Power Systems
Billy Bautista, Marketing Director 31328 Via Colinas, Suite 102 Westlake Village, CA 91362 (818) 338-7788 marketing@autec.com Lighting Product Type: Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting, LED Drivers Markets Served: Hospitality, Healthcare, Corporate, Education, Commercial, Custom, Agri/Horticulture

Avenue Lighting
Chris Titizian, CEO 9000 Fullbright Ave. Chatsworth, CA 91331 (800) 798-0409 Fax: (888) 870-0105 info@avenuelighting.com Lighting Product Type: Accent Lighting, Light Bulbs, Close to Ceiling Fixtures, Solid State Lighting Fixtures, LED Linear Indoor, Wall Sconces, Exterior/Outdoor Lighting, Custom Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Aurea Lighting
Beth Pfefferle, Vice President, Marketing 116 John St. Lowell, MA 01852 (978) 459-4500 bpfefferle@bambuglobal.com Lighting Product Type: Close to Ceiling Fixtures, Solid State Lighting Fixtures, LED Linear Indoor, Recessed Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Big Ass Fans
Alex Risen, Public Relations 2348 Innovation Dr. Lexington, KY 40511 (877) BIG-FANS alex.risen@bigassfans.com Lighting Product Type: Close to Ceiling Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Task Lighting, Exterior/Outdoor Lighting, Security Lighting Markets Served: Retail, Hospitality, Restaurants, Corporate, Education, Shopping Malls, Commercial, Industrial
Bitro Group
Fritz Meyne, Jr., Vice President Sales 300 Lodi St. Hackensack, NY 07601 (201) 641-1004 fritzm@bitrogroup.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, LED Linear Indoor, LED Linear Outdoor, Shelving Lighting, Exterior/Outdoor Lighting, Landscape Lighting, Commercial Lighting, Point of Purchase, RGB, Plastics Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Marine

ConTech Lighting
725 Landwehr Rd. Northbrook, IL 60062 (847) 559-5500 info@contechlighting.com Lighting Product Type: Accent Lighting, Light Bulbs, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, Recessed Lighting, Track Lighting, Task Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Restaurants, Corporate, Education, Shopping Malls, Commercial

Blueprint Lighting
Kelly Aaron, Chief Luminary/Company Creative Director 601 W 26th St., Suite M258 New York, NY 10001 (212) 243-6300 kelly@blueprintlighting.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Wall Sconces Markets Served: Retail, Hospitality, Restaurants, Corporate, Commercial, Multi-Family

Boost Lighting
Christine Pappas, Business Development 3235 Satellite Place, Bldg. 400, Ste. 358 Duluth, GA 30096 (470) 209-3668 christine@boostlightinginc.com Lighting Product Type: Solid State Lighting Fixtures (Commercial Industrial and Residential Categories) Markets Served: Healthcare, Education, Commercial, Multi-Family, Commercial, Industrial and Residential

Controlled Power
Suzanne Hooley, Marketing Director 1955 Stephenson Hwy. Troy, MI 48083 (800) 521-4792 shooley@controlledpwr.com Lighting Product Type: Emergency Lighting Inverters, Egress Lighting Solutions Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Cope
Lindsay Lyons, Marketing Communications Manager 960 Flaherty Dr. New Bedford, MA 02745 (508) 985-1240 llyons@atkore.com Lighting Product Type: Highbay Lighting, Commercial Lighting, Cable Trays and Cable Management Solutions Markets Served: Retail, Hospitality, Healthcare, Education, Commercial

Commercial lighting
Farren Halcovich, National Account Sales Manager 81161 Indio Blvd. Indio, CA 92201 800-755-0155 farrenhalcovich@yahoo, Multi-Family

Cree Lighting
Carrie Martinelli, Director, Marketing Communications & Events 4401 Silicon Dr. Durham, NC 27703 (919) 407-5476 cmartinelli@creelighting.com Lighting Product Type: Accent Lighting, Light Bulbs, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Track Lighting, Task Lighting, Shelving Lighting, Exterior/Outdoor Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family, Airports, Auto Dealerships, Industrial/Warehouse, Petroleum & Convenience Store

D & P Custom Lights & Wiring Systems, Inc.
Rita Schenkel, Inside Sales Representative 900 63rd Ave. N Nashville, TN 37209 (615) 350-7800 Fax: (615) 350-8310 info@dandpcustomlights.com Lighting Product Type: Checkout Lights Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

Enlighted
Mark Milligan, Senior Vice President of Marketing 930 Benecia Ave. Sunnyvale, CA 94085 (650) 964-1094 mark.milligan@enlightedinc.com Lighting Product Type: N/A Markets Served: N/A

ET2 Lighting
Adena Sperling, Marketing Director

EarthTronics, Inc.
Kevin Youngquist, Executive Vice President 380 W Western Ave., Suite 301 Muskegon, MI 49440 (231) 332-1188 Fax: (231) 726-5029 contact@earthtronics.com Lighting Product Type: Accent Lighting, Light Bulbs, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, Recessed Lighting, Track Lighting, Task Lighting, Wall Sconces, Exterior/Outdoor Lighting, Security Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

Eaton
Kyra Mitchell Lewis 1121 Highway 74 S Peachtree City, GA 30269 770-486-4800 eaton.com/lighting PTCmarcom@eaton.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highway Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Track Lighting, Task Lighting, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Elemental LED
Kris Hong, Sr. Marketing Manager 885 Trademark Dr., Suite 200 Reno, NV 89521 (877) 817-6028 kris.hong@elementalled.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Task Lighting, Shelving Lighting, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Flex Lighting Solutions
Luis Acena, Sr. Marketing Manager 7320 W 162nd St.
Overland Park, KS 66085 (913) 851-3000 lighting@flex.com Lighting Product Type: Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, Recessed Lighting, Commercial Lighting, Indoor Sports Lighting Markets Served: Retail, Corporate, Education, Commercial

Fulham
Andy Firchau, Marketing Manager 12705 S Van Ness Ave. Hawthorne, CA 90250 (323) 779-2980 Fax: (323) 754-1141 afirchau@fulham.com Lighting Product Type: Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Wall Sconces, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting, Wireless Controlled LED Drivers Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

Genesis Lighting Solutions
Doug Head, Executive Vice President 700 Parker Square, Suite 205 Flower Mound, TX 75028 (469) 322-1900 doug@adart.com Lighting Product Type: Accent Lighting, Light Bulbs, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Track Lighting, Task Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting, Parking Lot Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

IdentiCom Sign Solutions
John DiNunzio, President 24657 Halsted Rd. Farmington Hills, MI 48335 (248) 344-9590 Fax: (248) 946-4198 info@identicomsigns.com Lighting Product Type: Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Exterior/Outdoor Lighting, Landscape Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

Griplock Systems
Bryan Shamblin, Sales Director 1029 Cindy Ln. Carpinteria, CA 93013 (805) 566-0064 bryans@griplocksystems.com Lighting Product Type: Cable Suspension Systems Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

Innovations in Lighting
Rob Bruck, President 136 N California Ave. City of Industry, CA 91744 (818) 732-9238 Fax: (818) 796-4724 anna@bruckconcepts.com Lighting Product Type: Solid State Lighting Fixtures, Task Lighting, Wall Sconces, Decorative Markets Served: Hospitality, Commercial

HanleyLED
Michael Kerber, Director of LED Development 11745 Sappington Barracks Rd. St. Louis, MO 63127 (800) 542-9941 information@hanleyledsolutions.com Lighting Product Type: LED Linear Outdoor, Exterior/Outdoor Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Shopping Malls, Commercial

JESCO Lighting Group
Richard Kurtz, President & CEO 15 Harbor Park Dr. Port Washington, NY 11050 (800) 527-7796 Fax: (855) 265-5768 info@jescolighting.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Track Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

HyLite LED, LLC
Shahill Amin, VP of Marketing and Sales 3705 Centre Cir. Fort Mill, SC 29715 (803) 336-2230 Fax: (803) 336-2231 shahilamin@hylite.us Lighting Product Type: Light Bulbs, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Exterior/Outdoor Lighting, Security Lighting, Commercial Lighting, Industrial Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family, Industrial

Kalwall Corporation
Amy Keller, VP International Sales and Marketing 1111 Candia Rd. Manchester, NH 03105 (603) 627-3861 Fax: (603) 627-7905 info@kalwall.com Lighting Product Type: Daylighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

LaMar Lighting Co., Inc.
Nicole Calise, Director of Marketing 485 Smith St. Farmingdale, NY 11735 (631) 777-7700 Fax: (631) 777-7705 nicole@lamarlighting.com Lighting Product Type: Close to Ceiling Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Task Lighting, Wall Sconces, Exterior/Outdoor Lighting, Security Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family, Other

LIDO Lighting
Bill Pierro Jr., LC, President 966 Grand Blvd. Deer Park, NY 11729 (631) 595-2000 Fax: (631) 595-7010 billpierro@lidolighting.com Lighting Product Type: Accent Lighting, Light Bulbs, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Track Lighting, Task Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting, Lighting Controls Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family, Millwork

LEDVANCE
Glen Gracia, Head of Communications, USC 200 Ballardvale St. Wilmington, MA 02189 (978) 753-5185 glen.gracia@ledvance.com Lighting Product Type: Accent Lighting, Light Bulbs, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Task Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Lightheaded Lighting Ltd.
Steve Dewar, VP Business Development 1150-572 Nicola Pl.
Port Coquitlam, BC Canada V3B OK4 (604) 464-5644 Fax: (604) 464-0888 info@lightheadedlighting.com Lighting Product Type: Accent Lighting, Solid State Lighting Fixtures, Recessed Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Security Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Lighting Services Inc.
Sales, 2 Holt Dr. Stony Point, NY 10980 (845) 942-2800 Fax: (845) 942-2177 sales@maillsi.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, LED Linear Indoor, Recessed Lighting, Track Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Museum/Galleries

Legrand
Kelsey London, Account Supervisor 415 Madison Ave. New York, NY 10007 (212) 829-0002, Ext. 125 kelsey.london@sharpthink.com Lighting Product Type: Landscape Lighting, Lighting Controls Markets Served: Hospitality, Healthcare, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Legrand, North & Central America (Wattstopper)
Jared Morello, Director of Product Management 2234 Rutherford Rd. Carlsbad, CA 92008 (760) 804-9701 jared.morello@legrand.us Lighting Product Type: Lighting Controls Markets Served: Healthcare, Corporate, Education, Commercial, Multi-Family

LUXX Light Technology
Andreas Weyer, Managing Partner 4425 S Kansas Ave. St. Francis, WI 53235 (414) 763-3141 info@luxx.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Track Lighting, Task Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Commercial Lighting, LED Light Panels Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

Maxim Lighting
Adena Sperling, Marketing Director

Modular Lighting Instruments
John Yriberri, North America Market Leader One PPG Pl., Floor 31 Pittsburgh, PA 15222 (800) 674-9691 welcome.us@supermodular.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, LED Linear Indoor, Recessed Lighting, Track Lighting, Wall Sconces, Commercial Lighting, Architectural Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Nora Lighting
Kevin Solano, Marketing Manager 6505 Gayhart St. Commerce, CA 90040 (323) 767-2600 Fax: (500) 500-9955 kevin.solano@noralighting.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, LED Linear Indoor, Recessed Lighting, Track Lighting, Task Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Education, Shopping Malls, Commercial, Multi-Family

Original BTC
Anna Lee, Showroom & Account Manager 56 Greene St. New York, NY 10013 (646) 759-9007 anna@originalbtc.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Task Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

OSRAM
Ellen Miller, Head of Media Relations, Americas Region 200 Ballardvale St. Wilmington, MA 01887 (978) 570-3755 e.miller@osram.com Lighting Product Type: Accent Lighting, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Track Lighting, Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting, Lighting Management Systems, Horticultural Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

Philips Lighting
Heather Milcarek, Marketing Director 200 Franklin Square Dr. Somerset, NJ 08873 (732) 563-3468 heather.milcarek@philips, Government, Grocery, Petrol, Public Spaces

Project Light Inc.
Jenni Collier, Director of Projects 4976 Hudson Dr. Stow, OH 44224 (330) 688-9026 Fax: (330) 688-9026 jenni@projectlightinc.com Lighting Product Type: N/A Markets Served: Retail, Hospitality, Restaurants, Commercial

Quattrobi Inc.
Marina, Managing Director 311 W 43rd St., #11-101 New York, NY 10036 (929) 422-2361 info@quattrobi.net Lighting Product Type: Accent Lighting, LED Linear Indoor, Recessed Lighting, Track Lighting, Task Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Restaurants, Corporate, Shopping Malls, Commercial

Solar Electric Power Company

Regency Lighting
Mark Heerema, Sr.
Director National Accounts 195 Chastain Meadows Ct., Suite 100 Kennesaw, GA 30144 (800) 284-2024 mark.heerema@regencylighting.com Lighting Product Type: Accent Lighting, Light Bulbs, Close to Ceiling Fixtures, Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Track Lighting, Task Lighting, Shelving Lighting, Wall Sconces, Exterior/Outdoor Lighting, Security Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Stephanie Holloran, National Sales Manager 1521 SE Palm Court Stuart, FL 34994 (772) 220-6615 Fax: (772) 220-8616 info@sepconet.com Lighting Product Type: Exterior/Outdoor Lighting, Security Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

Robe Lighting Inc.
Lisa Caro, Marketing Coordinator 3410 Davie Rd., #401 Davie, FL 33314 (954) 680-1901 Fax: (954) 680-1910 info@robelighting.com Lighting Product Type: Accent Lighting, LED Linear Indoor, Exterior/Outdoor Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Sonneman - A Way of Light
Matthew Sonneman, Director Sales and Market Development 20 North Ave. Larchmont, NY 10538 (914) 834-3600 matts@sonneman.com Lighting Product Type: Accent Lighting, Close to Ceiling Fixtures, Solid State Lighting Fixtures, LED Linear Indoor, LED Linear Outdoor, Recessed Lighting, Track Lighting, Task Lighting, Wall Sconces, Exterior/Outdoor Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Samsung Semiconductor, Inc.
Sunghoon Jung, Sales & Marketing Manager 11800 Amberpark Dr., #225 Alpharetta, GA 30004 sunghoon.j@samsung.com Lighting Product Type: LED Component Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Sentry Electric LLC
Michael Shatzkin, Dir. Of Marketing & Bus. Development 185 Buffalo Ave. Freeport, NY 11520 (516) 379-4660 Fax: (516) 378-0624 michael@sentrylighting.com Lighting Product Type: Solid State Lighting Fixtures, Exterior/Outdoor Lighting Markets Served: Hospitality, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family, Municipalities

Specialty Lighting
Teresa Carpenter, Marketing P.O. Box 780 4203 Fallston Rd. Fallston, NC 28042 (704) 538-6522 Ext. 207 Fax: (704) 538-0909 tcarpenter@specialtylighting.com Lighting Product Type: Solid State Lighting Fixtures, Highbay Lighting, LED Linear Indoor, Recessed Lighting, Task Lighting, Shelving Lighting, Exterior/Outdoor Lighting, Security Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Tivoli
Stephen Ledesma, Marketing Manager 15602 Mosher Ave. Tustin, CA 92780 (714) 957-6101 Fax: (714) 427-3458 stephen@tivoliusa.com Lighting Product Type: Accent Lighting, Light Bulbs, LED Linear Indoor, LED Linear Outdoor, Wall Sconces, Exterior/Outdoor Lighting, Commercial Lighting, Theater (Safety Low Light) Markets Served: Hospitality, Restaurants, Commercial, Theater

Tridonic Inc. USA
Paul Montesino, Director of Product Marketing, Tridonic USA 3300 Route 9W Highland, NY 12528 (617) 595-8532 paul.montesino@tridonic.com Lighting Product Type: LED Drivers Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

VELUX America, LLC
Kelsey Webb, Public Relations/Content Account Manager 104 Ben Casey Dr. Fort Mill, SC 29708 (803) 396-5700 kwebb@wrayward.com Lighting Product Type: Commercial Skylights Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Tuya Smart
Kody Betonte, Outdoor Lighting Marketing Director, North America 75 E Santa Clara St., 6th Floor San Jose, CA 95113 (909) 460-3302 kody@tuya.com Lighting Product Type: Smart Lighting AI+IoT Platform. The Smart Lighting Platform will allow Commercial lighting manufacturers to connect to the broader Tuya AI+IoT platform, saving on energy and allowing for programmable outputs, diagnostics and dimming cycles. Markets Served: Commercial

Ultralights Lighting
Julia Restin-Morl, Business Development Manager 320 S Plumer Ave. Tucson, AZ 85719 (520) 623-9829 julia@ultralightslighting.com Lighting Product Type: Close to Ceiling Fixtures, LED Linear Indoor, Task Lighting, Decorative Task, Wall Sconces, Exterior/Outdoor Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family, Ecclesiastical, High End Residential

Urban Neon Sign Co.
Jim Malin, Sales Executive 500 Pine St., Suite 3A Holmes, PA 19043 (610) 804-0437 Fax: (610) 461-5566 jmalin@urbanneon.com Lighting Product Type: LED Linear Indoor, LED Linear Outdoor Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Shopping Malls, Commercial, Neon

Vista Professional
Cruz Perez, Vice President of Sales & Marketing 1625 Surveyor Ave. Simi Valley, CA 93063 (805) 527-0987 or (800) 766-8478 Fax: (888) 670-8478 email@vistapro.com Lighting Product Type: LED Linear Outdoor, Exterior/Outdoor Lighting, Landscape Lighting, Commercial Lighting Markets Served: Retail, Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial, Multi-Family

Waldmann Lighting
Debbie Boton, Marketing Communications Coordinator 9 Century Dr.
Wheeling, IL 60090 (800) 634-0007 Fax: (847) 520-1730 waldmann@waldmannlighting.com Lighting Product Type: Close to Ceiling Fixtures, Highbay Lighting, LED Linear Indoor, Recessed Lighting, Task Lighting, Exterior/Outdoor Lighting, Landscape Lighting, Commercial Lighting Markets Served: Hospitality, Healthcare, Restaurants, Corporate, Education, Shopping Malls, Commercial

LIGHT OF DAY
Intelligence in emergency lighting improves building safety
By Russ Sharer

Any commercial building has to conform to safety regulations, including the placement and maintenance of emergency lighting. Once installed, those emergency lights have to be tested regularly to ensure they are operating correctly and have sufficient battery power. But by installing smart emergency luminaires, you not only eliminate the need for manual testing, you lay the foundation for an intelligent emergency system that can increase building safety.

Consider the possibilities of having smart emergency luminaires strategically placed throughout any building.
Sensors in these luminaires can be used to detect hazards such as smoke or noxious gases to trigger an alarm. And if you connect these luminaires into a single ecosystem, you can consolidate access from a single location, making it easy to monitor and manage building conditions from a central dashboard.

To create an intelligent emergency lighting ecosystem, you need two basic elements: onboard luminaire intelligence and connectivity. Since the new generation of emergency luminaires is made with solid-state technology, programmable intelligence is embedded in the LED semiconductors. All you need to do is create a two-way communications system for luminaire monitoring and to issue commands.

A wireless emergency ecosystem

To connect luminaires together you can use either a cabled network or a wireless network. With new construction, wiring luminaires into a single intelligent infrastructure is certainly an option. More luminaire vendors are experimenting with Power over Ethernet (PoE, IEEE 802.3) to deliver power and connectivity to luminaires. But PoE has not yet gained widespread acceptance and it won't work for luminaire retrofits, which is why more luminaire manufacturers are starting to add wireless networking capability to LED drivers.

There already are wireless standards for lighting communications. Zigbee (IEEE 802.15.4), for example, is a 20-year-old low-power radio platform specifically for lighting controls, although it can't handle other types of data traffic. To create a robust emergency lighting infrastructure, you need a wireless approach that is scalable, and that can handle different types of command and control data. Bluetooth mesh is rapidly becoming the de facto standard for intelligent lighting communications.
While Bluetooth has been around as an open, device-to-device communications standard for some time, Bluetooth mesh is relatively new, providing a peer-to-peer communications grid that is readily extensible. Since it is a mesh network, data traffic is broadcast to all the other Bluetooth mesh-enabled devices within range, creating redundant connections; nodes can be added or removed at will. The Bluetooth mesh grid is readily scalable since each node is a repeater, and it can handle two-way data traffic. And since it is a well-defined open standard, devices from different vendors are assured to be compatible.

With a mesh network of smart emergency luminaires in place, you have a simpler means of testing emergency lighting and a foundation for smart building controls.

Programmed emergency response

By connecting emergency luminaires into a common ecosystem, you dramatically simplify testing and logging of emergency lighting. Standards such as CSA C22.2 No. 141 require testing and logging of emergency systems. Rather than manually inspecting each light, you can use a central console to monitor emergency lights for readiness, run remote testing and log the results.

You create new management and control possibilities by creating an ecosystem of programmable emergency luminaires, such as:
• Testing emergency systems from anywhere, anytime, including function and battery duration tests, failure alerts and automatic logging
• Real-time emergency monitoring
• Remote maintenance including commissioning and firmware updates
• Monitoring for unit failures and end-of-life for components
• Intelligent emergency response, such as programmed evacuation procedures
• Full integration with other security and emergency access systems
• Data gathering to assess building traffic patterns, occupancy, and more

Implementing an intelligent emergency lighting ecosystem also creates new possibilities for safer buildings.
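The central-console sweep described above (test every node, log the result, flag failures) can be sketched in a few lines of Python. This is a purely illustrative simulation, not any vendor's actual API: the class, the node names, the record fields and the 90-minute battery-duration requirement, a common code minimum used here as a stand-in for whatever the local standard mandates, are all assumptions.

```python
import datetime

class EmergencyLuminaire:
    """One smart emergency luminaire node (illustrative model, not a real API)."""

    def __init__(self, node_id, battery_minutes):
        self.node_id = node_id
        self.battery_minutes = battery_minutes  # runtime available on battery power

    def run_function_test(self, required_minutes=90):
        """Simulate a remote battery-duration test; return a log record."""
        return {
            "node": self.node_id,
            "tested_at": datetime.datetime.now().isoformat(timespec="seconds"),
            "battery_minutes": self.battery_minutes,
            "passed": self.battery_minutes >= required_minutes,
        }

def sweep(luminaires, required_minutes=90):
    """Central-console sweep: test every node, keep the full log, flag failures."""
    log = [lum.run_function_test(required_minutes) for lum in luminaires]
    failures = [rec for rec in log if not rec["passed"]]
    return log, failures

# A small fleet with one weak battery, to show the automatic failure alert.
fleet = [
    EmergencyLuminaire("EXIT-STAIR-1", 120),
    EmergencyLuminaire("EXIT-STAIR-2", 45),   # below the assumed 90-minute requirement
    EmergencyLuminaire("EXIT-LOBBY-1", 150),
]
log, failures = sweep(fleet)
for rec in failures:
    print(f"ALERT: {rec['node']} battery runtime {rec['battery_minutes']} min (< 90 min)")
```

In a real deployment, the function test would issue a command over the mesh and wait for each node's reply; the sweep-and-log pattern, with every result timestamped for the compliance record, is the point.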
Sensors in emergency luminaires can be programmed to detect fire, smoke, carbon monoxide, or even the sound of a gunshot. Using preprogrammed responses, those sensors can sound an alarm, activate emergency lights and alert emergency services. The connected ecosystem can use machine learning to provide proactive as well as reactive responses. For example, emergency luminaire sensors can detect the location of a fire or hazard. Based on incoming data the system can respond by lighting a path toward a safe exit, or detecting room occupancy to ensure the danger zone is clear. They can even trigger other systems such as locking and unlocking fire doors.

Since the ecosystem operates over Bluetooth mesh it can be accessed from any Bluetooth-enabled device, such as a laptop or smart phone. Data also can be accessed over the internet, which makes it easier to not only alert first responders but help them locate hot spots as well as building occupants who may be trapped.

The same intelligent ecosystem can be used for other applications. For example, Bluetooth sensors can control building access. Using Bluetooth tagging, individuals can be granted or denied access to specific areas based on the information on their badges. Visitors can be issued temporary passes with access credentials built in, and Bluetooth tagging can even be used for wayfinding, using Bluetooth beacons and mapping software that can run on your smartphone or tablet to guide you through the building or campus.

A foundation for building automation

Once you have an intelligent ecosystem in place, you can extend it to support other applications beyond emergency response and building access. For example, the same emergency luminaire sensors can be used to monitor building environmental conditions, such as ambient temperature and available light.
The ecosystem can be programmed to respond to changing light conditions, either dimming room lighting, automatically lowering or raising blinds or turning off lights in unoccupied rooms. It also can be used to activate HVAC for consistent temperature and humidity.

In fact, this type of intelligent lighting system is an ideal skeleton for building automation for the Internet of Things (IoT), especially since Bluetooth mesh can handle any type of data traffic. For example, DALI (Digital Addressable Lighting Interface) is a common standard for light dimming controls, but HVAC uses a building automation system (BAS) protocol such as BACnet. While the wireless infrastructure can handle different types of data traffic, you still need a common protocol, such as IoT, to make disparate systems interoperable. With IoT you have a foundation platform that can support multiple building automation protocols, providing a central access point to all building management systems as well as access via the web.

You can expect to see more smart LED drivers with Bluetooth mesh capability coming to market. Installing or retrofitting smart LED emergency luminaires will likely prove the simplest way to connect an entire building into an intelligent building management infrastructure.

Russ Sharer is VP of Global Marketing and Business Development for Fulham, a manufacturer of innovative and energy-efficient lighting sub-systems and components for lighting manufacturers worldwide. He is a business leader with over 25 years of experience in B2B marketing and sales, including successful software and network equipment start-ups.
Bostik's Ultra-Set® SingleStep2™ deadens sound traveling from one floor to another in this residential high-rise condominium.

Sound dampening is nothing to keep quiet about
And, why it's more important than ever in today's construction marketplace.
By Ron Treister

The commercial and multifamily combined building universe (in the United States) consists of office buildings, stores, hotels, warehouses, commercial garages… and, the still very hot multifamily housing sector. Recent statistics indicate there was a 4% increase for commercial and multifamily construction starts in the States during 2018, in particular for multifamily housing, which was up 8% to $95.1 billion, while the commercial building categories listed above were up 1% to $117.3 billion. Multifamily housing in 2017 had fallen 8% after appearing to have reached a peak in 2016, before posting the 8% rebound in 2018.

Builders of multifamily buildings, especially in the last few years, have a new catchphrase, "acoustic privacy," and they are taking this term very seriously. A tenant's noise complaints can possibly escalate to a point where they end up in front of a judge. As a result, more and more general contractors, architects, designers and engineers are putting acoustic privacy under the microscope from a construction project's concept through its completion.

In today's highly competitive marketplace where buyers are more cautious than ever… and an educated consumer is still the best customer, absolutely NOBODY wants to be disturbed by their neighbors! Working on lavish multifamily high-rises, savvy designers seldom plan for gathering rooms that could emit loud noises to be positioned next to residential living units. In other words, they're not going to put a fitness room adjacent to apartments. Clearly, tenants don't want to hear cacophonous sounds coming from a large, early morning spinning class… or the pounding of feet from a spirited Zumba workout.
Advertorial

Overall, the entire building team, from the developer on down, for the most part knows the vital importance of making sound decisions about sound management for their respective projects. A good indication of what experts consider when planning how best to address sound issues was discussed by Scott Banda, Bostik's Director of Marketing & Business Development, who stated, "There are three basic types of noise transmission found in today's multi-family residences. 'Airborne,' such as voices or music; 'Impact,' from footsteps or dropped objects… and 'Structural Component Movement' (resulting in squeaking or creaking floors).

"Local building codes have standardized on two methods of measuring noise," Banda continued. "STC (Sound Transmission Class) measures the airborne sound moving through a floor, and IIC (Impact Insulation Class) measures sound from impact on the floor. Both can be measured in a laboratory as well as on-site. The International Building Code (IBC) requires a lab IIC of 50 dB (45 dB field measurements) and a lab STC of 50 dB (45 dB field measurements). In both cases, the higher the number, the better the performance of the sound abatement material. While not all local building codes are the same, the IBC is the most common benchmark in the U.S.," Banda said.

Today, most developers, GCs, their subs and suppliers are on the same page relative to soundproofing in construction, which can be defined as "a combination of many different means to achieve the goal of reducing sound pressure with respect to a specified sound source and receptor." Or, in layman's terms, simply "to reduce the sound one hears."

In America, especially in higher-end residences, natural hardwood flooring continues to be in demand. Wood flooring has many advantages over other types of flooring. It adds value, warmth and style to both new and newly renovated homes.
Installation generally will cost more than purchasing new carpet. However, premium wood floors can last more than 100 years with regular maintenance and minor repair. The majestic "look" of wood, combined with the fact that it is a green, sustainable product of Mother Nature, makes it an even more attractive surfacing decision. And, because it doesn't harbor allergens or dirt, as do carpets and rugs, natural wood flooring continues to be highly desirable and sought after.

If wood flooring does have an ostensible downside or two, these most likely will have to do with sound. In particular, with "impact," such as the reverberations from footsteps or dropped items… and with "structural component movement." (We've all walked on a creaky wood floor at one time or another.)

"Right now," added Banda, "adhesive acoustic membranes are emerging as the fastest-growing segment of the sound abatement systems market, especially in specifications for high-rise residential construction. Other, more established products can be used in conjunction with high-performance adhesive membranes to further increase sound dampening, moving from a solution that simply passes code… to a system that dramatically increases the comfort of the space and virtually eliminates one of the most common complaints of multi-family living."

A new way of doing things

To meet or exceed building code requirements, architects and designers until now have generally specified cork, rubber or composite underlayments to function as sound abatement barriers relative to airborne or impact noise transmission. Alongside these newer acoustic solutions, new developments in adhesives have surfaced. Strategically specified sound solutions can actually have positive effects on the project's timeline. For example, advanced hardwood flooring adhesives that include sound abatement within their respective formulations can be installed in the same time required to install a glue-down hardwood floor.
The installation of cork, rubber or composite membranes often requires the membrane to be glued down first, requiring a full day to cure before crews can come back to finish the hardwood installation.

Architects and designers must make sure that if a separate membrane is used for acoustic purposes, the increased height of the floor will not cause any problems. For example, using 1/4-in. cork is effective to achieve building requirements, but this installation increases the height of the floor by 1/4 of an inch… and requires two layers of adhesive. Built-in cabinets, base molding and other items may need to be modified, sometimes by another tradesperson (such as a carpenter), to accommodate the increased height, increasing the cost of the project. Proper adhesive systems can typically eliminate the 1/4-in. thickness of the underlayment, and generally only result in the thickness of two credit cards.

Regarding floating floors, architects and designers should consider the acoustics within the room and extra expansion or contraction requirements. Floating floors can result in meeting IIC and STC values, but do not significantly reduce the sound of footsteps within the room. When there is an impact on a floating hardwood floor, the boards vibrate and thus downward motion is absorbed. However, the upward rebound is not absorbed, resulting in a hollow sound in the flooring. Proper adhesives can reduce the vibration of these boards in both directions, resulting in a quieter floor.

Expansion and contraction of floating floors is also a concern. Bostik's adhesives have elastomeric characteristics that create an anti-fracture membrane. That membrane bridges cracks up to 1/8 inch thick that can occur in the substrate prior to… or after, installation. Elastomeric properties also allow the adhesive to move with the wood as it expands or contracts due to changes in humidity and temperature… throughout the entire life of the floor.
There is a belief that typical adhesive solutions are dependent on the skill of the contractor to establish and maintain the correct membrane thickness, and that they are more susceptible to performance impact from inconsistencies in the substrate or flooring materials. To address this, Bostik's Ultra-Set® SingleStep2™ was formulated with recycled rubber crumb spacers within the adhesive itself, to increase sound abatement performance and ensure the specified membrane thickness is consistent in size throughout the entire installation.

Using the right system of sound abatement solutions can increase the overall comfort of residents… and virtually eliminate complaints about noisy neighbors (the bane of multifamily living!). For architects and designers, these advanced adhesive membranes play a vital role in developing the best sound abatement systems for owners and developers of high-rise residential buildings.

Ultra-Set® SingleStep2™ provides Bostik's highest level of acoustic performance. The adhesive contains Bostik's patented AXIOS™ Tri-Linking™ Polymer Technology, offering polymer molecules that interweave themselves into a tight mesh, subsequently absorbing both impact and airborne sound waves. Ultra-Set® SingleStep2™ also contains 1 percent recycled rubber material; emits zero VOCs (as calculated per SCAQMD Rule 1168), and no water. And of course, with this product, there is no need for a separate acoustic membrane to be installed.
"All-in-one adhesives like Bostik's Ultra-Set® SingleStep2™," concluded Banda, "are amazing, time-saving installation systems that, in addition to their acoustic control properties, also offer a lifetime warranty for unlimited moisture vapor protection with no concrete moisture testing required. We believe that if end-users are investing in beautiful natural wood flooring, it should be installed using the optimal adhesive system. One that actually helps keep neighbors neighborly."

Noise out

Let's now talk about sound dampening regarding WALLS in a commercial project. Keeping noise blocked from one side of the building to another can be a challenging task, and again, may call for the expert input of an acoustical engineer. Imagine a large furniture retailer that sells products out of a beautiful showroom, with all or most of the products inventoried within the attached, huge warehouse right behind it. These SKUs are methodically stocked in a gargantuan shelving grid from floor to ceiling. In that warehouse, there are loudspeakers, forklifts, trucks coming into the loading docks and other sources of discordant "sounds" that needn't be heard in any showroom.

Alongside today's technologically advanced porcelain products specifically crafted to optimize wall coverings in commercial settings, there are also new adhesive systems that have been created not only for perfect, time-saving and long-lasting installations; these systems also offer high-performance sound dampening properties.

Meet Bosti-Set™, Bostik's latest advancement in adhesive technology. According to Adam Abell, Bostik's Market Manager, "Bosti-Set™ has been created to offer peak performance for thin gauged porcelain panels. One-component Bosti-Set™ is an adhesive that revolutionizes how the design-build community works with thin gauged porcelain tile panels. It provides instant grab and holding power in a single coat application. Installers don't need to back-butter both the panel and also the wall.
Just one coat on one surface is all that is needed."

"Bosti-Set™ provides non-sag, instant grab of thin gauged porcelain tile panels in sizes as large as 1/4" x 5' x 10'. Because installers need only to trowel on the back of the panel, installation time is reduced by as much as 50%. There is also no need for mixing, water, or electricity to install the panels," Abell continued. "Increased time-savings!" Bosti-Set™ provides the ability to position panels up to 30 minutes while still preventing sagging.

"But there is so much more," exclaimed Abell. "Not only does Bosti-Set™ weigh up to 80% less than traditional mortar installations. It also guarantees exceptional acoustic performance for the long-term durability of the wall system. Why should salespeople working hard to make their commissions within their showroom environment have their efforts constantly interrupted by dissonant noise emitted from the warehouse?" Abell asked.

Scott Banda summed it all up by stating, "We want to hear exclamations of specifiers, building owners and end-users acknowledging that adhesive systems for both horizontal and vertical surfaces… now have the wherewithal to offer world-class sound dampening performance!"

Storytelling in adaptive reuse
Inside KETV-7's Burlington Station
By Kristi Nohavec

After 40 years of vacancy and deterioration, Omaha's historic Burlington train station has been repurposed as a state-of-the-art broadcast headquarters for Hearst's ABC-TV affiliate KETV-7. Converting the dilapidated train station into a TV studio required architecture and engineering firm LEO A DALY and general contractor Lund-Ross to uncover hidden structural conditions, perform major surgery on load-bearing elements, and thoughtfully preserve a layered history while creating a modern workplace.
Throughout its 120-year history, Omaha's Burlington Station played an important part in Omaha's development as a city. Designed by famed architect Thomas Rogers Kimball, and opened in 1898, the station was built to "wow" travelers visiting the city during the Trans-Mississippi Exposition. Visitors from all over the world passed through its doors, hailing it as "the handsomest railway station ever seen."

A 1930 renovation altered the station's original Greek Revival design into a Classical Revival style. The pitched roof was flattened, its columns removed, and the Grand Hall's spiral staircase removed and infilled. As automobiles replaced trains as the primary mode of transportation in the United States, the Burlington's business suffered. In 1974, the building was shuttered and remained vacant until late 2015, when it reopened as KETV's 7 Burlington Station.

Through its ups and downs, the building has changed dramatically, collecting layers of history that embed the history of Omaha and its people. LEO A DALY's design returned the near-condemned building to viability while preserving that layered history and adapting the building to a new use.

Saving the structure

It was a rehabilitation that many in the community thought was impossible. During its shuttered period, the building sustained extensive water damage and vandalism. For years, four deteriorated interior downspouts poured rainwater into each corner of the Great Hall and the rooms below. The masonry walls had cracked and were bleeding mortar. Below, on the track level, freeze-thaw had caused portions of the 1898 concrete tile floor to heave and collapse into pipe tunnels.
While the main level steel structure was in sound condition, the structure at the track level was, in some areas, rusted through to a see-through metallic lace. Plaster finishes had disintegrated or been removed in all but one room, the east lobby. The exterior was also in an extremely deteriorated condition.

The rehabilitation began with the removal of the passenger concourse and the 1955-era parking deck. Exterior wood doors and windows were restored where feasible or replaced with reproductions. The exterior granite, limestone, and brick were repointed and patched. A small supply of matching, salvaged brick was located and used for patching areas of missing brick.

Inside, an unplanned structural intervention became necessary when crews discovered an existing masonry wall at track level was not properly supported. Instead of the typical continuous pyramidal-brick footings on stone slabs seen elsewhere in the structure, the wall sat on a severely rusted, below-grade steel beam supported by intermittent masonry pyramids in direct contact with the earth. To save the structure, helical piers were installed as part of the permanent new structure. Steel shoring beams supported the wall temporarily while a new concrete beam foundation was poured to replace the old steel beam.

Adapting to new use

To create a track-level news studio, six cast-iron columns had to be removed from the old baggage handling room. This required the installation of eight new steel columns supported by concrete micropiles extending 80 feet down to bedrock. The new columns support four primary steel transfer beams to collect the loads from the floor above.
The loads include a 60-foot high, two-foot thick masonry wall, which weighed hundreds of thousands of pounds. This masonry wall supports the only remaining ornamental plaster wall and ceiling finishes on the main level, so maintaining strict deflection limits during erection and final installation was vital to ensure the wall and its finishes would remain undamaged. The transfer beams were manually leveraged and shimmed against the weight of the building above to achieve the design dead load deflection prior to receiving their load.

In a building located less than 50 feet from an active rail line, vibration was a challenge to the studio's sensitive acoustics. LEO A DALY worked closely with the owner's representative, Broadcast Building Company (BBC), to design a box-within-a-box solution that provided vibration isolation from the existing structure at a reasonable cost. The resulting studio space is silent even with a train passing right outside the thick historic masonry walls.

Preserving layers of history

From its period of vacancy, most of the building's historic 1930 interior finishes were lost, revealing vestiges of its original Greek Revival design. In the Great Hall, one can see evidence of each chapter in the building's history. The white glazed brick and mosaic tile floor remains from the 1898 design. Fluted marble trim and clay tile infill illustrate the extent of the 1930- and 1955-era renovations. The absence of plaster finishes and the preservation of graffiti acknowledge the period of abandonment. The interior design preserves and showcases these vestiges, along with what remains of the building's historic materials and patina. Stone, brick, steel, and decorative plaster finishes have been repointed, patched, and protected.
New elements consist of simple materials and forms which create a calming background for the chaos of the news. A soothing color palette of white, gray, and buff is invigorated with punches of russet and blue. An oculus was cut in the floor where the original 1898 stair opening had been filled in 1930. The few historic features that remained, including mosaic stone floors, were repaired or restored. The 1930s-era marble clock surround was found in pieces on the floor of an adjacent room. Three missing pieces were reconstructed of cast-stone material using molds cast from original pieces. The surround was reassembled and hung in its original place on the east wall, with a new clock face installed almost identical to the original.

The east lobby is the only room in the building that was treated as a restoration rather than a rehabilitation. With its ornamental plaster rosette ceiling and mosaic stone floor still intact, it is the building's strongest representation of its 1898 origins.

Station to Station

Throughout its history, Burlington Station connected Omaha to the rest of the world, welcoming visitors from abroad during the Trans-Mississippi Exposition, processing mail and sending generations off to war. Today, through thoughtful adaptive reuse, 7 Burlington Station connects Omaha in a new way. By retaining its layered history, the design acts as a physical expression of journalistic integrity, allowing the building to tell its remarkable story just as KETV's reporters tell the story of Omaha each day.

COMMERCIAL CONSTRUCTION & RENOVATION PEOPLE
2019 SCHEDULE:
February 26th – Tampa, FL
March 20th – Fort Worth, TX
April 9th – Atlanta, GA
May 9th – Minneapolis, MN
June 13th – Philadelphia, PA
July
MAY-JUNE 2019

Amanda Hope Rainbow Angels
Crusading the fight against childhood cancer and other life-threatening diseases
Lorraine Tallman, Founder & CEO

CONNECT. INFLUENCE. LEAD. leadupforwomen.com

Contents, May • June 2019
Owned & Operated by Women's Association, LLC
Mailing Address: PO Box 3908, Suwanee, GA 30024

Editor: Dalana Morse, dalanam@leadupforwomen.com, 817-405-4058
Contributing Writer: Kate Pittman, K8pittman@gmail.com, 214.558.0295
PR and Social Media: Ashlyn Biggs Leyba Digital Marketing, social@leadupforwomen.com, 480.848.0927
Art Director: BOC design, Inc., brent@bocdesigninc.com, 404-402-0125
Circulation/Subscriptions: subscriptions@leadupforwomen.com

4 Founder's Corner: You are STRONGER than you think, BRAVER than you believe and MORE than you ever imagined
5 Advisory Board
6 Editor's Note: Steps to building the right characteristics of a successful entrepreneur
10 Amanda Hope Rainbow Angels
LEADERSHIP
14 Lead Up for Women travels to Arizona where the weather is hot and the women are on Fire!
16 Atlanta women brought the passion, courage and humor
18 A farm-to-food solutions' leader who dreams bigger and advocates healthier.
26 Beating the odds one day at a time
30 10 Tips to gain confidence
BUSINESS
22 Putting the 'human' in human resources
LIFESTYLE
28 Becoming more present in life

Founder's Corner
You are STRONGER than you think, BRAVER than you believe and MORE than you ever imagined

We are traveling the nation educating women of all diversity, race, and culture about Lead Up for Women and how our community of strong survivors and powerhouse leaders are purposefully supporting each other for what you need and what you can offer. We are hard at work every day spreading the word about what Lead Up means with everyone we meet. Monthly luncheons have proven to be a success as we enter our fourth month of gatherings.
We are humbled and grateful for so many women who have aligned themselves to be part of our panels and share their stories. Our radio show—Lead Up for Women: Speak Up to Lead Up—launched March 27th and is already VoiceAmerica's fastest growing new radio show, leading the way for their Women Series on the Empowerment Channel. Each week we interview the bold survivors, influential women leaders and those who can teach us how to laugh and love ourselves for exactly who we are. We invite you to be inspired to lead without permission through the inspiration of our guests' stories of survival, overcoming adversity and their celebrations in business, in their community and in their personal lives.

If you missed a live show, no worries: we stream live on our Lead Up for Women Facebook page, and every show becomes "On Demand," because as women we are busy, so we can listen on our own terms. Through the strong support of our group of women, you can live your best life. You just need to tap into your greatest power, YOU! You are the only you that has ever been and the only you that will ever be. Be you and be strong, because you are brilliant and the world needs you. We align with this so much, but it means nothing if you don't hold yourself accountable on a daily basis through concrete daily actions. Those choices make or break us. All of the members of Lead Up for Women are here to offer you support and sisterhood to leading your best life, and the journey starts today. What are you waiting for? Join us.

Colleen Biggs
Founder & Principal Consultant
Connect Source Consulting Group, LLC

Advisory Board
Founder, Bialek Chiropractic
Shannon Polvino, PR and Account Manager, Insight International LLC
What really makes a difference is the range of entrepreneurial characteristics that a small business owner has. While successful entrepreneurs don’t belong to any other planet, they do have some unique qualities that set them apart from the rest of us. Do you know what those qualities are? Here are some important characteristics that are needed to become a successful entrepreneur. 1. Pursue Something You Enjoy When you want to launch your start-up or small business, you’ll need to invest plenty of time and energy. You may need to work for hours on end, so it’s extremely important to choose an area of business that you are truly passionate about. If you don’t enjoy what you do, chances are your entrepreneurial venture won’t meet with success. 2. Practice Self-Discipline Dalana Morse is the editor of Lead Up For Women magazine. You can reach her at (817) 405-4058 or by email at dalanam@leadupforwomen.com. 6 Lead Up for Women The best part about becoming an entrepreneur is that you work without a boss. The worst is that not having a boss can easily lead to procrastination, which you need to avoid at any cost. One of the most important characteristics of an entrepreneur is practicing self-discipline. You are your own boss, so be a good one. 3. Always Plan in Advance Successful entrepreneurs never forget to plan things. In fact, planning is like a habit to them. No matter what part of the business it is, they always abide by strategic planning. This quality is highly essential, as it provides you with an opportunity to study and analyze things before they are implemented. If you want to become successful as an entrepreneur, develop the trait of planning each and every area of your small business. 4. Know How to Self-Promote Self-promotion is one of the most needed characteristics for obtaining entrepreneurial success. Your success is in your own hands. As a budding entrepreneur, you need to work day in and day out to reach your audience. 
People are waiting to know who you are, what business you are in, and what products or services you have to offer.

5. Have a Strong Belief in Whatever You Do
No matter what entrepreneurs do in business, they do it with plenty of confidence. They have unstoppable faith in their ideas and are quite sure they will work. If you watch a successful entrepreneur, you'll notice a good amount of confidence in whatever they do. People with entrepreneurial traits have a firm belief in their ability to achieve the desired goals.

6. Find Opportunities to Learn
One of the key qualities that sets entrepreneurs apart is that they are always in the learning stage. They learn from other successful people and from their competition. In fact, they don't miss an opportunity where they think they can learn something new that can be utilized to boost the growth of their small business. Don't be unhappy or jealous of the success of your competitors. Instead, learn from their successes and use them to grow.

7. Know Your Customers Really Well
Successful entrepreneurs know their customers really well. No wonder they provide personal attention to their customers to keep them coming back for more.

8. Sell an Experience
Entrepreneurs don't become successful by selling a product or service; they sell an experience. By offering a product or a service to your customers, the main goal of a true entrepreneur is to provide an experience that can't be easily forgotten.

9. Negotiate Effectively
Successful entrepreneurs are excellent negotiators. If you want to become an entrepreneur and obtain success with your start-up or small business, you must perfect the art of negotiation. You'll need to negotiate a wide range of deals while establishing your business. By being an excellent negotiator, you'll be able to create a win-win situation almost anywhere.

10. Continue to Expand Your Network
One of the defining characteristics of entrepreneurs is that they are always trying to expand their business network.
They use various ways to achieve this purpose. They regularly participate in community events. They attend industry-related exhibitions and conferences. They join professional associations and clubs.

11. Persist Until You Succeed
The journey of an entrepreneur is full of adventures. While making an effort to establish their small business, they pass through ups and downs. They suffer different types of setbacks. They fail and then start over. It's qualities like these that make up an entrepreneur. No matter what happens, you should persist until success kisses your feet.

Which of these qualities do you possess, and which do you need to learn to achieve success as an entrepreneur? Don't be worried if you lack some characteristics. All of these characteristics can be learned or developed with practice.

"Lead Up for Women" hits the radio waves every week

Whoever makes the statement that endless opportunity doesn't exist needs to stop limiting themselves. VoiceAmerica offers some of the best programming in the world, with unparalleled scope and reach, which is why we teamed up with them. On March 27, 2019, we launched "Speak Up to Lead Up" with host Colleen Biggs and co-host Dee Daniels, executive producer of VoiceAmerica.

Are you ready to lead without permission and take the steps needed to live your best life?

Whether you want to start the business of your dreams, learn the steps you need to take so you can LOVE what you do, or celebrate your present and future accomplishments, our radio show will dive into deeper subjects as we interview weekly guests that have already walked in your shoes. Let the experts guide you for a clearer path to your most successful future.
Our show will be the perfect platform for all of our members to advertise their businesses, network, hear about upcoming events and recaps, and listen as Colleen and Dee speak with guests who have stories to share, have faced adversity and have become success stories in business, in their communities and in personal accomplishments. Join the strong and the brilliant ones and understand that the world is ready for you to be at your best.

Listen to "Lead Up for Women" live every Wednesday at 1 p.m. (EST) or 10 a.m. (PST) on VoiceAmerica Empowerment. Visit our website to bookmark our show and listen in live each week.

Do you have someone in mind you feel would be a great interview on the show? Do you have a mentor, coach, sponsor, or have you been inspired by an amazing leader, entrepreneur, employer or friend? If so, we want to hear from you. Please submit their name(s).

Amanda Hope Rainbow Angels
Crusading the fight against childhood cancer and other life-threatening diseases

Amanda Hope Rainbow Angels is a nonprofit support and educational organization, designated by the IRS as a 501(c)(3) tax-deductible, tax-exempt organization, founded in 2012 in celebration of Amanda Hope's life. During Amanda's three-year fight with Leukemia and nine-month battle with a brain tumor, she dreamed she would one day design a fun clothing line for kids just like her that would provide comfort and dignity during chemo treatments. Amanda's life ended far too soon, but her dream lives on through Comfycozy's for Chemo apparel. Her legacy continues with the expansion of programs and services.

The nonprofit brings Amanda's sunshine to some of the most difficult days through a program called Major Distractions. They host spa days, craft days, sports camps, meals of hope, teen nights, and many other events. Their little warriors love knowing there is always something fun in store.
Their in-house Comfort and Care team of licensed therapists provides free counseling, play therapy and supportive services to families who have a child battling cancer, a blood disorder, or any other life-threatening illness. Services are provided to pediatric patients, their siblings and their parents/caregivers. Individual, couples and family counseling appointments are available, as well as support communities and educational sessions. At this time, Comfort and Care services are only offered to families in the Maricopa County area and/or for pediatric patients being treated at Banner Thunderbird, Cardon Children's and Phoenix Children's Hospital.

Founder and CEO Lorraine Tallman and other members of the Amanda Hope team are focused on delivering compassionate and responsible advocacy, education and empowerment for families.

Amanda Hope was a special little girl, the type of child who lit up the room with her smile. At the age of nine, she started experiencing severe headaches and flu-like symptoms. After several tests, it was confirmed that Amanda had Leukemia. The phone call confirming the diagnosis would forever change the lives of Amanda and her family.
In her honor, the nonprofit carries out her vision of giving dignity back to our children with something as simple as a shirt designed for them. Comfycozy’s for Chemo with all children fighting life threatening diseases will continue to grow knowledge and correct data for children fighting cancer so we can get better research funded for our warriors. What do you see as some of your biggest opportunities moving ahead? What do you feel is the best way to connect with other women in business? To bring Amanda’s mission world-wide and have dignity in healthcare. Parents deserve and need a voice for choice. This will be done by teaching doctors and families how to communicate with each other in a respectful caring manner. Sharing the Amanda Needle and Networking with organizations that offer mentoring and educational opportunities, and growing from each other’s life experience has worked well for me. Who are the most important areas of your business that inspire you to thrive? I have three: my life, my families I serve, my board and operations, non-profit and business development. What is Amanda Hopes growth plan? To touch the lives of families suffering from a diagnosis they never thought they would hear. I continue to create relationships with hospitals to be the “go-to” person to help any family in need. What is the most rewarding part of changing lives? Amanda Hope and Lorraine Tallman. Empowering families to have a voice for choice and dignity for their treatment plan. They learn the necessary coping skills to keep fighting the journey before them. What is the biggest item currently on your to do list? Amanda’s Needle Educate all hospitals about the new incredible tool to access ports with one poke, the “Amanda Port Stabilizer.” Inspired by Amanda, a young, brave cancer warrior, the Port Stabilizer is designed to aid in access when inserting the infusion needle. What is your secret to making a non-profit a success? Our success is based on follow through. 
Don’t make commitments you cannot keep. We always make sure we understand the needs of our families. 12 Lead Up for Women May-June 2019 One-on-One with... Lorraine Tallman Founder & CEO Tell us about your family and how you manage priorities/balance? My family, husband, Marty, and three daughters, Leah, Rachael and Amanda, are gifts from Heaven. There wasn’t balance for a long time while Amanda fought cancer twice. Our whole world was centered around her chemo, radiation and hospital stays. Our date nights were late night picnics on the hospital floor and special getaways, one-on-one movies with our girls. Marty was the Rams coach, so he was at every game, school play and swim meet. He loved it all. How are you mentoring/sponsoring others? I love to mentor young non-profits. It does take a village. What are your strongest traits as a leader? What traits of other leaders inspire you? I never give up on what is right, and I’m mindful of my team and what they need to reach our goals. My favorite word is “Next.” Leaders who inspire me are women like Rosa Parks and Mother Teresa—women who are dedicated to standing up against all odds and doing whatever it takes to be the change. How has fighting for the lives of your family What book are you reading now? members changed you for the better? First, “Sandra Day O’Connor” by Evan Thomas, and “An The life lessons I have learned is that every day is a gift. Love, forgive and laugh whenever possible. My career is about making a difference daily. You never know, the smile you create today could save a life in the future. My motivation every day is my daughters last words, “Not everyone has a mama like you, promise me you’ll help every child fighting cancer. Promise.” Echo in the Darkness” by Francine Rivers How do you tap into the power of YOU that makes you unique and how has that pushed you forward? What does your typical day look like? 
My faith keeps me going every day, knowing that even though they have passed on, Amanda and Marty are with me every day. Who inspires you and why? What are your favorite hobbies? Hiking and riding my bike in the neighborhood. How do you like to spend your down time? Traveling Receiving notice from a hospital about a new patient diagnosed with cancer, then reaching out to let the families know we are here for them and sending them a Comfycozy’s or Chemo care pack, visiting a hospital for one of our Major Distraction events and providing free counseling or financial assistants. But mostly giving a lot of hugs. My little warriors. They never give up, no matter how much pain they endure. What was the best advice you ever received? What’s a fond memory of a family sharing their gratitude? What does “Lead up” mean to you? They stated, “You changed my life forever. Thank you for being by my side and helping our family.” Stay mission focused. Women empowering women. Sharing personal growth experiences because you never know who you could be encouraging. To donate in any way, learn more information, or volunteer, contact them through email: hello@amandahope.org. or online through their website or on Facebook. leadupforwomen.com Lead Up for Women 13 LUNCHEON • AZ Lead Up for Women travels to Arizona where the weather is hot and the women are on fire! We had a strong panelist of women at the Arizona Luncheon. Lead Up for Women welcomed Janice Jackson, President of Plexus; Vanessa Siren, Yogi and model for the Ford Company; Audrey Monell, President of Forrest Anderson Plumbing/HVAC Company; Deborah Bateman, Vice Chairman for First National Bank of Arizona; and Ashley Austin, Marketing Manager for the Phoenix Suns. Janice Jackson reminded us to expect the unexpected, as she did when she was offered a deal to come to Plexus as the president of sales and marketing, while on the verge of 14 Lead Up for Women May-June 2019 retiring. 
She is a true steward of Servant Leadership and leads the only way she knows how, to be her true self. Vanessa shared that finding her balance and “centerâ€? through Yoga and setting her priorities straight was how she found true happiness. From Brazil, she currently lives with her family of five children and devoted husband, Troy. She even took the attendees and panelists through breathing exercises that can be used in our everyday lives to relax and reset. Audrey shared that stepping into a position of power that is a predominantly male-driven industry was the Deborah was a light of many colors as she shared her journey from a teller to her current role as vice chair, and how women are the greatest force if we just believe in ourselves and our abilities. leadupforwomen.com hardest trial of her life. She enlightened us with the wisdom that if you truly stick to who you are and talk straight, challenging those to respect and stand steadfast through the trials, you can endure the hardships. Deborah was a light of many colors as she shared her journey from a teller to her current role as vice chair, and how women are the greatest force if we just believe in ourselves and our abilities. She also shared her wisdom with the group of making sure you are always the example for those that follow. The awesome panelists encouraged all of us by sharing experiences that compliment all of our current leadership styles to set our careers on the right path. Remember to always believe in yourself, lean into who you are, and STOP making yourself small. Stop apologizing, talk straight and be proud, proud to be YOU and start leading without permission. You can find the full video for the luncheons, including photos for all of our members at https:// leadupforwomen.com/gallery/ luncheon-scottsdale-az-april-25th-2019/ To become a Member of Lead UP for Women, visit our website membership and start your journey to living your best life. 
LUNCHEON • ATL

Atlanta women brought the passion, courage and humor

Lead Up for Women traveled to Atlanta in May, where the attendees and panelists were mighty. We welcomed panelists Afsaneh Abree, Integral Coach; June Cline, best-selling author and humorist; and last-minute special guest speaker Ambassador Dr. Chanita Foster, mother, best-selling author, TV personality, coach, entrepreneur, activist and philanthropist. They shared their experiences in business and showed how success is possible for all women, if you tap into your greatest power, YOU!!

The panelists were awesome and encouraged everyone in the room, sharing experiences that reminded us how important it is to surround yourself with the right community of support, LIVE for your PURPOSE, and remember humor as you enjoy the journey to be "Mo' Better"!

Afsaneh challenged attendees with the question, "Who is going to be there for you when you lean into your best self?" She said you need a strong community to support you through your journey. She also shared some personal secrets about her life growing up in a hut, and how her hardships as a child living through a revolutionary war that was full of violence didn't stop her from being "Afsaneh." Even though she was practicing her Muslim faith and was required to wear cloth to fully cover her head and body, she would sneak out, climb up a ladder to the roof of the hut, and lay out under the hot sun in her bikini. She said she even had an amazing suntan that year.

"We must find our purpose first, then we will have peace, and last the money will follow." – Ambassador Dr. Chanita Foster

June humored us with her story of how she found her purpose while working a gut-wrenching job as the director of financial aid at a small college. Using the applications as humor, she would locate "bloopers" from the
applications and gather them up to use when she spoke, which led to her becoming a paid speaker. She finds the sun in every dark place, and it shows.

Dr. Chanita Foster showed real grit and determination as she shared her personal struggles with depression and finding her purpose. She reminded us that depression is real and needs to be dealt with. She also uplifted the room when she preached on how we must find our purpose first, then we will have peace, and last the money will follow. She said chasing money will only bring you discouragement and disappointment times 10.

You can find the full video for the luncheon, including photos for all of our members, at https://leadupforwomen.com/gallery/lufw-atlanta-ga-luncheon-may-22-2019

To become a Member of Lead Up for Women, visit the Membership page on our website and start your journey to living your best life.

LEADERSHIP

A farm-to-food solutions' leader who dreams bigger and advocates healthier

Lucinda Perry Jones, Director of Strategic Initiatives at Operation Food Search
By Rochelle Brandvein

Imagine growing up in northeast Missouri on a 900-acre farm—never-ending chores to be completed, animals to be cared for and fed. This type of 24/7 working lifestyle was a family affair for Lucinda Perry's household. From a very young age, she wholeheartedly embraced this world—one that would eventually meld effortlessly with her adult life.

In Her Shoes
Lucinda's parents taught her every aspect of farm and livestock management, so much so that she reveals "ensuring people have access to nutritious food is in my DNA." Her family farm consisted of raising hogs and cattle, along with producing corn and soybean crops, a very laborious, although enlightening, existence, particularly for a youth whose curiosity about the world grew as she did. Lucinda's time was spent caring for farm animals and tending the harvest at a level of responsibility fit for someone much older. Yet she says this path gave her a stronger base that has defined her work ethic and sharpened her leadership skills. This type of upbringing inspired her to strive for a standard of excellence that, for most, was out of reach but, for her, required stretching just a little bit further.

A Fearless Dreamer
When 9/11 happened, everything changed. Lucinda was living in New York City at the time, and the devastating ordeal made her carefully examine her life. This introspection led to leaving her job for an eight-week immersion program in Mexico. The unusual hiatus gave her the courage to follow her heart and take a risk she never would have ordinarily. What ended up as a temporary pivot became a nearly two-year journey of teaching in an environment where they needed her—and she learned even more from them.

Her career was subsequently filled with integral philanthropic positions in both private and public sectors. Ranging from the ACLU and academia to public health and grocery retail, Lucinda gravitated toward opportunities that embraced community transformation and strategic collaboration. Throughout her life, Lucinda has had an uncanny ability to foresee—and answer—the needs of those who cannot imagine a solution. She is, in fact, the catalyst in creating the types of favorable consequences that enable others to live more independently.

Lucinda is a flawless fit for her current position as Director of Strategic Initiatives at Operation Food Search (OFS), a non-profit hunger relief organization based in St. Louis. Her decision-making abilities are apparent in each and every program she touches—and her input is both powerful and essential.

A Hunger-Free Missouri
She aptly describes her present role as "chief architect for OFS's compelling new vision which involves motivation to end the food pantry line, not merely feed the line." Her agency is shifting the reliance on emergency food distribution to more "upstream, preventative measures that address root causes and call for innovative integrated program models." In her quest to end hunger, Lucinda acknowledges what is necessary to succeed in the simplest terms: "intensive community partnership collaborations, a trained and knowledgeable staff and board, involved philanthropists and food donors, and an informed citizenry."

Bold change surrounds Lucinda both personally and professionally. She is currently pursuing her Doctor of Education in Organizational Leadership at Brandman University after earning a Master in Public Administration. The program, which is bolstering her ability to be a transformational leader, is enabling Lucinda to better guide her team in assembling an even more impressive strategic plan.

Just What The Doctor Ordered
She recently helped launch OFS's newest program, called Fresh Rx: Nourishing Healthy Starts. This food-as-medicine initiative is the only fresh food prescription program in existence that addresses food insecurity during pregnancy. The need to provide low-income women the experience of healthier pregnancies and healthy babies was glaringly apparent. Lucinda and her team took the reins to create and activate this monumental plan.
Fresh Rx has a variety of components—a weekly share of protein, fruits and vegetables from local farm partners; one-on-one nutrition consultations with OFS's Fresh Rx registered dietitian; cooking classes delivered by OFS's community chef; at-home and online nutrition and cooking tutorials; and supportive services and links to community resources by OFS's licensed clinical social worker. Lucinda spearheaded efforts to compile a coalition of Fresh Rx game-changers that included widespread financial support and a local partnership with a hospital's full-service prenatal care facility. The initial impact is astounding and, as the program grows, so will its ripple effect upon the community.

Empowering Our Future
While eliminating overall hunger is OFS's goal, Lucinda states "it's the kids who suffer the worst consequences in this equation." How can 18.6 percent of children in Missouri—nearly one in five—live in households that struggle with hunger? And how can these children achieve academic success when their stomachs are empty from missing meals?

The task to turn the curve toward food security is daunting, but Lucinda and her team have enacted new programs and enhanced existing services to include:

• Establishing an advocacy program that engages elected officials and systems—such as hospitals and school districts—to assess and implement policies that will support nutrition-forward environments.
• Launching "Healing Hunger," a hunger-informed training program to aid frontline professionals in understanding how to screen and help clients obtain food and nutrition services.
• Leading OFS's "Sunny Day Endowment Campaign," a $5 million fundraising effort to ensure sustainability and innovation for years to come.
• Providing nutritious food to children 18 years of age and younger through a year-round approach. The services—including summer meals, afterschool meals, and a weekend meal program for elementary children—impact thousands of children throughout the St. Louis bi-state region.
• Offering on-site cooking demonstrations (to food pantry volunteers, staff and clients) in addition to nutrition education services that reach kids and their families (showing how to plan, shop, and prepare healthy and affordable meals through Cooking Matters®).

Reaching Her Summit
Lucinda believes she still has many miles to go to achieve maximum impact in the hunger relief field. For inspiration, she remembers a favorite quote from Nelson Mandela, who said, "It always seems impossible until it's done." Lucinda is determined to make transformational change by leading from the heart, and inspiring others to dream and achieve the impossible.

BUSINESS

Putting the 'human' in human resources
By Zoe Hawkins

How Sharon Lontoc brings humanity to corporate culture

Like most people in the working world, Sharon Lontoc once had a job with an unpleasant work environment. Rather than complain, or fear that it was a sign of an unfulfilling career ahead, she took on the challenge and became an expert on fixing the problems people face in the workplace. Working in leadership at a variety of top companies before landing her dream job at Title Alliance, Sharon has shown that innovative and creative Human Resources strategies can make even unexpected companies become inspiring places to work.
Following completion of that course, Sharon returned to the professor to discuss the situation again. Pivotally, he asked her, “Now that you know what’s going on, what are you going to do about it?” This is the question that has resonated throughout Sharon’s career. It has led her to make changes in each of the organizations she’s worked based on predictive analytics resulting in improved personnel retention and cutting department overhead. A Varied Path to the C-Suite Before becoming Chief Human Resources Officer at Title Alliance, Sharon held a range of Human Resources positions at a variety of organizations. Whether working at Lockheed Martin or Southwestern Bell, a law firm or a financial institution, Sharon has brought her uniquely strategic and innovative approach to each job. Sharon always strives to not just work hard, but truly make a difference. From job to job, she helped to save companies money, boost morale and drive HR strategies that align with business goals. This includes attracting top tier talent while building a culture of engagement, agility and innovation. She says this is an important factor in how she’s continually worked up through the ranks of various companies. Deciding to go into Human Resources While Sharon has quite a proficient career in Human Resources, she didn’t know that was where her career would eventually take her. Early on, Sharon was studying leadupforwomen.com Lead Up for Women 23 BUSINESS “It’s all about how people remember you,” Sharon says. “I’ve earned amazing positions at top companies, all through networking and connections. It’s only when you forge truly meaningful relationships that people will remember you when they need leadership at their organizations.” Leading the Company Forward While Title Alliance is a title and escrow company, it prides itself on being a relationship company first and foremost. 
Title agents might deal with hundreds or thousands of closings, but the people buying homes might only experience the process a couple of times in their lives. At Title Alliance, the goal is to make every interaction filled with superior customer service, bringing the spirit of celebration back to the closing table. Sharon refers to her role at Title Alliance as her “dream job.” This is largely due to the opportunity to play a more strategic role in the company, creating Human Resources strategies that exceed client expectations, engage people, enable exceptional performance and support an inclusive corporate culture. Not only is she a part of enhancing Title Alliance’s reputation in the title insurance industry, but also generally as an employer of choice. “It’s thrilling to be a part of such a collaborative company,” Sharon says. “I am trusted to fulfill my role as a leader in the organization and given the room to implement strategic initiatives that help us move forward. It’s particularly exciting to be a part of a company with a majority female C-Suite, highlighting the leadership opportunities here for women of all backgrounds.” As Title Alliance continues to grow into new regions, Sharon says her role keeps expanding. “There’s so much growth now, it feels like there’s something new and exciting every day. Of course, it means that 24 Lead Up for Women Teaching the necessary skills to balance passions and projects, work and home is a key part of Sharon’s life. That’s why she has volunteered as a Girl Scout Troop leader for nine years. She’s particularly enthusiastic about breaking down walls and gender stereotypes. May-June 2019 we need to think strategically to stay ahead of any growing pains and ensure a smooth process as we reach out to new regions and bring on more partners.” Empowering Women and Girls Sharon is distinctly aware of the struggles that women face in the work world. 
The mom of three is married to a reservist who has been deployed, leaving her to feel like a single mom in some ways as she needs to balance work and childcare. With this experience, Sharon is able to relate to other women and give them advice on how they can achieve their professional goals. “I get asked all the time about how I’ve been able to achieve the success I’ve seen in my career,” Sharon says. “If there’s one thing I want to share with women in the working world, it’s to focus on networking and build as many strong, positive relationships as possible, with men and women alike. You never know what opportunities might be out there until you really connect with people and have them remember who you are and what you can bring to the table.” Teaching the necessary skills to balance passions and projects, work and home is a key part of Sharon’s life. That’s why she has volunteered as a Girl Scout Troop leader for nine years. She’s particularly enthusiastic about breaking down walls and gender stereotypes. “It’s amazing to see the growth in the girls who join Girl Scouts,” Sharon says. “The girls I lead are interested in robotics, take part in Girls Who Code, and defy all the gender norms you might imagine. I’m humbled at the chance to help them along, because they inspire me even more than I lead them.” EXPLORE • DISCOVER • GROW Deborah Bateman An Experienced, Award-Winning, Results-Oriented Leadership and Personal Brand Coach. Supporting you to realize and embrace your Purpose, Potential, and Passion. Coaching customized to your needs and goals. Contact Deborah at Deborah@DeborahBateman.com Find out more at: • LIFESTYLE Beating the odds one day at a time By Jeanie R. Davis My journey has been nothing like I imagined it would be. In college, I studied Public Relations, then later Interior Design. I even taught math, as I partnered with my husband in owning and running two Math tutoring centers. As it turned out, none of these would become my passion. 
Illness and disability made certain of that. After the last of my four daughters was born, I was diagnosed with Multiple Sclerosis. It began more as an irritation, but before long what had started as a relapsing/remitting disease became progressive. In fact, in 2003, my neurologist, a leading MS specialist at Barrow Neurology at the time, held up my latest MRI and informed me that my brain was filling with fluid and I would soon become brain-dead. Nothing could stop the rapid downward spiral of my disease. There was only one thing for me to do—prove him wrong.

My husband, Rick, is fond of traveling, and even though his job took him to many foreign and exotic countries around the globe, he wanted me to experience them as well. I will never forget my first trip to Europe. It was then my disease worsened, making it difficult for me to walk or even stay awake for long periods of time. Never having experienced the full impact of losing control of my faculties until this time, I was a mess. My young daughters were an ocean away and at the rate my health was declining, I wondered if I would see them again. Sickness can distort your thinking. I told Rick I could never travel again, at least without our girls.

This began some of our family’s greatest adventures. Though my disability made it difficult for me to travel, we were able to take the necessary measures to see the world with our daughters. We visited Fiji, Costa Rica, Ireland, Europe (a couple of times), Mexico and Africa. And I managed to stay alive.

It was on one of these adventures that I learned a valuable lesson; I can do ANYTHING if I take it one step at a time. While we were visiting the Alps in Switzerland, one of the activities was to climb 900 steps on a steep mountainside up to a glacier. This activity is not very conducive to people with physical disabilities. I assured my family that I would be up to the task. They were skeptical.
But they’d sacrificed so many activities on my behalf, I didn’t want this to be another, so I convinced them that I really did want to climb those steps, and that with their help, I could do it. I didn’t know how wrong I was. To get to the base of the mountain, a hike in and of itself, exhausted me before I had even taken one step of the 900, but I was determined. (Determination can be good, but sometimes, as in my case, misguided). So up we went. The stairs were constructed of narrow, wooden slats and the mountain was steep—900 steps up, 900 steps down—suddenly that was a lot of steps.

I made it up the first 200 before my legs buckled and I knew I absolutely couldn’t go any further. My husband and youngest daughter wanted to stay with me and help me down, but I insisted I would be fine on my own. After many assurances and a lot of convincing on my part, Rick and my daughters reluctantly went up the stairs while I went down.

It had been cold and drizzly all morning, but shortly after parting ways with my family, it began to pour in earnest, making the steps slippery. Some of the symptoms of MS are dizziness, imbalance, numb limbs, weakness and visual impairment, just to name a few. I was experiencing most of these. The stairs had a railing, which I clung to for dear life, but every time someone would come up the steps, I’d have to let go of one side to let them pass, causing me to lose my balance. Looking down from where I stood overwhelmed me. The stairs went on and on. My legs were so tired that each step I took down became a major accomplishment. I finally reached the point where I felt completely exhausted, hopeless and utterly alone. I had gone as far as I could possibly go on my own. When I looked down and saw how far I still had left to reach the bottom, I was sure I couldn’t make it. My options were limited.
I could sit down, cling to the railing and wait for my family to return (and probably freeze to death in the pouring rain), or I could pray and ask God to get me off that mountain. I chose the latter. As I was pleading with God for help, there was a distinct voice in my head telling me to quit looking down to the bottom of the mountain as that perspective gave me no hope. Instead, I should just take one more step. Well, I could do that. Just one more step wouldn’t kill me, but the whole staircase would. I was sure of that. So I took one step. When I did, I heard the voice again asking, “Can you take one more step?” The pattern continued. If I ever took my eyes off the steps and looked down to the bottom, I’d get discouraged and lose hope all over again, so I quickly learned not to do that. Instead, I just took one shaky step at a time until I miraculously found myself at the bottom of the mountain, completely drenched from the pouring rain, but alive. I learned two valuable lessons from this: With God, all things are possible, and never give up or give in to that voice of failure. Instead, focus on taking it one step at a time, or one day at a time.

Through the years of rearing my daughters, when my illness had disabled me, I turned to writing poetry. Gradually, I took to the piano and began writing music. I found great comfort in composing songs and was rewarded when I was asked to write music for conferences, weddings and other events. Music gave me purpose. I felt blessed and fortunate to have developed this gift. But how long until I lost it? Even worse, how long until I didn’t recognize my husband and daughters—my truest passion?

One of my daughters suggested that I write a book. I laughed, but the idea worked in me until I decided to try it. I began writing and found it so fulfilling, I didn’t stop at one book, but kept going. The first book I published, “As Ever Yours,” is a historical fiction based on the true story of my grandparents’ amazing lives. I wasn’t too worried about my lack of training in the writing field, as this book would be read by mostly family. However, I was astonished at how many readers enjoyed it who weren’t related to me. This spurred me to write more. In the meantime, I had tapped into a world I hadn’t known existed, the wonderful writing community of which I am now a part.

As I learned, I wrote, edited, revised and rewrote my next novel, “Time Twist,” a romantic suspense story with some time travel. This book was picked up by a New York publishing house. Since then, I have written a sequel, “Time Trap,” which will be released later this year, along with a Christmas novella, “Chrissy’s Catch,” part four of the Christmas Frost series, written with my critique partners.

Writing has been a godsend for me; it has given me purpose. But it isn’t always easy or fun. Ask me how I felt about it when the first few rejection letters rolled in. Plus, though I’m gratefully not brain-dead, focus, muscle cramps and spasms, along with many other MS related symptoms make it a challenge. I work through those one day at a time. Perseverance and commitment are essential to success, but writing books doesn’t make my story one of success. There are thousands of successful authors in the world. Beating the odds and proving my doctor wrong has made this a success story. As I learned long ago when I was stranded on the side of the steep Swiss Alps, I can do anything when I turn to God for help.

LIFESTYLE

Photography by Lauren Hensgens

Becoming more present in life
By Kate Pittman

Is anyone else out there (I know who you are) running on fumes because you are pushing full steam ahead with a “no-rest-for-the-weary” mindset to provide a great life for yourself and your family while also striving for that day you will hit the pinnacle of a successful career and arrive at a life abundantly filled with joy? Hand raised—yep, that was me.
Everyone on this planet is on a journey to find fulfillment, joy, love and connect with the greater being that surrounds us—the one who embodies all of these gifts and more. This task is probably the most elusive and greatest sought-after treasure that our life’s journey holds. I recognize that there are plenty of working and stay-at-home parents or individuals (including my past self) who play out their lives either too busy or too focused on other things to give the pursuit of joy a chance.

A year ago, I was at the height of chaos and felt as though I was far from experiencing joy. I was extremely blessed with a great job, beautiful family and financial security, but something seemed off. I loved my life, but something was missing. After everything that I had accomplished, I was still wondering if I was fulfilled and I was constantly asking myself, “Am I content?” If not, what exactly does contentment look like?

I was a Business Development Director for a great architecture, engineering and general construction company and our team was making big progress by bringing in exciting accounts that would provide great work for the 350-plus employees who worked there, hopefully for years to come. I was a jet-setter, but I was constantly mitigating the travel that was probably necessary to do my job to the maximum level. I was shaving here and there on travel, doing what I believed necessary for my professional job, to still enable me to do my even more important job of being a mother to my two small children and a supportive wife to my loving husband, who had an equally taxing, stressful full-time job. I had just flown home from yet another conference, I do not even recall which, but it didn’t really matter. At this point, they all seemed to run together.
I was always careful to arrange my travel to bring me home—if the stars aligned and there were no delays, around 3:45 p.m., to have time to pick up my loves as their day ended a little early at 4:30 p.m., or so. There was something about those smiling faces that lit up the moment you opened the door to their daycare that suddenly made it all seem worth it.

After two years of being on the road, managing relationships—both in and out of the office—with my clients, an explosion of stress was mounting that I could not control anymore. The dam had sprung a leak—additional pressure built as work was booming, behavior problems arose with my oldest in school (minor things, but major to a parent in the moment) and I was feeling more distant from my husband because we were only passing in the wind and shouting instructions at each other to keep the machine oiled and running. Although I was good at connecting with people for my job, I was doing worse than subpar with the one who mattered most as my partner in this life—my husband. I was exhausted, WE were exhausted.

Unfortunately, this had become a repeat ploy he had enacted on multiple occasions before. Each time, he was put on the phone and I was able to speak with him. His voice was somber, ashamed and I could hear the depression echo the same feeling that plagued my heart. Talking him off the ledge (and myself at that point), I did as best I could to give a pep talk to our wilting five-year-old. I informed him I would pick him up and that we would do something special. I gave him the task of picking what that something may entail and he was on his way with a glee that had been missing a few moments beforehand. There I was, fatigued from entertaining potential clients the night before and the night previous.
I had been on the phone since landing and walking to my car, loading my luggage and driving to the daycare with another director at the company—relaying messages about what needed to be done, for who and when. I hung up as I rolled into the parking lot and I turned up the radio as soon as I heard the song that I knew by heart play. In this moment, the song, Breathe, by Jonny Diaz (see the song on his 2015 album Everything is Changing), spoke so clearly about what I was going through. I felt in that moment that the Universe understood and was sympathizing with me.

I love a good song, and if you pass me driving on the street, most likely, you will see me singing wildly out loud to myself (or my kids) in my car. So there I was, sitting in the parking lot for the entire song as it played its course, singing out loud and praying. My eyes filled with tears, and I sang the lyrics and rested in the chorus. As the music peaked and began to trail off, I wiped my tears, took a deep breath and cleared my head. Somehow, everything felt better after that. My emotional outburst had taken place so quietly that the rest of the world did not even notice, but that time of recognition from the Universe (that is God to me) gave me just what I needed to clear my head, re-adjust and even give me a spark of an idea about where I may be headed. God was, in that moment, answering my prayers.

What I had realized was that my life had become so busy, so cluttered with this, that, what had been and what was next to come that I had completely forgotten how to “just be” in each present moment as it passes. You see, there was so much joy in my life that I was missing because I was concentrating so much on the end result that was to become. The house that we would be able to afford by putting 20 percent down and having a two high-income household (completely necessary in Southern California where we live). How my children should behave when they are well-trained (I hoped that would be soon). And how my husband and others would view me as superwoman if I could make his and everyone else’s lives easier (although I was constantly needing accolades for momentum). I realized that all of those things I was focusing all of my energy on were items that my exterior self, this persona that I had created for myself, about myself—my ego—needed to keep form.

In the end, none of these things were providing me with joy or happiness because it was not what my spirit, or my internal self, needed or wanted to reach its fullest potential. All of these desires were completely centered selfishly around how I wanted to be and to be perceived by others, and how I would be affected—not how I could affect others in a positive way. At this point, I was not even sure WHAT my spirit cried out to do. But, I took the hint from what was being sent my way through God, through the Universe and from that moment on, I vowed to myself that I would make the first step: to focus my energy much more on being present in order to take full advantage of seeing the beauty that this world provides us in every second, which we are most of the time too busy to even recognize. My hope is to be able to recognize and translate the many interactions with others, beautiful occurrences and pleasant happenstances (that to me are all part of God’s plan) that are ever-presently playing out in the world surrounding me daily so that my purpose will unfold. Amen Sista!

Lead Up Tips

10 Tips to Gain Confidence

1. Link up with other women: Take a look around your industry’s community, your personal community and social channels. Join a community group that keeps you connected and provides ongoing support, understanding and opportunities. You’ll make new friends, too.

2. Support and share with other women: By supporting and empowering each other, we’re all building personal success stories in our industries. Share your stories with others to inspire them.

3. Make your voice heard: Don’t assume your boss will notice what a great job you do and make that next important project a part of your road to success. You must speak up if you want to be heard.

4. Meet challenges with solid support: When you connect with individuals who serve as inspiration, they help you develop the confidence it takes to push you to the next level.

5. Take the leap: Look at all the women today with important roles in high finance, politics, entrepreneurship and manufacturing. We often hear we are too emotional for the BIG jobs, but it is our ability to empathize that makes us effective leaders. We get the big picture and understand the details. Believe in yourself and get your ideas out there. Don’t be afraid to do things your way.

6. Toot your own horn: It’s okay to let people know when you get a win, at least in small doses. You can build your own confidence by pointing out that you were the one who accomplished something for the company or organized a social gathering that serves the community.

7. Speak your mind: A lack of confidence is often a bottleneck that keeps you from saying what you really think. Uncork that confidence blocker. By stating your view in a meeting, you are building confidence because you can see the reactions to your viewpoint and adjust as needed. Speak up to lead up.

8. Increase your knowledge: Training helps build confidence because it goes right to the source of the problem. Read more books, attend more seminars and watch online talks. Confidence grows when you act on what you know.

9. Feel your best being YOU: Nothing feels better to a woman than looking and feeling your best. Spend time to get your nails done, style your hair, get a spray tan or work out several days a week.
Find what makes you feel just as beautiful on the outside as you are on the inside and OWN it! We suggest joining Lead Up for Women as your confidence building resource.

2019 LUNCHEON SCHEDULE

Mar 28: New York City, NY
Apr 25: Scottsdale, AZ
May 22: Atlanta, GA
June 13: Philadelphia, PA
July 16: Boston, MA
July 25: Columbus, OH
Aug 20: Nashville, TN
Sept 12: New York City, NY
Oct 10: Denver, CO
Oct 24: Los Angeles, CA

COMMERCIAL KITCHENS — SUMMER 2019
A special supplement to Commercial Construction & Renovation, May : June 2019
Cover: Theodore Dubin, director of design & construction, Just Salad
Also Inside: Waterfront Evolution
Cover story photography by Cyrille Dubreuil Photography

Fast. Affordable. Healthy.
The Just Salad way continues to be a leader in fresh food options
By Michael J. Pallerino

The trading floor can be brutal. The fast and furious, always-on-the-move atmosphere does not always afford itself the most nutritional of lunchtime options. Nick Kenner and Rob Crespi had seen enough. They wanted to give the New York City lunchtime crowd something that was fast, affordable and healthy.

In 2006, Just Salad was launched with a focus on selling quality salads. At first, the New York media glowed over the concept (and crowd). One publication went as far as to write, “Cute guys plus carb-free dressing? I’m so in.” But the initial concept needed refining, so the Just Salad team worked with a culinary agency to develop new menu items, eventually broadening the restaurant's appeal and consumer base. With a growing store count, the Just Salad brand continues to blossom. Commercial Kitchens sat down with Theodore Dubin, director of design and construction, to get his thoughts on where the brand is heading.

Give us a snapshot of the Just Salad brand.
Just Salad is a QSR focused on delivering healthy, delicious and sustainable food. Our speed of service and value proposition set us apart from the competition.

What type of consumer are you targeting?

We primarily target the lunch crowd in dense office districts and at large universities. Our target customers come from a diverse set of demographics and backgrounds, but they are all looking for a healthy meal at a reasonable price.

How does the design of Just Salad units cater to what today's consumers are looking for?

Our store design and ambiance are major drivers of sales. The design is clean, cutting edge and visually interesting. It’s just an inviting place to spend time and eat a meal. The effort and care we put into our store design is mirrored by the food we serve, and I think our customers make that connection.

Is there a location that is one of your favorites (and why)?

That’s a tie between our 750 Third Avenue store in New York and our 10 S. Riverside store in Chicago. Both locations have huge windows that make the architectural elements and finish materials in the stores pop. I particularly love the lighting design at 750 Third Avenue. It gives the space a sort of sparkling glow that naturally draws you in.

Walk us through how and why it was designed the way it was.

I think lighting is one of the most important elements of a store’s environment. You need to strike a balance between volume and quality of light. The goal of the lighting design here was to create a warm and inviting space, while throwing enough light on our food and signage to properly direct the customer’s attention.

Take us through your construction and design strategy.
You need to build beautiful stores. The spaces need to be warm and inviting, but also clean and sleek. One of our brand’s advantages is the low cost and simplicity of construction. When we overhauled design, we knew we needed to set a new segment standard for store design, while continuing our track record of spending a fraction of what the competition spends on build outs. We achieved that goal by painstakingly designing every piece to be modular and plug and play. Our millwork, lighting, wall coverings and soffit designs can easily be applied in any space we look at. We have learned how to make everything simple and straightforward for our GCs on site through years of trial and error.

Give us a rundown of the market's layout.

The well established trade areas are more competitive and expensive than ever. It’s a challenge to attract foot traffic when new competition is constantly opening around you. Players in today’s market win by providing the best experience to delivery and pickup customers. That is something we have done a great job capitalizing on. The leaders at Just Salad have done an unbelievable job evolving the business model in a way that grows our advantage in the marketplace.

What's the biggest issue today related to the construction side of the business?

The biggest issue is how busy contractors continue to be. Demand for construction has been so strong for so long that the best subs are stretched thin. They can afford to demand higher prices so it’s a continuous challenge to keep budgets in line.

Talk about sustainability. What are you doing?

Refrigeration and HVAC are the electricity consumption beasts in stores. We continue to look for the highest efficiency units within our budget and intelligently lay out our back of house refrigeration.

What do you see as some of your biggest opportunities moving ahead?

From a Just Salad brand perspective, we see a tremendous opportunity in suburban markets. We are making a huge push in suburban areas in new geographic regions. From a construction standpoint, I am excited about the number of units we are building. Our pipeline has more stores than ever before, and I am looking forward to taking advantage of bulk pricing for future FF&E orders.

Are you optimistic about what you see today in the marketplace?

Yes, definitely. The market for affordable, healthy fast casual food continues to grow. Landlords want our brand because it attracts young, affluent traffic to their properties. There is so much opportunity outside of our traditional markets that once we establish a foothold we are in a position to really blow up.

What is your growth plan? What areas are you targeting?

We just opened in Gainesville, Florida, and we are opening our first store in the Miami area this month. We will continue to target large urban areas outside of our core New York market because there is a surprising lack of competition and an obvious pent-up demand for our product. We are entering the most aggressive unit growth period in the company’s history.

What trends are you seeing?

Delivery and off-site sales are disrupting the QSR market. There are a lot of new ideas and business models competing for supremacy. I think it remains to be seen what exactly the future looks like here, but it is safe to say the commercial spaces we look at and construction we do is going to change.

What is the secret to creating a "must visit" environment in today's competitive landscape?

You need to build beautiful stores. The spaces need to be warm and inviting, but also clean and sleek. You need to create a great in-store experience, which for us means efficiently moving the customer through the queue. We spend a lot of time making sure we optimize customer flow within the space.
What is today's consumer looking for?

Today’s consumer demands healthy, fresh and sustainably sourced food. People are extremely conscious about what they put in their bodies and they think about how food affects their personal well being, environment and local economy.

What’s the biggest item on your to-do list right now?

My biggest challenge is learning the ins and outs of new regions on the fly. We are doing an incredible amount of work in new markets, and it is on me to get to know the contractors, architects, supply chain and permitting processes.

Describe a typical day.

I have close to a dozen projects in different stages of construction and pre-construction. I spend my days reviewing site reports with contractors, coordinating Just Salad’s vendors, reviewing new store plans, and following up with expediters and architects on permitting and construction documents.

Tell us what makes you so unique.

I handle the entire construction process from the time the lease is signed to the day we hand over to ops. I am responsible for the store design, construction documents, permitting and construction management. My biggest challenge is coordinating and negotiating with the dozens of contractors it takes to open a store, while staying on budget and hitting our schedule.

CCR One-on-One with Theodore Dubin, director of design & construction, Just Salad

What’s the most rewarding part of your job?

When I see customers and employees in the store for the first time. It is the culmination of all the hard work and late nights it takes to build a new restaurant. It’s a very satisfying feeling.

What was the best advice you ever received?

Stay on top of your subs. My Mom told me that. Your contractors and vendors need to know that you are holding them accountable at all times.

What’s the best thing a client ever said to you?
The best conversations are when we are reviewing new store sales and the numbers are beating expectations right out of the gate. I like to think execution in the design and construction phase helps move the needle and turns an average store into a cash cow.

Name the three strongest traits any leader should have and why.

The ability to listen, adapt and hold people accountable. When people feel like you value their opinion and heed their advice, they take ownership in what they are doing. A good leader makes everyone feel personally invested in the process of whatever they are doing. Every project has its own unique set of challenges and roadblocks. If you can’t adapt and find solutions in real time, you can’t successfully lead a complex project. A good leader not only has the ability to hold other people accountable for their work, but also has the humility to take responsibility for himself. If you want to take credit for success you also need to take responsibility for failure. You need to be able to admit when something goes wrong and recognize that, even if it’s not necessarily your fault, you are ultimately responsible for the project. The key is the ability to learn from bad situations.

What book are you reading now?

I’m reading a book called “High Rise.” It’s about the construction of an office building in midtown Manhattan. It’s fascinating to see the same challenges I face on a day-to-day basis play out on a $500 million project.

How do you like to spend your down time?

I like to spend as much time as possible outside. I take every opportunity I can to play basketball, golf, run and go fishing. When fall comes around, I’ll be going back and forth to Ann Arbor as much as possible for Michigan football games.
Waterfront Evolution
By Dan Vastyan
How the House of Que is making New Jersey love Texas-style barbeque

The NYC skyline as seen from Weehawken, New Jersey.

It was January 2009 when US Airways Flight 1549 made its emergency landing on the Hudson River off Weehawken, New Jersey. All 155 passengers and crew members survived. Without being instructed to do so, ferries left their docks and hurried to pull people from the frigid January waters.

Known for its smoked meat dishes, House of Que is located across from the Weehawken Ferry Terminal.

“Everyone called it ‘The Miracle on the Hudson,’” says Keith McGowan, commercial sales associate at Johnstone Supply. “It was 19 degrees F.”

“The Port Imperial Ferry Terminal has become a main aorta for commuters headed into the Big Apple,” he says. “When a unique space opened up on the ground floor of a new parking garage directly across from the terminal, a Texas-style barbeque, called House of Que, jumped to make it their own.”

Founded in 2013, House of Que. The House of Que is at ground level, and almost the entire front of the venue has folding glass doors that open to the street, and to the view of the city across the river.

This project was the first VRF system that Dash Mechanical installed.
“This is a plan/spec job where we collaborated with the engineer,” says Ashenfelter, whose company focuses on commercial and industrial plumbing and HVAC throughout New Jersey. “Limited space, and the fact that the restaurant sits below a parking garage, made for a lot of unique considerations.”

House of Que wanted to keep the mechanical systems in-house, with comfort and efficiency prioritized.

Dave Ashenfelter, president of Dash Mechanical, worked with the mechanical engineer to change the spec to two, smaller VRF condensers instead of one, which the original design called for.

Large, high static air handlers are used to heat and cool the seating area through exposed spiral duct.

Conditioning a unique space

Ashenfelter says that in the original design, a single VRF condenser was going to serve the entire space. He worked with the engineer to change the spec to two, smaller Fujitsu Airstage VRF systems. The restaurant owner preferred two condensers instead of one. If a condenser fails, the restaurant can at least maintain half its heating or cooling capacity.

“The use of the air curtains causes a wash of air over the entrances to prevent outdoor air from rushing indoors, and helps keep insects from entering,” says McGowan, who has worked on the project with Ashenfelter.

While the House of Que dining experience—complete with live stage, open-air atmosphere and a menu to die for—will be the same in Weehawken as it is in Hoboken, the mechanical system takes a page from a different playbook.
A first of many

A 40-year veteran, Ashenfelter founded Dash Mechanical about five years ago. This was his first VRF project. As soon as Dash was contacted about providing a solution for the restaurant last year, Ashenfelter asked McGowan where to get solid VRF training. Several Dash employees then attended Fujitsu's VRF training courses over the winter.

"Keith has been in this industry a long time, and like everyone at Johnstone, he's always been a great resource," Ashenfelter says. CK

Extensive use of air curtains helps to keep dirt out and conditioned air in when the large doors are open.

Keith McGowan (R), with Johnstone Supply, visited the construction site frequently during mechanical system installation.

Dan Vastyan is a regular contributor to Commercial Construction & Renovation magazine. Common Ground is a marketing communications brokerage that covers the commercial construction market.

A commercial contractor's expertise is beneficial when a facility maintenance project is more complex than originally thought. For example, at this senior community, Englewood Construction's work on what was initially a simple water infiltration maintenance project turned into a full-on repair of the property's entire exterior façade and window systems.

Don't I know you?
When déjà vu strikes with your commercial general contractor
By Chuck Taylor

When commercial property owners and tenants start looking for the right provider to handle facility management and maintenance needs, they shouldn't be surprised if they have a déjà vu moment. Why? Because the best partner might be a firm with which they already have a good working relationship—the commercial general contractor they originally tapped to build or remodel their space.

Several years ago, a handful of our clients at Englewood Construction started making requests for maintenance-level services outside of our original scope of work. Turns out, these clients were on to something. Years later, we have a stand-alone Facilities Management Group that provides planned, preventative and emergency facility maintenance.

Today, we not only have many clients who first tap us for a construction project and later engage us for their facility management needs, but we also have a number of clients who initially hire us for facility management and then eventually bring us on board for a construction or remodel project. In both cases, this has been due to clients realizing the advantages and efficiencies of having the same partner for both functions. Here are just a few benefits of this setup:

Insider knowledge

The contractor that built or remodeled a facility is already intimately familiar with the building, which pays dividends when that same firm also manages its maintenance. The GC typically has easy access to recent drawings associated with the space, as well as information about the subcontractors and manufacturers for different building systems.
This is all helpful not only in troubleshooting issues, but also in planning for building maintenance based on the age and condition of mechanical systems and unique building features that require special upkeep.

Additionally, a construction firm that handles the maintenance of a building will have a unique perspective when planning for a commercial renovation or remodel. That includes not only having a good understanding of which mechanicals need to be updated or replaced—or that are in good condition and can be repurposed—but also knowing if the property has problem areas that may need to be addressed in a renovation.

Problem-solving know-how

Issues that come up in facility management often initially look like a small maintenance project, but what they actually need is a construction approach. It may be that a maintenance job is more complex than originally thought, or could benefit from a contractor's ability to analyze a problem and figure out the solution. For example, if a facility maintenance client has a recurring issue with flooring tiles popping, a GC will investigate why the tiles are failing—whether due to a bad install or a problem below the flooring—and address the cause, rather than simply continuing to replace them, which may not solve the real issue.

Similarly, as mechanical equipment or systems near their life expectancy, such as an aging HVAC roof unit that is increasingly requiring service calls, a GC can provide market-based construction costs to replace the unit and compare that to what the client is spending on frequent servicing to help decide whether to continue repairing or instead replace the unit.
Budget & planning efficiencies

Commercial property owners, managers and tenants can often benefit from considering the big picture as they assess their needs for both future facility maintenance work as well as upcoming capital improvement projects. A GC can help plan and estimate costs for not only maintenance line items but also renovation work, allowing clients to better prioritize these projects and make the most out of their facility budget. Likewise, a GC that also manages facility maintenance can advise on maintenance and construction projects that it makes sense to do at the same time for the sake of efficiency and cost, and manage them both.

The ease of one phone call

No one wants to make more phone calls than they have to, so having one trusted partner as a single point of contact for all construction and facility services can streamline work and provide peace of mind. Often, when a firm turns to Englewood Construction for both commercial construction services and facility management, it is because they want to minimize the number of partners and vendors on their roster, and have one resource for their team to go to with any construction or facility-related need.

Having that single point of contact can prove particularly beneficial when it comes to emergency maintenance. Whether the situation is a failed HVAC system, a burst pipe, or facade damage from a major storm, the client doesn't have to waste valuable time figuring out whether their commercial contractor or their maintenance provider is better suited to deal with it; there's just one firm to call.

Comfort with existing relationships

Perhaps the biggest benefit of choosing the same firm for construction and maintenance services is that it is an extension of an already proven and successful partnership. Clients that turn to Englewood for both construction and facility management do so because they are confident in the quality of our firm's work, know we can deliver on brand standards and are comfortable with our team. That level of trust and respect is an important basis for any client relationship, and helps ensure facility work—whether maintenance-based or a full-scale construction project—will be completed smoothly and successfully. CCR

Chuck Taylor is director of operations for Lemont, Illinois-based Englewood Construction, a national commercial construction firm specializing in retail, restaurant, hospitality/entertainment, industrial, office and senior living construction as well as facilities management.

Gaining a foothold
Flooring renovation reinforces Chicago area VA Hospital's outreach to its patients
By John T. McGrath Jr.

The Edward Hines, Jr. VA Hospital, located west of Chicago, is a 147-acre campus that provides a safe and supportive environment for America's veterans. It is home to a variety of facilities for our servicemen and women, including America's first Blind Rehabilitation Center (BRC). Today, the Hines BRC is a 34-bed, in-patient facility that receives applicants from more than 50 VA hospitals in 14 Midwest states.

"We also needed to create something special in our design and elevate the facility to a new level of sophistication by creating wayfinding that our patients could feel with their long canes," Tucker says.

The design process

Due to overarching VA standards, the design and specification process was somewhat limited.
"All of the physical demands dictated the material selection process," Tucker says. "However, there were specific design needs as well. It needed to contrast for vision impaired patients but not be so contrasting that it looked gaudy or disjointed."

The team also needed materials that would pass the tap and sweep test of a long cane and help with wayfinding. Patients needed to be able to feel a difference between the flooring materials without it being too obvious. For example, a long strip of textured flooring perpendicular to a door or opening signifies a patient room. A textured square is a sign for non-patient rooms. A long strip or band that runs parallel to a door signifies an elevator or exit. These designs do double duty, as they help the fire department look for patient rooms in smoky or hazardous conditions where only the floor is visible.

"One of our biggest challenges with the project was the phased installation approach in a fully operational healthcare facility for vision impaired veterans," says Nick Anos, president and owner of INSTALL contractor NuVeterans Construction Services. "Not only were infection control barriers and other safety systems a major concern, we needed to get to the bottom of the flooring failure from the installation in 2005."

Throughout each phase, Anos and his team needed to address moisture barrier and mitigation issues to improve conditions. After removing each tile, the installation team ground and shot-blasted the concrete to remove any contaminants. They then used a moisture mitigation system to prepare the surface for primer and texturizing.
With roughly 13,000 square feet of flooring and 21 phases, the Hines installation was challenging, but thanks to certified training from INSTALL, Anos and his team were able to quickly and efficiently complete the project with minimal disruption to patients, staff and visitors. Specific education in moisture mitigation and substrate preparation helped them provide the facility with a floor that will last for decades. More importantly, the patients who now call the Hines VA BRC home know they have a floor that's specifically designed for them. HC

John T. McGrath Jr. is the executive director of INSTALL—the International Standards and Training Alliance, the construction industry's best-endorsed floor covering installation training and certification program. A three-decade flooring installation professional and accomplished speaker, McGrath conducts seminars regularly for architects, interior designers, building owners and facility managers to increase their knowledge of flooring installation issues. He can be contacted at: john.mcgrath@carpenters.org.
Cold as ice
Minnesota townhome complex deals blow to winter conditions
By Rachel Ruhl

Mike Jackson, Associated Mechanical pipefitter foreman, left, and Nick Kruse, Michel Sales field support and technical trainer, perform final boiler commissioning in one of the mechanical room locations.

Minnesota is home to Bob Dylan, Betty Crocker, Post-It Notes, Bisquick and the Jolly Green Giant. And it is becoming green, quite literally. All across the Midwestern state, federal dollars have been hard at work setting new energy standards.

As the capital city, it only makes sense that Saint Paul has taken a leading role in the movement toward greater energy efficiency. Managers of Hanover Townhomes, a Section 8 housing complex in the heart of Saint Paul, have fully embraced the efficiency shift and signed off on a $10 million energy retrofit, completely transforming the mechanical systems—and comfort—of the townhomes.

It's rare to hear "affordable housing" and "comfort" in the same sentence. Too often, comfort is the compromise that gives way to price and ease of installation. Why is it that experiencing true comfort is so often thought to be reserved for people of greater means? It's a question that troubled the property manager for BDC Management Company, the group that owns Hanover Townhomes.

"Fortunately, I know now it's just a myth, and we've done our best to bust it," said the property manager.

Many of the occupants at Hanover Townhomes are refugees. Some are from the Vietnam era while some are younger, with families and children.

Busting BTUs at the source

Nick Kruse, foreground, Michel Sales field support and technical trainer, completes a combustion analysis on one of the Laars Mascot boilers. Working with him are Larry Sundberg, left, Michel Sales field support and technical trainer, and Mike Jackson, Associated Mechanical pipefitter foreman.

Minnesota isn't known for mild winters.
Residents learn to expect—and get—the worst of winter conditions all too often. Temperatures can drop to -30 degrees F as lake effect conditions engulf the state, leaving annual snowfalls of up to 170 inches. Knowing this, when the 13-building, 128-unit apartment complex was built in 1968, a central heating plant delivered plenty of heat in time of need. Colossal cast iron boilers, with a combined 50 million BTUs of output, supplied hot water to each building.

But of course, the bigger they are, the harder they fall. Ultimately, no-heat calls came in frequently. Almost without fail, most of them came in the middle of a midwinter night. When facility maintenance pros dug into the source of problems—and deeper into the ground, in fact—they discovered that leaking pipes were the chief culprit. BTUs were shared abundantly with the soil around the pipes, and stubborn air locks simply caused hydronic fluids to stop circulating.

Mike Jackson, Associated Mechanical pipefitter foreman, prepares to activate one of the Laars Mascot FT firetube boilers.

After years of challenge with the aging central system, including enormous heat loss and plentiful leakage, managers smartly chose to decentralize the old heating systems. Each apartment building then got its own cast iron boiler. The central plant was turned into a roomy storage facility. For a decade or so, all space and water heating needs were provided this way. While still an upgrade from the central heating plant, the cast iron boilers eventually became a maintenance challenge and had an insatiable thirst for fuel.

Déjà vu

For some of the long-time maintenance guys—those who could recall long nights while tending to the needs of the big old central plant—it was a déjà vu experience.
Nick Kruse, Michel Sales field support and technical trainer, left, Larry Sundberg, technical training and field support at Michel Sales, middle, and Mike Jackson, Associated Mechanical pipefitter foreman, right, discussing one of the now-complete hydronics systems at Prairie Meadows.

Nick Kruse, Michel Sales field support and technical trainer, sets up a heat curve for one of the Laars boilers.

Facility managers knew it was again time to switch to a more reliable and more efficient heating system, hopefully with an approach that would serve needs there more reliably, and for a much longer period of time.

"This time, we scrutinized every detail of the proposals that we received—from the type of equipment that was recommended, to approaches taken to meet—or preferably exceed—tenant expectations for comfortable space heating, as well as heat for domestic water," explained the property manager.

Shakopee, Minnesota-based Associated Mechanical was chosen to complete the system overhaul at Hanover Townhomes.

"The project broke ground on April first," says Mike Jackson, Associated Mechanical jobsite superintendent. "The demolition of the existing mechanical systems was a pretty laborious job."

Nick Kruse, inside sales at St. Paul-based Michel Sales Agency, explained that access to mechanical rooms is through a storm door, outside—and the steps that led to the basement, below, are old and steep.

"Because of the difficult access, walk-behind cranes were used to hoist out the old equipment through the storm doors," Kruse says. "It was no easy task."

Laars Mascot FT firetube boilers were selected by Associated.
With an efficiency of 95 percent AFUE, input of 199 MBH, 10:1 turndown, and the capability of cascading up to twenty boilers for larger structures, the new boilers give managers and residents new peace of mind during preparations for the inevitability of the Minnesota winters.

"The Mascots were chosen for this job because they've got standard features that aren't even options with other brands—like integral circulating pumps," Jackson says. This meant that Associated's pros didn't have to supply one for each system—at an additional cost to them—or take on the added work entailed in installing and dialing them in.

Boiler operation is now controlled by outdoor reset, built into each boiler's circuitry. "This alone brought a whole new level of comfort for residents of the apartment complex," says Larry Sundberg, technical training and field support at Michel Sales. "Before, residents had one- or two-zone systems that simply operated by an 'on' or 'off' function. At last, residents now enjoy the luxury of heat that's measured out in doses, according to ambient conditions—good or bad."

Mike Jackson, Associated Mechanical pipefitter foreman, left, and Nick Kruse, Michel Sales field support and technical trainer, perform final boiler commissioning in one of the mechanical room locations.

Laars Mascot FT firetube boilers in one of the mechanical rooms at Prairie Meadows.

Comfort & efficiency: The sum of its parts

Aside from new mechanical systems, all Hanover Townhomes buildings were equipped with new windows, doors, sidewalks, steps and handrails, appliances and insulation. Electrical improvements were also made.

"It's been a complete and noticeable improvement for everyone, and we've had several comments from tenants confirming that the changes are appreciated," the property manager says. "The updates and improvements here have made the living spaces so much more comfortable. The final proof of tenant happiness and comfort comes when our maintenance phones remain silent all winter long." MH

Rachel Ruhl is a writer and account manager for Common Ground, a Manheim, Pennsylvania-based trade communications firm focused on the plumbing and mechanical, HVAC, geothermal and radiant heat industries. She can be reached at rachelr@seekcg.com.

With honor
Army Corps shares love of preservation for Ulysses Grant's family
By JoAnne Castagna

Exterior of Grant Barracks at the U.S. Military Academy at West Point, New York. Credit: Dan Desmet, Public Affairs.

New Jersey resident Ulysses Grant Dietz is named after his great great grandfather, Army General Ulysses S. Grant, the nation's 18th president and commander in chief. When he learned that the U.S. Army Corps of Engineers, New York District, is renovating and preserving Grant Barracks at the U.S. Military Academy at West Point, New York, he was pleased.

"Buildings like this are often not treated with historical sensitivity, which seems too bad. So the idea that the historic aspect of the building—even though it was built in 1931—is being considered is a good thing," Dietz says.

Renovation of Grant Barracks at the U.S. Military Academy at West Point, New York. Credit: Dan Desmet, Public Affairs.

The Army Corps is performing this work as part of the West Point Cadet Barracks Update Program. The purpose of the program is to provide additional modern living space for the Cadets:

• Grant Barracks—formerly named "Old South Barracks"—was constructed in 1931 and is the oldest Cadet barracks in use.
• It was re-named after General Grant, who commanded the victorious Union Army during the American Civil War.
• Grant graduated from the academy in 1843, and both his son and grandson would follow in his footsteps.
Today, the barracks is being modernized to meet the needs of the modern Cadet. "The renovation includes a complete gut and remodel of the existing structure, and the floor plans will be optimized to utilize space in a more practical way," says Christopher Reinhardt, former Chief of New York District's Military Programs Branch. The renovation is being accomplished by Army Corps contractors.

Army General Ulysses S. Grant. Credit: Wikipedia.

Ulysses Grant Dietz, the great great grandson of Army General Ulysses S. Grant. Credit: Ulysses Grant Dietz.

To help with this, it will be equipped with Wi-Fi, and work stations will be equipped with cable connectors and power supply between computers and devices—Universal Serial Bus (USB) ports.

The Army Corps is performing preservation work inside and outside the building, including restoring the decorative wood work along the walls and ceilings of the dining hall, like these historic unit crests. To preserve these crests, plaster and terrazzo work will be performed. Credit: Dan Desmet, Public Affairs.

Preservation a birthright

Besides the renovations, preservation work is also underway. "In the 1920s, his job in Washington, D.C. was to oversee all of the public buildings and parks, including the White House, which he altered for the Coolidges," Dietz says.

Dietz was also bitten by the preservation bug. "I've always loved old houses and the variety of things that went inside."

It is also a great gathering place to meet up with friends socially or discuss projects with professors in a more relaxed atmosphere. "Within the woodwork on the walls and ceiling of the dining hall there is a rich history of unit crests decorating the interior," Reinhardt says.
"To preserve these crests, plaster and terrazzo work will be performed."

A gothic revival

Grant Barracks' exterior military gothic revival architecture is also being restored in order to blend in with the rest of the historic 200-year-old campus. This involves delicate repointing work, pressure washing and re-grouting of the exterior granite stones. Repointing is when the joints of brick or stonework are repaired by filling in with grout or mortar. The primary purpose of this is to prevent water from infiltrating into the building. Besides the granite work, exterior historic items will be restored, including decorative metal railings, stone masonry, and decorative metal and wood doors.

The barracks is expected to be available to the Cadets in the summer of 2020.

This spring, Dietz will have an opportunity to see the Grant Barracks renovation work. He'll be at the academy with the Ulysses S. Grant Association to view a new Grant statue. After this, he'll be traveling with the academy's honor guard to Grant's Tomb in New York City for an annual commemoration ceremony. There he'll give a speech—like he's done for 30 years—educating the public about Ulysses S. Grant and continuing to preserve and share his family's history. FC

A regular contributor to Commercial Construction & Renovation, Dr. JoAnne Castagna is a Public Affairs Specialist and writer for the U.S. Army Corps of Engineers, New York District. She can be reached at joanne.castagna@usace.army.mil.

SPECIAL REPORT: FLOORING
Don't miss out on your chance to be included.
Armstrong Flooring: Julie Eno, Brand Programs Manager; 2500 Columbia Ave., Lancaster, PA 17604; (717) 672-7212; jeeno@armstrongflooring.com

Associated Floors International: Rich Goodman, President; 32 Morris Ave., Springfield, NJ 07932; (973) 376-1111; rgoodman@associatedfloors.com

The Belknap White Group: 111 Plymouth St., Mansfield, MA 02048; (800) 283-7500

Bentley Mills: Sherry Dreger, VP of Marketing; 14641 E Don Julian Rd., City of Industry, CA 91746; (800) 423-4709; marketing@bentleymills.com

Bostik, Inc.: 11320 W Watertown Plank Rd., Wauwatosa, WI 53226; (414) 607-1373

Brintons: Lydia Day, Marketing Executive; 1000 Cobb Place Blvd., Bldg. 200, Ste 200, Kennesaw, GA 30144; (678) 594-9315; lday@brintonsusa.com

Ceramics of Italy: Daniele Deustachio, Marketing Officer; 1 SE 3rd Ave., Suite 1000, Miami, FL 33131; (305) 461-3896; miami@ice.it

Construction Specialties: Wade Brown, Senior Product Marketing Manager; 6696 State Route 405, Muncy, PA 17756; (800) 293-8493

CorPlug: Jacquie Gray, VP Customer Relations; 2708 Cardinal Dr., Costa Mesa, CA 92626; (714) 432-1995; info@corplug.com

Cosentino North America: Jose Luis Sorto, Commercial Sales Manager; 355 Alhambra Cr., 10th Floor, Coral Gables, FL 33134; jlsoto@cosentino.com

Creative Edge Master Shop, Inc.: James Belilove, President; 601 S 23rd St., Fairfield, IA 52556; (641) 472-8145; jimb@cec-waterjet.com

Crossville, Inc.: Irene Williams, PR Representative; 349 Sweeney Dr., Crossville, TN 38555; (931) 484-2110; irene@msg2mkt.com

Curecrete (Ashford Formula & RetroPlate): Garrett Soong, Director of Marketing; 1203 Spring Creek Pl., Springville, UT 84663; (801) 489-5663; marketing@curecrete.com

Del Conca USA, Inc.: Juan Molina, General Manager; 155 Del Conca Way, Loudon, TN 37774; (865) 657-3550; j.molina@delconcausa.com

Designer Tile and Stone: Nathan Jay, Vice President; 100 Newfield Ave., Edison, NJ 08837; (732) 225-1877
NJ 08837 Tile: Porcelain, Resilient Tile: Terrazzo Tile (732) Solid 225-1877 Resilient Sheet: Vinyl, VCT, Linoleum, Cork, Rubber Fax: (732) Linoleum, Rubber, 225-0660 Resilient Other: Stair Treads, Wall Recycled Rubber nertilestone.com Base, Accessories info@design Markets Served: ertilestone.c Carpet: Rugs Retail, Hospitality, om Product Type: Healthcare, Corporate, Education, Restaurants, Tile: Ceramic/Clay Shopping Malls , Glass, Porcelain, Markets Served: Quartz N/A CONSTR JULY : AUGUST COMME 2018 — COMMERCIA L CONSTRUCTIO N & RENOVATION 43 Get your company’s profile in our Flooring and Project Management Software listings, which will be published in our July/August 2019 issue. Our annual listings provide a snapshot of the leading companies in the respective sectors. Go online at on Advertising Page to download form. Deadline: July19 CIRCLE NO. 54 CIRCLE NO. 55 MARCH : APRIL 2019 — COMMERCIAL CONSTRUCTION & RENOVATION 165 GREAT CHEMISTRY R&D lab unites sustainable products and lean construction 166 COMMERCIAL CONSTRUCTION & RENOVATION — MAY : JUNE 2019 by Jerry Brandmueller, IMC Construction The company’s new Discovery Hub will consolidate the labs of 330 scientists into one building on the University of Delaware’s Science, Technology & Advanced Research (STAR) campus, connecting the industry leader with one of the foremost bio-engineering schools in the country. The $195 million, 340,000-square-foot facility is expected to be complete in 2020. A fortuitous pairing of construction ingenuity and chemical innovation is proving to save time, reduce costs, enhance safety and protect the environment in the building of this major facility. Best and the Brightest To Chemours, whose 2017 Corporate Responsibility Commitment dedicates $50 million to STEM education programs, UD’s STAR campus is the ideal location for its new building. 
“We’re thrilled to begin research and development work in the creative environment of a public university, alongside professors and students, not far from our global headquarters in downtown Wilmington,” said Mark Vergnano, Chemours’ CEO. “In this building, we’re going to bring together some of the brightest minds in academia alongside our R&D scientists to innovate new solutions for our customers and hopefully introduce some University of Delaware students to potential careers in chemistry and science.” The building will include more than 100 labs, fifty specialty rooms, cafes, conference rooms and twenty “huddle” rooms for small group meetings. It will span 15 acres within a campus of 272 acres. Rising Star D elaware has been the home of DuPont for more than two centuries, but a four-year-old DuPont spinoff is staking its claim in Newark with a brilliant new research facility. Chemours, with headquarters in Wilmington’s historic DuPont Building, is a Fortune 500 chemical company with 7,000 employees in the Americas, Asia, Africa, Europe and the Middle East. It is the world’s largest producer of titanium dioxide products for coatings, plastics and paper; and fluoroproducts including Teflon, lubricants and refrigerants. The STAR campus is rising rapidly. Within the next few years, several additional stateof-the-art, collaborative workplaces will be complete, establishing the campus as a 21st century nexus of innovation, learning and research. The Discovery Hub is going up in the southeast corner of the campus along a streetscape of retail, housing and green spaces. Adjacent to the Newark train station, STAR will be more like a dynamic urban environment than a suburban office park. 
"We're building a community here," said UD President Dennis Assanis at groundbreaking ceremonies in December 2017, "one of researchers, students, innovators, entrepreneurs and leading-edge thinkers and doers."

Hub Cap
The location may be ideal, but not without challenges for the building team led by IMC Construction. Formerly occupied by the Chrysler Assembly Plant, the campus contains areas of contamination (AOCs) as identified by environmental engineers. Since acquisition in 2009, the University's due diligence in testing and remediation of the brownfields site has been meticulous. Delaware Natural Resources and Environmental Control regulations stipulate that soil from AOCs can be contained and stored elsewhere temporarily, but ultimately it must be returned to the original site. IMC's solution was to excavate 140,000 cubic yards of the contaminated soil in segments, raise the building pad to accommodate the soil that had to go back, and create a vented vapor barrier underneath the floorplate.

The vapor barrier is a proven, effective method. While remediation has significantly reduced off-gases, the vapor barrier will remove any trace of off-gassing from the AOC and vent it above the building. The solution allows brownfield sites to be renewed and reused by the next generation of makers, and it puts this project one step closer to its Green Globes goal.

Sustainability
In addition to building on a brownfields site, the project has several sustainable features that may earn it three Green Globes out of four when the project is complete. It is a profound advantage to work with a client who is also committed to energy conservation.
Said Vergnano, "This building represents our company's commitment to Delaware and to our community – made all the more special by the cutting-edge construction technology that incorporates some of our company's own products."

Chemours' new product Opteon will be used in the building's chiller plants. A ground-breaking environmentally-friendly refrigerant with Low Global Warming Potential, Opteon is being adopted around the world, particularly in regions that are vigorously fighting climate change. Several other products will be the subject of study as well as used in the building of the Discovery Hub. Chemours' Ti-Pure titanium dioxide – a nearly ubiquitous white pigment covering walls, floors, automobiles etc. – will be used for architectural coatings in the Discovery Hub. IT wiring will be coated with Chemours' fluoropolymers, and a fire-resistant foam insulation made of Chemours' fluorochemicals will be applied to the walls.

The building will have an energy recovery ventilation system that uses 100% outside air. In laboratories that may use 17 exotic gases, recirculating interior air is impossible. Ducts as large as 92" will draw clean air in from outside and circulate it throughout the building. Other sustainable features include energy-efficient options for appliances, occupancy sensors and LED lighting, reflective roof coating and a native plant landscaping plan.

Project Team
Chemours Discovery Hub, Newark, DE
Owner: Chemours Company, Wilmington, DE
General Contractor: IMC Construction, Malvern, PA
Owner's Representative: Trammell Crow, Philadelphia, PA
Architect: L2 Partridge, LLC, Philadelphia, PA
Site/Civil Engineer: Tetra Tech, Philadelphia, PA
Structural Engineer: O'Donnell & Naccarato, Philadelphia, PA
MEP Engineer: NV5 (formerly RDK Engineers), Philadelphia, PA

Prefabrication
As designed by L2 Partridge Architects, the building has north, east and west wings. The north is three stories; east and west are two stories.
The canopy plaza at the north door is an engaging public space in addition to a dramatic entryway to the double-height lobby. The exterior skin is precast concrete curtainwall with punched windows. "It's a refined, subtle façade," explains architect Joel Ziegler of L2 Partridge, who is also renovating the Hotel DuPont for Chemours headquarters. "The surface modulation causes interesting shadows, and there are areas of glass curtainwall and metal panels to help break down that precast façade and signify the main entrance."

The building is a steel structure with poured slabs on deck – the type of structure that IMC has built many times. For this project, however, the builder had portions of the interior infrastructure prefabricated instead of building in place. Segments were constructed elsewhere and then brought in to be assembled on site. IMC did the same for the central utility plant that produces chilled water, steam and temperate water, and for the five 65 x 50-foot rooftop units. The plants were segmented, built in a fabrication yard and shipped to the site. The large, central plant arrived in 16 segments and was assembled on the roof. This procedure had labor, safety and schedule advantages:

• The local labor market in the region is tight, with the boom in construction showing no abatement. Installing prefabbed segments takes fewer workers than building onsite.
• Installation poses fewer safety hazards in the field.
• Steel fabrication could begin concurrent with the approvals process, rapidly accelerating the construction schedule.

3-D Modeling
When they arrived onsite, did all the pieces fit? Like a glove, thanks to IMC's Virtual Design Department. The department created a complete 3-D model of the building, including the interior laboratories. An added advantage was being able to effectively walk the owner through the finished space using virtual reality goggles.
The model also identified the pipes, ductwork and electrical systems throughout the laboratories, aiding the engineers and subconsultants who could prefabricate from IMC's model. The investment in software is significant, but ultimately cost-effective and labor saving because the team knows exactly where everything must be.

For IMC, the Discovery Hub resembles other major construction contracts the company has won over the years. But innovation in products and process has greatly improved the time, economy and quality of the construction, bringing Chemours that much closer to its nexus of collective entrepreneurship. CCR

Jerry Brandmueller (jbrandmueller@imcconstruction.com) is the R&D project executive for IMC Construction. Based in Malvern, Pennsylvania with an office in Philadelphia, IMC has completed similar projects for the University of Pennsylvania, Endo Pharmaceuticals, Siemens Medical and other leaders in scientific inquiry.

Eliminating the middleman
How blockchain technology can impact your business
By Kris Lindahl

Blockchain technology has found its way into mainstream news cycles, but many stories still only associate blockchain with cryptocurrency transactions. While this is an application in which blockchain is being used heavily due to its incredible capabilities to track and securely log transactions of all kinds, a number of businesses and industries are discovering how blockchain might change the way they work. In short, blockchain technology permits the secure distribution of transparent and incorruptible data across a peer-to-peer cloud-based network through an online digital ledger or database, removing the need for potentially flawed or time-consuming paper copies and e-files.
In the commercial construction industry, many are paying attention to blockchain technology's ability to decentralize building design data, reduce fraud and improve management amid the supply chain.

Replace the BIM Platform System
Most architects, builders and inspectors currently rely on some type of Building Information Modeling (BIM) platform to centralize information concerning communications and building design plans. Blockchain technology can potentially eliminate this "middleman." This is done by allowing the parties involved in the structure's planning, design and implementation processes to enter relevant data into a distributed ledger.

For example, a builder on-site realizes that certain design details must be changed due to unforeseen circumstances. By decentralizing building design and construction data, these necessary changes to the plans can be easily and confidentially communicated to all parties involved, including any stakeholders. Workflow variations can be costly to maintain, whereas a distributed blockchain ledger allows anyone with permission to access the data for decades—an invaluable resource to investors and commercial structure owners.

Greater quality control
Even with the strongest bonds between companies, it can only take one weak link in the chain of command to cause a project to go awry. Paper or e-contracts leave room for error and can allow certain parties to attempt to make fraudulent claims concerning what has and has not been performed during the construction and design process. Using blockchain technology and enabling the use of smart contracts can virtually eliminate issues with fraud and quality control. There is no more reliance on conventional contracts and building inspectors to ensure that everything is constructed as promised when switching to blockchain technology. Contractors are not typically paid until contract specifics are confirmed and fulfilled and all conditions are met according to the smart contract in place.

Another example of how this may benefit commercial construction professionals is by its further implementation in an effort to integrate with Internet-of-Things technology (IoT). These smart devices are currently used to connect with sensors amid structural piping, which can already be regulated and monitored using IoT technology. This innovation helps inspectors confirm their location, material types, and whether or not they are installed according to coding requirements. In short, all of this will be logged amid the blockchain, which could be connected to the sensors to confirm their compliance and integrity for decades.

Improve supply chain management
A number of commercial industries have already taken advantage of the blockchain to improve supply chain management. Experienced professionals amid the construction industry are very aware that issues can occur throughout various project phases. Material deliveries can be delayed, or the wrong products can be sent to construction sites. Another benefit of the blockchain is that, by distributing these issues to the ledger, it allows other individuals, companies and investors involved to become aware of these situations, and change their timelines and schedules for their crew or awaiting tenants accordingly. Essentially, this can save a great deal of time considering that there are likely other projects that can be tackled while waiting for corrections to be made and keep the construction moving forward. This technology also streamlines project management.
This is done by allowing parties to see projects become digitally "signed off" on as they are completed, rather than waiting for paperwork or e-documents to confirm the integrity and completion of each and every transaction.

While blockchain is only beginning to become recognized as a tool within construction companies and other industries, organizations like the Construction Blockchain Consortium are helping to drive innovation amid the industry. The way in which businesses adopt this technology will reveal itself over time, and it is possible that any one of these solutions could evolve into something beyond what many have predicted. But when things play out, staying abreast of new developments can help existing companies move naturally into the future. CCR

Kris Lindahl is a nationally recognized innovator in real estate, marketing, leadership and community involvement. In 2014, he was Minnesota's No. 1 real estate agent as ranked by Real Trends. In 2017, the Kris Lindahl Team rose to become one of America's top real estate teams. In May 2018, he fully embraced his own real estate model to form Kris Lindahl Real Estate, Minnesota's premier independent real estate agency.

The Green Wave
5 successful ways to achieve sustainable construction
By Kevin Hill

The construction industry never slows down. And it goes without saying that it is a massive industry. A lot of materials flow in and out of any construction site, and unfortunately, a lot of waste is generated as well. This makes it imperative to keep the waste to a minimum and aim to make the processes as sustainable as possible.

The objective of sustainable construction is to create and operate processes that prioritize resource efficiency and ecological design.
It focuses on seven core principles across the building's life cycle: protecting nature, reducing the consumption of resources, reusing resources, using recyclable resources, eliminating toxins, applying life-cycle costing and emphasizing quality. Sustainable construction includes three critical elements—environmental impact, social responsibility and economic efficiency. These elements help in governing the building design, quality of architecture, technologies and processes, and working conditions, and serve as the basis for sustainable construction.

In order to incorporate sustainable practices in construction processes, implement the following five practices:

1. Utilize Low Impact Construction Material
Manufacturing construction materials from scratch requires a lot of energy. In order to reduce the energy expended on various manufacturing processes, use low-impact materials that are recycled or repurposed. Use materials which are sourced from other building sites or materials that come from naturally occurring elements containing recycled waste and content, such as blown paper insulation. Opt for modular designs for buildings to minimize wasted materials and decrease the construction time. Moreover, they are durable and can be reused and recycled continually.

2. Choose Renewable Energy Sources
One of the smartest and simplest ways to achieve sustainable construction is by using alternative energy sources. Incorporating solar, wind and hydro energy will tremendously reduce your fuel footprint and optimize your energy savings. You can incorporate renewable energy into the building design by constructing structures that are well-ventilated. Aim to bring in more natural light, install smart windows that block ultraviolet rays and install solar panels on the rooftop for running HVAC units and water heaters.

3. Minimize Construction Waste
Waste is a byproduct of any construction site. A huge amount of wasted roofing, cardboard, glass, drywall, metal, insulation, etc., isn't uncommon. Identify the materials that can be repurposed or reused. Waste is also observed when materials go unused due to the presence of large quantities. Implement truck scales to weigh the raw materials accurately so that you don't order more than you require, to avoid wastage.

4. Practice Inter-Company Sustainability
Actively implement sustainable practices in your organization and seek LEED certification. Encourage recycling programs and train your staff on different sustainable practices they can adopt within the facility. Partner with other companies who practice sustainability to maximize your efforts.

5. Focus on Space Efficiency
Some ways to attain design sustainability through maximizing space efficiency include:
• Having open spaces to maximize the use of daylight in the interiors
• Incorporating industrial weighing scales in numerous equipment to reduce movement
• Minimizing surface areas by excluding spaces like the patio, porches, etc.
• Bringing in folding beds, moving walls and space-saving furniture to maximize the usable area and minimize the size of the structure
• Incorporating raised floor solutions to make space for underfloor systems, reduce overhead space and improve HVAC efficiency

The main environmental benefits of practicing sustainability in the construction industry include reduced carbon emissions, alleviated noise pollution, increased cost savings, faster turnaround times and reduced scrap. But the shift to sustainability cannot happen overnight and requires a considerable amount of research, innovation and creativity, apart from a positive attitude and support from the stakeholders.
Sustainability is the future and the construction industry must set an example by embracing this trend. CCR

Kevin Hill heads the marketing efforts at Quality Scales Unlimited in Byron, California.

The Voice of Craft Brands
Claire Marin, founder and CEO of Catskill Provisions

180 CRAFT BRAND AND MARKETING MAY/JUNE 2019 CBAM-MAG.COM

Keeping bees
Inside the story of the Catskill Provisions brand
By Eric Balinski

… 3,000 years ago. Greek mathematicians Euclid and Zenodorus found that honeycombs maximize the use of space with the least amount of building material. In more modern times, honeycomb structures have been described as "an architectural masterpiece" for their resilience and space efficiency. While honeycomb structures were not the original fascination of Claire Marin, founder and CEO of Catskill Provisions, bees and beekeeping were. Starting as a hobbyist beekeeper, Marin discovered the wonders of bees and what they produce. This became the backbone of her company and its wide array of craft products all based on bee honey.

Catskill Provisions
One of the key reasons Catskill Provisions has been successful with restaurant chefs is that it recognizes Marin's knack for blending flavors and finding the right balance between acids, sugars or bitterness. Like themselves, it is her deep attention to flavor they love to explore with her. Chefs can find Catskill Provisions' products at artisanal food distributors Baldor and Gargiulo. CBAM sat down with Marin to get her thoughts on all things Catskill Provisions.
Give us a snapshot of Catskill Provisions?
Catskill Provisions is an artisanal food and craft spirits company with honey at our core. Our 100-percent raw wildflower honey from the Catskill Mountains is the key ingredient in our finely crafted products, from our hand-rolled honey chocolate truffles, honey-infused ketchup and apple cider vinegar to our highly acclaimed New York Honey Rye Whiskey.

What type of consumer are you targeting?
Our best customers seek only the finest, locally harvested ingredients with unique flavor profiles. They appreciate artisanal, small batch and hand-packed products with sustainable practices. Today, consumers can find Catskill Provisions products at specialty retailers in New York, New Jersey, and Connecticut or on-line at iGourmet.com.

What is today's consumer looking for?
I think authenticity. Today's consumer is more educated and aware of where their food comes from, and they respond to companies with a genuine point of difference and a conscience.

What do your consumers (and competitors) find so appealing about your brand?
Honestly, the flavor. Whether the artisanal foods or the whiskey, we use our honey subtly, more as a balancing agent than a sweetener, allowing the true flavor of the ingredients to come through.
We find that our whiskey in particular appeals to the novice drinker, since the honey softens the spiciness of the rye, but the typical whiskey drinker is always surprised and impressed at how smooth and balanced the spirit is, and often becomes one of our biggest fans. People also always respond to our beautiful packaging, which perfectly captures the brand.

How do you tie in everything you do with the brand?
That's easy. We are built on a few key brand pillars—honey at our core, small batch, hand-packed and made in the state of New York. We are also female-owned, which is unique, especially within the spirits industry. Also, a percent of all sales are devoted to pollinator-saving causes.

Walk us through your branding strategy.
Everything we do consistently supports our brand pillars and reinforces our key values. Brand consistency is key, so even though we have two product lines—artisanal foods and craft spirits, our unique points of difference and brand DNA remain consistent in both packaging and message. We reinforce our brand through promotional alliances, events and collaborations with similar values. We look for opportunities that highlight the wonderful resources of the Catskills, celebrate local farmers, support pollinators and empower women.

What is the secret to creating a branding story that consumers can buy in to?
Finding a unique point of difference and being authentic. Consumers have been so over-promoted to, they can smell a gimmick a mile away. My company is truly my story, unembellished and real, and people have responded to it.

What do your consumers think when they think of your brand?
They think of a brand that's true to its mission and always delivering quality products with exceptional flavor by using traceable products. Hopefully they also think of the bees and the Catskills.

What's the biggest issue(s) today related to the marketing/sales side of your brand today? What trends are defining the space?
The biggest issue is prioritizing how to devote my time and resources. As a small company, we all wear many hats, and it can be difficult to compete with the big brands with large staffs and budgets.

Smaller craft brands are emerging every day so it's an interesting time, and women are definitely stepping up to have a seat at the table.

What do you see as some of your biggest opportunities moving ahead?
It's such an exciting time for us, as we just opened our own distillery in The Catskills, with a tasting room following this summer. We are adding one more to the less than one percent of female-owned and operated distilleries in the United States. I am distilling on the grain, using non-GMO corn and our wildflower honey, and will soon introduce a Pollinator Vodka, Pollinator Gin and Wanderer Gin. Each celebrates and supports pollinators like the bees and the endangered Monarch Butterflies.

What's the biggest item on your to-do list right now?
I just finished my first mash, so I'd have to say getting my first batch distilled on our property out into the marketplace.

Describe a typical day.
Wow, there is no typical day, which is one of the things that I love about owning my own business. I wake up early, spend the early hours catching up on emails and paperwork, then I hit the ground running. If I am upstate, I am in the barn distilling, working with a local farmer down the road planting botanicals and rye, and of course, tending to my bees. I am often in New York City, visiting chefs and bartenders, sitting on panels, networking.
Why is being female owned and operated your secret weapon? Women approach things differently, so we bring a fresh approach to the spirits industry. We are used to multi-tasking and tend to be more collaborative, so I think women are bringing innovative improvements to some longtime practices.

Why is it important to have (and to market) locally harvested resources? It is so difficult to be a farmer, and stay true to good practices, not cut corners and still make a living. We must support these local communities, and it is also so much healthier to eat local food the way nature intended. And you can definitely taste the difference.

Tell us what makes you so unique? Our culinary roots. I started as a beekeeper, selling my local honey to chefs in and around New York City, and slowly expanded to include other honey-infused artisanal foods. To then cross over into distilling is somewhat crazy and pretty unique. To me, talking to chefs or mixologists is not that different; both are passionate about artistic expression through unusual pairings and creations.

Sitting down with... Claire Marin, founder and CEO of Catskill Provisions

What's the most rewarding part of your job? Giving people the joy of tasting something delicious. This creates such human connection through the food with people, ultimately creating happiness. Likewise, meeting with chefs and making personal connections and exploring flavors. Chefs are the "rock stars" of the food world and there is so much to learn from them.

What was the best advice you ever received? Always have your lawyer near you. In today's world, one has to be careful, as people will lead you down the wrong path whether intentionally or not. I believe it is critical that both parties agree on a written document, which is signed by both parties, and that any fine print is talked about before signing the arrangement.

What's the best thing a customer ever said to you?
It is always a delight when someone unexpectedly tells you they tried your product and how wonderful it is. Maybe the most surprising time was when our distributor entered our NY Honey Rye Whiskey into a blind taste test with a panel of experts. I received a call from one of the judges who told me we were selected and now would be the official "Honey Whisky" of Madison Square Garden. How very cool is that, and it's such a massive audience. Everyone goes to the Garden, and the owners and buyers there know their customers appreciate good local brands. We are mainstream now, not just for a few. When you are not at the Garden, the whiskey is also available at www.forwhiskeylovers.com, and bars and restaurants can get it through the distributor, Winebow.

What is your favorite brand story? I love two brand stories in particular, Haagen-Dazs and Southwest Airlines. Haagen-Dazs is so interesting because they use great ingredients, have beautiful packaging in a variety of sizes, and this premium product is accessible in places that are as diverse as grab-and-go like 7-11 to high-end retailers such as Whole Foods, while maintaining their premium brand identity across these different channels. Southwest—one just has to be impressed with how happy their staff is. Everything is positive, from how they show up to work to how they make customers happy. Southwest has shown me that happy bees make better honey. So we go to great lengths to take care of our bees. And in turn, the bees teach us to be good, kind and happy as well.

Eric Balinski is the owner of Synection, LLC, which is a strategy and growth consultancy firm. For more information, visit: synection.com.
RETAIL/RESTAURANTS/QUICK SERVE:
Longhorn Steakhouse #5606 | Southington, CT | $1,500,000.00 | 5,465 sq. ft. | New Construction | Q4 2019
Dunkin Donuts | Holyoke, MA | $600,000.00 | 2,000 sq. ft. | New Construction | Q4 2019
KFC | Somersworth, NH | $500,000.00 | 1,406 sq. ft. | Renovation | Q3 2019

RETAIL/STORES/MALLS:
O'Reilly Auto Parts | Halifax, MA | $2,000,000.00 | 7,225 sq. ft. | New Construction | Q3 2019
Circle K | Dover, NH | $1,400,000.00 | 5,328 sq. ft. | New Construction | Q3 2019
Walmart Supercenter #1866-218 | Farmington, ME | $700,000.00 | 157,377 sq. ft. | Renovation | Q3 2019
Dollar Tree | Woonsocket, RI | $350,000.00 | 8,895 sq. ft. | Remodel | Q3 2019
Carter's & Oshkosh #1304 | Nashua, NH | $300,000.00 | 3,641 sq. ft. | Remodel | Q3 2019

RESIDENTIAL/MIXED USE:
40 Trinity Place | Boston, MA | $225,000,000.00 | 429,000 sq. ft. | New Construction | Q4 2019
Mill Plaza Redevelopment | Durham, NH | $70,000,000.00 | 254,000 sq. ft. | New Construction/Addition | Q4 2019
Seabury Cooperative Housing | New Haven, CT | $19,800,000.00 | 72,306 sq. ft. | Renovation | Q3 2019
Chicopee Mill Renovation | Chicopee, MA | $6,000,000.00 | 65,000 sq. ft. | Renovation | Q3 2019

HOSPITALITY:
Tru by Hilton | Manchester, NH | $23,500,000.00 | 63,157 sq. ft. | New Construction | Q3 2019
Alexandra Hotel Redevelopment | Boston, MA | $10,000,000.00 | 60,000 sq. ft. | Renovation | Q4 2019

EDUCATION:
Ana Grace Academy of the Arts Phase II | Bloomfield, CT | $50,000,000.00 | 160,000 sq. ft. | New Construction | Q3 2019
Chapin Street Elementary School | Ludlow, MA | $40,000,000.00 | 106,000 sq. ft. | New Construction | Q3 2019
Manchester Community College (MCC) Lab Renovations | Manchester, NH | $300,000.00 | 8,220 sq. ft. | Renovation | Q3 2019

MUNICIPAL/COUNTY:
New Tewksbury Fire Headquarters | Tewksbury, MA | $13,200,000.00 | 22,300 sq. ft. | New Construction | Q3 2019
New DPW Facility | Montague, MA | $8,500,000.00 | 28,844 sq. ft. | New Construction | Q3 2019

MEDICAL:
Neuroscience Center - Yale-New Haven Hospital Saint Raphael Campus | New Haven, CT | $838,000,000.00 | 505,000 sq. ft. | New Construction | Q1 2020
Biddeford Saco Dental Associates | Saco, ME | $4,900,000.00 | 15,800 sq. ft. | New Construction | Q3 2019
ConvenientMD | Belmont, NH | $3,000,000.00 | 5,000 sq. ft. | New Construction | Q3 2019
Lahey Behavioral Health
at Solomon Center | Lowell, MA | $1,000,000.00 | 5,000 sq. ft. | Remodel | Q3 2019

COMMERCIAL CONSTRUCTION & RENOVATION — MAY : JUNE
PUBLISHER'S PAGE by David Corson

A time to celebrate

Winning a professional sports championship is not easy no matter how long a team and its fans have been around. Some teams have it easier than others to get to the pinnacle of championship achievement, while others get close, but don't get to reign in a championship.

This year, after 52 long years in the National Hockey League (NHL), the Saint Louis Blues won its first NHL Championship to hoist Lord Stanley's Cup.

The Blues were one of the original six teams in the NHL. Since then, it has expanded to 31 teams—24 in the United States and seven in Canada.

The NHL is considered by many to be the premier professional ice hockey league in the world, and one of the major professional sports leagues in the United States and Canada. The Stanley Cup, the oldest professional sports trophy in North America, is awarded annually to the league playoff champion at the end of each season.

Each member of the championship NHL team gets his name etched on the Cup for a lifetime and becomes a member of an elite group of hockey players that can call themselves NHL Champions.

The St Louis Blues have made the playoffs 42 times and have played in three Cup finals, each time getting swept, only to have the Summer to think about the losses before it starts all over again in early October. With their 4th appearance this year, history was made.

Halfway through this season, the Blues were in last place and its fans were starting to give up hope. Ownership replaced the head coach with Craig Berube, a retired hard-nosed NHL player with a new outlook of playing hard, playing smart and never giving up. It is not what you don't have, but what you do with what you have.

Little by little, as the season progressed to the finish line, the Blues began to win games. By end of season and playoffs, they were one of the best teams in the NHL, hitting on all cylinders.

To win the Stanley Cup a team must win 16 games, via three seven-game series and the Stanley Cup finals. That's after playing an 82-game season and all of the bumps and bruises that come along the way.

Playing the same players, the new coach just tweaked a few things to turn the hockey franchise around and shock many of the pundits who gave them no chance.

It is amazing when a team or people in general have hit rock bottom. Something sparks them to get up and get things done for the positive of all involved and not choke when it's all on the line.

Winning a sports championship is just like business. Some teams win every year, some are on the perpetual pendulum of winning and losing, and some just can't catch a break. But they all keep coming back to practice to try and get better each day, while learning how to win and lose, and pick up the pieces after every game to look forward to being named Champion.

Business is the same. You must have an obtainable goal in mind, stay focused on the job at hand, work smart and hard, and never, ever say die. If things change, don't play the "what if" game. Make a decision so you will know. Not everything works out the way you want it, but in the long run, usually things will pay off with high dividends. For the St Louis Blues, in June 2019, they did it with guts, grit and pure desire. Hats off to them.

So, as we enter the second half of 2019, we hope to see many of you at our remaining CCRP receptions, Fall Retreats and 10th Anniversary Summit in January 2020, when we travel to Jacksonville, Florida.

To all, here's to good health, safe travels and the feeling of championship victory. And always, just like Blues fans, Keep the Faith. CCR

INSIDE THIS ISSUE:
President's Message
Member Directory
In Memoriam: Past President James Healy
Consensus Docs Updates
The Current State of Construction Safety

SPRING EDITION • 2019 NEWSLETTER

RCA Hosts Successful 29th Annual Conference

RCA's 29th Annual Conference was held March 1-3, at the Gaylord Texan in Grapevine, Texas. The conference featured a welcome reception on Friday and a day of professional development on Saturday. Two events were hosted at the nearby Cowboys Golf Club: a dinner/casino night on Saturday, and a spontaneous indoor golf event on Sunday. Attendees included members, retailers, architects, and sponsor representatives. A welcome addition was a group of superintendents who had attended our Superintendent Training Program workshop that was held two days prior to the conference. Speaker Gene Marks discussed technology trends and their impact on customer service and how we do business.
Economist Anirban Basu presented a James Bond-themed ("To All the Economists I've Loved Before") economic update, with data on both domestic and international markets. Sarah Wicker Kimes discussed the evolving retail and shopping center experience and how companies are adapting to meet changes and address customer needs.

RCA's mission is to promote professionalism and integrity in retail construction through industry leadership in education, information exchange, and jobsite safety.

Ray Catlin, Schimenti Construction Company, moderated a discussion about engaging the workforce and creating a corporate culture where compensation isn't the most important factor in an employee's decision to work for or stay with a company. Panelists Randy Danielson, Tri-North Builders, and Kristen Roodvoets, SmileDirectClub, offered their personal experiences and examples of what is resonating in the retail construction industry today.

RCA scholarship recipient Ryan Pullin, a University of Houston graduate now working for Triad Retail Construction, talked about the impact of the RCA's support as he was beginning his career in the industry.

During the business meeting portion of the conference, RCA's new officers and Board members were introduced:

President: Steve Bachman, Retail Construction Services, Inc.
Vice President: Ray Catlin, Schimenti Construction Company
Secretary/Treasurer: Eric Handley, William A. Randolph, Inc.
Immediate Past President: Rick Winkel, Winkel Construction, Inc.

New Board members are Eric Berg, Gray, Randy Danielson, Tri-North Builders, and Carolyn Shames, Shames Construction; Ray Catlin and Eric Handley began their second terms.

SAVE THE DATE FOR OUR 30TH ANNUAL CONFERENCE, TO BE HELD PRIOR TO SPECS, MARCH 13-15, 2020, BACK AT THE GAYLORD TEXAN.
Many thanks to our conference underwriters.

Platinum: Commercial Contractors, Inc.; Retail Construction Services; Management Resource Systems, Inc.; Gray
Gold: Timberwolff Construction, Inc.; Bogart Construction, Inc.; Schimenti Construction Company, Inc.
Silver: Tom Rectenwald Construction, Inc.; Commonwealth Building, Inc.

President's Message

What has your RCA Board been doing for you? This past October, the Executive Committee of the Board and some former Presidents met in Washington, DC to re-visit the Purpose, Mission, and Vision of the organization. As part of this work there was an effort to review and validate the relevance of the organization for the membership, recognizing that our businesses must evolve to be successful in the ever-changing retail landscape. With this market evolution the Board subsequently decided to re-shuffle the committee structure to better adjust to the needs of our membership. We have created the following committees: Membership Experience, which includes Member Benefits, Sponsorship, Membership, and Events; People, which includes Recruitment and Scholarship; Training, which includes Superintendent Training and Safety, and Legislative/Regulatory. A highlight of this rework is the creation of the Advisory Board Committee, which is comprised of all of the current (and potentially future) RCA Advisory Board members.
These committees are chaired by your Board Members who have a passion to serve in their specific areas of interest and are supported by other Board members or RCA members at-large who share this interest as well. We believe that these new groups will provide a better roadmap to help us navigate the challenges ahead and help the organization grow its collective knowledge base. The listing of committee chairs can be found to the right of this column.

As you know, the media hype of store closings often overshadows the reality of the net gain in retail store openings each year. While some of the legacy brands and namesakes may disappear, there are just as many up-and-comers to take their place; witness the click-to-bricks evolution. Don't forget, over the last 30 years, the RCA has grown its membership with exciting companies that are often as diverse in their business models and methodology as they are in their customer base. So we know that retail is changing, but hasn't it always? Next, we will take a deep-dive into the re-purposing of the regional shopping center: stay tuned!

If you have any feedback or ideas for the organization, please contact me. We are always looking for ways to continue strengthening the organization: president@retailcontractors.org.

Steve

ADVISORY BOARD
Ken Christopher - LBrands
Mike Clancy - FMI
Craig Hale, AIA - HFA - Harrison French Associates
Jeffrey D. Mahler - L2M, Inc.
Jason Miller - JCPenney Company
Steven R. Olson, AIA - CESO, Inc.
Brad Sanders - CBRE | Skye Group

COMMITTEE CHAIRS
Legislative/Regulatory: Mike McBride - legislative@retailcontractors.org
Safety: Eric Berg - safety@retailcontractors.org
Member Benefits: Brad Bogart, Rick Winkel - memberbenefits@retailcontractors.org
Scholarship: Mike McBride, Justin Elder - scholarship@retailcontractors.org
Member Events: Jeff Mahler - memberevents@retailcontractors.org
Sponsorship: Phil Eckinger - sponsorship@retailcontractors.org
Membership: Hunter Weekes - membership@retailcontractors.org
Training: Randy Danielson, Carolyn Shames - training@retailcontractors.org
Recruitment: Jay Dorsey - recruitment@retailcontractors.org

OFFICERS
President - Steve Bachman, Retail Construction Services, Inc.
Vice President - Ray Catlin, Schimenti Construction Company
Secretary/Treasurer - Eric Handley, William A. Randolph, Inc.
Immediate Past President - Rick Winkel, Winkel Construction, Inc.

BOARD OF DIRECTORS
2020 Steve Bachman, Retail Construction Services, Inc.
2020 Eric Berg, Gray
2020 Brad Bogart, Bogart Construction, Inc.
2022 Ray Catlin, Schimenti Construction Company
2021 Randy Danielson, Tri-North Builders
2021 Jay Dorsey, Triad Retail Construction, Inc.
2021 Phil Eckinger, Eckinger Construction Co.
2020 Justin Elder, Elder-Jones, Inc.
2021 Jack Grothe, JG Construction
2022 Eric Handley, William A. Randolph, Inc.
2021 David Martin, H.J. Martin & Son, Inc.
2021 Mike McBride, Westwood Contractors
2021 Carolyn Shames, Shames Construction
2021 Hunter Weekes, Weekes Construction, Inc.
2020 Rick Winkel, Winkel Construction, Inc.

PAST PRESIDENTS
David Weekes 1990-1992
W. L. Winkel 1993
Robert D. Benda 1994
John S. Elder 1995
Ronald M. Martinez 1996
Jack E. Sims 1997
Michael H. Ratner 1998
Barry Shames 1999
Win Johnson 2000
Dean Olivieri 2001
Thomas Eckinger 2002
James Healy 2003
Robert D. Benda 2004-2006
K. Eugene Colley 2006-2008
Matthew Schimenti 2008-2012
Art Rectenwald 2012-2014
Mike Wolff 2014-2016
Robert Moore 2016-2017
Brad Bogart 2017-2018
Rick Winkel 2018-2019

RCA Membership

RCA members must meet and maintain a series of qualifications and are approved by the Board of Directors for membership. They have been in the retail construction business as general contractors for at least five years; agree to comply with the Association's Code of Ethics and Bylaws; are properly insured and bonded; are licensed in the states in which they do business; and have submitted letters of recommendation.

COMPANY | CONTACT | PHONE | STATE | EMAIL | MEMBER SINCE
Acme Enterprises, Inc. | Robert Russell | 586-771-4800 | MI | rrussell@acme-enterprises.com | 2009
All-Rite Construction Co., Inc. | Warren Zysman | 973-340-3100 | NJ | warren@all-riteconstruction.com | 1993
Atlas Building Group | Brian Boettler | 636-368-5234 | MO | bboettler@abgbuilds.com | 2017
BALI Construction | Kevin Balestrieri | 925-478-8182 | CA | kevin@bali-construction.com | 2017
Bogart Construction, Inc. | Brad Bogart | 949-453-1400 | CA | brad@bogartconstruction.com | 2008
Buildrite Construction Corp. | Bryan Alexander | 770-971-0787 | GA | bryan@buildriteconstruction.com | 2013
Burdg, Dunham and Associates | Harry Burdg | 816-583-2123 | MO | harry@burdg-dunham.com | 2016
Comet Construction | Bernard Keith Danzansky | 561-672-8310 | FL | barney@danzansky.com | 2016
Commercial Contractors, Inc. | Kenneth Sharkey | 616-842-4540 | MI | ken.t.sharkey@teamcci.net | 1990
Commonwealth Building, Inc. | Frank Trainor | 617-770-0050 | MA | frankt@combuild.com | 1992
Construction One, Inc. | Bill Moberger | 614-235-0057 | OH | wmoberger@constructionone.com | 2015
Corstone Contractors LLC | Mark Tapert | 360-862-8316 | WA | Mark@corstonellc.com | 2019
David A. Nice Builders | Brian Bacon | 757-566-3032 | VA | bbacon@davidnicebuilders.com | 2011
De Jager Construction, Inc. | Dan De Jager | 616-530-0060 | MI | dandj@dejagerconstruction.com | 1990
Desco Professional Builders, Inc. | Bob Anderson | 860-870-7070 | CT | banderson@descopro.com | 1995
Diamond Contractors | Lori Perry | 816-650-9200 | MO | loriperry@diamondcontractors.org | 2015
DLP Construction | Dennis Pigg, Jr. | 770-887-3573 | GA | dpigg@dlpconstruction.com | 2008
E.C. Provini, Co., Inc. | Joseph Lembo | 732-739-8884 | NJ | jlembo@eprovini.com | 1992
Eckinger Construction Company | Philip Eckinger | 330-453-2566 | OH | phil@eckinger.com | 1994
EDC | Christopher Johnson | 804-897-0900 | VA | cjohnson@edcweb.com | 1998
ELAN General Contracting Inc. | Adrian Johnson | 619-284-4174 | CA | ajohnson@elangc.com | 2010
Elder-Jones, Inc. | Justin Elder | 952-345-6069 | MN | justin@elderjones.com | 1990
Encore Construction, Inc. | Joe McCafferty | 410-573-5050 | MD | joe@encoreconstruction.net | 2018
Engineered Structures, Inc. | Mike Magill | 208-362-3040 | ID | mikemagill@esiconstruction.com | 2016
Fi Companies | Kevin Bakalian | 732-727-8100 | NJ | kbakalian@ficompanies.com | 2017
Fiorilli Construction, Inc. | Jeffrey Troxell | 216-696-5845 | OH | jtroxell@fio-con.com | 2019
Fortney & Weygandt, Inc. | Greg Freeh | 440-716-4000 | OH | gfreeh@fortneyweygandt.com | 2013
Fred Olivieri Construction Company | Dean Olivieri | 330-494-1007 | OH | dean@fredolivieri.com | 1992
Frontier Building Corp. | Andrew Goggin | 305-692-9992 | FL | agoggin@fdllc.com | 2018
Fulcrum Construction, LLC | Willy Rosner | 770-612-8005 | GA | wrosner@fulcrumconstruction.com | 2014
Go Green Construction, Inc. | Anthony Winkco | 412-367-5870 | PA | anthony@ggc-pgh.com | 2017
Gray | Robert Moore | 714-491-1317 | CA | ramoore@gray.com | 2005
H.J. Martin & Sons, Inc. | David Martin | 920-494-3461 | WI | david@hjmartin.com | 2016
Hanna Design Group | Jason Mick | 847-719-0370 | IL | jmick@hannadesigngroup.com | 2016
Harmon Construction, Inc. | William Harmon | 812-346-2048 | IN | bill.harmon@harmonconstruction.com | 2017
Hays Construction Company, Inc. | Roy Hays | 303-794-5469 | CO | r.hays@haysco.biz | 2002
Healy Construction Services, Inc. | James Healy | 708-396-0440 | IL | jhealy@healyconstructionservices.com | 1996
Herman/Stewart Construction | Terry Varner | 301-731-5555 | MD | tvarner@herman-stewart.com | 1995
Howard Immel Inc. | Pete Smits | 920-468-8208 | WI | psmits@immel-builds.com | 2018
International Contractors, Inc. | Bruce Bronge | 630-834-8043 | IL | bbronge@iciinc.com | 1995
J. G. Construction | Jack Grothe | 909-993-9332 | CA | JackG@jgconstruction.com | 1998
JAG Building Group | Matt Allen | 239-540-2700 | FL | matta@jagbuilding.com | 2019
James Agresta Carpentry Inc. | James Agresta | 201-498-1477 | NJ | jim.agresta@jacarpentryinc.com | 2013
KBE Building Corporation | Michael Kolakowski | 860-284-7110 | CT | mkolakowski@kbebuilding.com | 1998
Kerricook Construction, Inc. | Ann Smith | 440-647-4200 | OH | ann@kerricook.com | 2012
Lakeview Construction, Inc. | Kent Moon | 262-857-3336 | WI | kent@lvconstruction.com | 1998
M. Cary, Inc. | Robert Epstein | 631-501-0024 | NY | repstein@mcaryinc.com | 2014
Management Resources Systems, Inc. | Doug Marion | 336-861-1960 | NC | dmarion@mrs1977.com | 1992
Marco Contractors, Inc. | Martin Smith | 724-741-0300 | PA | marty@marcocontractors.com | 1994
Metropolitan Contracting Co., Ltd. | Jane Feigenbaum | 210-829-5542 | TX | jfeigenbaum@metcontracting.com | 1995
Montgomery Development Carolina Corp. | John Fugo | 919-969-7301 | NC | jfugo@montgomerydevelopment.com | 1999
National Building Contractors | William Corcoran | 651-288-1900 | MN | bill@nbcconstruction.us | 2013
National Contractors, Inc. | Michael Dudley | 952-881-6123 | MN | mdudley@ncigc.com | 2018
Pinnacle Commercial Development, Inc. | Dennis Rome | 732-528-0080 | NJ | dennis@pinnaclecommercial.us | 2012
Prime Retail Services, Inc. | Donald Bloom | 866-504-3511 | GA | dbloom@primeretailservices.com | 2014
PWI Construction, Inc. | Jeff Price | 480-461-0777 | AZ | price@pwiconstruction.com | 2003
R.E. Crawford Construction LLC | Jeffrey T. Smith | 941-907-0010 | FL | jeffs@recrawford.com | 2011
Rectenwald Brothers Construction, Inc. | Art Rectenwald | 724-772-8282 | PA | art@rectenwald.com | 1996
Retail Construction Services, Inc. | Stephen Bachman | 651-704-9000 | MN | sbachman@retailconstruction.com | 1998
Retail Contractors of Puerto Rico | Sean Pfent | 586-725-4400 | MI | spfent@rcofusa.com | 1996
Rockford Construction Co. | Thomas McGovern | 616-285-6933 | MI | info@rockfordconstruction.com | 2014
Russco, Inc. | Matthew Pichette | 508-674-5280 | MA | mattp@russcoinc.com | 1995
Sachse Construction and Development Corp. | Jeff Katkowsky | 248-647-4200 | MI | jkatkowsky@sachseconstruction.com | 2009
Scheiner Commercial Group, Inc. | Joe Scheiner | 719-487-1600 | CO | joe@scheinercg.com | 2012
Schimenti Construction Company, Inc. | Matthew Schimenti | 914-244-9100 | NY | mschimenti@schimenti.com | 1994
Shames Construction Co., Ltd. | Carolyn Shames | 925-606-3000 | CA | cshames@shames.com | 1994
Singleton Construction, LLC | Denise Doczy-Delong | 740-756-7331 | OH | denisedelong@singletoncontruction.net | 2012
Solex Contracting | Gerald Allen | 951-308-1706 | CA | jerry@solexcontracting.com | 2015
Southwestern Services | John S. Lee | 817-921-2466 | TX | JLee@southwesternservices.com | 2017
Sullivan Construction Company | Amanda Sullivan | 954-484-3200 | FL | amanda@buildwithsullivan.com | 2012
Taylor Brothers Construction Company, Inc. | Jeff Chandler | 812-379-9547 | IN | Jeff.Chandler@TBCCI.com | 2014
TDS Construction, Inc. | Robert Baker | 941-795-6100 | FL | inbox@tdsconstruction.com | 1994
Thomas-Grace Construction, Inc. | Don Harvieux | 651-342-1298 | MN | don.harvieux@thomas-grace.com | 2012
Timberwolff Construction, Inc. | Mike Wolff | 909-949-0380 | CA | mike@timberwolff.com | 2008
TJU Construction, Inc. | Tim Uhler | 530-823-7200 | CA | tim@tjuconstruction.com | 2016
Tom Rectenwald Construction, Inc. | Aaron Rectenwald | 724-452-8801 | PA | arectenwald@trcgc.net | 2010
Trainor Commercial Construction, Inc. | John Taylor | 415-259-0200 | CA | john.taylor@trainorconstruction.com | 2012
Travisano Construction, LLC | Peter J. Travisano | 412-321-1234 | PA | pj@travisanocontruction.com | 2015
Tri-North Builders, Inc. | Randy Danielson | 608-271-8717 | WI | rdanielson@tri-north.com | 2015
Triad Retail Construction | Jay Dorsey | 281-485-4700 | TX | j.dorsey@triadrc.com | 2013
Warwick Construction, Inc. | Walt Watzinger | 832-448-7000 | TX | walt@warwickconstruction.com | 2008
WDS Construction | Ben Westra | 920-356-1255 | WI | bwestra@wdsconstruction.net | 2019
Weekes Construction, Inc. | Hunter Weekes | 864-233-0061 | SC | hweekes@weekesconstruction.com | 1990
Westwood Contractors, Inc. | Mike McBride | 817-302-2050 | TX | mikem@westwoodcontractors.com | 1990
William A. Randolph, Inc. | Tony Riccardi | 847-856-0123 | IL | tony.riccardi@warandolph.com | 2011
Winkel Construction, Inc. | Rick Winkel | 352-860-0500 | FL | rickw@winkel-construction.com | 1990
Wolverine Building Group | Michael Houseman | 616-949-3360 | MI | mhouseman@wolvgroup.com | 2012
Woods Construction, Inc. | John Bodary | 586-939-9991 | MI | jbodary@woodsconstruction.com | 1996

Visit retailcontractors.org to view the profile of each RCA member company. Click on "Find a Contractor" on the home page to search the member list. Please notify the RCA Office (800-847-5085 or info@retailcontractors.org) of any changes to your contact information.

In Memoriam: Past President James Healy

It is with deep regret that we share news regarding the passing of RCA Past President James (Jim) Healy. He passed away on April 18 after a four-year battle with esophageal cancer.
After his diagnosis, Jim spent quality time with his family, including wife Kathy, president of Healy Construction, with whom he traveled to many places and wintered in Florida. Jim continued to manage the construction side of Healy Construction along with his son James. Jim was able to see both of his children get married and enjoyed time with his grandchildren. Jim was RCA president in 2002-2003.

Gifts can be made in memory of James D. Healy to Northwestern University to support esophageal cancer research under the direction of Dr. Victoria Villaflor. Checks payable to Northwestern University may be mailed to: Terri Dillon, Northwestern University Feinberg School of Medicine, Development and Alumni Relations, 420 E. Superior Street, Rubloff Bldg., 9th Flr., Chicago, IL 60611.

COMMERCIAL CONSTRUCTION & RENOVATION PEOPLE

Don't miss our CCRP events:
July 15th (Monday) Boston, MA
July 25th (Thursday) Columbus, OH
August 20th (Tuesday) Nashville, TN
If you would like to sponsor a CCRP event, please contact David Corson at davidc@ccr-mag.com

ConsensusDocs Updates

ConsensusDocs and AIA Comparison
The Defense Research Institute's (DRI) Construction Law Committee has provided a comprehensive comparison of the ConsensusDocs 200 and AIA A201 General Conditions. As the two most used construction standard contract documents, DRI thought that a summary and brief analysis would help readers determine the advantages and disadvantages of the AIA A201 and ConsensusDocs approaches on important issues. The comparison (available at consensusdocs.org/new-consensusdocs-and-aia-comparison-released-by-dri) shows interesting differences, as well as similarities, on a range of issues, from contract structure, consequential damages, and financial information to indemnification and much more.

New Design-Assist Guidebook
The ConsensusDocs Coalition just released a new Guidebook for the ConsensusDocs 541 Standard Design-Assist Addendum.
The Guidebook provides tools and commentary on how to modify the contract to best meet specific project needs. As the first industry-standard contract for design-assist, the commentary is an important new resource. "Design-assist services are not a monolith, but rather a range of services that facilitates design development and improves the constructability and overall quality of design documents by getting input from key trade contractors and constructors earlier," says Brian Perlberg, Executive Director of ConsensusDocs. More information is available at consensusdocs.org/new-design-assist-guidebook-for-consensusdocs-541-design-assist-addendum.

The Current State of Construction Safety
By Raken

This article is excerpted with permission and originally appeared on Raken's blog on May 31, 2019.

It's no secret that those working on construction sites face certain risks that would not pertain to 9-5 office jobs. In fact, year after year, construction has consistently ranked as the industry with the most workplace-associated fatalities in the United States - not exactly the type of list anyone wants to top. Heavy machinery, complex tools, and dangerous heights are just a few hazards that construction workers have to deal with on a day-to-day basis, and it's up to project authorities to make sure that the jobsite is as safe as possible. The good news is that recent innovations in tools and gear have helped make construction sites much safer, and there are far fewer workplace fatalities than ever before. This is also due in part to significant increases in safety regulations and the creation of new roles that focus entirely on workplace safety.
Although there is still a long way to go before reaching the goal of zero jobsite deaths per year, there is certainly a driving force to make construction a safer industry. It wasn't even until recent times that proper construction safety gear was present on all American jobsites. Even the ubiquitous hard hat has only been around for 100 years! However, in this age in which every person and their dog owns more than one smart gadget, all of the safety gear we're now used to seeing on jobsites is getting a modern makeover. In the grand scheme of things, personal protective equipment (PPE) is a relatively new invention, and despite the construction industry's lack of urgency in adopting new technology, safety equipment such as harnesses and respiratory devices is frequently updated with improvements. These can range from simply producing gear in more sizes to fit different body types to adding sensors that wirelessly submit information to jobsite authorities, all of which yield life-saving results.

Continue reading the full article at rakenapp.com/blog/the-current-state-of-construction-safety

RCA Sustaining Sponsors: Platinum, Gold, Silver (sponsor logos)

2800 Eisenhower Avenue, Suite 210, Alexandria, VA 22314 | 800.847.5085
https://issuu.com/bocdesigninc/docs/ccr-mj19-final.v3?e=31569550/70795643
From: Ken Hagan (K.Hagan_at_[hidden]) Date: 2004-01-26 07:46:20 "Andy Little" <andy_at_[hidden]> wrote: > > I am writing a library with hopes that it may, one day, become part > of boost. Whether it ever makes it to the main libraries or not it > would be nice to fit it in to the boost namespace system. Should I > grab a boost sub-namespace e.g boost::my_whacky_library or should I > avoid the boost namespace entirely? I feel a bit 'cheeky' using the > boost namespace for something that is not 'officially' boost. Is > there any policy on this? I thought one of the original intentions of namespaces was to avoid such questions. Specifically, you could develop the library in a non-boost namespace and then move it into boost later. Clients would anticipate the change and protect themselves by using a namespace alias. I haven't done much namespace changing myself, so if there are traps around (say) argument-dependent lookup then I won't be familiar with them, but I would have thought it was a fair strategy to start with and I would hope that a complete list of "traps and their workarounds" could be compiled. Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/01/59851.php
Definition

An instance S of the parameterized data type segment_set<I> is a collection of items (type seg_item). Every item in S contains as key a line segment with a fixed direction α (see data type segment) and an information from data type I, called the information type of S. α is called the orientation of S. We use <s, i> to denote the item with segment s and information i. For each segment s there is at most one item <s, i> in S.

#include <LEDA/geo/segment_set.h>

Creation

Operations

Implementation

Segment sets are implemented by dynamic segment trees based on BB[α] trees ([90,57]). The operations key, inf, change_inf, empty, and size take time O(1); insert, lookup, del, and del_item take time O(log² n); and an intersection operation takes time O(k + log² n), where k is the size of the returned list. Here n is the current size of the set. The space requirement is O(n log n).
http://www.algorithmic-solutions.info/leda_manual/segment_set.html
C++ Basics

C++ is a compiled language. To run a program, its source code has to be processed by a compiler, which creates object files; these are then combined into an executable program by a linker. C++ programs typically contain many source code files.

C++ Character Set
The character set is the collection of letters, digits, special symbols, and white space from which programs are built: characters combine to form words, words combine to form statements, and statements combine to form programs. There are 4 types of characters in C++:
1. Alphabets - lowercase letters a, b, c, ..., y, z and uppercase letters A, B, C, ..., Y, Z.
2. Digits - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
3. Special Characters
4. White space Characters

C++ Tokens
The smallest individual unit in a program is known as a token.
1. Keywords - These words have special meaning in the language; you cannot name a variable, function, template, or type with any of these words. Each of these words already has a meaning, and the compiler will stop any attempt to give them some other meaning. For example: char, default, inline, switch, struct, etc.
2. Identifiers - Names given by the user to units of the program. An identifier may contain letters and numbers. The first character must be a letter; the underscore _ counts as a letter. Uppercase and lowercase letters are different, and all characters are significant.
3. Literals - A literal is a fixed value written directly in the source code, and it cannot be changed while the program runs. For example, in int a = 42; the value 42 is an integer literal.
4. Punctuators - Punctuators are special symbols used to give proper form to statements and expressions. For example: [] () {} , ; * = #

Exercise:
1. C++ provides various types of ............... that include keywords, identifiers, constants, strings and operators.
Answer: tokens. C++ provides various types of tokens that include keywords, identifiers, constants, strings and operators.
2. ............... are explicitly reserved identifiers and cannot be used as names for program variables or other user-defined program elements.
Answer: Keywords. Keywords are explicitly reserved identifiers and cannot be used as names for program variables or other user-defined program elements.

Program: C++ Program to Add Two Numbers

#include <iostream>
using namespace std;

int main() {
    int first, second, sum;
    cout << "Enter two integers: ";
    cin >> first >> second;

    // The sum of the two numbers is stored in the variable sum
    sum = first + second;

    // Print the sum
    cout << first << " + " << second << " = " << sum;
    return 0;
}

Output:
Enter two integers: 4 5
4 + 5 = 9

Visit:
https://letsfindcourse.com/tutorials/cplusplus-tutorials/cplusplus-basics
Is this not a problem whenever the target portlet method loads resources via the classloader - config files, images, etc.? I think it's important for code running in the target portlet (edit: or container) to be able to rely on the context classloader being an appropriate one to use to access the portlet's resources, but it seems you're saying that this is not a safe assumption in Pluto. I'm not saying Pluto shouldn't switch to SLF4J - I have no opinion on that. I do think there is a deeper issue here than logging, however. Also, I don't follow what makes a portlet container different from any other servlet container in this regard. For instance, why isn't your issue a problem for Tomcat?

Hi John,

The problem with commons-logging is different from the typical classloader usage and handling within portlets and the portlet container, and I'll try to explain. Portlets loading resources (including classes) typically do so through their own (webapp) classloader; nothing extraordinary here or different from plain web applications. So you are correct, and you can rely on the context classloader to access the portlet resources. Pluto (or better: the web container) handling is safe to be used for that. When a portlet invokes a portlet container method, however, it most likely will mean a "cross-context" invocation, because typically (depending on your embedding portal setup) the portlet container code will reside in another web application (the portal). If that happens, it's the responsibility of the portlet container to determine the right classloader to use (either from the portlet application or the embedding portal). A good example of this is the PortletEvent payload handling. When a portlet sets a new PortletEvent using a complex payload, the (Pluto) portlet container will unmarshall that payload using JAXB, for which JAXB will be told to use a different classloader (the one from the portlet application in this case).
These kinds of cross-context, multiple-classloader situations are known and recognized, and explicit handling is in place to deal with them. For logging configuration, however, things are a bit different. First of all, logging is usually configured using a static initializer, e.g. private static final Log log = LogFactory.getLog(<classname>). Such static initializers are "executed" as soon as a class is accessed/loaded, so on demand, by the loading classloader (typically the classloader of the class referencing the to-be-loaded resource/class). If the portal application would, during startup, preload every possible class and resource from its own web application, all things would be fine, as then you would be guaranteed the expected classloader to be used. However, that's impractical, undesirable, and not doable in practice. An alternative to "fix" this commons-logging static initialization could have been wrapping it and temporarily setting the current classloader to that of the current class, somewhat similar to how we deal with the PortletEvent payload unmarshalling over JAXB for instance, but then the other way around. But that would just be a "workaround" for a wrong usage/pattern with respect to how log configuration is intended to be used. The static, compile-time binding as applied by slf4j is much more "natural" and does exactly what you expect to happen in this case, and it allows us to use logging configuration for the container (and portal) classes just as for any other class and application. All of this is not so much a problem of using a portlet container, but of using cross-context web application interactions in a web server as used/required for portals in general. Tomcat is no different in this respect than any other servlet container, and I actually "hit" this problem while testing against Tomcat.
However, Tomcat in general is "easier" to use than, for instance, JBoss or WebSphere, as those web servers by default use a PARENT_FIRST web application classloader scheme, contrary to the advised (and IMO required) recommendation of the servlet specification itself (see the last paragraph of section SRV.9.5 of Servlet Spec 2.4). As a consequence, when deploying a portal (like Pluto or Jetspeed) and your own portlet applications on JBoss or WebSphere, you always have to make sure to override this default to use a PARENT_LAST (or CHILD_FIRST) classloader scheme to ensure the expected behavior (at least, from a portlet/portal POV).

Hi Ate,

I've been following this issue since it popped up on the Commons Dev mailing list. Would you mind explaining in more detail what problems you are experiencing using Commons Logging, due to the differences in class loading described above? Is it the selection/configuration of which logging implementation (Log4J, Java Util Logging, etc.) to use that is the problem? Or is it something else?

Hi Dennis,

I wasn't aware of the discussion on the commons-dev list, but I've just subscribed and responded there. As hopefully will be clear from my explanation (there), it has nothing to do with the actual logging implementation choice but only with the way CL uses the current ContextClassLoader for selecting it. For anyone else interested (and it is an interesting and already long thread), here is a link on Nabble:

Migration to slf4j has been completed. What I just noticed from reviewing the commit message is that in this commit another change was accidentally also merged in, which I intended to do separately. This concerns two things:
- Testing Pluto/Jetspeed on WebSphere showed that the stax-api-1.0.1 jar is invalidly packaged, as it incorrectly also contains the javax.xml.namespace.QName class, causing JAXB to break on WebSphere 6.1.
- The stax-api-1.0-2.jar is clean and AFAIK otherwise the same (coming from Sun, while the stax-api-1.0.1 jar comes from Codehaus).
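The static-initializer timing that the thread turns on can be demonstrated with a minimal, self-contained sketch (class names are hypothetical): a static initializer runs exactly once, at the first active use of a class, under that class's defining classloader, which is why anything it captures is fixed long before per-request context-classloader switching can influence it.

```java
// Sketch: Holder's static block runs once, when the class is first
// actively used, not when the program starts and not on later uses.
public class StaticInitDemo {
    static final StringBuilder events = new StringBuilder();

    static class Holder {
        // Runs exactly once, at class initialization time.
        static { events.append("loaded;"); }
        static int value = 42;
    }

    public static void main(String[] args) {
        events.append("before;");
        int v = Holder.value;   // first touch triggers the initializer
        int w = Holder.value;   // no second initialization happens here
        events.append("after;");
        System.out.println(events);  // prints: before;loaded;after;
        System.out.println(v + w);   // prints: 84
    }
}
```

A logger created in such an initializer is therefore bound using whatever lookup rules apply at class-load time, which is the behavior the slf4j migration sidesteps.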
https://issues.apache.org/jira/browse/PLUTO-553
Opened 6 years ago. Last modified 2 years ago.
#6024 infoneeded feature request: Allow defining kinds alone, without a datatype

Description

Sometimes we want to define a kind alone, and we are not interested in the datatype. In principle having an extra datatype around is not a big problem, but the constructor names will be taken, so they cannot be used somewhere else. A contrived example:

data Code = Unit | Prod Code Code

data family Interprt (c :: Code) :: *
data instance Interprt Unit = Unit1
data instance Interprt (Prod a b) = Prod1 (Interprt a) (Interprt b)

We're only interested in the constructors of the data family Interprt, but we cannot use the names Unit and Prod because they are constructors of Code. The suggestion is to allow defining:

data kind Code = Unit | Prod Code Code

such that Code is a kind, and not a type, and Unit and Prod are types, and not constructors. Note that using "data kind" instead of just "kind" means the word "kind" does not have to be a reserved keyword. You could also think you would want to have datatypes that should not be promoted:

data K
data type T = K

But I don't see a need for this, as the fact that the K constructor is promoted to a type does not prevent you from having a datatype named K.

Change History

comment:2 Is it clear that being able to use the same names for different things won't cause too much confusion? I guess that we manage fine with constructors and types sharing names, so perhaps there is no problem with types and kinds. Are there any plans to have support for "sorts" and above in GHC? If so, would each level have its own namespace?

comment:3 I don't think the language of sorts will be extended; if anything, types and kinds might be collapsed into a single level. That would also destroy the possibility of doing what is asked in this ticket, though.

comment:4 I'm particularly interested in using * in a kind definition, which should be possible with a distinct syntax for kind definitions.

comment:6 Moving to 7.10.1.

comment:7 Iavor, you worked on this for a while, didn't you? What's the status?

comment:9 Moving to 7.12.1 milestone; if you feel this is an error and should be addressed sooner, please move it back to the 7.10.1 milestone.

comment:10 Milestone renamed.

comment:11 Is this feature request still possible to implement, now that "kinds and types are the same" (6746549772c5cc0ac66c0fce562f297f4d4b80a2)? See also comment:3.

comment:12 Yes, though it's a shade less useful. Types and kinds are indeed the same now, but terms are not. I see this ticket as a request for an ability to define a datatype whose constructors are defined only in types, never in terms. From a user standpoint, this ticket is largely unaffected by type=kind. It's a shade less useful because you can now say

data Foo = MkFoo1 Type  -- Type is a.k.a. *
         | MkFoo2 Int

so some of the motivation for kind-only has gone away. A kind-only definition would still prevent clients from constructing the object at runtime, though. A more detailed discussion is presented elsewhere.
https://ghc.haskell.org/trac/ghc/ticket/6024
Memory leak testing
From OLPC

These are instructions on how to test for memory leaks in Sugar.

Install guppy

If you don't already have guppy on the XO you want to test, you'll have to download and install it. In Terminal, as root:

wget
rpm -i guppy-0.1.9-1.i386.rpm

Edit /usr/bin/sugar-shell with nano /usr/bin/sugar-shell (or whatever your favorite text editor is) and add a line that reads

import guppy.heapy.RM

Then restart Sugar (ctrl-alt-erase).

Starting a heapy session

ssh into the XO from another computer. We'll call this the Monitor Computer, and it should be separate from the XOs you are testing. When we observe Sugar, it is best not to use it for anything else than to execute the test case. From your Monitor computer, in the shell you're ssh'd into your XO with:

su - olpc
python -c "from guppy import hpy;hpy().monitor()"
sc 1
int

Running a test

From your monitor computer:

hp.setref()

Then, on your XO, do whatever action you're testing memory leaks for. (For instance, start and then close an Activity.) Go back to your monitor computer and type:

hp.heap()

Reading the results

You'll see a table that looks like this:

>>> hp.heap()
Partition of a set of 1019 objects. Total size = 89364 bytes.
 Index  Count   %     Size   %  Cumulative  %  Kind (class / dict of class)
     0     25   2    13000  15       13000 15  dict of sugar.graphics.animator.Animator
     1     12   1     6240   7       19240 22  dict of sugar.graphics.icon._IconBuffer
     2    156  15     5616   6       24856 28  types.MethodType
     3      3   0     5016   6       29872 33  dict of sugar.graphics.palette.Palette
     4     82   8     4668   5       34540 39  str
     5    106  10     4192   5       38732 43  tuple
     6      8   1     4160   5       42892 48  dict of sugar.graphics.palette.MouseSpeedDetector
     7     55   5     3776   4       46668 52  list
     8      2   0     3344   4       50012 56  dict of view.BuddyMenu.BuddyMenu
     9      2   0     3344   4       53356 60  dict of view.palettes.CurrentActivityPalette
<80 more rows. Type e.g. '_.more' to view.>

hp.heap() prints a summary of the contents of the python heap, which is where python places objects we ask it to create.
The most interesting part is this line:

Partition of a set of 1019 objects. Total size = 89364 bytes.

This would mean that, since the last time we called hp.setref(), 1019 new objects have been placed on the heap and they take up 89364 bytes in total. This doesn't necessarily mean we are leaking all these 1019 objects, though - but if you repeat this procedure several times...

1. hp.setref()
2. do the action you're testing in Sugar
3. hp.heap()

...and you see the amount of bytes growing steadily, we may have a bug. Look at anything that is over 10kb or so.

Test variants

Repeated leak testing

A more definite test for leaks is the following:

1. hp.setref()
2. do the action you're testing in Sugar 9 times
3. hp.heap()

...and then look for new objects in quantities that are multiples of 9. Those will probably be leaks.

Without collaboration

If you run a test multiple times and get results with a lot of variance, it may be due to collaboration activity (especially if you're in a radio-noisy environment with many XOs). You can solve this by disabling radio (in the Sugar Control Panel) and using a USB-to-Ethernet dongle to give the XO a wired connection to ssh into.
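As an aside, the same repeat-N heuristic can be sketched without guppy, using the standard library's tracemalloc module (a different tool from heapy, but the same reasoning: run the action N times and check whether net allocations grow in proportion):

```python
import tracemalloc

_store = []  # simulated leak: objects appended here are never released


def leaky_action():
    _store.append(bytearray(1000))


def net_growth(action, repeats=9):
    """Run `action` `repeats` times and return the net allocated bytes,
    mirroring the 'repeat 9 times, look for multiples of 9' idea above."""
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    for _ in range(repeats):
        action()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()
    stats = after.compare_to(before, "lineno")
    return sum(s.size_diff for s in stats)


# A genuine leak keeps growing with the number of repeats.
print(net_growth(leaky_action) >= 9 * 1000)
```

Unlike heapy's monitor, this runs inside the process under test, so on an XO you would trigger it from the code path you are exercising rather than from a second machine.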
http://wiki.laptop.org/go/Memory_leak_testing
Debugging ASP.NET
Jonathan Goodyear
Brian Peek
Brad Fox

Publisher: Financial Times Prentice Hall
First Edition: October 19, 2001
ISBN: 0-7357-1141-0, 376 pages

New Riders - Debugging ASP.NET made by dotneter@teamfly

FIRST EDITION: October, 2001-110030
06 05 04 03 02 7 6 5 4 3 2 1

Interpretation of the printing code: The rightmost double-digit number is the year of the book's printing; the rightmost single-digit number is the number of the book's printing. For example, the printing code 02-1 shows that the first printing of the book occurred in 2002.

Printed in the United States of America

Trademarks

Warning and Disclaimer

This book is designed to provide information about Debugging ASP.NET.
Brad Fox New Riders - Debugging ASP.NET made by dotneter@teamfly Debugging ASP.NET About the Authors About the Technical Reviewers Acknowledgments Jonathan Goodyear Brian Peek Brad Fox Tell Us What You Think Introduction Who Will Benefit from This Book? Who Is This Book Not For? Organization of This Book Source Code and Errata Conventions I: ASP Debugging Basics 1. Conceptual Framework Understanding Server-Side Events New Language Options Summary 2. Traditional Approaches to Debugging in ASP Structure of Pre–ASP.NET Pages Problems and Shortcomings Old Strategies That Still Do the Job Well An ASP Debug Object Summary 3. Debugging Strategies Tier Sandboxes Divide and Conquer Simple Before Complex Turtle Makes the Wiser Summary 4. Code Structure That Eases Debugging Code Partitioning Control-of-Flow Guidelines New Riders - Debugging ASP.NET made by dotneter@teamfly Structured Exception Handling Global Exception Handling Summary II: ASP.NET Debugging Tools 5. Conditional Compiling What Is Conditional Compiling? Other Preprocessor Directives Summary 6. Tracing Configuration Trace Output Setting Trace Messages Trace Viewer Tracing via Components Tips for Using Trace Information Summary 7. Visual Studio .NET Debugging Environment Introduction to Features Attaching to Processes Setting It All Up Inline Debugging of ASP.NET Pages Inline Debugging of Components Remote Debugging Summary 8. Leveraging the Windows 2000 Event Log The Windows 2000 Event Log Defined Web Applications Can Use the Event Log The System.Diagnostics Event Log Interface Custom Event Logs Handling Different Types Of Events Access Event Log Data via the Web Summary III: Debugging the New ASP.NET Features 9. 
Debugging Server-Side Controls Creating the Project Debugging the Control Summary Debugging Web Services Web Services Stumbling Blocks Error Messages Problems Working with XMLSerializer Working with Errors in SOAP Error Returning Certain Types of Data Working with Streams Tools Basic Web Services Debugging Problems Deploying Your Web Service? Summary 14.NET made by dotneter@teamfly 10. COM+ Issues Role-Based Security Component Services Microsoft Management Console .NET Components Versus Registered COM Components Summary 15. Caching Issues and Debugging Output Caching The Caching API Summary IV: Debugging Related Technologies 13. Debugging . Debugging User Controls User Control Basics Adding Properties and Methods Dynamic User Controls Summary 12. Debugging Data-Bound Controls Data-Bound Controls Debugging Templates Namespace Issues XML Binding Summary 11.NET Components and HttpHandlers The Component Interfaces HttpHandlers State-Management Issues .New Riders .Debugging ASP. Data Namespace Catching SQL Errors New Connection Components Issues with the DataReader Class Working with Transactions Error Codes and How to Debug Them Common Pitfalls SQL ADO. Debugging ADO.NET made by dotneter@teamfly Transaction Issues Summary 16.NET Moving from ASP to ASP.New Riders .NET Moving from VBScript to Visual Basic Opting for C# Summary .NET Understanding the System.Debugging ASP.NET Objects Summary A.NET Objects Versus OleDb ADO. Issues that Arise When Migrating from ASP to ASP. He has also worked as a consultant for PricewaterhouseCoopers and as the Internet architect for the Home Shopping Network’s e-commerce presence ( Riders .com). When not hunched over a keyboard. New York.Debugging ASP. New York.rapiddevelopers. wireless applications.angrycoder.NET made by dotneter@teamfly About the Authors Jonathan Goodyear began his career as a software developer at Arthur Andersen after receiving a degree in accounting and information technology from Stetson University. 
focusing on developing web applications with ASP. ASPSoft. Brian Peek is a senior software developer with Rapid Application Developers. Inc. He holds a bachelor’s degree in computer science from Union College in Schenectady. the first eZine written completely in ASP.ganksoft.com) and is a charter member of the Visual Studio 6 MCSD certification.com/) located in Troy. and any other projects that happen to come along. Additionally. Jonathan is a contributing editor for Visual Studio Magazine (. a small video game–development company dedicated to producing high-quality games for video game consoles using only freely available tools and documentation. Florida. web-based applications. he is the owner and lead programmer of Ganksoft Entertainment (. He specializes in developing n-tiered applications.hsn. his hometown.com/). he works as an independent consultant through his consulting practice.NET. Jonathan likes to spend time going to theme parks with his family near his home in Orlando. When not coding for work or coding games that he wishes . He is also the founder and editor of angryCoder (. Presently. (). Brad Fox started programming in BASIC at the age of 12. or playing his latest favorite video game. Diane feels that her biggest strength as an online teacher is her ability to present the student materials with life skills and to help her students understand the material so that they can process it and use it. .com or brian@rapiddevelopers. Brad joined the Army right out of high school and served in the 82nd Airborne Division. About the Technical Reviewers These reviewers contributed their considerable hands-on expertise to the entire development process for Debugging ASP. CalCampus. Currently.New Riders . Their feedback was critical to ensuring that Debugging ASP. Diane is also an adjunct professor for Mary Baldwin College. he can often be found practicing magic.NET. learning to play piano. 
Since then he has gone on to become a Microsoft Certified Solution Developer.NET made by dotneter@teamfly would be published commercially.com. computers and technology have played an integral part in his life. these dedicated professionals reviewed all the material for technical content. Since then. where he spends most of his time developing cutting-edge technology for the financial industry. Capella University. Diane Stottlemyer is presently an online teacher for Learning Tree.Debugging ASP. and flow. Franklin University. Inc.NET fits our reader’s need for the highest-quality technical information. Brad is CEO of Digital Intelligence. organization. She enjoys teaching online and feels that her diverse background enables her to teach many different courses and add variety to them. and ElementK.. He can be reached at brian@ganksoft. As the book was being written. Connected University. Deciding that contracting was the way to go. degrees in computer science from Lacrosse University. and Lloyds Bank. and JavaScript. entitled Automated Web Testing Toolkit: Expert Methods for Testing and Managing Web Applications. She is excited about the book and feels that it will be a great addition to anyone’s library. he started a career that brought him into contact with quite a number of blue-chip companies: Shell Chemicals.Debugging ASP. and advanced computer skills. . More recently. he joined RDF Consulting. he got in through the back door after studying chemistry and chemical engineering with Pilkington Brothers. While writing some of their financial systems in CICS/COBOL and other obscure languages.NET applications. will be released this month. Steve Platt has been around computers for the last 17 years. a well-known glass manufacturer. There. including a huge data transfer from legacy systems to DB2. All these companies were mainframe users. new software. she just completed her doctorate in computer science. She teaches several course on testing. 
a Brighton-based organization and Microsoft solution provider that specializes in the e-commerce needs of financial organizations such as Legal & General and Northern Rock. Perl. management. and James Sprunt Community College. Steve uses his skills as a configuration/build manager and Oracle DBA. but the projects were varied.New Riders . production support fire-fighting. Diane’s first book. Steve has spent the last few years in the Internet arena. She also just signed a contract for her second book on testing . programming. on Iceland online shopping and the shopping portal Ready2Shop using UNIX.NET made by dotneter@teamfly Piedmont Valley Community College. She is an avid reader and keeps up on new technology. She believes that education is the door to future opportunities and that you are never too young or old to learn. and new hardware. Oracle 8i.D. he rose from junior programmer to Senior Analyst/Programmer. American Express. such as Gener/OL and FOCUS. Shell Oils. and some data warehousing using Prism. working with Victoria Real (who created the UK’s BigBrother site). She received her undergraduate degree from Indiana University and received masters and Ph. Diane is a professor of computer science and a Certified Software Test Engineer. It has been a pleasure to work with you and the rest of the VSM staff (particularly Patrick Meader and Elden Nelson) over the past couple of years. Paula Stana.Debugging ASP. At Visual Studio Magazine (formerly Visual Basic Programmer’s Journal). I would like to thank Nick Ruotolo for taking a chance on a young. and John Wagner at Arthur Andersen for patiently mentoring me while I was a development rookie. • • • • • • • • Acquisitions Editor: Karen Wachs Acquisitions Editor: Deborah Hittel-Shoaf Development Editor: Chris Zahn Managing Editor: Kristy Knoop Copy Editor: Krista Hansing Indexer: Chris Morris Compositor: Jeff Bredensteiner Technical Reviewers: Diane Stottlemeyer. My co-authors. With your help. Without your help. 
I would like to thank Darren Jensen.New Riders . I would like to thank Susannah Pfalzer. kid. He would dearly love to emigrate to Australia. but ambitious. I would like to thank Robert Teixeira for filling in the gaps in my knowledge while I was at PricewaterhouseCoopers. From The Home Shopping Network. You have an amazing gift for distilling complex concepts into understandable information. Acknowledgments We would all like to thank the following people at New Riders Publishing for their tireless work towards making this book a reality. I would have become an employment casualty early in my career.NET made by dotneter@teamfly Steve has a wife and daughter. Steve Platt Jonathan Goodyear Many people have helped lift me to where I am today. and he is interested in fitness and the martial arts. Brian Peek and Brad Fox. I would also like to thank Ken McNamee for lending me the server that I used to test this book’s code. You guys stepped in and did a fantastic job. my writing has improved tenfold. . He is passionate about motorcycling and new technology. and he can be found on many an evening coding into the early hours. deserve a special thank-you for their help in bringing my book idea to life. would not be possible without the help and support of quite a few people. Brian Peek This book. Thanks to my parents for providing me with daily encouragement while writing. Chuck Snead. Danette Slevinski. Patrick Tehan. At the very top of that list would be Jonathan for giving me the opportunity to co-author this book with him. I thank Arden Rauch for giving me confidence in my abilities at an early age. J . Ed O’Brien. thank you for your endless patience while I pursue my career dreams. Clare Mertz-Lee. they seem so cliché. Joy. for giving me the opportunity to co-author with him. Your relentless pursuit of your Olympic dreams in gymnastics has been an inspiration to me. 
a great friend who has guided me through some tough times and who continues to be highly mellifluous (and Canadian). Justin Whiting. Jonathan. Michael Kilcullen. Patricia Hoffman. And finally. but for making me cookies. First of all. Cherie. none of this would have been possible. You guys are the best! To my wife. Thanks to Bob Thayer for putting up with my anal retentiveness on graphic layouts. This has been a dream of mine for half my life. Thank you to Matthew Kennedy for being a great friend and sharing my cynicism toward the planet.NET made by dotneter@teamfly On a personal level. Robert Sharlet. Dionne Ehrgood. and Damian Jee. I have to thank Jonathan Goodyear—yes. and for many wonderful times and memories that I will always treasure. Jason Sawyer. a short hello and thank you to my friends that have supported me in my endeavors: Adina Konheim. for opening my eyes to the wonders of fatherhood. A big thank-you to my grandparents for putting me through college. They are Stephen Ferrando. James Rowbottom. and the rest of the RAD staff. I would like to thank some of my lifelong friends for everything that they have done for me throughout the years. David Wallimann. the author of this book. Jon Rodat. I would like to thank my 1-year-old son. Thank you to Mark Zaugg. A thank you to Jennifer Trotts not only for coming back. Jamie Smith. Brad Fox Every time you read these things. Thank you.New Riders . Girish Bhatia. Andy Lippitt. and he has helped me to accomplish it. I missed you. I would like to thank Stacy Tamburrino for teaching me a great deal about myself. Without you.Debugging ASP. and life in general. Lastly. but what can you do? I will try to make mine interesting. I would like to thank my sister-in-law. CJ. email.Debugging ASP. and that due to the high volume of mail I receive. Rob Howard. what areas you’d like to see us publish in. You can fax. A few individuals at Microsoft that were instrumental in making this book a success. 
I would like to thank David Waddleton for helping me with Chapter 13. Fax: Email: Mail: Stephanie Wall Associate Publisher New Riders Publishing 201 West 103 rd Street Indianapolis. I love ya. Susan Warren.NET Framework. you are the most important critic and commentator. We value your opinion and want to know what we’re doing right.NET made by dotneter@teamfly Next. “Debugging Web Services. an incredible amount of industry attention has been paid to Microsoft’s new . or write me directly to let me know what you did or didn’t like about this book—as well as what we can do to make our books stronger.” I would also like to thank Shannon McCoy for his support and the knowledge that I was able suck from his brain. It is the platform that will drive Microsoft’s .New Riders .wall@newriders. I welcome your comments. he’s the best friend anyone could have. and Dmitry Robsman. As the Associate Publisher for New Riders Publishing. Not to mention.com Introduction Over the last year or so. Please note that I cannot help you with technical problems related to the topic of this book. and any other words of wisdom you’re willing to pass our way. Thanks to all of you for your help on this book. When you write. IN 46290 USA 317-581-4663 stephanie. Shaykat Chaudhuri. buddy. Tell Us What You Think As the reader of this book. what we could do better. I will carefully review your comments and share them with the author and editors who worked on the book. These include Scott Guthrie. please be sure to include this book’s title and author as well as your name and phone or fax number. I might not be able to reply to every message. NET and demonstrates how to use them effectively. The reader should be familiar with developing ASP. comes increased complexity. This book is designed to address many of the problems and issues that developers will most assuredly face as they begin developing web applications using ASP. mentoring junior-level developers. 
technology direction for at least the next five years. ASP.NET is the next generation of the Active Server Pages web-development platform, and it represents a quantum leap forward with respect to its feature set and scalability. The web portion of the multidimensional .NET Framework is ASP.NET. With so much at stake, developers have been clamoring to get their hands on anything and everything .NET. With its newfound power, however, comes increased complexity.

This book is designed to address many of the problems and issues that developers will most assuredly face as they begin developing web applications using ASP.NET. There is simply no way to account for all possible errors and bugs that can be encountered in an ASP.NET web application. Instead, this book tackles the issues and problems associated with each aspect of ASP.NET. Specifically, it shows potential error messages and explains how to fix their causes. This book also introduces the myriad new debugging tools that are available in ASP.NET and demonstrates how to use them effectively. Finally, this book gives solid advice on how to build bug-free web applications.

Who Will Benefit from This Book?

The intended audience for this book is intermediate to experienced developers and project managers. The reader should be familiar with developing ASP.NET web applications with either Visual Basic .NET or C# (all code examples are provided in both languages). The persons responsible for establishing project coding standards, mentoring junior-level developers, and debugging web applications will get the most benefit from this book.

Some of the key skills that the reader will learn from this book are listed here:

• How to write code that reduces the chance of bugs
• Solid strategies for debugging large web applications
• How to leverage the many debugging tools available in ASP.NET, such as tracing, event logging, and conditional compiling
• How to track down bugs associated with specific parts of ASP.NET, such as User Controls, caching, and web services
• Some of the caveats and issues common to migrating traditional ASP web applications to ASP.NET

When you finish this book, you should be confident enough to find and eliminate any bug that you encounter in your ASP.NET web application.

Who Is This Book Not For?

This book is not for junior-level developers or for developers who are not relatively comfortable developing web applications with ASP.NET. It is not an ASP.NET tutorial; many other books on the market accomplish this task very well. Likewise, this book assumes that the reader is familiar with either Visual Basic .NET, C#, or both. The reader will not be able to understand and use the code examples without this knowledge.

Organization of This Book

The book parts and chapters are outlined in the next several sections.

Part I: ASP Debugging Basics

Chapter 1, "Conceptual Framework," explains some of the new concepts introduced with ASP.NET, such as server-side events, the ASP.NET page life cycle, and the new language options available.

Chapter 2, "Traditional Approaches to Debugging in ASP," covers some of the approaches used to debug traditional ASP web applications. It highlights several of the problems and shortcomings with the limited tools that were available.

Chapter 3, "Debugging Strategies," outlines several plans of attack for debugging ASP.NET web applications. This includes debugging application tiers individually and distilling complex code into smaller, more manageable pieces.

Chapter 4, "Code Structure That Eases Debugging," gives advice on how to build code that is both less likely to contain bugs and easier to debug when bugs creep in. Topics include code partitioning, control-of-flow guidelines, structured exception handling, and global exception handling.

Part II: ASP.NET Debugging Tools

Chapter 5, "Tracing," shows you how to use the new TraceContext object available in ASP.NET and interpret its results. Trace configuration at both the page and the application levels is covered, as is using the Trace Viewer utility.

Chapter 6, "Conditional Compiling," covers how to take advantage of function attributes and preprocessor directives to dynamically add debugging code to your web applications.

Chapter 7, "Visual Studio .NET Debugging Environment," introduces all the powerful debugging features packed into the Visual Studio .NET IDE. Some of the topics covered include how to set breakpoints, the watch window, the call stack, and how to attach to processes.

Chapter 8, "Leveraging the Windows 2000 Event Log," explains how to write data to the Windows 2000 Event Log. Some of the things you will learn in this chapter include how to create custom event logs, how to handle both expected and unexpected events, and how to access the contents of the Windows 2000 Event Log via the web.

Part III: Debugging the New ASP.NET Features

Chapter 9, "Debugging Server-Side Controls," takes you through the process of creating a custom server control, outlining many of the issues that you might encounter. Practical advice and solutions for these issues are provided. The basics are covered, as are properties, methods, and events. The chapter also discusses issues with interfaces and state management.

Chapter 10, "Debugging Data-Bound Controls," takes a close look at some of the common mistakes that can be made while using data-bound server controls. DataGrid, DataList, and XML data binding are a few of the topics covered.

Chapter 11, "Debugging User Controls," covers many of the issues that you might encounter while building user controls, such as page declaratives, output caching, and dynamic user control issues.

Chapter 12, "Caching Issues and Debugging," delves into the types of issues that crop up when leveraging caching in ASP.NET. Highlights of this chapter include cache dependencies, the caching API, and expiration callbacks.

Part IV: Debugging Related Technologies

Chapter 13, "Debugging Web Services," uncovers and offers solutions for many of the problems that you might encounter while building and implementing web services. Some of the other topics covered are the XMLSerializer, SOAP, UDDI, inline ASP.NET page debugging, and declarative attributes.

Chapter 14, "Debugging .NET Components and HttpHandlers," explains how to use the StackTrace and TextWriterTraceListener objects to track down bugs in .NET components and HttpHandlers.

Chapter 15, "Debugging ADO.NET," helps you interpret ADO.NET error messages, as well as track down bugs associated with each of the new ADO.NET objects. Database permissions issues are also briefly discussed.

Chapter 16, "COM+ Issues," covers the problems and issues that can occur when setting up components to leverage COM+. It also covers some of the runtime anomalies that might occur in the context of COM+.

Appendix

Appendix A, "Issues That Arise When Migrating from ASP to ASP.NET," is a collection of issues that you are likely to run into while porting your traditional ASP web applications to ASP.NET. The new declaration syntax for script blocks is discussed along with many other useful topics.

Source Code and Errata

All the source code provided in this book can be downloaded from www.debuggingasp.net. Also available at the site are any corrections and updates to the text. The errata will be updated as issues are discovered and corrected.

Conventions

This book follows these typographical conventions:

• Listings, variables, functions, and other "computer language" are set in a fixed-pitch font—for example, "you should note the addition of the RUNAT="server" parameter added to each control."
• Code continuation characters are inserted into code when a line is too wide to fit into margins.

Part I: ASP Debugging Basics

1 Conceptual Framework
2 Traditional Approaches to Debugging in ASP
3 Debugging Strategies
4 Code Structure That Eases Debugging
Chapter 1. Conceptual Framework

ASP.NET IS THE NEXT STAGE IN THE Active Server Page evolutionary process. Is it the same as ASP? Not even close. It extends the existing ASP framework into a whole new realm of power. Is ASP.NET easy to use? Yes. But do not fret: in this chapter, we discuss some of the new features of the ASP.NET framework and show how it differs from the ASP framework that you currently know and love—or at least know and use. First we focus on server-side events, and then we discuss new language options for use with the .NET framework.

Understanding Server-Side Events

Server-side events are one of the fundamental changes in the ASP.NET architecture. These server-side events provide for a programming model that is very similar to that of a traditional event-model Visual Basic application. Now let's see what events are available to you on the server and how you can exploit them to your advantage.

Differences from Client-Side Events

The major difference between client-side events and server-side events, obviously, is that server-side events are handled on the server. Client-side events are part of the DHTML standard. They can be scripted in either JavaScript or VBScript, and they are embedded directly in the HTML of your page. These events allow you to trap button clicks, tab out of text boxes, detect the presence of the mouse cursor over certain controls, and handle other events that involve a user changing something on the client-side interface.

Server-side events, by contrast, are handled on the server. With ASP.NET, you will be able to harness the power of a true programming language, such as Visual Basic .NET or C#, instead of VBScript. The code for these events is generally stored in a separate file called a code-behind file. The code-behind file simply contains everything that used to be contained within the <% %> tags in your old ASP pages; however, it is strictly code without any HTML. This allows for separation between your client and server code. It also provides for far more structured code, compared to older versions of ASP, making things much more organized and structured. Now you can use ASP in a true three-tier architecture. This adds a great degree of flexibility and power to the existing ASP framework.

Types of Server-Side Events

The types of events available on the server are very basic. You will be able to respond to button clicks, text box changes, and drop-down box changes. These are handled on the server instead of the client, which enables you to trap the events on the server and respond to them appropriately. Because a trip to the server is required to activate these events, only basic events like these are available. Imagine having to make a trip to the server every time your mouse moved 1 pixel!

Listing 1.1 shows a very simple ASP page that contains a form with a text box and a Submit button.

Listing 1.1 Simple ASP Page with a Form

<HTML>
<BODY>
<FORM ACTION="form.aspx" RUNAT="server">
<INPUT TYPE="text" ID="txtText" RUNAT="server">
<INPUT TYPE="submit" ID="btnSubmit" RUNAT="server">
</FORM>
</BODY>
</HTML>

You will notice that this looks like some plain old HTML, but you should note the addition of the RUNAT="server" parameter added to each control. Take a look here at what one of these events looks like on the server. Listing 1.2 shows this in C#.

Listing 1.2 Server-Side Events in C#

public void btnSubmit_ServerClick(object sender, EventArgs e)
{
    txtText.Value = "You clicked the Submit button!";
}

protected void txtText_TextChanged(object sender, EventArgs e)
{
    Response.Write("TextChanged: " + txtText.Value);
}

Listing 1.3 shows the same event examples in Visual Basic.

Listing 1.3 Server-Side Events in Visual Basic .NET

Private Sub btnSubmit_ServerClick(sender As Object, e As EventArgs)
    txtText.Value = "You clicked the Submit button!"
End Sub

Sub txtText_TextChanged(sender As Object, e As EventArgs)
    Response.Write("TextChanged: " & txtText.Value)
End Sub

In each of these examples, two separate events are being handled on the server: a Submit button being clicked and the value of a text box being changed. Each event that is handled on the server takes the Object and EventArgs arguments. The object is simply the object that sent you the event (for example, the button, the text box, and so on). The EventArgs argument contains any event- or object-specific arguments that are relevant to the object being acted upon.

The life cycle of an ASP.NET page is similar to that of an ASP page. However, you can hook into a few new events as well. Table 1.1 runs down the events of an ASP.NET page.

Table 1.1. Events in the Life Cycle of an ASP.NET Page

Page_Init: This is the first event to be called in the process. Here you should provide any initialization code required for instantiation of the page.
Page_Load: At this point, control view state is restored. You can now read and update control properties.
Page_PreRender: This event is raised before any page content is sent to the browser.
Page_Unload: This is the very last event to be raised. At this point, you should clean up everything you have used, including closing connections and dereferencing objects.
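The life-cycle events in Table 1.1 are typically handled in a code-behind class. The following is a minimal sketch of what that might look like in C#; the class name, control name, and connection field are our own hypothetical examples, not listings from this book.

```csharp
// Hypothetical code-behind sketch; WebForm1, txtText, and conn are assumed names.
public class WebForm1 : System.Web.UI.Page
{
    protected System.Web.UI.HtmlControls.HtmlInputText txtText;
    private System.Data.SqlClient.SqlConnection conn;

    protected void Page_Init(object sender, System.EventArgs e)
    {
        // First event in the process: initialization code goes here.
        conn = new System.Data.SqlClient.SqlConnection();
    }

    protected void Page_Load(object sender, System.EventArgs e)
    {
        // View state has been restored; controls can now be read and updated.
        if (!IsPostBack)
            txtText.Value = "initial value";
    }

    protected void Page_Unload(object sender, System.EventArgs e)
    {
        // Very last event raised: clean up connections and dereference objects.
        if (conn != null)
            conn.Dispose();
    }
}
```

Each handler fires in the order listed in Table 1.1, which gives you well-defined places to set up and tear down resources for every request.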
In terms of the life cycle of your ASP.NET application, quite a few new events can be used inside the global.asax file. The events that can be hooked into in global.asax are described in Table 1.2.

Table 1.2. Events That Can Be Used Inside global.asax

Application_Start: Raised only once, when the application starts for the first time. Initialize application-wide things here.
Session_Start: Raised when a new user's session begins.
Application_BeginRequest: Raised every time a new request is received at the server.
Application_AuthenticateRequest: Raised when the request is to be authenticated. You can provide custom authentication code here.
Application_AuthorizeRequest: Raised when the request is to be authorized. Again, you can provide custom authorization code.
Application_ResolveRequestCache: Raised to enable you to stop the processing of requests that are cached.
Application_AcquireRequestState: Raised to enable you to maintain state in your application (session/user state).
Application_PreRequestHandlerExecute: The last event to be raised before the request is processed by the ASP.NET page or a web service.
Application_PostRequestHandlerExecute: Raised after the handler has finished processing the request.
Application_ReleaseRequestState: Raised to enable you to store the state of your application.
Application_UpdateRequestCache: Raised when processing is complete and the page is going into the ASP.NET cache.
Application_EndRequest: Raised at the very end of your request.
Application_PreSendRequestHeaders: Raised before HTTP headers are sent to the client. Here you can add custom headers to the list.
Session_End: Raised when a user's session times out or is reset. You could call a logging routine here.
Application_End: Raised when your application shuts down or is reset. Clean up here.
Application_Error: Raised when an unhandled exception is raised (those not caught in a try/catch block).

These events also provide excellent places to hook into the ASP.NET processing chain for debugging, as you will see in some later chapters. Now you can't say that you don't have enough control of what happens when and where in your ASP.NET application.

New Language Options

ASP.NET gives you unprecedented flexibility in that it enables you to use any .NET-enabled language as your server-side code. Out of the box, you'll be able to use Visual Basic, JScript, C#, and C++ with managed extensions as your server-side language. Note that we said Visual Basic, not VBScript. This offers a huge benefit in a couple areas.

First, you gain the speed of a compiled language versus an interpreted language. Because no time is wasted in parsing the scripted code, and because your code is compiled down into native machine code, you get a very significant speed increase.

Another benefit to using a real programming language is that you have real data types. If you're familiar with ASP, you will remember that every variable is of type Variant. ASP.NET enables you to use typed languages to your advantage. Now you will never need to question whether your variable is really a String or an Integer internally, because you will be able to specify its type before it is assigned.

Summary

This chapter discussed server-side events and showed how they change the way an ASP.NET page is processed versus a traditional ASP page. It also discussed the advantages of using a real programming language to write your server code. The next chapter takes a look at some existing ways to debug ASP code that can be easily used in the ASP.NET environment. Later chapters discuss debugging techniques that are new and specific to ASP.NET only.
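Before moving on, here is a hypothetical sketch that ties together two of this chapter's themes: the global.asax events of Table 1.2 and compiled, typed server-side code. The log file path and state key below are assumptions of ours, not part of the book's listings.

```csharp
// Hypothetical Global.asax code-behind sketch; the log path is an assumption.
using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Raised only once, when the application starts: initialize here.
        Application["StartedAt"] = DateTime.Now;
    }

    protected void Application_Error(object sender, EventArgs e)
    {
        // Raised for any exception not caught in a try/catch block.
        Exception ex = Server.GetLastError();
        using (StreamWriter sw = File.AppendText(@"C:\logs\app-errors.log"))
        {
            sw.WriteLine("{0}: {1}", DateTime.Now, ex);
        }
    }

    protected void Session_End(object sender, EventArgs e)
    {
        // A good place to call a logging routine when a session times out.
    }
}
```

Hooking Application_Error like this gives you one application-wide place to record unhandled exceptions, which is exactly the kind of processing-chain hook that later chapters build on.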
you’ll find a debugging object that can be used in tandem with a traditional ASP page to display a great deal of useful information for tracking down those pesky errors in your ASP code.NET made by dotneter@teamfly Chapter 2. you might not realize what is happening behind the scenes with the script parser. The fact is.asp file and use the #include file directive to bring them into the rest of your . At the end of the chapter. if you have a series of “global” functions that are used throughout your application.Debugging ASP. the potential problems and pitfalls involved in debugging a typical ASP application.NET pages are severely lacking in the structure department. For example. don’t get us wrong: A certain degree of structure can be attained with standard ASP programming. in some respects—your ASP pages have the structure of a 50-story skyscraper made out of Popsicle sticks. This chapter explains some of the shortcomings of the original ASP technology. there just isn’t any great way to write extremely structured code in ASP like there is in Visual Basic or C#. and a few ways to overcome these obstacles. you probably shove them into an . The Great Monolith So how do you currently write ASP pages? Well. the structure of a page in previous versions of ASP is quite different from that of an ASP. Pre–ASP. .asp pages to avoid code repetition. Although this might get the job done. Although there are a few ways to make your ASP code slightly structured.NET Pages As you will soon see. several lines become blurred. Now. you are already aware of the nightmare that debugging a traditional ASP application can become. if you’re anything like us—and let’s hope that you’re not.NET page. Structure of Pre–ASP. This section talks about a few of the common problems that developers run into when developing with previous versions of ASP. 1 A Typical ASP Page <HTML> <BODY> <% Dim pvNameArray pvNameArray = Array("Jonathan Goodyear". 
they simply get tacked on at the point where you include them. The theory of three-tier programming is that there is a distinct separation among the client presentation layer. In the traditional ASP environment. Listing 2. Luckily. instead of a very clear distinction between the HTML that creates the drop-down box and the code that fills it. This means that the entire included page is parsed and processed even if only a single constant declaration. for example. as shown in Listing 2. This type of coding conflicts with the basic principles of “three-tier” programming. and the database layer. "Brian Peek". the two are very much intertwined. because ASP. In the previous example. is used out of it.NET made by dotneter@teamfly By including pages in this manner.Write "<option value=' " & i & " '>" & pvNameArray(i) & "</option>" Next %> </BODY> </HTML> As you can see. Traditional ASP code is the epitome of spaghetti code.Write "<select name='Author'>" For i = 0 to 2 Response. you are doing work that could and should be performed at the business logic layer instead of at the client layer.New Riders . compiled languages. you generally wind up mixing your HTML presentation layer with your VBScript server code.1. In ASP. the lines between these layers can very easily become blurred.” which is code that lacks structure and clear separation among the main elements. Code does not get much more tangled up in any language quite like it does in ASP. Pasta Nightmare You probably have heard the expression “spaghetti code.Debugging ASP. "Brad Fox") Response. The main reason for this is the lack of a distinct separation between client-side presentation code and server-side business/logic code. the server business logic layer. .NET uses true. it gets you out of this bind. Listing 2.Value & " " & poRS. For example. 
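As a concrete sketch of the include mechanism just described, consider the following hypothetical pair of files (the file and function names are ours, not from this chapter):

```asp
<!-- functions.asp : hypothetical file of shared routines -->
<%
Function FormatPrice(pnPrice)
    FormatPrice = "$" & FormatNumber(pnPrice, 2)
End Function
%>

<!-- order.asp : every line of functions.asp is tacked on right here -->
<!-- #include file="functions.asp" -->
<%
Response.Write FormatPrice(19.5)
%>
```

Even though order.asp uses only FormatPrice, the whole of functions.asp is parsed at the point of the #include directive, which is exactly the inefficiency described above.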
The Inclusion Conclusion

Code reuse is a very important part of writing any type of application. No one wants to rewrite the same code to perform the same function time and time again. Most programming languages provide a way for developers to include libraries of functions in their applications, saving them from having to reinvent the wheel every time they need to do something simple. ASP allows for something similar, but it has one very major flaw.

So, imagine that you are writing a screen for an application that needs to pull several ADO recordsets from a database and display them to the user in a table. The code to grab the recordset from the database might look similar to Listing 2.2.

Listing 2.2 Code to Retrieve an ADO Recordset

<%
Const adLockReadOnly = 1
Const adLockPessimistic = 2
Const adLockOptimistic = 3
Const adLockBatchOptimistic = 4

Const adOpenForwardOnly = 0
Const adOpenKeyset = 1
Const adOpenDynamic = 2
Const adOpenStatic = 3

Const adCmdText = 1
Const adCmdTable = 2
Const adCmdStoredProc = 4
Const adCmdUnknown = 8
%>
<HTML>
<BODY>
<%
Set poCon = Server.CreateObject("ADODB.Connection")
poCon.Open "Driver={SQL Server};Server=(local);Database=Pubs;UID=sa;PWD="
Set poRS = Server.CreateObject("ADODB.Recordset")
poRS.Open "SELECT * FROM authors ORDER BY au_lname", poCon, adOpenKeyset, adLockReadOnly, adCmdText
Do While Not poRS.EOF
    Response.Write poRS.Fields("au_lname").Value & " " & poRS.Fields("au_fname").Value & "<br>"
    poRS.MoveNext
Loop
%>
</BODY>
</HTML>

Every time you need to retrieve an ADO recordset, you must rewrite the code that creates the connection object, opens the connection, creates the recordset object, and opens the recordset. So how do you solve this in traditional ASP? Well, you can write some generic routines that open connections or open recordsets and put them in a separate file. Then you can use the #include file="<filename>" directive to pull this file into the page that is about to be displayed to the user. However, this scenario contains one major downfall: Every page that is included in this fashion is parsed in its entirety. For example, if you created an ASP page that included all your database routines for your entire application, and you then included it on a page on which only one of the functions is used, the ASP processor still would parse the entire page and work its way through all the unused code before giving you what you need.

So how do you combat that in ASP? The way to currently work around this problem in ASP is to move your business logic code into components. From ASP, you can create an instance of your component and call methods on it just as you would from Visual Basic. With this approach, you gain the benefit of a compiled language—that is, increased speed and decreased memory usage.

However, this approach suffers from a number of downfalls. First, when the object is instantiated in your ASP page, you get only a generic object that pretends to be the real object. Every time you call a method on that object, the GetIDsOfNames API call is fired to locate the position of the method in the object's vtable. This is a definite performance hit. Second, deploying components can be a difficult process. Access to the server is required to register the resulting DLLs, among other issues. This runs contrary to the simple process of copying assemblies to the server in the case of an ASP.NET application, which can even be done through a simple shared drive or folder. Finally, you will be forced to compile your code every time you want to make a change instead of just editing a text file. This can become quite cumbersome if your site undergoes many changes before becoming stable.

Problems and Shortcomings

Although ASP is a very powerful technology, it suffers from a number of problems and shortcomings. It lacks separation of client- and server-side code, it does not allow for structured programming, and it does not follow an event-style programming model, among other issues. Luckily, ASP.NET addresses many of these problems and turns ASP into an even more powerful and easy-to-use technology for today's high-powered web-based applications.
this is terribly inefficient. this approach suffers from a number of downfalls. This is a definite performance hit.NET application. Because of this. Without a doubt. As an ASP programmer. which defines an ASP interface for ADO. the entire page is parsed every single time it is referenced. you are never wasting time accessing any portion of the ADO library that you don’t explicitly request. you would compile against a library containing pointers for the real versions of the functions in precompiled DLLs. . in the case of static linking. even though the same types of events are taking place in your web page–based application. this type of programming model is nonexistent. such as Visual Basic or C. if you drop a button on a form in Visual Basic. you are quite familiar with event-based programming. however. or that allow them to be called from an external binary file in the scenario of dynamic linking. No Events If you’re a Visual Basic programmer.NET made by dotneter@teamfly these problems and turns ASP into an even more powerful and easy-to-use technology for today’s high-powered web-based applications. for a number of reasons. Includes Eat Up Memory As discussed in the previous section. In ASP. then when a user clicks it. it generates a Click event that you can respond to and act on appropriately.“includes” are a horribly inefficient way to do something that is inherently very simple. you should be able to link a library of functions into the main application without a detrimental performance hit.New Riders .NET really shines. Most programming languages allow for dynamically or statically linked libraries that contain commonly called functions directly to the application only once. In this way. That is. buttons are being clicked. In a real programming language. Why shouldn’t you have the capability to answer these events in a similar manner? This is one area in which ASP. and most people will use only a few items out of the myriad of things it declares. 
users are entering and leaving fields. and so on. This involves an enormous and severe performance hit.Debugging ASP. Scripted Language Versus Compiled Language One of the major problems with ASP is that it is a scripted language rather than a compiled language. Even though you might use only those very few items.INC file provided by Microsoft. For example. This file is huge. An excellent example of a bloated include file is the ADOVBS. some old methods and tricks from the days of ASP are still worthwhile.New Riders .1 shows what properties are available on the ASPError object. Server.You will write an object like this later in the chapter so that you can use it very easily in your ASP debugging procedures.Debugging ASP.NET IDE can be used in debugging your ASP. and Session. Second. This is one of the easiest ways to debug standard ASP pages and is still applicable in the new architecture. Table 2. script. Using the Server Object Internet Information Server (IIS) 5. Table 2. Old Strategies That Still Do the Job Well Although a variety of new debugging features in the Visual Studio . Request.0 included a new method on the Server object. called GetLastError. the script parser needs to parse the entire page top to bottom before any of the code can be executed. you will wind up with a pile of if/then statements and Response. Using the Response Object Previous versions of ASP are built on the foundation of five objects: Response. object) Error Code from IIS . A better approach is to create a specific debugging object that outputs important and pertinent information but handles it in a much nicer and cleaner fashion.NET applications. This output can get lost inside the HTML if it’s not placed properly. At the end of debugging a long logic process. when the code is parsed. Application.asp pages.Write() calls littered throughout your . it is not generated into a native machine code that can be executed directly every time thereafter. 
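The machinery behind any such event model is simple to sketch: a control keeps a list of registered handlers and invokes each one when the event fires. The illustration below is a language-neutral sketch (written in Python, with hypothetical names), not ASP.NET's actual implementation:

```python
class Button:
    """A toy control that supports a Click event, the way a Visual Basic button does."""

    def __init__(self):
        self._click_handlers = []

    def on_click(self, handler):
        # Register a handler, analogous to wiring up a Click event procedure.
        self._click_handlers.append(handler)

    def click(self):
        # Simulate the user clicking the button: fire every registered handler.
        for handler in self._click_handlers:
            handler(self)


clicked = []
btn = Button()
btn.on_click(lambda sender: clicked.append("Click handled"))
btn.click()
print(clicked)  # → ['Click handled']
```

ASP.NET server controls give web pages exactly this kind of wiring, which classic ASP forces you to fake with form posts and request parsing.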
Scripted Language Versus Compiled Language

One of the major problems with ASP is that it is a scripted language rather than a compiled language. This hurts performance for a number of reasons. First, the script parser needs to parse the entire page from top to bottom before any of the code can be executed. Second, when the code is parsed, it is not generated into native machine code that can be executed directly every time thereafter. This process must be repeated every single time that the page is rendered, which is obviously incredibly inefficient.

Old Strategies That Still Do the Job Well

Although a variety of new debugging features in the Visual Studio .NET IDE can be used in debugging your ASP.NET applications, some old methods and tricks from the days of ASP are still worthwhile.

Using the Response Object

Previous versions of ASP are built on the foundation of five objects: Response, Request, Application, Server, and Session. The Response object is used to send information from the server down to the client's browser. The Write method of the Response object can be used to dynamically write content to the client's browser. This is one of the easiest ways to debug standard ASP pages and is still applicable in the new architecture.

The problem with this approach is that it isn't very pretty. This output can get lost inside the HTML if it's not placed properly. At the end of debugging a long logic process, you will wind up with a pile of if/then statements and Response.Write() calls littered throughout your .asp pages. A better approach is to create a specific debugging object that outputs important and pertinent information but handles it in a much nicer and cleaner fashion. You will write an object like this later in the chapter so that you can use it very easily in your ASP debugging procedures.

Using the Server Object

Internet Information Server (IIS) 5.0 included a new method on the Server object, called GetLastError. This method returns an ASPError object that contains almost everything you need to know about the error except how to correct it. Table 2.1 shows what properties are available on the ASPError object.

Table 2.1 Properties of the ASPError Object

    ASPCode         Error code from IIS
    Number          COM error code
    Source          Source of line that caused error
    Category        Type of error (ASP, script, object)
    File            ASP file where error occurred
    Line            Line number where error occurred
    Column          Column where error occurred
    Description     A text description of the error
    ASPDescription  Detailed description if error was ASP-related

You will notice that the information returned in the object is the same information that is presented to you in a standard ASP error page. However, with this information at your disposal, you can create a custom error page that is displayed instead of the standard ASP error page. You can set the error to your own custom page by using the IIS Admin tool. If a custom page is selected, then when the server encounters an error, it will perform a Server.Transfer to the error page, maintaining all state information to the new page. This enables you to get an instance of the ASPError object and pull out the pertinent information for display as a debugging guide. Listing 2.3 shows such a page.

Listing 2.3 Sample ASP Error Page Using the ASPError Object

<%
Dim Err
Set Err = Server.GetLastError()
%>
<HTML>
<HEAD>
<TITLE>Error</TITLE>
</HEAD>
<BODY>
An error has occurred!
<p>
<table border>
<tr><td>Description</td><td><%=Err.Description%></td></tr>
<tr><td>Number</td><td><%=Err.Number%></td></tr>
<tr><td>Category</td><td><%=Err.Category%></td></tr>
<tr><td>File</td><td><%=Err.File%></td></tr>
<tr><td>Line</td><td><%=Err.Line%></td></tr>
<tr><td>Column</td><td><%=Err.Column%></td></tr>
<tr><td>Source</td><td><%=Err.Source%></td></tr>
<tr><td>ASP Description</td><td><%=Err.ASPDescription%></td></tr>
</table>
</BODY>
</HTML>
<% Set Err = Nothing %>

An ASP Debug Object

In preparation for the debugging and tracing tools available to you in ASP.NET, you'll now create a debug object that works in previous versions of ASP. VBScript 5.0 introduced classes into the scripting language, and you will be taking advantage of that here, so for this example you will need to have VBScript 5.0 or higher running on your server. First, take a look at the code of your object, shown in Listing 2.4.

Listing 2.4 clsDebug Source Code (clsDebug.asp)

<style type="text/css">
span.trace__ { background-color:white; color:black; font: 10pt verdana, arial; }
span.trace__ table { font: 10pt verdana, arial; cellspacing:0; cellpadding:0; margin-bottom:25; }
span.trace__ tr.subhead { background-color:cccccc; }
span.trace__ th { padding:0,3,0,3; }
span.trace__ th.alt { background-color:black; color:white; padding:3,3,2,3; }
span.trace__ td { padding:0,3,0,3; }
span.trace__ tr.alt { background-color:eeeeee }
span.trace__ h1 { font: 24pt verdana, arial; margin:0,0,0,0; }
span.trace__ h2 { font: 18pt verdana, arial; margin:0,0,0,0; }
span.trace__ h3 { font: 12pt verdana, arial; margin:0,0,0,0; }
span.trace__ th a { color:darkblue; font: 8pt verdana, arial; }
span.trace__ a { color:darkblue; font: 8pt verdana, arial; }
span.trace__ a:hover { color:darkblue; font: 8pt verdana, arial; }
span.trace__ div.outer { width:90%; margin:15,15,15,15; }
span.trace__ table.viewmenu td { background-color:006699; color:white; font: 8pt verdana, arial; }
span.trace__ table.viewmenu td.end { padding:0,15,0,15; }
span.trace__ table.viewmenu a { color:white; text-decoration:none }
span.trace__ table.viewmenu a:hover { color:white; text-decoration:underline }
span.trace__ a.tinylink { color:darkblue; font: 8pt verdana, arial; text-decoration:underline; }
span.trace__ a.link { color:darkblue; text-decoration:underline; }
span.trace__ div.buffer { padding-top:7; padding-bottom:17; }
span.trace__ .small { font: 8pt verdana, arial }
span.trace__ table td { padding-right:20 }
span.trace__ table td.nopad { padding-right:5 }
</style>
<%
Class clsDebug
    Dim mb_Enabled
    Dim md_RequestTime
    Dim md_FinishTime
    Dim mo_Storage

    Public Default Property Get Enabled()
        Enabled = mb_Enabled
    End Property

    Public Property Let Enabled(bNewValue)
        mb_Enabled = bNewValue
    End Property

    Private Sub Class_Initialize()
        md_RequestTime = Now()
        Set mo_Storage = Server.CreateObject("Scripting.Dictionary")
    End Sub
    Public Sub Print(label, output)
        If Enabled Then Call mo_Storage.Add(label, output)
    End Sub

    Public Sub [End]()
        md_FinishTime = Now()
        If Enabled Then
            Response.Write "<p><span class='trace__'>" & vbCrLf
            Call PrintSummaryInfo()
            Call PrintCollection("VARIABLE STORAGE", mo_Storage)
            Call PrintCollection("QUERYSTRING COLLECTION", Request.QueryString())
            Call PrintCollection("FORM COLLECTION", Request.Form())
            Call PrintCollection("COOKIES COLLECTION", Request.Cookies())
            Call PrintCollection("SERVER VARIABLES COLLECTION", Request.ServerVariables())
            Response.Write "</span>"
        End If
    End Sub

    Private Sub PrintSummaryInfo()
        PrintTableHeader("SUMMARY INFO")
        Response.Write("<tr><td>Start Time of Request</td><td>" & md_RequestTime & "</td></tr>" & vbCrLf)
        Response.Write("<tr class='alt'><td>Finish Time of Request</td><td>" & md_FinishTime & "</td></tr>" & vbCrLf)
        Response.Write("<tr><td>Elapsed Time</td><td>" & DateDiff("s", md_RequestTime, md_FinishTime) & "</td></tr>" & vbCrLf)
        Response.Write("<tr class='alt'><td>Request Type</td><td>" & Request.ServerVariables("REQUEST_METHOD") & "</td></tr>" & vbCrLf)
        Response.Write("<tr><td>Status Code</td><td>" & Response.Status & "</td></tr>" & vbCrLf)
        Response.Write "</table>"
    End Sub

    Private Sub PrintCollection(ByVal Name, ByVal Collection)
        Dim vItem
        Dim i
        PrintTableHeader(Name)
        For Each vItem In Collection
            If i Mod 2 = 0 Then
                Response.Write("<tr>")
            Else
                Response.Write("<tr class='alt'>")
            End If
            Response.Write("<td>" & vItem & "</td><td>" & Collection(vItem) & "</td></tr>" & vbCrLf)
            i = i + 1
        Next
        Response.Write "</table>"
    End Sub

    Private Sub Class_Terminate()
        Set mo_Storage = Nothing
    End Sub

    Private Sub PrintTableHeader(ByVal Name)
        Response.Write "<table cellpadding='0' width='100%' cellspacing='0'>" & vbCrLf
        Response.Write "<tr><th class='alt' colspan='10' align='left'><h3><b>" & Name & "</b></h3></th></tr>" & vbCrLf
        Response.Write "<tr class='subhead' align='left'><th width='10%'>Name</th><th width='10%'>Value</th></tr>" & vbCrLf
    End Sub
End Class
%>

Using this object for debugging and tracing is extremely simple. All you need to do is include the page at the top of the ASP page that you want to track, instantiate an instance of the object in your ASP page, enable it, and then call the Print method to output your own debugging information. When you're finished, call the End method to display the collection information. Finally, set it equal to Nothing to destroy it.

Another nice feature of this object is that it can be enabled and disabled at will. If you tossed in a few Debug.Print calls for testing and did not want them output for a demo, for example, you could simply disable the debug object on that page to stop the output from appearing instead of manually removing all the lines that reference it. As an additional guide, take a look at Listing 2.5, which shows an example ASP page where the debug object that you just built is being used.

Listing 2.5 Sample ASP Page Using clsDebug (DebugTest.asp)

<%@ Language=VBScript %>
<% Option Explicit %>
<!--#include file="clsDebug.asp"-->
<%
Dim Debug
Dim x

' Instantiate it
Set Debug = New clsDebug
' Enable it
Debug.Enabled = True

' Set a test cookie
Response.Cookies("TestCookie") = "This is a test cookie!"
%>
<HTML>
<HEAD>
<TITLE>Test Page</TITLE>
</HEAD>
<BODY>
<%
x = 10
' Output a debug string
Debug.Print "x before form", x
%>
<form method="POST" action="DebugTest.asp" name="frmForm1" id="frmForm1">
<input type="text" name="txtText1" id="txtText1">
<input type="submit" name="btnSubmit1" id="btnSubmit1">
</form>
<form method="GET" action="DebugTest.asp" name="frmForm2" id="frmForm2">
<input type="text" name="txtText2" id="txtText2">
<input type="submit" name="btnSubmit2" id="btnSubmit2">
</form>
<%
x = 20
Debug.Print "x after form", x

' Close it all up
Debug.End
Set Debug = Nothing
%>
</BODY>
</HTML>

After calling the End method, all information regarding form submissions, the query string, cookies, server variables, and your own variable-tracking statements is displayed. The output is almost identical to the trace output that is included as part of the ASP.NET functionality that you will be looking at in Chapter 6, "Tracing." The output of this object can be seen in Figure 2.1.

Figure 2.1. Sample output of your debugging object.

Summary

This chapter looked at problems and shortcomings of previous ASP versions, issues with debugging it, and several solutions to these problems, including a tool that you can use to help find problems in your own ASP code. You also learned about the problems inherent in "including" files, the shortcomings of a scripting language
versus a compiled language, and the lack of an event programming model in the existing ASP framework. With this information in mind, you will be ready to tackle the brave new world of ASP.NET. The next chapter discusses some general strategies for debugging applications.

Chapter 3. Debugging Strategies

DEBUGGING WEB APPLICATIONS IS AN ART very much like warfare. Before entering into a battle, you create a plan of attack and then execute it to defeat your enemy. Few would consider such an engagement without a plan; thoughts of doing so conjure up images of soldiers blindly running at the enemy and throwing themselves into the path of gunfire. Yet, many developers attempt to engage in a debugging battle without any such plan or method. Debugging under these circumstances might consist of randomly attempting different things, in vain hope that one of them (or a random combination of them) will solve the problem. Although sometimes this works (hey, somebody wins the lottery), it all too often leads to long, frustrating hours yielding little, if any, meaningful progress. Even a slightly more intuitive approach, such as basing your debugging efforts on past experiences, falls short of the ideal. This is because working with newer technologies reduces the effectiveness of the experience factor. We need more than a collection of memorized "fixes" to problems. We need a complete process to track down and eliminate bugs in code.

In this chapter, we introduce some tried-and-true methods for debugging web applications. These methods link together to form a complete approach to narrowing and eliminating those hard-to-find problems. They all aim to reduce the amount of time required to find bugs in code. The chapter is oriented more toward overall methodology than technical implementation, so we will keep the examples simple. We will demonstrate more complex examples of each of these methods, as well as the overall approach, throughout the book as we go into more detail on debugging specific parts of ASP.NET. We will be sure to mention which method or methods we are using so that you can keep track of them.

Note also that the following methods are designed mostly to track down semantic errors; these are runtime or logic errors. Syntax or compile-time errors will be discussed in the context of each technology covered in the following chapters.
Tier Sandboxes

The first question that must be answered to fix a bug is, "Where is it?" Most ASP.NET web applications have three (or more) tiers to them. For the purposes of this book, we assume a system containing three tiers: a data tier, a business object tier, and a user interface (for example, web page) tier. Finding a bug (particularly a subtle one) in such a large system can at first appear to be a daunting task. The first thing that you can do to get a better handle on the situation is break the system into parts. Because web applications are typically broken into logical tiers already, it only makes sense to use this as the first level of debugging segmentation. You can then test each tier and eliminate the ones that are not contributing to the problem. This will leave the tier(s) that you need to concentrate more debugging effort on.

Data Tier

In the context of an ASP.NET web application, the data tier is usually represented by a relational database, but it could also be represented by another data source, such as an Exchange server or Directory Services. Regardless of the data source, to eliminate the data tier from the list of suspects, you must determine three things.

The first is whether the data is set up properly in the data source. We can't tell you how many times we have been frustrated to the point at which we are contemplating tossing the monitor out the window, only to discover that the data was not set up properly or had been changed or removed by one of the other developers on the team. The moral of the story is to establish one or more known test scenarios and verify the data setup for each of the scenarios each time before you run it.

The second thing that you must determine to verify that the data tier is not the source of your problem is to make sure that you can connect to the data source. How you do this is largely determined by the nature of your data source. As an example, if you are connecting to an OLEDB data source, it helps to verify that your connection string is properly formatted and that it contains valid server information and security credentials. A good way to do this is to create what is known as a data link file. To do this, right-click the desktop, select New, and then select Text Document. Rename the new text file that is created to test.udl. During this process, the warning prompt (shown in Figure 3.1) will be displayed.

Figure 3.1. Warning prompt displayed when changing filenames.

Click Yes, or just press the Enter key. If you double-click the test.udl file that you just created, the Data Link Properties dialog box is displayed. Select an appropriate data source from the Provider tab, and fill in the connection information on the Connection tab. The Connection tab for a SQL Server connection would look like the one shown in Figure 3.2.

Figure 3.2. Dialog box used to set the properties of the data link file.

After you finish this, you can test whether a connection to the data source can be achieved by clicking the Test Connection button. A failure at this level indicates that you are unable to even talk to your data source, let alone exchange information with it.

A Convenient Data Link File Feature

A nice feature about data link files is that after you create a connection string that works, you can open the file in Notepad and extract a perfectly formatted connection string to use in your ASP.NET code. If you are going to do this, be sure to check the Allow Saving Password check box on the Connection tab so that your password information is stored in the data link file.

If a connection to the data source can be successfully achieved, then you can run some tests on your data conversations to make sure that they are working properly. As an example, you could execute a SQL Server 2000 stored procedure directly in Query Analyzer with hard-coded parameters to determine whether the results returned are what you expected. Be sure not to test your data conversations through any data component layers such as ADO, ADO.NET, or custom data extraction components, because this adds an additional level of ambiguity to your test.
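Extracting the connection string from a data link file can also be scripted rather than copied by hand out of Notepad. The sketch below assumes the usual .udl layout, a small UTF-16 text file containing an [oledb] section header, a comment line, and the connection string itself on the last line; the function name is ours, not part of any Microsoft tool:

```python
import codecs

def read_udl_connection_string(path):
    """Return the OLE DB connection string stored in a .udl data link file.

    Data link files are small UTF-16 text files: a "[oledb]" section header,
    a comment line starting with ";", and then the connection string itself.
    """
    with codecs.open(path, "r", encoding="utf-16") as f:
        lines = [line.strip() for line in f if line.strip()]
    # Ignore the section header and the comment; what remains is the initstring.
    data = [ln for ln in lines if not ln.startswith(("[", ";"))]
    return data[-1] if data else ""
```

Pasting the returned string straight into your page avoids retyping errors in provider names or security credentials.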
Business Object Tier

The business object tier is the glue that holds the entire web application together. Because of this, you must determine not only whether there are bugs internal to the tier, but also whether there are bugs in the communication layers between it and the other tiers in the web application. Luckily, making this multifaceted determination is relatively straightforward.

The most common object used as a communication layer between the business object tier and the data tier is ADO.NET. Microsoft's new .NET architecture comes with a new version of this popular library. We'll go into more detail about ADO.NET in later chapters in the book, and we'll focus on debugging issues with it in Chapter 16, "Debugging ADO.NET." For now, you just want to determine whether the data tier is doing its job. Right now, you can use a simple test to see whether ADO.NET is capable of connecting to your data source properly, as shown in Listings 3.1 and 3.2.

Listing 3.1 A Simple Test to See Whether ADO.NET Connects to the Data Source Properly (C#)

<%@ Page Language="C#" ClientTarget="DownLevel" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.OleDb" %>
<%
string conString = "Provider=SQLOLEDB;" +
    "Persist Security Info=True;" +
    "User ID=sa;" +
    "Password=;" +
    "Initial Catalog=pubs;" +
    "Data Source=localhost";
OleDbConnection cn = new OleDbConnection(conString);
cn.Open();
if(cn.State==ConnectionState.Open)
{
    Response.Write("Success!");
}
cn.Close();
%>

Listing 3.2 A Simple Test to See Whether ADO.NET Connects to the Data Source Properly (Visual Basic .NET)

<%@ Page Language="VB" ClientTarget="DownLevel" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.OleDb" %>
<%
Dim conString As String = "Provider=SQLOLEDB;" & _
    "Persist Security Info=True;" & _
    "User ID=sa;" & _
    "Password=;" & _
    "Initial Catalog=pubs;" & _
    "Data Source=localhost"
Dim cn As OleDbConnection = New OleDbConnection(conString)
cn.Open()
If cn.State = ConnectionState.Open Then
    Response.Write("Success!")
End If
cn.Close()
%>

If "Success!" gets rendered to the browser, then you know that ADO.NET (your communication layer between the business object tier and the data tier) is working properly.

Most of the problems with communicating with the bottom tier of your web application occur for one of two reasons. First, you might not be conveying the proper information. We'll go into reasons why this might happen when we discuss ADO.NET debugging more thoroughly in Chapter 16. The second reason why you might have trouble communicating with the bottom tier of your web application arises in the case of nonuser interface communication, such as web services. We will discuss the nuances of debugging web services in Chapter 13, "Debugging Web Services."

The meat of the business object tier is comprised of two major parts. First, there is the intrinsic ASP.NET functionality, such as server controls, user controls, and data binding. Second, there are custom components. The bulk of the rest of this book (Parts III, "Debugging the New ASP.NET Features," and IV, "Debugging Related Technologies," in particular) is dedicated to dissecting each of these topics one at a time, covering how to find bugs in each and how to solve them.

Communication with the user interface tier is usually handled through the intrinsic ASP.NET Response object, so this layer is usually working just fine. However, one gotcha that will plague you in your development efforts (although it is easy to solve) occurs when you forget to add the runat=server attribute to your ASP.NET server controls. This causes server event handlers to fail, and the page cannot maintain state.

For now, suffice it to say that if you have established that your data tier and communication layers are working properly and you are still seeing peculiar or incorrect behavior in your web application, then the business object tier is most likely the source.

User Interface Tier

Although this book focuses on server-side ASP.NET debugging, it is helpful to know when you are dealing with a bug on the client side (user interface tier). The user interface tier is subject to many of the same subtle logic and functionality bugs that can occur on the data tier and the business object tier. As a general rule, if you are prompted with an error dialog box similar to the one shown in Figure 3.3, then you have a problem with your client-side script. Unfortunately, not all client-side bugs will generate error dialog boxes. The concepts introduced in this chapter are generic enough to be applied to all tiers in a web application, although a good book on VBScript or JavaScript will provide a better reference on the nuances of client-side programming and its object model.

Figure 3.3. An example of a client-side runtime error dialog box.

Divide and Conquer

Now that you have a list of tiers that might be causing problems, you can divide them and attack them individually. The most effective place to start (where else?) is at the beginning. The next step is to trace the path of execution that the "buggy" functionality is following, building a list of the objects, functions, stored procedures, and so on that are being used. The advantage to taking this encapsulated debugging approach is that you can effectively granulize the problem into tiny chunks of code that either work or don't work. Regardless of the code encountered, the same strategies can be applied.
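Each of those chunks can be probed with a test as small as a single function. The connect-and-confirm check from Listings 3.1 and 3.2 reduces to this shape in any environment; the sketch below uses SQLite purely as a stand-in for your real data source so that the pattern is runnable anywhere:

```python
import sqlite3

def can_connect(database=":memory:"):
    """Return True when a connection opens and a trivial query succeeds.

    This is the moral equivalent of opening an OleDbConnection and checking
    that its State is Open before writing "Success!" to the browser.
    """
    try:
        cn = sqlite3.connect(database)
        cn.execute("SELECT 1")
        cn.close()
        return True
    except sqlite3.Error:
        return False

print("Success!" if can_connect() else "Failure")  # → Success!
```

A probe like this either works or it doesn't, which is exactly the binary answer the divide-and-conquer approach needs from each chunk.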
Logic Test

Before rushing in and making changes to a section of code, take a step back and look at the big picture. What are you trying to accomplish with the code? A good way to clarify this is to create a "mission statement" for the code segment. Take care to state not only all the tasks that the code should accomplish, but also in what order they are to be done. It sounds cheesy, but it actually works well. It makes you think about what the code needs to get done, and it provides you with a reference to look back on if the toils of developing and debugging give you a temporary case of vertigo.

Now compare the mission statement that you just created with what the code is actually doing. It sometimes helps to traverse the code, line by line, adding comments about what the code is doing. When you get done, you can compare the sum total of the line-level comments to the mission statement at the top of the code. Not surprisingly, many problems can be diagnosed right off the bat because a piece of logic was either omitted or included in the wrong order.

Plan for Debugging from the Start

Ideally, you should create code mission statements at the initial development time as well, so that you don't forget what the code's original purpose was when you return to debug it later. When you get done, add this mission statement as a comment at the top of the code segment.

Inputs and Outputs

Perfect logic doesn't mean a hill of beans if you are working with incorrect data. The old adage "Garbage in, garbage out" definitely applies here. You need to verify that the data that is going into your code is correct, or else you cannot expect the results to be what you want. Sometimes this is easier said than done. If your logic stands up to the test, though, don't despair. You have only just begun to fight.

A good place to start testing inputs is right in the beginning of the code block that you are testing. Take a look at how the code in Listings 3.3 and 3.4 accomplishes this.

Listing 3.3 Validating the Input of a Method (C#)

<script language="C#" runat="server">
class foo
{
    public int CalcThis(int a, int b)
    {
        if(a < 7)
            throw new Exception("a=" + a.ToString() + " is too small");
        if(b > 2)
            throw new Exception("b=" + b.ToString() + " is too big");
        return(a + b);
    }
}
</script>

Listing 3.4 Validating the Input of a Method (Visual Basic .NET)

<script language="VB" runat="server">
Class foo
    Public Function CalcThis(a as Integer, b as Integer) _
            as Integer
        If a < 7 Then Throw New Exception("a=" & CStr(a) & _
            " is too small")
        If b > 2 Then Throw New Exception("b=" & CStr(b) & _
            " is too big")
        CalcThis = (a + b)
    End Function
End Class
</script>

Before any code has had a chance to modify the input parameters, you should run checks to see if they fall within the valid bounds that you have specified. If a check fails, you can throw an exception to the browser that gives you some helpful debugging information. If an input parameter is of a complex data type (for example, an ADO.NET OleDbDataReader), you might want to check one or more of its properties to verify that it contains the proper data.

The output of a function can be tested in a similar fashion. You can test the bounds of the output values of your code and throw exceptions as appropriate. If your procedure is more complex and modifies the input parameters within the code, you might want to place these checkpoints in more than one place to verify that they are being properly calculated. Listings 3.5 and 3.6 exemplify this sort of test.

Listing 3.5 Validating the Output of a Method (C#)

<%@ Page Language="C#" ClientTarget="DownLevel" %>
<%
foo objFoo = new foo();
int a = 8;
int b = 2;
int c = 0;
c = objFoo.CalcThis(a,b);
if(c < 10)
    throw new Exception("c=" + c.ToString() + " is too small.");
Response.Write(c.ToString());
%>

Listing 3.6 Validating the Output of a Method (Visual Basic .NET)

<%@ Page Language="VB" ClientTarget="DownLevel" %>
<%
Dim objFoo As New foo
Dim a As Integer = 8
Dim b As Integer = 2
Dim c As Integer = 0
c = objFoo.CalcThis(a,b)
If c < 10 Then Throw New Exception("c=" & CStr(c) & _
    " is too small.")
Response.Write(CStr(c))
%>

Don't worry if you are unfamiliar with using exceptions in this manner. We will be covering them in detail in Chapter 4, "Code Structure That Eases Debugging." They are extremely useful for flagging data integrity issues with the data that goes into and comes out of your processes.

Beyond the Basics

In Part II, "Debugging the New ASP.NET Features," we will introduce more advanced tools and methods for handling debugging tasks. These tools are the Trace object, the Visual Studio Runtime Debugging Environment, and the event log utility objects.

Simple Before Complex

Some common words of wisdom are that you must first learn to crawl before you learn to walk.
simply by creating one the traditional way. It is much easier to find the source of a problem when you are building in small steps rather than in large ones. Take the following scenario as an example.

Listing 3.8 Adding a Static ASP.NET Validation Control

<%@ Page Language="C#" %>
<form id="frm1" runat="server">
    <asp:TextBox
    <asp:RequiredFieldValidator
    <br>
    <asp:Button Text="Submit" runat="server" />
</form>

Your form now displays an error message if a value is not placed in the text field before the form is submitted. This implementation is not your final intention (remember, you wanted the field validator to be created dynamically), but you now know that the validation logic itself works.

Adding Complexity

To dynamically create the field validator control, you need to implement the Page_Load event of your ASP.NET page. There, you create the field validator control and bind it to the form, as shown in Listings 3.9 and 3.10.

Listing 3.9 Dynamically Creating ASP.NET Validation Control (C#)

<%@ Page Language="C#" %>
<script language="C#" runat="server">
void Page_Load(Object sender, EventArgs E)
{
    if(IsPostBack)
    {
        RequiredFieldValidator valid1 = new RequiredFieldValidator();
        valid1.ID = "valid1";
        valid1.ControlToValidate = "text1";
        valid1.ErrorMessage = "This field must contain a value!";
        frm1.Controls.Add(valid1);
    }
}
</script>
<form id="frm1" runat="server">
    <asp:TextBox
    <br>
    <asp:Button Text="Submit" runat="server" />
</form>

Listing 3.10 Dynamically Creating ASP.NET Validation Control (Visual Basic .NET)

<%@ Page Language="VB" %>
<script language="VB" runat="server">
Sub Page_Load(sender as Object, E as EventArgs)
    If (IsPostBack) Then
        Dim valid1 As RequiredFieldValidator = _
            New RequiredFieldValidator
        valid1.ID = "valid1"
        valid1.ControlToValidate = "text1"
        valid1.ErrorMessage = _
            "This field must contain a value!"
        frm1.Controls.Add(valid1)
    End If
End Sub
</script>

You have removed the ASP.NET validation control from the form definition and are now dynamically creating it in the Page_Load event. If you run this code, however, the validation does not take place, even after you call the Validate method. Why, you ask? Well, a little snooping around leads you to the conclusion that the form is being validated before the Page_Load event is fired, so your dynamic validation control is being created after the fact. That's not the only problem: the error message is also being rendered in the wrong place. It's showing up after the Submit button instead of after the text field to which it refers. If you hadn't taken the previous steps and reached a checkpoint, you would have absolutely no idea what the problem was. You know that it is not a problem with the validation logic, so it must be a problem with the way you have implemented it.
The reason for this is that when you use the Add method of the form's Controls collection, the new control is appended to the end of the collection, after the Submit button. To insert the RequiredFieldValidator control after the text field, you use the AddAt method instead. With a little trial and error, you can determine that an index of two puts the error message where you want it. To correct the validation problem, you must also call the Validate() method of the Page object to re-evaluate the form based on your new validation object. Listings 3.11 and 3.12 show the corrected code.

Listing 3.11 Fixing the Placement and Behavior of the Validation Control (C#)

<%@ Page Language="C#" %>
<script language="C#" runat="server">
void Page_Load(Object sender, EventArgs E)
{
    if(IsPostBack)
    {
        RequiredFieldValidator valid1 = new RequiredFieldValidator();
        valid1.ID = "valid1";
        valid1.ControlToValidate = "text1";
        valid1.ErrorMessage = "This field must contain a value!";
        frm1.Controls.AddAt(2, valid1);
        Validate();
    }
}
</script>

Listing 3.12 Fixing the Placement and Behavior of the Validation Control (Visual Basic .NET)

<%@ Page Language="VB" %>
<script language="VB" runat="server">
Sub Page_Load(sender as Object, E as EventArgs)
    If (IsPostBack) Then
        Dim valid1 As RequiredFieldValidator = _
            New RequiredFieldValidator
        valid1.ID = "valid1"
        valid1.ControlToValidate = "text1"
        valid1.ErrorMessage = _
            "This field must contain a value!"
        frm1.Controls.AddAt(2, valid1)
        Validate()
    End If
End Sub
</script>

As you can see, starting out simple and gradually building complexity greatly reduces the amount of time that it takes to troubleshoot problems because you already know what parts definitely work. You can then dedicate your time to finding the problems with the parts that you are not certain of.

Turtle Makes the Wiser

Just about everyone has heard the tale of the tortoise and the hare. Many parallels exist between this timeless fable, and the lesson to be learned from it, and the debugging process. When trying to solve what appears to be an unsolvable bug, it is hard to resist the temptation to quickly and randomly make changes with the hope that you will stumble upon the solution by accident. Although this method occasionally works, it has a few drawbacks. First, swapping code and fiddling with application settings without any rhyme or reason is a colossal waste of time when you consider the odds against hitting the magic combination. Second, even if you do strike gold and fix the problem, it is unlikely that you will understand or remember how you did it. In your moment of triumph, this might not concern you much. However, most types of bugs can be tucked away for only so long before they rear their ugly heads again. And when they do, you'll be right back at square one when it comes to fixing them.

Plan Carefully

When you encounter a bug that looks like it is going to take more than a trivial amount of time to solve, you must make a plan of attack. Take a step back and ask yourself, "What is the most likely cause of the bug?" Jot down some ideas, as well as things you could add or change to test your theories; perhaps you are not looking in the right place after all. From these ideas, formulate a set of logical steps to find the bug.
Proceed with Caution

Armed with a debugging plan, you can now begin making modifications to both code and application settings. Be careful to make only one modification at a time. That way, you will know what worked if the bug suddenly disappears (which has happened to us on several occasions). The most complex bugs usually require a combination of changes to fix them, and there is always an element of luck involved with choosing the right debugging route first. It's definitely not all in the cards, however, because your luck will most definitely improve with experience. (If only our luck with poker could follow this same trend.) Always document the modifications that you make, either in the code, in a separate file, or even on a sheet of paper. That way, you can always take a few steps back before proceeding down a different route without having to reload your code from your backup files (you did create backup files before you started, didn't you?). If you get to the end of your plan before you resolve the bug, don't give up. Take another step back and follow the strategies outlined in this chapter again. The best part is that if you stick to your debugging plan, you will have a road map to fix the problem the next time it occurs. The key to the whole process is patience.

Summary

This chapter introduced some proven strategies for tracking down and eliminating bugs in your code. First, you divide a web application into logical segments (tier sandboxes) to narrow the problem scope and eliminate the tiers that are not contributing to the problem. Second, you divide the code in each tier into functional units that you can test individually for bugs. This is done through the logic test and by testing the code's input and output. This chapter also demonstrated how to break a complex piece of code into its simplest form, gradually building in complexity while continually verifying proper functionality. Finally, we discussed how to take a logical and systematic approach to creating and executing a debugging plan, emphasizing that results can be achieved faster by proceeding with caution. These strategies, when applied together, form a solid debugging foundation. In the next chapter, we introduce some guidelines for creating and structuring code that makes debugging a much easier task.
Chapter 4. Code Structure That Eases Debugging

AS THE POPULAR SAYING GOES, "an ounce of prevention is worth a pound of cure." This statement is quite true when it comes to software development, especially for the Internet. If you take the time to plan and build your web applications in a structured and organized way, you can accomplish two things. First, you reduce the number of bugs that are introduced into your code (namely, the ones caused when you complicate things to the point of absolute confusion). Second, when you do encounter bugs in your code, you will be able to find and eliminate them much more easily. This chapter is designed to help you create your code in an organized fashion, as well as give you a foundation on which to debug your web application after it is built.

Code Partitioning

ASP.NET offers better solutions for code partitioning than were offered with traditional ASP. In traditional ASP, your primary options were to put your partitioned code into include files, separate ASP files that you executed with Server.Execute, or business object components. Using Server.Execute was somewhat helpful, but you could not pass the executed ASP file any additional query string parameters. The extra overhead of having multiple copies of include files in memory (one for each file that uses it) made this solution undesirable. Business object components were the only good alternative in traditional ASP. With ASP.NET, you can use include files (not recommended) and business object components, as well as two new techniques to partition code: code-behind classes and user controls. Breaking up the code in this way makes it easier to track down problems because you don't have to sift through user-interface code to get to the business logic code. Also, if the user-interface code of your project is created by a different group of developers than your business logic code is, using code-behind classes prevents one group from accidentally stepping
on the code of the other group.

Notes About the Code Partitioning Example

The example in the "Code Partitioning" section of this chapter builds upon itself and uses the pubs database that comes standard with SQL Server 2000.

Code-Behind Classes

You can use a code-behind class as an effective way of separating client-side HTML code from server-side event-processing code. The elements of the user interface stay in the page, and the event-processing code moves into the class from which the page inherits, as shown in Listings 4.1 through 4.3.

Listing 4.1 User-Interface Code for an Author Search

<%@ Language="C#" Inherits="AuthorSearch" Src="listing42.cs" %>
<form runat="server">
Search for an author:<br>
Name: <asp:TextBox
<asp:Button
<br>
<asp:Label
</form>

Listing 4.2 Code-Behind Class for the Author Search

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class AuthorSearch : Page
{
    public Label lbl_author;
    public TextBox txt_name;

    public void btn_name_click(Object sender, EventArgs e)
    {
        lbl_author.Text = "You are searching for " + txt_name.Text;
    }
}

Listing 4.3 Code-Behind Class for the Author Search (Visual Basic .NET)

Imports System
Imports System.Web.UI
Imports System.Web.UI.WebControls

Public Class AuthorSearch
    Inherits Page

    Public lbl_author As Label
    Public txt_name As TextBox

    Public Sub btn_name_click(sender As Object, e As EventArgs)
        lbl_author.Text = "You are searching for " & _
            txt_name.Text
    End Sub
End Class

You can see that having all your user-interface code in one place and all your business logic code in another makes it much easier to comprehend and maintain.

User Controls

Have you ever found a problem with a piece of your code and gotten completely flustered because, after you fixed it, you had to make the change in a hundred other places where you used the exact same code? In traditional ASP, common business logic functions were often encapsulated in include files or in components. Elements of the user interface, however, were encapsulated much more rarely. This was typically the case because there was no convenient way to do it. ASP.NET introduces user controls to help ease the user-interface encapsulation nightmare and enable you to fix bugs once, having them seamlessly applied everywhere the code is used. User controls, described next, add another dimension to code partitioning: reusability. This reusability greatly reduces the time it takes to test, maintain, and implement web applications.

Continuing with the previous example, suppose that you want to search for authors by state as well as by name. The code to create a drop-down box of state names is an ideal candidate for a user control, as can be seen in Listing 4.4.

Listing 4.4 Author Search User-Interface Code Implementing a User Control

<%@ Language="C#" Inherits="AuthorSearch" Src="listing48.cs" %>
<%@ Register TagPrefix="ch04" TagName="StateList" src="listing45.ascx" %>
<form runat="server">
Search for an author:<br>
Name: <asp:TextBox
<asp:Button
<br>
<ch04:StateList
<asp:Button
<br>
<asp:Label
</form>

At the top of Listing 4.4, you register a user control that contains a DropDownList server control populated with all the state abbreviations. Any page that needs to access a list of all the states can use the same control. Listings 4.5, 4.6, and 4.7 present the code for the user control and its code-behind class.

Listing 4.5 User Control Code

<%@ Language="C#" Inherits="Chapter4.StateList" %>
State: <asp:DropDownList
    <asp:ListItem>AL</asp:ListItem>
    <asp:ListItem>AK</asp:ListItem>
    <asp:ListItem>WV</asp:ListItem>
    <!-- Remaining states omitted for space reasons -->
    <asp:ListItem>WI</asp:ListItem>
    <asp:ListItem>WY</asp:ListItem>
    <asp:ListItem>DC</asp:ListItem>
</asp:DropDownList>

Listing 4.6 Code-Behind Class for User Control

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace Chapter4
{
    public class StateList : UserControl
    {
        protected DropDownList lst_state;

        public string SelectedState
        {
            get {return lst_state.SelectedItem.ToString();}
        }
    }
}

Listing 4.7 Code-Behind Class for User Control (Visual Basic .NET)

Imports System
Imports System.Web.UI
Imports System.Web.UI.WebControls

Namespace Chapter4
    Public Class StateList
        Inherits UserControl

        Protected lst_state As DropDownList

        Public ReadOnly Property SelectedState() As String
            Get
                Return lst_state.SelectedItem.ToString()
            End Get
        End Property
    End Class
End Namespace

To be able to reference your user control as its own Type (such as StateList), you must create a compiled assembly. You would use the following compile script to compile the C# version of the code in Listing 4.6:

csc /t:library /out:chapter4.dll /r:System.Web.dll listing46.cs

Make sure that you enter the entire compile script as one command line. The Visual Basic code in Listing 4.7 can be compiled using the following compile script:

vbc /t:library /out:chapter4.dll /r:System.Web.dll listing47.vb

After your assembly is compiled, you must be sure to move the chapter4.dll file to the bin subdirectory of your web application. You'll see how to reference the Chapter4 namespace in Listings 4.8 and 4.9.
You can now extend the code-behind class shown in Listings 4.2 and 4.3 to include code to reference your StateList user control. This is shown in Listings 4.8 and 4.9.

Listing 4.8 Code-Behind Class Referencing the User Control

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using Chapter4;

public class AuthorSearch : Page
{
    protected Label lbl_author;
    protected TextBox txt_name;
    protected StateList uc_statelist;

    public void btn_name_click(Object sender, EventArgs e)
    {
        lbl_author.Text = "You are searching for " + txt_name.Text;
    }

    public void btn_state_click(Object sender, EventArgs e)
    {
        lbl_author.Text = "You are searching for authors in " +
            uc_statelist.SelectedState;
    }
}

Listing 4.9 Code-Behind Class Referencing the User Control (Visual Basic .NET)

Imports System
Imports System.Web.UI
Imports System.Web.UI.WebControls
Imports Chapter4

Public Class AuthorSearch
    Inherits Page

    Protected lbl_author As Label
    Protected txt_name As TextBox
    Protected uc_statelist As StateList

    Public Sub btn_name_click(sender As Object, e As EventArgs)
        lbl_author.Text = "You are searching for " & txt_name.Text
    End Sub

    Public Sub btn_state_click(sender As Object, e As EventArgs)
        lbl_author.Text = "You are searching for authors in " & _
            uc_statelist.SelectedState
    End Sub
End Class

If you apply user controls in your web applications whenever you are going to use the same groups of user interface elements repeatedly, you will drastically reduce the amount of time required to make changes. You will also eliminate the risk of changing one implementation of the user interface and forgetting to change it in the other places where it is used. Less risk and less code equals fewer bugs. You'll get more thorough coverage of user controls as they pertain to debugging in Chapter 11, "Debugging User Controls."

Business Objects

Although user controls are great for reuse of user interface elements, they are not really meant for reuse of business logic. That is where traditional business objects come into play. Next you'll extend your AuthorSearch example to actually retrieve records from the authors table in the pubs database. To do that, you'll create a business object named AuthorLogic. It will implement a GetAuthorByName method and a GetAuthorByState method to encapsulate the stored procedure calls to retrieve authors from the database. The GetAuthorByName method will accept an author name and a database connection string. The GetAuthorByState method will accept a state name (its abbreviation, to be exact) and a database connection string. Because you will be connecting to SQL Server, you will use the System.Data.SQL namespace instead of the more generic System.Data.OleDb namespace. Listings 4.10 and 4.11 present the code for your business object.

Listing 4.10 AuthorLogic Object Code

using System;
using System.Data;
using System.Data.SQL;

namespace Chapter4
{
    public class AuthorLogic
    {
        public DataTable GetAuthorByName(string name, string connectString)
        {
            //construct stored procedure call
            string sql = "sp_get_author_by_name '" + name + "'";

            //create a dataset to hold results
            DataSet ds = new DataSet();

            //create connection to database, open it, and retrieve results
            SQLDataSetCommand cmd = new SQLDataSetCommand(sql, connectString);
            cmd.FillDataSet(ds, "DataTable");

            //return only the DataTable that contains your results
            return ds.Tables["DataTable"];
        }

        public DataTable GetAuthorByState(string state, string connectString)
        {
            //construct stored procedure call
            string sql = "sp_get_author_by_state '" + state + "'";

            //create a dataset to hold results
            DataSet ds = new DataSet();

            //create connection to database, open it, and retrieve results
            SQLDataSetCommand cmd = new SQLDataSetCommand(sql, connectString);
            cmd.FillDataSet(ds, "DataTable");

            //return only the DataTable that contains your results
            return ds.Tables["DataTable"];
        }
    }
}

Listing 4.11 AuthorLogic Object Code (Visual Basic .NET)

Imports System
Imports System.Data
Imports System.Data.SQL

Namespace Chapter4
"DataTable") ' return only the DataTable that contains your results return ds.FillDataSet(ds. _ connectString As String) As DataTable ' construct stored procedure call Dim sql As String = "sp_get_author_by_state ' " & state & " ' " ' create a dataset to hold results Dim ds As DataSet = New DataSet() ' create connection to database. _ connectString As String) As DataTable ' construct stored procedure call Dim sql AS String = "sp_get_author_by_name ' " & name & " ' " ' create a dataset to hold results Dim ds As DataSet = New DataSet() ' create connection to database. . and retrieve results Dim cmd As SQLDataSetCommand = _ New SQLDataSetCommand(sql. Tables("DataTable") End Function End Class End Namespace You added your AuthorLogic class definition to your existing Chapter4 namespace (which already contains the code-behind class for your StateList user control).New Riders .FillDataSet(ds. "DataTable") ' return only the DataTable that contains your results return ds. and retrieve results Dim cmd As SQLDataSetCommand = _ New SQLDataSetCommand(sql.connectString) cmd. dll.Data.dll listing46.cs listing410. you need to recompile your assembly.dll.dll /r:System.System.dll.vb listing411. Next. construct stored procedure calls.12 and 4. Listing 4.Web.ascx" %> <%@ Import .vb Make sure that you enter the entire compile script on the same line.NET page and its code-behind class to use the AuthorLogic object. take a look at Listings 4. For the C# version.Data. make the calls. use this compile script: vbc /t:library /out:chapter4.cs For the Visual Basic.dll.System.13 to see how we have modified the ASP.dll listing47.Debugging ASP. They accept arguments.net version.New Riders .System.12 Author Search Page Using AuthorLogic Object <%@ Language="C#" Inherits="AuthorSearch" Src="listing414. use this compile script: csc /t:library /out:chapter4.Web. Now that you have added to your Chapter4 namespace.dll /r:System.System. The lines are wrapped here because of space constraints. 
For the C# version, use this compile script:

csc /t:library /out:chapter4.dll /r:System.dll,System.Data.dll,System.Web.dll listing46.cs listing410.cs

For the Visual Basic .NET version, use this compile script:

vbc /t:library /out:chapter4.dll /r:System.dll,System.Data.dll,System.Web.dll listing47.vb listing411.vb

Make sure that you enter the entire compile script on the same line. The lines are wrapped here because of space constraints. The AuthorLogic methods accept arguments, construct stored procedure calls, make the calls, and return the results to the client. Next, take a look at Listings 4.12 and 4.13 to see how we have modified the ASP.NET page and its code-behind class to use the AuthorLogic object.

Listing 4.12 Author Search Page Using AuthorLogic Object

<%@ Language="C#" Inherits="AuthorSearch" Src="listing414.cs" %>
<%@ Register TagPrefix="ch04" TagName="StateList" src="listing45.ascx" %>
<%@ Import Namespace="System.Data" %>
<form runat="server">
<asp:repeater id="authorList" runat="server">
<template name="ItemTemplate">
    <%# ((DataRow)Container.DataItem)["au_fname"].ToString() %>
    <%# ((DataRow)Container.DataItem)["au_lname"].ToString() %>
    <br />
    <%# ((DataRow)Container.DataItem)["address"].ToString() %>
    <br />
    <%# ((DataRow)Container.DataItem)["city"].ToString() %>,
    <%# ((DataRow)Container.DataItem)["state"].ToString() %>
    <%# ((DataRow)Container.DataItem)["zip"].ToString() %>
    <hr />
</template>
</asp:repeater>
</form>

Listing 4.13 Author Search Page Using AuthorLogic Object (Visual Basic .NET)

<%@ Language="VB" Inherits="AuthorSearch" Src="listing415.vb" %>
<%@ Register TagPrefix="ch04" TagName="StateList" src="listing45.ascx" %>
<%@ Import Namespace="System.Data" %>
<form runat="server">
<asp:repeater id="authorList" runat="server">
<template name="ItemTemplate">
    <%# CType(Container.DataItem,System.Data.DataRow)("au_fname") %>
    <%# CType(Container.DataItem,System.Data.DataRow)("au_lname") %>
    <br />
    <%# CType(Container.DataItem,System.Data.DataRow)("address") %>
    <br />
    <%# CType(Container.DataItem,System.Data.DataRow)("city") %>,
    <%# CType(Container.DataItem,System.Data.DataRow)("state") %>
    <%# CType(Container.DataItem,System.Data.DataRow)("zip") %>
    <hr />
</template>
</asp:repeater>
</form>

These two listings look similar to Listing 4.4, with a few additions. For example, there is the addition of an @Import directive for the System.Data namespace. This enables you to reference the objects contained in the System.Data namespace without explicitly typing System.Data as a prefix to its members each time you want to use one. The code-behind class for your author search ASP.NET page binds the Repeater server control to the Rows collection of a DataTable, so you need to be able to cast its individual data items to DataRow objects. Finally, take a look at how the code-behind class for your author search ASP.NET page uses the AuthorLogic object to get lists of authors and binds it to the Repeater server control. This can be seen in Listings 4.14 and 4.15.
Listing 4.14 Code-Behind Class for Author Search Page That Uses AuthorLogic Object

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using Chapter4;

public class AuthorSearch : Page
{
    protected Label lbl_author;
    protected TextBox txt_name;
    protected StateList uc_statelist;
    protected Repeater authorList;

    //normally, you would keep database connection info
    //in the global.asax file or in an XML configuration file
    private string connectString = "Password=;User ID=sa;" +
        "Initial Catalog=pubs;Data Source=localhost;";

    //declare your AuthorLogic business object
    private AuthorLogic al;

    public void btn_name_click(Object sender, EventArgs e)
    {
        ...
    }

    public void btn_state_click(Object sender, EventArgs e)
    {
        ...
    }
}

Listing 4.15 Code-Behind Class for Author Search Page That Uses AuthorLogic Object (Visual Basic .NET)

Imports System
Imports System.Web.UI
Imports System.Web.UI.WebControls
Imports Chapter4

Public Class AuthorSearch
    Inherits Page

    Protected lbl_author As Label
    Protected txt_name As TextBox
    Protected uc_statelist As StateList
    Protected authorList As Repeater

    ' normally, you would keep database connection info
    ' in the global.asax file or in an XML configuration file
    Private connectString As String = "Password=;User ID=sa;" & _
        "Initial Catalog=pubs;Data Source=localhost;"

    ' declare your AuthorLogic business object
    Private al As AuthorLogic

    Public Sub btn_name_click(sender As Object, e As EventArgs)
        ...
    End Sub

    Public Sub btn_state_click(sender As Object, e As EventArgs)
        ...
    End Sub
End Class

Although setting up all of this might seem like a bit of a hassle at first, the effort pales in comparison to the amount of work that you will do in the long run if you handle every page à la carte. With the introduction of the .NET framework, many sources will tell you that because ASP.NET pages are now compiled, it isn't necessary to use components. Don't listen to them. Compiled ASP.NET pages might buy you performance, but good design is what yields maintainability and, most importantly, robust, bug-free code.

Control-of-Flow Guidelines

A good way to keep yourself out of trouble when it comes to debugging and code maintenance is to employ good control-of-flow coding practices. This might seem like a no-brainer, but experience has shown that the concept definitely warrants reinforcement.

If Statements and Case/Switch Constructs

For simple decisions in your code that have two or fewer possible alternatives, If statements are ideal. They are a quick and easy way to guide the flow of your code. If you need to make a selection based on a long list of possible alternatives, however, If statements can get messy and intertwined. If you are choosing the path of execution based on the value of a single variable, we strongly recommend using a Case construct (Switch construct, in C#). Consider Listings 4.16 and 4.17 that use multiple If statements.

Listing 4.16 Large If Statement Block

public int Foo(int someVar)
{
    if(someVar==10 || someVar==20 || someVar==30)
        {/*do something here*/}
    else if(someVar==15 || someVar==25 || someVar==35)
        {/*do something here*/}
    else if(someVar==40 || someVar==50 || someVar==60)
        {/*do something here*/}
    else
        {/*do something here*/}
}

Listing 4.17 Large If Statement Block (Visual Basic .NET)

Public Function Foo(someVar As Integer) As Integer
    If someVar = 10 Or someVar = 20 Or someVar = 30 Then
        ' do something here
    ElseIf someVar = 15 Or someVar = 25 Or someVar = 35 Then
        ' do something here
    ElseIf someVar = 40 Or someVar = 50 Or someVar = 60 Then
        ' do something here
    Else
        ' do something here
    End If
End Function

Although you can tell what is going on upon close examination of the code, it might not be apparent when you are scanning it for bugs.
A more concise and organized way of accomplishing this task is to use a Case/Switch construct, as shown in Listings 4.18 and 4.19.

Listing 4.18 Switch Block (C#)

public int Foo(int someVar)
{
    switch(someVar)
    {
        case 10:
        case 20:
        case 30:
            //do something here
            break;
        case 15:
        case 25:
        case 35:
            //do something here
            break;
        case 40:
        case 50:
        case 60:
            //do something here
            break;
        default:
            //do something here
            break;
    }
}

Listing 4.19 Case Statement Block (Visual Basic .NET)

Public Function Foo(someVar As Integer) As Integer
    Select Case someVar
        Case 10, 20, 30
            ' do something here
        Case 15, 25, 35
            ' do something here
        Case 40, 50, 60
            ' do something here
        Case Else
            ' do something here
    End Select
End Function

Function and Loop Exit Points

Some of the most elusive bugs in applications arise from improper use of function and loop exit points. As a general rule, a function or loop should have only one exit point. Why should this be so? If a function or loop has more than one exit point, then it is more difficult to track down which exit point your code is using when you are debugging. An exit point could be hidden in your code, leaving you scratching your head when you try to determine why your code is not working properly. Consider the following function that violates the single exit point rule (shown in Listings 4.20 and 4.21).

Listing 4.20 Code with Multiple Exit Points

public int Foo(int someVar)
{
    int i = 0;
    for(i=0;i<=100;i++)
    {
        //do something to someVar
        someVar = i;

        //see whether you need to get out early
        if(someVar==50)
            {break;}
        if((someVar * 2) % 10 == 0)
            {break;}
    }
    if(i < 100)
        {return 1;}
    return 0;
}

Listing 4.21 Code with Multiple Exit Points (Visual Basic .NET)

Public Function Foo(someVar As Integer) As Integer
    Dim i As Integer
    For i = 1 To 100
        ' do something to someVar
        someVar = i

        ' see whether you need to get out early
        If someVar = 50 Then
            Exit For
        End If
        If (someVar * 2) Mod 10 = 0 Then
            Exit For
        End If
    Next i
    If i < 100 Then
        ' loop ended prematurely, so get out
        Return 1
    End If
    Return 0
End Function

You would need to set breakpoints and step through your code line by line to determine what is going on here. A more effective approach is to use status flags. Status flags can be included in loop definitions to determine when the loop should end. Status flags can also be used to set return codes at the single function exit point. Listings 4.22 and 4.23 demonstrate this concept.

Listing 4.22 Code with a Single Exit Point

public int Foo2(int someVar)
{
    int status = 0;
    for(int i=0;i<=100;i++)
    {
        //do something to someVar
        someVar = i;

        //see whether you need to get out early
        if(someVar==50)
            {status = 1;}
        if((someVar * 2) % 10 == 0)
            {status = 1;}

        //single place where loop can terminate early
        if(status==1)
            {break;}
    }
    return status;
}

Listing 4.23 Code with Single Exit Point (Visual Basic .NET)

Public Function Foo(someVar As Integer) As Integer
    Dim i As Integer = 0
    Dim status As Integer = 0
    For i = 1 To 100
        ' do something to someVar
        someVar = i

        ' see whether you need to get out early
        If someVar = 50 Then
            status = 1
        End If
        If (someVar * 2) Mod 10 = 0 Then
            status = 1
        End If

        ' single place where loop can terminate early
        If status = 1 Then Exit For
    Next i
    Return status
End Function

Structured Exception Handling

One of the greatest enhancements introduced with ASP.NET is structured exception handling.

When to Use Structured Exception Handling

Structured exception handling can be an effective tool for catching and dealing with runtime errors. However, it should be used only when you are going to do something useful with the exception that is raised. With structured exception handling, you can be more selective with error trapping. Rather than put a blanket On Error Resume Next statement in your code and constantly check for errors in the Err object, you can enclose blocks of code that are prone to errors and catch specific exceptions.
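Enclosing an error-prone block and trapping a specific exception type is often paired with a finally clause when cleanup must run on both the success and failure paths. A minimal sketch, written as a stand-alone modern console program rather than the book's ASP.NET page code (the file name is illustrative):

```csharp
using System;
using System.IO;

StreamReader sr = null;
bool cleanedUp = false;
try
{
    // throws FileNotFoundException when the file is missing
    sr = new StreamReader(@"c:\bogusfile.txt");
    Console.WriteLine(sr.ReadToEnd());
}
catch (FileNotFoundException e)
{
    // trap only the specific exception type you can do something useful with
    Console.WriteLine("Could not find file: " + e.Message);
}
finally
{
    // cleanup runs exactly once, whether or not the exception occurred
    if (sr != null) sr.Close();
    cleanedUp = true;
}
Console.WriteLine("cleanedUp = " + cleanedUp);
```

Because the catch names FileNotFoundException rather than the generic Exception class, unrelated failures still surface, while the finally block guarantees the cleanup either way.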
For example, if you need to do any cleanup processing when an exception occurs, then structured exception handling is ideal. If you want to trap and clean up after a particular type of exception, then specify it.

Effective Use of Structured Exception Handling

When using structured exception handling in ASP.NET, avoid using the generic Exception class. If you want to log the error and display it to the user, then you can implement a global exception handler with a custom error page (discussed later in this chapter in the section "Implementing the Application_Error Event Handler"). Later, we'll discuss using a global exception handler and a custom error page to catch generic exceptions. Listings 4.24 and 4.25 demonstrate catching specific exceptions.

Listing 4.24 Trapping a Specific Exception Type

<%@ Page Language="C#" %>
<%@ Import Namespace="System.IO" %>
<%
try
{
    StreamReader sr = new StreamReader(@"c:\bogusfile.txt");
}
catch(FileNotFoundException e)
{
    Response.Write(e.ToString());
}
%>
A great way to do this is to implement an exception handler at the application level. which contains the definition for the Application object that you use in a typical ASP. Listing 4. you’ll want to redirect the users of your website to a friendly page that tells them that something has gone wrong.26 Application_Error Event Handler <%@ Application void Application_Error(object sender. and then provide them with customer support information as well as a link back to your web application’s home page. This will yield the original exception information.Form.Diagnostics" %> <script language="VB" runat="server"> Sub Application_Error(sender As Object. This is a wrapper that was placed around the original exception when it was passed from your ASP. The GetLastError() method of the Server object simply returns a reference to a generic HttpException. Inside the Application_Error event handler. you have to be sure to set a reference to the System.NET) <%@ Application Language="VB" %> <%@ Import Namespace="System.GetLastError().NET page to the Application_Error event.Diagnostics namespace.NET made by dotneter@teamfly "\nQUERYSTRING: " + Request.New Riders .GetLastError().GetBaseException(). regardless of how many layers have been added to the exception tree.StackTrace.27 Application_Error Event Handler (Visual Basic . . . . you need to call its GetBaseException() method. _ EventLogEntryType. //Insert optional email notification here.. you declare an Exception object and initialize it through a call to Server.WriteEntry("Test Web".QueryString. e As EventArgs) ' get reference to the source of the exception chain Dim ex As Exception = Server. You’ll use the EventLog class in this namespace to write exception details to the Windows 2000 event log.Debugging ASP. ToString() & _ "\nTARGETSITE: " & ex.QueryString. } </script> Listing 4. End Sub </script> First.GetBaseException() ' log the details of the exception and page state to the ' Windows 2000 Event Log EventLog. 
Implementing the Application_Error Event Handler

A great way to do this is to implement an exception handler at the application level, in the file that contains the definition for the Application object that you use in a typical ASP.NET web application. When an unhandled exception occurs, you'll want to redirect the users of your website to a friendly page that tells them that something has gone wrong, and then provide them with customer support information as well as a link back to your web application's home page. Listings 4.26 and 4.27 show the Application_Error event handler.

Listing 4.26 Application_Error Event Handler

<%@ Application Language="C#" %>
<%@ Import Namespace="System.Diagnostics" %>
<script language="C#" runat="server">
void Application_Error(object sender, EventArgs e)
{
    //get reference to the source of the exception chain
    Exception ex = Server.GetLastError().GetBaseException();

    //log the details of the exception and page state to the
    //Windows 2000 Event Log
    EventLog.WriteEntry("Test Web",
        "MESSAGE: " + ex.Message +
        "\nSOURCE: " + ex.Source +
        "\nFORM: " + Request.Form.ToString() +
        "\nQUERYSTRING: " + Request.QueryString.ToString() +
        "\nTARGETSITE: " + ex.TargetSite +
        "\nSTACKTRACE: " + ex.StackTrace,
        EventLogEntryType.Error);
    //Insert optional email notification here...
}
</script>

Listing 4.27 Application_Error Event Handler (Visual Basic .NET)

<%@ Application Language="VB" %>
<%@ Import Namespace="System.Diagnostics" %>
<script language="VB" runat="server">
Sub Application_Error(sender As Object, e As EventArgs)
    ' get reference to the source of the exception chain
    Dim ex As Exception = Server.GetLastError().GetBaseException()

    ' log the details of the exception and page state to the
    ' Windows 2000 Event Log
    EventLog.WriteEntry("Test Web", _
        "MESSAGE: " & ex.Message & _
        "\nSOURCE: " & ex.Source & _
        "\nFORM: " & Request.Form.ToString() & _
        "\nQUERYSTRING: " & Request.QueryString.ToString() & _
        "\nTARGETSITE: " & ex.TargetSite.ToString() & _
        "\nSTACKTRACE: " & ex.StackTrace, _
        EventLogEntryType.Error)
    ' Insert optional email notification here...
End Sub
</script>

First, you have to be sure to set a reference to the System.Diagnostics namespace. You'll use the EventLog class in this namespace to write exception details to the Windows 2000 event log. Next, you declare an Exception object and initialize it through a call to Server.GetLastError(). The GetLastError() method of the Server object simply returns a reference to a generic HttpException. This is a wrapper that was placed around the original exception when it was passed from your ASP.NET page to the Application_Error event. To get access to the original exception, you need to call its GetBaseException() method. This will yield the original exception information, regardless of how many layers have been added to the exception tree. Next,
you make a call to the WriteEntry() method of the EventLog class. There are several overloaded signatures for this method. Discussion of the different messaging paradigms in the .NET framework is beyond the scope of this book. The implementation that we chose to use here accepts three parameters. The first parameter is the source of the error. It appears in the Source field of the Windows 2000 event log viewer. The second parameter is the log data itself. You can see that we have added a lot of information to help track down what caused the exception, including the exception message, the exception source, the name of the method that generated the error (TargetSite), the contents of the Form collection, the contents of the QueryString collection, and a complete stack trace. Note that the stack trace contains the name of the file that was the source of the exception; however, it strips off the contents of the query string, hence the need to specifically include it previously. The third and final parameter to the WriteEntry() method is an enumeration of type EventLogEntryType. We chose to use the Error element of the enumeration. You can read more about how to leverage the Windows 2000 event log in Chapter 8, "Leveraging the Windows 2000 Event Log." At the end of the event handler, we inserted a comment block where you can optionally put code to email the exception information to your IT support staff.

After the Application_Error event has completed its work, it automatically redirects the user of your web application to your custom error page (which you will set up in the next section). Optionally, you can use the Server.ClearError() method after you have logged the exception and redirect your user using the Server.Execute() method, specifying the page that you want to load in the user's browser.

The code that you have just implemented will capture all unhandled exceptions that occur in your web application. If you need to do some cleanup in the event of an exception and you implement structured exception handling inside your ASP.NET page, you can still leverage the global exception handler. Listings 4.28 and 4.29 present examples of how you would do it.

Listing 4.28 Throwing a Handled Exception

<%@ Page Language="C#" %>
<script language="C#" runat="server">
protected void button1_click(object sender, EventArgs e)
{
    try
    {
        //do some complex stuff...
        //generate your fictional exception
        int x = 1;
        int y = 0;
        int z = x / y;
    }
    catch(DivideByZeroException ex)
    {
        //put cleanup code here
        throw(ex);
    }
}
</script>
<form runat="server">
<asp:button id="button1" onclick="button1_click" runat="server" />
</form>

Listing 4.29 Throwing a Handled Exception (Visual Basic .NET)

<%@ Page Language="VB" %>
<script language="VB" runat="server">
Protected Sub button1_click(sender As Object, e As EventArgs)
    Try
        ' do some complex stuff...
        ' generate your fictional exception
        Dim x As Integer = 1
        Dim y As Integer = 0
        Dim z As Integer = x / y
    Catch ex As DivideByZeroException
        ' put cleanup code here
        Throw(ex)
    End Try
End Sub
</script>
<form runat="server">
<asp:button id="button1" onclick="button1_click" runat="server" />
</form>
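When a rethrown exception like the one above reaches the global handler, the redirect behavior can also be customized with the Server.ClearError()/Server.Execute() option described earlier. The following fragment is a hypothetical sketch, not one of the book's numbered listings; the page name handled.aspx is illustrative:

```csharp
// Hypothetical sketch of the Server.ClearError()/Server.Execute() option.
// The page name handled.aspx is illustrative.
void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError().GetBaseException();

    // log the exception details here (event log, email, and so on)

    // remove the exception so that the automatic redirect does not fire,
    // then render a specific page in the user's browser instead
    Server.ClearError();
    Server.Execute("handled.aspx");
}
```

The design tradeoff: ClearError() plus Execute() keeps the original URL in the user's browser, whereas the automatic redirect changes it to the error page's address.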
The code in these listings defines a web form with a text box and a button. When you click the button, it fires the button1_click event handler. In the event handler, you would do processing as usual. For the purposes of this demonstration, you intentionally generate a DivideByZeroException. This takes you to the catch block. Here, you can perform any page-specific cleanup code before calling throw(ex) to pass your exception to the global exception handler to be logged to the Windows 2000 event log. When the global exception handler is finished logging the error, the defaultredirect attribute that you set in your config.web file (discussed in the next section) takes over, and you are redirected to the error page.

Setting Up the Custom Error Page

It helps to boost users' confidence in your site when it can recover gracefully from the unexpected. The first step in setting up a custom error page is to modify your config.web file to route the users of your web application to a friendly error page if an exception occurs. Add the code in Listing 4.30 to the config.web file of your web application.

Listing 4.30 Adding the <customerrors> Tag to Your Config.web File

<configuration>
    <customerrors mode="On" defaultredirect="error.aspx" />
</configuration>

Note that your config.web file might already have a <customerrors> tag, so you might only need to modify the existing one. The mode attribute of the <customerrors> tag has three settings: On, Off, and RemoteOnly. If the mode is On, users will always be redirected to the custom error page specified by the defaultredirect attribute if an unhandled exception occurs. If no defaultredirect attribute is set, a default ASP.NET "friendly, yet not so friendly" message will be displayed to the user when exceptions occur. If the mode is Off, the details of any exception that occurs will be shown to the user in the browser. The RemoteOnly mode is a hybrid of the two other modes. If you are browsing your web application while sitting at the web server itself, it behaves like the Off mode. All other browsers of the site will get the behavior of the On mode.

Next, you need to build the custom error page (error.aspx) referenced in the config.web file. This is just an ordinary ASP.NET page that includes helpful information for the user of your web application if an error occurs. An extremely simple example is the one in Listing 4.31.
Listing 4.31 Simple Custom Error Page

<html>
<head>
<title>My web application: Error page</title>
</head>
<body>
An unexpected error occurred in the application.
Please contact customer service at (800)555-5555.
Or, you can click <a href="home.aspx">here</a> to go back to the homepage.
Thank you for your patience.
</body>
</html>

Summary

This chapter covered a tremendous amount of ground. It started off with a discussion of code partitioning and how it helps to reduce bugs that are caused by failure to update repetitive code. Leveraging user controls enables you to reuse pieces of user interface code much more efficiently than using include files in traditional ASP web applications. The introduction of code-behind classes helps you to better separate your user interface logic from your business logic. ASP.NET continues to support and encourage the use of business objects to encapsulate frequently used business logic in your web applications.

Next, the chapter moved on to control-of-flow guidelines to help you prevent bugs from happening in the first place. We covered how Switch and Case constructs are often superior to multiple If statements. The importance of a single exit point to both functions and loops was also stressed. This was followed with a discussion of when and how to use structured exception handling in your ASP.NET web applications. In this discussion, you were introduced to the <customerrors> tag in the config.web file, and you learned how it is used to specify a custom error page that the user will be redirected to in the event of an error. We rounded out the chapter with a discussion on how to implement a global exception handler to log both unhandled exceptions in your ASP.NET web applications and handled exceptions that you still want to be logged.

The next chapter discusses conditional compiling and shows how it enables you to toggle between debug and production code quickly and easily.

Part II: ASP.NET Debugging Tools

5 Conditional Compiling
6 Tracing
7 Visual Studio .NET Debugging Environment
8 Leveraging the Windows 2000 Event Log

Chapter 5. Conditional Compiling

CONDITIONAL COMPILING IS ARGUABLY ONE OF the greatest debugging tools available to any programmer. This chapter discusses what conditional compiling is, tells how it works, and shows what it can provide for you in terms of debugging your ASP.NET applications.

What Is Conditional Compiling?

Conditional compiling is a very basic concept that enables you to do some pretty powerful things. Basically, conditional compiling enables you to compile parts of your code based on a certain condition defined at compile time. Generally, this is used if you have a debugging function that you do not want to include in your release build. For example, you might have a function that you want two versions of: one for debugging and one for release, depending on its environment. The debug version might be littered with output statements or other code that would be too slow to execute in a release environment. With conditional compiling, you could create two versions of the function and flip the switch on which one should be compiled in your program.

In the .NET framework, there are two ways to accomplish this type of conditional compilation. One method uses function attributes to tag the function as conditionally compiled. The second method involves using preprocessing directives to tell the compiler at compile time which functions to include and which functions to remove.

Conditional Compiling with Function Attributes

First take a look at conditional compiling using attributes. Listing 5.1 is a program listing in C# that uses conditional compiling via function attributes.

Listing 5.1 Conditionally Compiled Code (C#)

using System;
using System.Diagnostics;

namespace WebApplication7
{
    public class WebForm1 : System.Web.UI.Page
    {
        public WebForm1()
        {
            Page.Init += new System.EventHandler(Page_Init);
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            CondClass cc = new CondClass();
            cc.IAmHere();
            Response.Write("Am I here? " + cc.AmIHere());
        }

        protected void Page_Init(object sender, EventArgs e)
        {
            InitializeComponent();
        }

        private void InitializeComponent()
        {
            this.Load += new System.EventHandler(this.Page_Load);
        }
    }

    public class CondClass
    {
        String str;

        [Conditional("DEBUG")]
        public void IAmHere()
        {
            str = "I am here!";
        }

        public String AmIHere()
        {
            return str;
        }
    }
}

Listing 5.2 is the same program written in Visual Basic .NET.

Listing 5.2 Conditionally Compiled Code (Visual Basic .NET)

Imports System
Imports System.Diagnostics

Public Class WebForm1
    Inherits System.Web.UI.Page
    Dim WithEvents WebForm1 As System.Web.UI.Page

    Sub New()
        WebForm1 = Me
    End Sub

    Private Sub InitializeComponent()
    End Sub

    Protected Sub WebForm1_Load(ByVal Sender As System.Object, ByVal e As System.EventArgs)
        Dim cc As CondClass = New CondClass()
        cc.IAmHere()
        Response.Write("Am I here? " + cc.AmIHere())
    End Sub

    Protected Sub WebForm1_Init(ByVal Sender As System.Object, ByVal e As System.EventArgs)
        InitializeComponent()
    End Sub
End Class

Public Class CondClass
    Dim str As String

    <Conditional("DEBUG")> Public Sub IAmHere()
        str = "I am here!"
    End Sub

    Public Function AmIHere() As String
        AmIHere = str
    End Function
End Class
Listings 5.New Riders .NET made by dotneter@teamfly If you are compiling using the Visual Studio .UI.NET IDE.Page_Load). Listing 5.3 Conditional Compiling with Preprocessor Directives (C#) #define MYDEBUG using System.Web.3 and 5. Conditional Compiling with Preprocessor Directives Now you’ll take a look at the second way to achieve conditional compiling through preprocessor directives.NET.EventHandler (this.4 show the same program in C# and Visual Basic . cc. Response. EventArgs e) { InitializeComponent().IAmHere().Write("Am I here? " + cc. } } .You can access this from the Build menu under Configuration Manager.EventHandler(Page_Init). } private void InitializeComponent() { this. } protected void Page_Init(object sender.Init += new System. Page Dim WithEvents WebForm1 As System.Debugging ASP.Web.Page Sub New() WebForm1 = Me End Sub Private Sub InitializeComponent() End Sub Protected Sub WebForm1_Load(ByVal Sender As System.UI. ByVal e As System.NET) #Const MYDEBUG = 1 Public Class WebForm1 Inherits System.IAmHere() .Web.EventArgs) Dim cc As CondClass = New CondClass() cc.New Riders . public void IAmHere() { #if MYDEBUG str = "I am here!".UI.NET made by dotneter@teamfly } public class CondClass { String str. #endif } public String AmIHere() { return str. } } Listing 5.Object.4 Conditional Compiling with Preprocessor Directives (Visual Basic . as described. First. They cannot be referenced in your program like a Visual Basic . this version of the code works a bit differently.#endif and the #Const/#define directives..NET and #define in C# define conditional compiler constants.NET or C#. Simply put.. These can be thought of as variable constants in either Visual Basic . However.Then. we’ll discuss what preprocessor directives are. the #Const declaration must come before any namespace imports or any program code for it to be valid.NET made by dotneter@teamfly Response. 
but these compiler constants are known to the compiler only. They cannot be referenced in your program like a Visual Basic .NET or C# variable constant. In Visual Basic .NET, the #Const declaration must come before any namespace imports or any program code for it to be valid. In the Visual Basic .NET listing, the line looks like #Const MYDEBUG = 1; in the C# example, it is written as #define MYDEBUG.

These constants are used in conjunction with the #If...Then...#End If directives in Visual Basic .NET and the #if...#endif directives in C#. In the previous sample, these tags were put around the contents of the IAmHere function. In either case, the tags are wrapped around the statements that you want to be included only if the constant that you choose has been defined. If you keep the constant in place at the top of your code, then the contents of what is in between the tags will be compiled in, and you will get the "Am I here? I am here!" output. If you remove the constant, the function's internals will not be compiled in, and you will get only the "Am I here?" output because the internal string in the class will not be set during the IAmHere call. Listing 5.5 shows the appropriate lines in C#; Listing 5.6 demonstrates the same concept in Visual Basic .NET.

Listing 5.5 Usage of #if...#endif (C#)

public void IAmHere()
{
#if MYDEBUG
    str = "I am here!";
#endif
}

Listing 5.6 Usage of #If...Then...#End If (Visual Basic .NET)

Public Sub IAmHere()
#If MYDEBUG Then
    str = "I am here!"
#End If
End Sub

If you are using C#, you may also use operators to test whether an item has been defined. You can use the following operators to evaluate multiple symbols: == (equality), != (inequality), && (and), and || (or). You can group symbols and operators with parentheses.
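That operator syntax can be combined into a single #if test. The fragment below is a hypothetical illustration (not one of the book's listings); the symbol names MYDEBUG, VERBOSE, and TRACE_ALL are illustrative:

```csharp
#define MYDEBUG
#define VERBOSE

// Hypothetical fragment combining symbols with #if operators.
public class OperatorDemo
{
    public static void Log(string msg)
    {
#if (MYDEBUG && VERBOSE) || TRACE_ALL
        // included when both MYDEBUG and VERBOSE are defined,
        // or when TRACE_ALL is defined
        System.Console.WriteLine(msg);
#endif
    }
}
```

Because #define lines must appear before any other code in a C# file, the symbols are declared at the very top.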
Other Preprocessor Directives

A few other preprocessor directives are available in C# that are not found in Visual Basic .NET. This section discusses these additional directives, tells how they work, and shows what they can provide for you.

#undef

#undef is the exact opposite of #define. It removes the definition of a preprocessor constant. This can be useful if you need to temporarily remove a line or two inside a debugging function. You could simply #undef the symbol before the lines that you want skipped and then again #define the symbol where you want to restart the execution of the function. Listing 5.7 shows an example of how it is used.

Listing 5.7 Using #undef

#define MYDEBUG

public void MyFunction()
{
#if MYDEBUG
    Response.Write("Hello");
#endif

#undef MYDEBUG

#if MYDEBUG
    Response.Write("You won't see me!");
#endif
}

If you called the MyFunction function, you would get an output of Hello but not You won't see me!. Because MYDEBUG has been defined, the first Response.Write will be compiled into the program and executed. Immediately afterward, the MYDEBUG symbol is undefined,
so the second #if check will fail. The second Response.Write will not be compiled into the program and, therefore, will not be executed.

#warning and #error

#warning and #error are very similar, so they are demonstrated together here. These two directives enable you to display a message to the user in the output window as the program is compiling. This message can have either a warning status or an error status. Listing 5.8 shows an example of each.

Listing 5.8 #warning and #error Example

#define MYDEBUG

#if MYDEBUG && !DEBUG
#error Both MYDEBUG and RELEASE are defined!!
#elif MYDEBUG
#warning WARNING - MYDEBUG is defined!
#endif

In this example, if you switched your build configuration to a release build and built the program, you would get a compiler error with the text stated because both MYDEBUG and RELEASE were defined and DEBUG was not. However, if you were in a debug build with MYDEBUG defined,
you would simply see a warning telling you that MYDEBUG was defined while compiling your program. This would be very useful to ensure that you do not compile any of your own debugging code into your release application. This is just a handy hint.

Summary

Conditional compiling is extremely easy to implement, with tremendously powerful results. This chapter looked at two different ways to accomplish a similar task: through preprocessor directives and via function attributes. By using the preprocessor directives of this section, you will be able to add a robust debugging interface to your application and immediately remove it from the production-level output at the proverbial flip of a switch. With the preprocessor approach, the debug code is never compiled into a release application; therefore, it is never executed. The key difference is that by using the Conditional function attribute, the debug code is compiled into the application both in a debug and a production environment; however, in a release build it could never be executed. Next, you looked at some other preprocessor compiler directives and how they can provide additional information to aid you in debugging your ASP.NET applications.

In the next chapter, you will start to look at the built-in .NET tracing facilities and how they can help you explore what is happening inside your application as it is running.

Chapter 6. Tracing

ONE OF THE MOST COMMON WAYS to debug traditional ASP web applications is to use trusty calls to Response.Write. This enables you to create checkpoints in your code to view the contents of variables in the browser. This approach has several drawbacks. The output created by calls to Response.Write appears wherever the call is made in the code. When your ASP pages get to be hundreds of lines long,
it sometimes becomes a chore trying to interpret your debugging information because it is strewn all over the page in the browser. The formatting of your page is also affected. Worse, when you are finished debugging your web application using calls to Response.Write, you are then faced with the daunting task of stripping out all this debug code. Because you are using the same type of code both to debug and to create valid output, you must carefully scan each of your ASP pages to make sure that you remove all the Response.Write calls that pertain to debugging. This can be a real pain.

To address these issues, ASP.NET implements the TraceContext class. The TraceContext class solves all these issues and offers many more features. Let's take a look at some of the ways that the TraceContext class can help you debug your ASP.NET web applications more effectively.

Configuration

To use tracing in your ASP.NET web application, you need to enable it. This can be done at either the page level or the application level.

Page-Level Configuration

Enabling tracing at the page level entails adding the Trace attribute to the @Page directive, like this:

<%@ Page Language="C#" Trace="true" %>

If the Trace attribute has a value of true, tracing information will be displayed at the bottom of your ASP.NET page after the entire page has been rendered. Optionally, you can include the TraceMode attribute. The value that you assign to this attribute determines the display order of the trace results. The possible values are SortByTime and SortByCategory. SortByTime is the default if you do not specify the TraceMode attribute.

Application-Level Configuration

Several tracing options are available at the application level. These settings are specified using the <trace> XML element in the <system.web> section of the web.config file. The attributes available to you are shown in Table 6.1; even though an example may use all of them, none is required.

Table 6.1 Tracing Options

enabled       Is true if tracing is enabled for the application; otherwise, is false. The default is false.
pageOutput    Is true if trace information should be displayed both on an application's pages and in the trace.axd trace utility; otherwise, is false. Note that pages that have tracing enabled on them are not affected by this setting. The default is false.
requestLimit  Specifies the number of trace requests to store on the server. The default is 10.
traceMode     Indicates whether trace information should be displayed in the order it was processed,
SortByTime, or alphabetically by user-defined category, SortByCategory. SortByTime is the default.
localOnly     Is true if the trace viewer (trace.axd) is available only on the host Web server; otherwise, is false. The default is true.

An example of a trace entry in the web.config file might look like Listing 6.1.

Listing 6.1 Application-Level Trace Configuration in the web.config File

<configuration>
    <system.web>
        <trace enabled="true"
               pageOutput="false"
               requestLimit="20"
               traceMode="SortByTime"
               localOnly="true" />
    </system.web>
</configuration>

Even though this example uses all the available attributes, none of them is required. The requestLimit attribute sets a limit on how many page requests are kept in the trace log. This prevents the logs from getting too large. Setting the localOnly attribute to true enables you to view trace information if you are logged into the server locally, but remote users will not see anything.

Note that page-level configuration settings overrule application-level settings. For instance, if tracing is disabled at the application level but is enabled at the page level, trace information will still be displayed in the browser. That way, you can enable tracing to debug a problem,
and the users of your website will never be the wiser.

Trace Output

Now that you've heard so much about configuring ASP.NET tracing, what exactly does it provide? Essentially, the trace output generated by the TraceContext object and displayed at the bottom of your rendered ASP.NET page contains several sections. Each of these sections is outlined here. Because the total output is too large to be viewed in one screenshot, a screenshot is included for each individual section of the trace output, along with explanations.

Request Details

The Request Details section contains six pieces of information, outlined in Table 6.2.

Table 6.2 Request Details

Session Id         Unique identifier for your session on the server
Time of request    The time (accurate to the second) that the page request was made
Request type       The type of the request (for example, GET or POST)
Status code        The status code for the request (for example, 200)
Request encoding   The encoding of the request (for example, Unicode (UTF-8))
Response encoding  The encoding of the response (for example, Unicode (UTF-8))

You can see it all put together in Figure 6.1.

Figure 6.1. Request Details section of trace output.

Trace Information

The Trace Information section contains the various trace messages and warnings that both you and the ASP.NET engine add to the trace output. By default, the ASP.NET engine adds messages for when any events begin or end, as with PreRender and SaveViewState. The fields displayed for each item are Category, Message, time interval from the beginning of page processing, and time interval from the last trace output item. The order in which the contents of the Trace Information section appear is determined by either the TraceMode attribute of the @Page directive or the TraceMode property of the TraceContext class. Figure 6.2 shows an example of the Trace Information section.

Figure 6.2. Trace Information section of trace output.

Control Tree

The Control Tree section lists all the elements on your ASP.NET page in a hierarchical fashion. This enables you to get a feeling for which controls contain other controls,
Form Collection section of trace output.NET page includes a web form and you have already submitted it back to the server.6 shows an example of the Form Collection section. Querystring Collection section of trace output.NET page.New Riders . The fields displayed for each item are Name and Value.Debugging ASP. Figure 6. it displays the page’s VIEWSTATE.NET page has Querystring parameters passed to it.NET made by dotneter@teamfly Headers Collection The Headers Collection section lists all the HTTP headers that are passed to your ASP.5 shows an example of the Headers Collection section. . The fields displayed for each item are Name and Value. Headers Collection section of trace output. Figure 6. Figure 6.7 shows an example of the Querystring Collection section. Form Collection The Form Collection section is displayed only if your ASP. This is the condensed representation of the state of each of the controls on the web form.7.6. First. 2 Setting the IsEnabled Property Dynamically (C#) <%@ Page Language="C#" %> . two properties.Debugging ASP.8 shows an example of the Server Variables section (truncated because of the large number of elements in the collection).New Riders . you can specify tracing through a Querystring parameter. Of course. Figure 6. and SCRIPT_NAME. Listing 6.NET page.NET made by dotneter@teamfly Server Variables The Server Variables section contains a listing of all the server variables associated with your ASP. REMOTE_HOST. as shown in Listings 6. so you will need the constructor only if you want to enable tracing in your . A few examples are PATH_INFO. and two methods.8. TraceContext Properties The IsEnabled property works the same way as the Trace attribute of the @Page directive. ServerVariables section of trace output. The fields displayed for each item are Name and Value. The nice part about having this property available to you is that. For instance. Setting Trace Messages The TraceContext class has a fairly simple interface. 
unlike the @Page directive. these are in addition to the standard properties and methods inherited from the Object class. it can be dynamically assigned.2 and 6. Figure 6.NET components (discussed later in this chapter). with only one constructor.NET pages through the Trace property of the Page object.3. An instance of the TraceContext class is available to your ASP. Debugging ASP. The same behaviors and advantages that apply to the IsEnabled property also exist for the TraceMode property.NET) <%@ Page Protected Sub Page_Load(Sender As Object. Trace. Write and Warn: The output generated by the Write method is black. Just realize that everything said about the .New Riders . you will not suffer any performance penalty.IsEnabled = traceFlag End Sub </script> These listings set the IsEnabled property of the Trace object dynamically.3 Setting the IsEnabled Property Dynamically (Visual Basic . the IsEnabled property value still dictates whether trace information was displayed to the page.NET application when you move it to production. we will be discussing only the Write method. but it also isn’t even compiled. while the output generated by the Warn method is red. False) Trace.IsEnabled = traceFlag. EventArgs e) { bool traceFlag = Request. The real power of using the IsEnabled property is that when you set it to false. It is interesting to note that even if you specify a Trace attribute and set it to false. based on the presence of the trace Querystring variable.NET made by dotneter@teamfly <script language="C#" runat="server"> protected void Page_Load(object Sender. e As EventArgs) Dim traceFlag As Boolean = IIF(Request. the trace information not only isn’t displayed. Notice that no Trace attribute is assigned to the @Page directive. The TraceMode property works exactly like the TraceMode attribute of the @Page directive. As long as the IsEnabled property is set to false. True. 
TraceContext Methods Only one thing (besides their names) differentiates the two methods of the TraceContext class. For this reason. This means that you can leave your tracing code in your ASP. } </script> Listing 6.QueryString["trace"] != null ? true : false.QueryString("trace") _ <> Nothing. Figure 6. e As EventArgs) Trace.NET made by dotneter@teamfly Write method can also be applied to the Warn method. The second version accepts a trace message and a category.9.Write (string) The first of the overloaded Write methods of the TraceContext class accepts a single-string parameter.5 Implementing TraceContext. Listings 6.NET) <%@ Page Protected Sub Page_Load(Sender As Object.4 Implementing TraceContext.5 demonstrate its use. The first version accepts a trace message.Write("I'm tracing now") End Sub </script> Figure 6.Write (string) (C#) <%@ Page protected void Page_Load(object Sender. EventArgs e) { Trace. } </script> Listing 6.4 and 6. a category. Listing 6. TraceContext.2). The third version accepts a trace message. There are three overloaded versions of the Write and Warn methods.Write("I'm tracing now"). Each of these is covered in more detail next.Debugging ASP.New Riders . and an instance of an Exception class.9 shows what the trace output for the previous code looks like. .Write (string) (Visual Basic . This string contains the message that is displayed in the Message field of the Trace Information section of the trace output (as seen in Figure 6. Viewing a trace message in the trace output. No category was specified. Trace. string) (C#) <%@ Page protected void Page_Load(object Sender.NET pages.7 demonstrate the use of this version of the Write method.Write("Category 1". "Category 2 data"). The first parameter is the category of the trace item. Listing 6. As previously described. Trace. } </script> .You can assign categories to your trace items. "More Category 1 data").Write("Category 1".SortByCategory.Write (string. 
string) The second overloaded Write method of the TraceContext class takes two string parameters.Write (string. Trace. The next overloaded version of the Write/Warn method includes the category parameter. "Category 1 data"). TraceContext.6 Implementing TraceContext. Listings 6. This is probably the most likely version of the Write method that you will use when debugging your ASP. so it is blank. It appears in the Category field of the Trace Information section of the trace output.6 and 6. The second parameter is the message that will be displayed in the Message field.NET made by dotneter@teamfly Notice the message “I’m tracing now” that appears as the third line item in the Trace Information section.New Riders . EventArgs e) { Trace.Write("Category 2". leveraging the TraceMode attribute of the @Page directive or the TraceMode property of the TraceContext class to sort the Trace Information section results. and it is the same as the single-string parameter in the first overloaded Write method. TraceMode = TraceMode. this is done using the SortByCategory member of the TraceMode enumeration.Debugging ASP. Notice that the trace items are sorted by category so that both of the category 1 items appear together.Write (string. "Category 2 data") Trace. string. Figure 6.9 demonstrate . For the third parameter. TraceMode = TraceMode.You could alternatively have used the TraceMode attribute of the @Page directive.10 shows the trace output for the previous code.NET) <%@ Page Protected Sub Page_Load(Sender As Object. string) (Visual Basic . "Category 1 data") Trace.New Riders .7 Implementing TraceContext. e As EventArgs) Trace. Listings 6. you should pass in an object instance of the Exception class or an object instance of a class that inherits from the Exception class.NET made by dotneter@teamfly Listing 6.10. The first two parameters match up with the two parameters of the previous overloaded method call. 
Listing 6.7 Implementing TraceContext.Write (string, string) (Visual Basic .NET)

<%@ Page %>
<script runat="server">
Protected Sub Page_Load(Sender As Object, e As EventArgs)
    Trace.TraceMode = TraceMode.SortByCategory
    Trace.Write("Category 1", "Category 1 data")
    Trace.Write("Category 2", "Category 2 data")
    Trace.Write("Category 1", "More Category 1 data")
End Sub
</script>

Figure 6.10 shows the trace output for the previous code.

Figure 6.10. Viewing a trace message with a category in the trace output.

Notice that the trace items are sorted by category so that both of the category 1 items appear together, instead of being separated by the category 2 item (which was the order in which the code made the calls to the Write method). Also, the previous code uses the TraceMode property of the Trace object to set the sort order. You could alternatively have used the TraceMode attribute of the @Page directive.

TraceContext.Write (string, string, Exception)

The third overloaded version of the Write method takes three parameters. The first two parameters match up with the two parameters of the previous overloaded method call. For the third parameter, you should pass in an object instance of the Exception class or an object instance of a class that inherits from the Exception class. You would most likely use this method call when writing trace output in conjunction with structured exception handling. Listings 6.8 and 6.9 demonstrate this concept by intentionally causing an exception in a Try block that adds to the trace information in the Catch block.

Listing 6.8 Implementing TraceContext.Write (string, string, Exception) (C#)

<%@ Page %>
<script language="C#" runat="server">
protected void Page_Load(object Sender, EventArgs e)
{
    int x = 1;
    int y = 0;
    try
    {
        int z = x / y;
    }
    catch(DivideByZeroException ex)
    {
        Trace.Write("Errors", "Testing the limits of infinity?", ex);
    }
}
</script>

Listing 6.9 Implementing TraceContext.Write (string, string, Exception) (Visual Basic .NET)

<%@ Page %>
<script runat="server">
Protected Sub Page_Load(Sender As Object, e As EventArgs)
    Dim x As Integer = 1
    Dim y As Integer = 0
    Try
        Dim z As Integer = x / y
    Catch ex As OverflowException
        Trace.Write("Errors", "Testing the limits of infinity?", ex)
    End Try
End Sub
</script>

(Note that the Visual Basic .NET version catches OverflowException rather than DivideByZeroException: the VB / operator performs floating-point division, and it is the conversion of the infinite result back to an Integer that throws.)
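Listings 6.8 and 6.9 hand the caught exception to Trace.Write so that its message lands in the trace output next to your own comment. The same try/catch shape can be sketched outside ASP.NET. The following standalone example is written in Java purely so it can be run anywhere; the traceEntryFor helper and its "category: message (detail)" format are our illustration of what the Write(string, string, Exception) overload records, not the TraceContext API itself:

```java
public class ExceptionTraceDemo {
    // Builds the kind of "category: message (exception detail)" entry that the
    // Write(string, string, Exception) overload records in the trace output.
    static String traceEntryFor(int x, int y) {
        try {
            int z = x / y; // throws ArithmeticException when y == 0
            return "Math: result " + z;
        } catch (ArithmeticException ex) {
            // Both the custom message and the exception's own message are kept,
            // just as the trace output shows them side by side.
            return "Errors: Testing the limits of infinity? (" + ex.getMessage() + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(traceEntryFor(1, 0));
        System.out.println(traceEntryFor(4, 2)); // prints "Math: result 2"
    }
}
```

The point of the pattern is that the Catch block is where the trace entry is written, so the exception object is in scope and its details travel with your message.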
Figure 6.11 shows the trace output for the C# version of Listing 6.8.

Figure 6.11. Viewing a trace message with a category and exception information in the trace output.

In addition to the message that you specify in the call to the Write method, you get the message from the exception that was thrown, as well as the name of the procedure where the exception occurred. You can see valuable debugging information associated with the error that was thrown alongside your own custom comments, making it easier to combine the two into a solution to the bug.

Trace Viewer

In the "Application-Level Configuration" section at the beginning of the chapter, we discussed the various attributes of the <trace> XML element in the web.config file. You'll recall that the requestLimit attribute sets how many page requests to keep in the trace log. So, now that you have all that data stored in the trace log, what do you do with it? The answer is to use the Trace Viewer to analyze it.

Accessing the Trace Viewer

The Trace Viewer is accessed via a special URL. In any directory of your web application, you can access it by navigating to trace.axd. You'll notice that there is no trace.axd file anywhere. Instead, any request for this file is intercepted by an HttpHandler that is set up in either the machine.config file or your web application's web.config file. An entry within the <httpHandlers> XML element looks like Listing 6.10.

Listing 6.10 HttpHandlers Section of the machine.config File

<httpHandlers>
    ...other handler entries...
    <add verb="*" path="trace.axd"
         type="System.Web.Handlers.TraceHandler, System.Web,
               Version=1.0.2411.0, Culture=neutral,
               PublicKeyToken=b03f5f7f11d50a3a" />
    ...other handler entries...
</httpHandlers>

With this HttpHandler entry in place, all that is left to do to use the Trace Viewer is make sure that the enabled attribute of the <trace> XML element in your web.config file is set to true.

Using the Trace Viewer

The Trace Viewer uses a fairly simple interface, consisting of two different pages. When you first navigate to the trace.axd file, you are presented with the Application Trace screen. It contains a list of page requests for which trace information has been tracked. The fields displayed for each page request on the Application Trace screen are No., Time of Request, File, Status Code, and Verb. In addition, a link next to each item in the list shows the details for that specific page request. Figure 6.12 shows an example of the Application Trace screen of the Trace Viewer.

Figure 6.12. Application Trace page of the Trace Viewer.

Three items are present in the header of this page. The first is a link to clear the current trace log. Clicking this link resets tracing, clearing all page requests from the screen. The second item in the header is the physical directory of the ASP.NET web application. The third header item is a counter that tells you how many more requests can be tracked before the requestLimit is reached. After that point, trace information is not stored for any more page requests until the trace information is cleared by clicking the Clear Current Trace link.
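The requestLimit behavior just described (collect page requests until the limit is hit, drop new ones, resume after a clear) can be modeled in a few lines. This is an illustrative model written in Java for portability, not the actual TraceHandler implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the trace log's requestLimit semantics: entries are
// kept until the limit is reached, then new requests are not stored until
// the log is cleared (the Clear Current Trace link).
public class TraceLogModel {
    private final int requestLimit;
    private final List<String> requests = new ArrayList<>();

    public TraceLogModel(int requestLimit) { this.requestLimit = requestLimit; }

    public boolean track(String url) {
        if (requests.size() >= requestLimit) return false; // log full, request dropped
        requests.add(url);
        return true;
    }

    public int count() { return requests.size(); }

    public void clear() { requests.clear(); } // tracking resumes after a clear

    public static void main(String[] args) {
        TraceLogModel log = new TraceLogModel(10);
        for (int i = 0; i < 12; i++) log.track("/page" + i + ".aspx");
        System.out.println(log.count()); // prints 10
    }
}
```

Once track starts returning false, nothing new is recorded, which is exactly why the header counter on the Application Trace screen matters: it tells you how close you are to that cutoff.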
When you click one of the View Details links on the Application Trace screen, you are taken to the Request Details screen. On this page you will see an exact representation of the trace information that would be displayed at the end of the particular ASP.NET page if tracing had been enabled on it. Several examples of this screen have been shown in previous figures in this chapter, so there is no need to present it again. The only difference is the large Request Details caption at the top of the page.

Tracing via Components

The Page object in your ASP.NET pages contains an instance of the TraceContext class, making it easy to write trace information from your ASP.NET pages. But what if you want to write trace information from within a component? Luckily, the .NET Framework makes this task equally easy. Let's take a look at how this would be done. First, you need to build your simple component. Listings 6.11 and 6.12 present the component.

Listing 6.11 Component That Leverages ASP.NET Tracing (C#)

using System;
using System.Web;

namespace Chapter6
{
    public class TestClass
    {
        public void DoStuff()
        {
            HttpContext.Current.Trace.Write("Component", "I'm inside the component");
        }
    }
}

Listing 6.12 Component That Leverages ASP.NET Tracing (Visual Basic .NET)

Imports System
Imports System.Web

Namespace Chapter6
    Public Class TestClass
        Public Sub DoStuff()
            HttpContext.Current.Trace.Write _
                ("Component", "I'm inside the component")
        End Sub
    End Class
End Namespace
Next, compile your component using one of the following compile scripts. The first is for C#, and the second is for Visual Basic .NET.

csc /t:library /out:Chapter6.dll /r:System.Web.dll Chapter6.cs

or

vbc /t:library /out:Chapter6.dll /r:System.Web.dll Chapter6.vb

Finally, you can see that this works by using it in an ASP.NET page.

Listing 6.13 Using Trace-Enabled Component in an ASP.NET Page (C#)

<%@ Page %>
<script language="C#" runat="server">
protected void Page_Load(object Sender, EventArgs e)
{
    TestClass tc = new TestClass();
    tc.DoStuff();
}
</script>

Listing 6.14 Using Trace-Enabled Component in an ASP.NET Page (Visual Basic .NET)

<%@ Page %>
<script runat="server">
Protected Sub Page_Load(Sender As Object, e As EventArgs)
    Dim tc As TestClass = New TestClass()
    tc.DoStuff()
End Sub
</script>

When you run this code, you'll get results like those shown in Figure 6.13.

Figure 6.13. Viewing trace information written to the trace output from within a component.

Inside the Trace Information section, you'll see the trace message "I'm inside the component," with a category of Component, that was added from within the component.

Tips for Using Trace Information

Now that you have all this trace information sitting in front of you, how do you use it to your best advantage? Well, that really depends. Most of the trace information presented (such as cookies, headers, and server variables) was available to you in traditional ASP. It just wasn't neatly packaged like it is in the Trace Viewer. You can use that information just as you previously did.

The true power of ASP.NET tracing is in the Trace Information section. It enables you to see when each part of your ASP.NET page is processing and determine how long it takes to process. This can be crucial to the process of finding performance bottlenecks in your code. It can also help you solve mysteries about why certain code is not processing correctly. Often, the code isn't being executed in the same order that you thought it was. Or, maybe the code is being executed multiple times by accident. These nuances, which were tough to discover in traditional ASP, become fairly obvious when observing the contents of the Trace Information section of the trace output.

Application-level tracing, if used properly, can greatly reduce the amount of time and effort expended on debugging your ASP.NET web applications. For instance, you could turn on tracing but set the pageOutput attribute of the <trace> XML element in the web.config file to false. Then you could let some of the potential users of your web application try it out, all behind the scenes. You can record lots of information about what they are doing and what is going wrong with their experience. This can help you to determine which particular scenarios cause errors. This can be a powerful tool for finding bugs in your ASP.NET web applications.

Summary

In this chapter, you took a detailed look at tracing in ASP.NET. You started by learning how to configure tracing in ASP.NET, at both the page level and the application level. This entailed adding attributes to the @Page directive and to the web.config file. Next, you learned about the different sections that are included in the trace output at the page level. These include the Request Details, Trace Information, Control Tree, Cookies Collection, Headers Collection, Form Collection, QueryString Collection, and Server Variables Collection sections.

Following that, we discussed the primary player in the ASP.NET tracing process: the TraceContext class. We deferred the discussion of the constructor to the section on tracing via components, later in the chapter. The TraceContext class's two properties, IsEnabled and TraceMode, were discussed next, and you learned how they can be used to control the trace output of your ASP.NET pages. The TraceContext class's two methods, Write and Warn, were covered as well. Each of the three overloaded Write methods was explained and was correlated to the similar Warn method, which differs only in name and output appearance.

Then you learned how to both configure and use the Trace Viewer, as well as how it is accessed via an HttpHandler that intercepts requests for the trace.axd file. Tracing in an ASP.NET web application is not just limited to ASP.NET pages. We also discussed how to leverage ASP.NET tracing from within components that your ASP.NET pages call. The chapter wrapped up with a few tips and techniques for utilizing ASP.NET tracing to its fullest potential.

In the next chapter, you'll get a thorough introduction to the debugger in the Visual Studio .NET IDE.
Chapter 7. Visual Studio .NET Debugging Environment

THE ASP SCRIPT DEBUGGER THAT IS INTEGRATED with the new Visual Studio .NET IDE is, without a doubt, one of the greatest enhancements to debugging from the previous versions of ASP. Unlike trying to debug traditional ASP pages in Visual InterDev, this actually works right out of the box! In this chapter, you will be looking at all the features available for debugging in the Visual Studio .NET IDE, how to use them, and where each one is applicable for the problem you might be trying to conquer. This will all be accomplished by building a project from scratch in the IDE, so we recommend creating the project on your own as it is done in this chapter.

Introduction to Features

Let's start out by taking a look at the most important features of the Visual Studio .NET IDE debugger. Many of these features have existed in the Visual Studio 6.0 IDEs; not all, however, were previously available when debugging traditional ASP pages in Visual InterDev. You will take a detailed look at all of these as the chapter progresses.

Call Stack

The call stack enables you to display the list of functions currently called. (As functions are called from within functions, which in turn are called from within functions, a stack is created. This can be imagined simply as a stack.) With the call stack viewer, you can look at this stack and jump forward and backward into the stack to debug at any point in the chain. Figure 7.1 shows the call stack in action.

Figure 7.1. The call stack window.

Attaching to Processes

At some point you might need to attach to a process running somewhere on the computer and debug it from within the ASP.NET page. This could be the ASP.NET process, aspnet_wp.exe, or another running service, or any other program running on the server. This can be accomplished quite easily. Under the Debug menu you will find the Processes selection. Clicking this brings up the dialog box in Figure 7.6.

Figure 7.6. Attaching to a running process.

By default, you will see only the processes that you have started in some way. If you click the Show System Processes check box, you have access to everything that is running on the current machine. Click the process to be debugged, and then click the Attach button.
Now, if that process hits a breakpoint or any other type of debugger event, that event pops up in the Visual Studio .NET IDE. You can also force the program to break by clicking the Break button at the bottom of the window. At any time, you can stop debugging the process by clicking the Detach button. Terminate kills the process altogether.

After clicking the Attach button, you are given a choice of what type of debugging you want to do on the process you've selected. Figure 7.7 shows an example of this dialog box.

Figure 7.7. Choosing the debug type.
If you are attaching to a .NET process written using the Common Language Runtime (CLR), choose the Common Language Runtime option from the dialog box. If you'll be breaking into a process that uses the Microsoft Transact-SQL language, check Microsoft T-SQL as your debug type. Native enables you to debug a standard Win32 application; this lets you debug a Win32 program at an assembly language level. Finally, the Script type gives you the capability to debug standard Visual Basic Script and JavaScript. This is especially useful for debugging an instance of Internet Explorer.

Setting It All Up

There really isn't a whole lot to mention here. It is extremely simple to use the Visual Studio .NET IDE for debugging ASP.NET pages. In most cases, it will be a plug-and-play affair. The default debug build of any ASP.NET project will have everything set up for you to begin. However, even though that probably will be the case, we will discuss what is absolutely required for the debugging to work, just in case something goes wrong.

Take a look at the web.config file contained in your project. This is an XML file that contains specific configuration information for your ASP.NET project. One line in this file will look similar to the following:

<compilation defaultLanguage="vb" debug="true" />

The defaultLanguage parameter will be based on the default language of your ASP.NET project. But what we are concerned about here is the debug parameter. If you are running in a debugging environment and want to be able to access the spiffy features of the Visual Studio .NET IDE, this debug parameter must be set to true, as it is in the previous line. If it is set to false, none of the features will work. This is what you want for a release build of your project.

Inline Debugging of ASP.NET Pages

This is so very easy; you're going to be extremely happy about this. If you've ever debugged a program using Visual Studio 6.0 (Visual Basic, Visual C++, and so on), you will feel right at home with what you are about to learn about Visual Studio .NET. Let's discuss these great features using a sample project. As usual, both Visual Basic .NET and C# versions of the code will be provided for you to see. This sample project will consist of an ASP.NET page and a Visual Basic .NET/C# component so that you can see how easily the two interact and how they can be debugged simultaneously. The project itself will simply ask the user for a valid email address and then send a form letter to that address.

Start out by creating the project. We called ours Chap5VB for the Visual Basic .NET version and Chap5CS for the C# version. The first thing to do is create the ASP.NET page that the user will see. Listing 7.1 contains the main ASP.NET page that contains the input form on which the user can enter the email address where the mail will be sent. Here the page name is left as WebForm1, the default name provided when the project was created.
Listing 7.1 ASP.NET Page for Debugging Example

<%@ Page %>
<html>
<body>
    <form id="Form1" method="post" runat="server">
        Please enter the email address to send to:
        <br>
        <input type="text" id="txtEmail" runat="server" NAME="txtEmail">
        <br>
        <input type="submit" id="btnSubmit" value="Send Email" runat="server" NAME="btnSubmit">
    </form>
</body>
</html>

This page would work in a Visual Basic .NET project as-is. To have it work in a C# project, change the first line to the following:

<%@ Page language="c#" Codebehind="WebForm1.aspx.cs" AutoEventWireup="false" Inherits="Chap7CS.WebForm1" %>

This is a very simple page. It consists of two elements: a text box for the email address and a Submit button to send the form to the server. Now you will create the server-side code for this project. It will be contained in two parts: First, you will look at the code-behind file that is associated with this page. Here you will verify that you have a valid email address. Second, you will create a Visual Basic .NET/C# component that will actually send the email. Listing 7.2 contains the code-behind file for the C# project, and Listing 7.3 contains the code-behind file for the Visual Basic .NET project.

Listing 7.2 Listing for Debugging Example (C#)

using System;

namespace Chap7CS
{
    public class WebForm1 : System.Web.UI.Page
    {
        protected System.Web.UI.HtmlControls.HtmlInputText txtEmail;
        protected System.Web.UI.HtmlControls.HtmlInputButton btnSubmit;

        public WebForm1()
        {
            Page.Init += new System.EventHandler(Page_Init);
        }

        private void Page_Init(object sender, EventArgs e)
        {
            InitializeComponent();
        }

        private void InitializeComponent()
        {
            this.btnSubmit.ServerClick += new System.EventHandler(this.btnSubmit_ServerClick);
        }

        private void btnSubmit_ServerClick(object sender, EventArgs e)
        {
            if(txtEmail.Value.IndexOf("@") == -1 || txtEmail.Value.IndexOf(".") == -1)
                Response.Write("The supplied email address is not valid.");
        }
    }
}

Listing 7.3 Listing for Debugging Example (Visual Basic .NET)

Public Class WebForm1
    Inherits System.Web.UI.Page

    Protected WithEvents txtEmail As System.Web.UI.HtmlControls.HtmlInputText
    Protected WithEvents btnSubmit As System.Web.UI.HtmlControls.HtmlInputButton

    Private Sub btnSubmit_ServerClick(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles btnSubmit.ServerClick
        If txtEmail.Value.IndexOf("@") = -1 Or _
                txtEmail.Value.IndexOf(".") = -1 Then
            Response.Write("The supplied email address is not valid.")
        End If
    End Sub
End Class
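Listing 7.2 wires btnSubmit_ServerClick to the button's ServerClick event through a delegate in InitializeComponent. The same publish/subscribe shape can be sketched with a listener interface. This sketch is written in Java so it can run standalone (Java has no direct delegate equivalent), and every name in it is ours, not the book's:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the delegate wiring in Listing 7.2: handlers subscribe
// to a "ServerClick"-style event and run when the event fires.
public class EventWiringDemo {
    interface ClickHandler { void onClick(String value); }

    static class Button {
        private final List<ClickHandler> handlers = new ArrayList<>();
        void addServerClick(ClickHandler h) { handlers.add(h); } // like += in C#
        void fire(String value) { for (ClickHandler h : handlers) h.onClick(value); }
    }

    static String lastMessage = "";

    public static void main(String[] args) {
        Button btnSubmit = new Button();
        // Equivalent in spirit to:
        // btnSubmit.ServerClick += new EventHandler(btnSubmit_ServerClick);
        btnSubmit.addServerClick(v -> {
            boolean valid = v.indexOf("@") != -1 && v.indexOf(".") != -1;
            lastMessage = valid ? "ok" : "The supplied email address is not valid.";
        });
        btnSubmit.fire("gibberish");
        System.out.println(lastMessage); // prints the "not valid" message
    }
}
```

The handler body mirrors the naive two-IndexOf check from the listings; real email validation needs far more than this.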
Click the one labeled Watch 1.NET made by dotneter@teamfly breakpoint has been set at that specific line.NET IDE should pop up with the breakpoint line highlighted in yellow.You will look at the watch window next. Figure 7. Figure 7.New Riders .Debugging ASP.Now click the Submit button. enter some gibberish into the text box that does not contain either an @ symbol or a . the Visual Studio . Figure 7. Figure 7. When this occurs. Setting the breakpoint in the example program.9. you will see a debugging window with some tabs below it. . Now go ahead and run the program. Watch Window At the bottom of your screen.9 shows the menu entry. When Internet Explorer appears.8. The execution of your program has paused. and now you can use some of the other debugging features. For example. if an error is supposed to occur if a value equals –1.Debugging ASP.Value as your watch name. This can be extremely useful when debugging all types of controls to see what values they contain.New Riders . That’s about it for the watch window.You will see that you can expand this entry into all the control’s properties to see what they each contain. control name. you might want to see only the Value property—in that case. It is always useful to start here when debugging almost any problem to make sure that the data isn’t to blame. With the txtEmail control expanded. you can see that the Value property contains whatever you entered in the control on the client side. This is a feature that you will use quite a bit in your debugging. you might be chasing after a problem only to find that the Value property is empty for some reason or that it does not contain the data that you think it does. This might be helpful if you want to test a certain case in your logic that might be difficult to hit. type in txtEmail. you could enter txtEmail. For now. you can enter a new value as you see fit.NET made by dotneter@teamfly The watch window enables you to type in a specific variable name. 
The watch window enables you to type in a specific variable name, control name, or other object and then see what value it contains and what type it is. For now, type in txtEmail. You will see that you can expand this entry into all the control's properties to see what they each contain. With the txtEmail control expanded, you can see that the Value property contains whatever you entered in the control on the client side. This can be extremely useful when debugging all types of controls to see what values they contain. It is always useful to start here when debugging almost any problem, to make sure that the data isn't to blame. For example, you might be chasing after a problem only to find that the Value property is empty for some reason or that it does not contain the data that you think it does. To save space, you might want to see only the Value property; in that case, you could enter txtEmail.Value as your watch name.

Also note that you can change the values of any of these properties at any time. If you click the property value in the Value column in the watch window, you can enter a new value as you see fit. The other thing that you can do with the watch window is change the value of a variable. This might be helpful if you want to test a certain case in your logic that might be difficult to hit. For example, if an error is supposed to occur if a value equals -1, at this point you could change the value to -1 and continue execution to make sure that the code path is operating properly.

That's about it for the watch window. This is a feature that you will use quite a bit in your debugging. Remember that you can enter any variable or any object into the window and view any or all of its specific properties.
The Command Window

If you have used Visual Basic, this will be a familiar sight. The command window was called the immediate window in Visual Basic, but its features are identical. This window enables you to issue commands to debug or evaluate expressions on the fly. To display the window, either click the Command Window tab located at the bottom of your screen, or choose Windows, Immediate from the Debug menu at the top of your screen. Figure 7.10 shows where you can find the option under the Debug menu.

Figure 7.10. The immediate window option under the Debug menu.

Similar to the watch window, you can change the value of variables or object properties. To view the contents of a variable or object, just type its name into the command window. For example, typing txtEmail.Value while at the breakpoint displays the contents of the text box upon submission to the server. To change the value of the form text box, you could enter txtEmail.Value = "newvalue", which would set the string to "newvalue".

What makes the command window a bit more exciting than the watch window is its capability to execute functions. For example, if you want to execute the ValidateEmail function in your code listing at any time, you could do it right from the command window: Just click in the window and call the function with the appropriate parameter. For example, if you type ValidateEmail("test@myhost.com"), you will see that it returns 0. If you type ValidateEmail("asdfasdf"), it returns -1.
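The body of ValidateEmail is never shown in the text; all we know is that it returns 0 for test@myhost.com and -1 for asdfasdf. One hypothetical implementation consistent with those quoted values (sketched in Java so it runs standalone; the book's actual function may differ) would be:

```java
public class ValidateEmailSketch {
    // Hypothetical reconstruction: 0 means the address passed the naive check,
    // -1 means it failed, matching the return values quoted in the text.
    static int validateEmail(String address) {
        if (address == null) return -1;
        if (address.indexOf("@") == -1 || address.indexOf(".") == -1) return -1;
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(validateEmail("test@myhost.com")); // prints 0
        System.out.println(validateEmail("asdfasdf"));        // prints -1
    }
}
```

Whatever its real body is, the point of the command window demonstration stands: any function in scope can be invoked interactively with arguments of your choosing, without the running program calling it.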
let’s take a brief journey into the execution control features of the debugger. All are also associated with keyboard shortcuts that vary depending on how you have configured Visual Studio .New Riders . Tracing can be used to print text statements during the execution of your code. This call spits out the value of the text box to the debug stream. using Step Over at this point does just that—it steps over execution of the function to the very next line of the function that you are currently executing. as well as anything else that gets written to the debug stream by any other components that might be associated with the running project. We recommend learning the keyboard shortcuts and using them while debugging your own code. using Step Into continues execution at the first line of the called function. This is quite useful when you know that a function or block of code is working correctly and you do not want to spend the time hitting every single line. and your debugging cursor moves to the next line in the previous function you were in.Debugging ASP. Now you have called a function from a function. After you have traced back to the call stack. The top two levels should show you the ValidateEmail function. You will also see quite a few other functions that are called by the ASP. This feature can be of use when you are debugging code that might not be all your own. you might need to know where you came from.NET made by dotneter@teamfly that this does not mean that it will not execute the function—it just will not allow you to dig into it. If you are many levels deep into a function stack. Now go ahead and double-click the btnSubmit_ServerClick function. Call Stack The call stack shows the current function you are in and all functions that preceded it. Again. This can be handy when you want to find where certain data values are coming from if they are wrong. you can trace back to the calling stack and see exactly who called you and with what information. 
Take a look at the call stack window. So how do you know which function you were previously in? That leads to the next debugging feature. you can use the watch window or the command window to inspect the local variables from the previous functions. followed by the btnSubmit_ServerClick function.NET system processes. . the call stack. In this case. A green highlight appears over the point in the function that you currently are in. the call to ValidateEmail is highlighted because this is the exact position that you are currently at in that function. this does not skip the execution of the remaining lines of code. When you call a function from a function from a function from a function. similar to the Step Over feature. they just execute behind your back. You can view the call stack window by clicking the Call Stack tab at the bottom of your screen or by choosing Call Stack from the Debug menu under Windows. Continuing the previous example. stop execution again on the btnSubmit_ServerClick function and then trace into the ValidateEmail function. you have a call stack that is four levels deep. It executes the function and moves on to the next line. By using this. Step Out enables you to jump out of the current function that you are debugging and go one level up.New Riders . with the current function on the top. you will remember that it can be a pain to deal with. Adding the Component You will now add the component to the ASP.New Riders .Debugging ASP.NET project. . Just right-click your mouse on the project name (Chap5VB or Chap5CS.NET application.NET component to your project and debug that simultaneously with your ASP.11.NET code.NET made by dotneter@teamfly Feature Summary That about wraps up the baseline features of the Visual Studio .NET IDE makes this process remarkably simpler.You can add the component to your ASP. Let’s look at how this is done. To do this. The new Visual Studio . Next you will look at how to add a C# or Visual Basic . Figure 7. 
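The ValidateEmail function exercised from the command window above is defined earlier in the chapter and is not shown in this excerpt. A minimal hypothetical sketch, consistent with the return values the text reports (0, Visual Basic's numeric False, for an invalid address; -1, numeric True, for a valid one), might look like this:

```csharp
// Hypothetical sketch only; the book's actual ValidateEmail is not shown
// in this excerpt. Returns 0 (VB False) when the address lacks "@" or ".",
// and -1 (VB True) otherwise, matching the values reported in the text.
public static class EmailCheck
{
    public static int ValidateEmail(string addr)
    {
        if (addr.IndexOf("@") == -1 || addr.IndexOf(".") == -1)
        {
            return 0;  // invalid
        }
        return -1;     // valid
    }
}
```

A function shaped like this is exactly the kind of thing that is convenient to poke at from the command window, because it is a pure function of its input.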
Feature Summary

That about wraps up the baseline features of the Visual Studio .NET IDE. Next you will look at how to add a C# or Visual Basic .NET component to your project and debug it simultaneously with your ASP.NET code.

Inline Debugging of Components

If you tried debugging Visual Basic components within an ASP page in the previous version of Visual Studio, you will remember that it can be a pain to deal with. You need the Visual InterDev IDE open for debugging the ASP page, and you need the separate Visual Basic IDE open to debug the components at the same time. The new Visual Studio .NET IDE makes this process remarkably simpler. You can add the component to your ASP.NET project, and debugging of that component can be done within the same IDE, in sequence with your ASP.NET pages. The process is extremely streamlined and quite seamless, as you will soon see. Let's look at how this is done.

Adding the Component

You will now add a component to the previous project that actually sends out the email to the address provided. To do this, just right-click the project name (Chap5VB or Chap5CS, if you've named them what we called them) and choose Add Component under the Add submenu. Figure 7.11 shows the menu option to choose after right-clicking the project name.

Figure 7.11. Adding a new component to the project.

In the dialog box that appears, choose either a Visual Basic .NET component class or a C# component class, depending on which type of project you are currently doing. Name the component Emailer, for lack of a better name.

Now that you have added the component, it needs some code. Listing 7.4 is the C# version of the emailer, and Listing 7.5 is the Visual Basic .NET version. Reference whichever one is applicable for your project.

Listing 7.4 Code for Emailer Component (C#)

    using System.ComponentModel;
    using System.Web.Mail;

    namespace Chap7CS
    {
        public class Emailer : System.ComponentModel.Component
        {
            private System.ComponentModel.Container components = null;

            public Emailer(System.ComponentModel.IContainer container)
            {
                container.Add(this);
                InitializeComponent();
            }

            public Emailer()
            {
                InitializeComponent();
            }

            private void InitializeComponent()
            {
                components = new System.ComponentModel.Container();
            }

            public void SendFormEmail(string toAddr)
            {
                MailMessage mm = new MailMessage();
                mm.From = "admin@domain.com";
                mm.To = toAddr;
                mm.Subject = "Chapter 7 Test Message";
                mm.Body = "This is a test message. Exciting, isn't it?";
                SmtpMail.SmtpServer = "smtp.domain.com";
                SmtpMail.Send(mm);
            }
        }
    }

Listing 7.5 Code for Emailer Component (Visual Basic .NET)

    Imports System.Web.Mail

    Public Class Emailer
        Inherits System.ComponentModel.Component

        Public Sub New(ByVal Container As System.ComponentModel.IContainer)
            MyClass.New()
            Container.Add(Me)
        End Sub

        Public Sub New()
            MyBase.New()
            InitializeComponent()
        End Sub

        Private components As System.ComponentModel.Container

        <System.Diagnostics.DebuggerStepThrough()> _
        Private Sub InitializeComponent()
            components = New System.ComponentModel.Container()
        End Sub

        Public Sub SendFormEmail(ByVal toAddr As String)
            Dim mm As MailMessage = New MailMessage()
            mm.From = "admin@domain.com"
            mm.To = toAddr
            mm.Subject = "Chapter 7 Test Message"
            mm.Body = "This is a test message. Exciting, isn't it?"
            SmtpMail.SmtpServer = "smtp.domain.com"
            SmtpMail.Send(mm)
        End Sub
    End Class

The code here is pretty simple. Each version contains a function called SendFormEmail that takes the email address to send to as a parameter. It uses the MailMessage and SmtpMail objects from the System.Web.Mail assembly to form the email message and send it out using a valid SMTP server. To get this to work in your environment, be sure to replace the SmtpMail.SmtpServer value with the SMTP server of your local network.

You will need to modify your btnSubmit_ServerClick function to create an instance of this component and call the SendFormEmail method to make it happen. Listing 7.6 gives the code for the modified btnSubmit_ServerClick in C#, and Listing 7.7 gives the same code in Visual Basic .NET.

Listing 7.6 Modified Code for btnSubmit_ServerClick (C#)

    private void btnSubmit_ServerClick(object sender, System.EventArgs e)
    {
        Emailer em = new Emailer();
        Debug.WriteLine("User entered: " + txtEmail.Value);
        if (ValidateEmail(txtEmail.Value) == -1)
        {
            em.SendFormEmail(txtEmail.Value);
            Response.Write("The email was sent successfully.");
        }
        else
        {
            Response.Write("The supplied email address is not valid.");
        }
    }

Listing 7.7 Modified Code for btnSubmit_ServerClick (Visual Basic .NET)

    Private Sub btnSubmit_ServerClick(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles btnSubmit.ServerClick
        Dim em As Emailer = New Emailer()
        If txtEmail.Value.IndexOf("@") = -1 Or _
                txtEmail.Value.IndexOf(".") = -1 Then
            Response.Write("The supplied email address is not valid.")
        Else
            em.SendFormEmail(txtEmail.Value)
            Response.Write("The email was sent successfully.")
        End If
    End Sub

Debugging the Component

Now we get to the cool part. You can debug this component while you debug the ASP.NET page and its respective code-behind file. To prove this, set a breakpoint on btnSubmit_ServerClick in the code-behind file and then start the program. When Internet Explorer appears, enter a valid email address in the appropriate box and click the Submit button. The breakpoint on the btnSubmit_ServerClick function should fire, and the program is paused on that line. Now step to the point where the current function is about to call the Emailer.SendFormEmail function. At this position, do a Step Into.
You will see that the source code to the Emailer component appears, with the code pointer at the top of the SendFormEmail function. From here, you can use all the techniques mentioned earlier to inspect variables, modify variable values, set trace statements, and so on. It couldn't be easier! Say goodbye to multiple programs being open simultaneously and other configuration issues that make your life difficult.

Remote Debugging

Every time we have a discussion with someone regarding debugging ASP pages, we ask if that person has ever tried to set up ASP debugging on the local machine. The response usually is "yes." We then follow up by asking if that person has ever gotten it to work. The number who answer "yes" to that question is much lower. Finally, we ask if that person has ever gotten ASP debugging to work remotely. We have yet to find someone who has gotten it to work properly and consistently.

With ASP.NET and Visual Studio .NET, that all changes. It finally works. And it couldn't possibly be easier to install, configure, and use.

Installation

When you install Visual Studio .NET on your server,
all you need to do is install both Remote Debugging options listed under the Server Components option during the install procedure.

Setup

To configure remote debugging, the only thing you need to do is place your user account into the newly created Debugger Users group on both the client machine and the server machine. This can be done using the standard user configuration tools that are part of whichever Windows operating system you are using. And that's it! It's almost completely automatic.

Using It

This is the easiest part of all. To use the new remote debugging features, simply create a project on your client computer that points to the project running on the server. If you are not connecting to an existing project, you can create a brand-new project on the server. This can be done by choosing Open Project From Web from the File menu. Just type in the name of the server where the project resides, and you will be presented with a list of projects currently residing on that server. Choose the appropriate one, set a breakpoint on the line where you want to stop, and start the application running, as usual. It will connect to the server and bring up an Internet Explorer window.

The big difference here is that the application is running entirely on the server, in the server's memory space. When you hit your breakpoint, you are hitting it on the server. The same goes for components and anything else that you might be debugging. Everything that can be debugged in Visual Studio .NET locally can now be debugged remotely on any server where the remote debugging options have been installed. I wish it was this easy in Visual Studio 6.0; it's a huge time saver and a powerful tool.

Summary

In this chapter, you looked at many of the debugging features found in the new Visual Studio .NET IDE. You should now be familiar with things such as the watch
window, the command window, breakpoints, variable inspection, and variable modification, as applied both to debugging ASP.NET pages and Visual Basic .NET and C# components. With these concepts in mind, you are prepared to start debugging your own projects, and you are prepared for what is explained in the remainder of this book. In the next chapter, we discuss how you can use the Windows NT and Windows 2000 Event Log to aid in tracking down troublesome code in your projects.

Chapter 8. Leveraging the Windows 2000 Event Log

WHILE YOU ARE DEVELOPING A WEB APPLICATION, if something goes wrong and you receive a page error, you get feedback right on the screen. You can then use that information to track down the source of the error and fix it. After you put your web application into production, however, you are not always there when a problem occurs. Without a way to track these errors, they could go unnoticed for days or even weeks. A great solution to this problem is to leverage the Windows 2000 Event Log.

This chapter explains what the Windows 2000 Event Log is and how to implement it in your web applications. We'll also define expected and unexpected events and tell how to handle both types. The chapter concludes with an exercise in building a web-based event log viewer.

The Windows 2000 Event Log Defined

The Windows 2000 Event Log is the system by which Windows 2000 tracks events that happen on the server. The event log tracks three types of events: security events, system events, and application events. The application event log is designed for notifications concerning non-system-level applications on the server. Most often, these are custom applications from third-party vendors. The last category, application events, is what this chapter focuses on.
Web Applications Can Use the Event Log

Web applications can also leverage the application event log. Previous versions of ASP did not offer an easy way to write information to the application event log; you had to build a Visual Basic component to enable this functionality. This, coupled with the fact that there was no structured error handling in traditional ASP applications, meant that you had to either use the On Error Resume Next statement or direct all of your page errors to a centralized error page for processing. Not very elegant at all.

ASP.NET offers full support for manipulating the Windows 2000 Event Log. The structured error handling that Microsoft's .NET Framework provides is ideal for capturing and logging application events. Global events can also be used to capture error information if errors occur in your web application.

The System.Diagnostics Event Log Interface

The event log interface in the System.Diagnostics namespace is quite easy to manipulate and use. The EventLog object gives you a great deal of power and control over the Windows 2000 Event Log. This includes creating custom application logs that are specific to your web application, which helps you stay organized if you are hosting multiple web applications on the same server. It also enables you to control event logs on other machines on your network (discussed next). This namespace is very feature-complete, and it is the one discussed and used in all the code examples in this chapter.

To write a message to the Windows 2000 Event Log, simply call the static WriteEntry method of the EventLog class, as illustrated in Listings 8.1 and 8.2.

Listing 8.1 Writing to the Event Log Using the EventLog Object in System.Diagnostics (C#)

    <%@ Page Language="C#" %>
    <%@ Import Namespace="System.Diagnostics" %>
    <%
    EventLog.WriteEntry("EventTest", "I'm a little teapot, short and stout.");
    Response.Write("Done!");
    %>

Listing 8.2 Writing to the Event Log Using the EventLog Object in System.Diagnostics (Visual Basic .NET)

    <%@ Page Language="VB" %>
    <%@ Import Namespace="System.Diagnostics" %>
    <%
    EventLog.WriteEntry("EventTest", _
        "I'm a little teapot, short and stout.")
    Response.Write("Done!")
    %>

Notice that you can specify a source (EventTest) for the event; it appears in the Source field of the Windows 2000 Event Log Viewer. Figure 8.1 shows how the event that you just logged would look when viewed in the Windows 2000 Event Log Viewer. Double-clicking an event in the viewer displays its details in a property page dialog box, similar to Figure 8.1, with your new message in the Description box.

Custom Event Logs

One of the neat things that the EventLog object enables you to do is create custom event logs and write to them on the fly. To do this, you must first check whether the log that you want to write to already exists or whether you need to create it. As you'll see in Listings 8.3 and 8.4, only a few extra lines of code are needed.

Listing 8.3 Creating a Custom Event Log and Logging an Entry in It (C#)

    <%@ Page Language="C#" %>
    <%@ Import Namespace="System.Diagnostics" %>
    <%
    EventLog el = new EventLog();
    el.MachineName = "."; //local computer
    el.Source = "Test Source";
    el.Log = "Test Log";
    if (!EventLog.SourceExists(el.Source))
    {
        //Event source doesn't exist, so create a new one
        EventLog.CreateEventSource(el.Source, el.Log);
    }
    el.WriteEntry("Just look at me now!", EventLogEntryType.Information, 12);
    el.Close();
    Response.Write("Done!");
    %>

Listing 8.4 Creating a Custom Event Log and Logging an Entry in It (Visual Basic .NET)

    <%@ Page Language="VB" %>
    <%@ Import Namespace="System.Diagnostics" %>
    <%
    Dim el As EventLog = New EventLog()
    el.MachineName = "." 'local computer
    el.Source = "Test Source"
    el.Log = "Test Log"
    If Not EventLog.SourceExists(el.Source) Then
        'Event source doesn't exist, so create a new one
        EventLog.CreateEventSource(el.Source, el.Log)
    End If
    el.WriteEntry("Just look at me now!", _
        EventLogEntryType.Information, 12)
    el.Close()
    Response.Write("Done!")
    %>

You should note a few interesting things about the code just shown. First, contrary to the previous examples, you actually create an instance of the EventLog object here rather than using static methods of the EventLog class. You set a few properties of the EventLog instance: the machine name, the source, and the log to write to. You then feed the Source property of your EventLog instance into the static SourceExists method of the EventLog class. If a Boolean false value is returned from the method (meaning that the source does not exist in the Windows 2000 Event Log), you call the CreateEventSource static method to create it. You then call the WriteEntry method of your EventLog instance to write an entry to your custom event log.
Handling Different Types Of Events Generally.5 Logging Expected Events with Logic Structures (C#) <%@ Page //build a generic function for logging.You can also see the Test Source value in the Source field and the value 12 in the Event field. Generally. Expected Events Expected events are things that are not completely out of the ordinary. EventLogEntryType eventType. Listing 8. to prevent redundant code void LogStuff(string message.". .Source = "Test Source".Debugging ASP.5 and 8. Items can be logged to the proper event log (or not logged at all). If you double-clicked on this event.Log = "Test Log".MachineName = ". el.1 that shows your new message in the Description box. you log entries to the Windows 2000 Event Log when something significant happens in your web application. Source)) { //Event source doesn't exist. ToString().Write("Done!"). ToString().SourceExists(el. ToString(). } else { LogStuff("The value is odd: " + second. 1). 2). depending on the value if(second % 2 == 0) { LogStuff("The value is even: " + second.Now. EventLogEntryType. so create a new one EventLog. 3). to prevent redundant code Sub LogStuff(message As String.6 Logging Expected Events with Logic Structures (Visual Basic .WriteEntry(message. } </script> <% //grab the "second" portion of the time int second = DateTime. } Response.eventID).Source. eventType As EventLogEntryType.CreateEventSource(el. //write to a different log. %> Listing 8.el.NET made by dotneter@teamfly if (!EventLog.Warning. el.eventType.Information.Debugging ASP. EventLogEntryType.Log).NET) <%@ Page 'build a generic function for logging. EventLogEntryType.Information. _ eventID as Short) .Second. } else if(second == 7) { LogStuff("Beware of superstitions: " + second.New Riders . } el. " "local computer el.eventID) el. In your own web applications.Second 'write to a different log. _ EventLogEntryType.Source = "Test Source" el.SourceExists(el.Log) End If el. ToString(). 
1) ElseIf second = 7 Then LogStuff("Beware of superstitions: " & second. Because several different types of log entries can be made on the page. You can use the sorting capabilities of the Windows 2000 . Notice that each call to LogStuff feeds in a different value for the event ID (the last parameter). depending on the value If second Mod 2 = 0 Then LogStuff("The value is even: " & second.NET made by dotneter@teamfly Dim el As EventLog = New EventLog() el. The example itself is not very complex. You retrieve the “second” portion of the current time and write an entry to the custom event log based on the value that you obtain. 3) End If Response.Now.Information.Log = "Test Log" If Not EventLog. _ EventLogEntryType.Write("Done!") %> The example starts by defining a LogStuff function.CreateEventSource(el.el.Source.Source) Then ‘Event source doesn't exist.Warning. so create a new one EventLog.Debugging ASP.Close() End Sub </script> <% 'grab the "second" portion of the time Dim second As Integer = DateTime.eventType. ToString(). a utility function will suffice.WriteEntry(message. _ EventLogEntryType. you might also want to encapsulate this logic into a lightweight utility component. it makes sense to consolidate the logic into a utility function.New Riders . For now. ToString(). 2) Else LogStuff("The value is odd: " & second.MachineName = ".Information. short eventID) { //code truncated . Also. 1).NET made by dotneter@teamfly Event Log Viewer to group similar events for analysis. to prevent redundant code void LogStuff(string message. Structured error handling also presents the perfect place to log these errors to the Windows 2000 Event Log.7 } </script> <% int value1 = 10.8 will help clarify.7 and 8. EventLogEntryType. int value3 = 0. } Response. %> . The examples provided in Listings 8.New Riders .Write("Done!"). Unexpected Events Unexpected events happen when errors occur in your web application. 
the EventLogEntryType parameter is set to Warning when the number 7 (often considered lucky by superstitious people) comes up.NET Framework provides structured error handling to capture and handle these errors. Listing 8.see example 8. try { value3 = value1 / value2. Microsoft’s .Debugging ASP. int value2 = 0.Message.7 Logging Unexpected Events with Structured Error Handling (C#) <%@ Page //build a generic function for logging. } catch (DivideByZeroException e) { //log the error to the event log LogStuff(e. eventType As EventLogEntryType. Had the error not been of DivideByZeroException (OverflowException.NET) <%@ Page 'build a generic function for logging. EventLogEntryType. Access Event Log Data via the Web Now that you have all this information stored in the Windows 2000 Event Log.8 Logging Unexpected Events with Structured Error Handling (Visual Basic . specific exceptions will be caught. That way. If you’re not on the network. to prevent redundant code Sub LogStuff(message As String. You then intentionally manufacture a DivideByZeroException (OverflowException. .NET made by dotneter@teamfly Listing 8.8 End Sub </script> <% Dim value1 As Integer = 10 Dim value2 As Integer = 0 Dim value3 As Integer = 0 Try value3 = value1 / value2 Catch e As OverflowException ‘log the error to the event log LogStuff(e. how do you view it? Well. in Visual Basic) by dividing value1 by value2 (which translates to 10 / 0).see example 8. you open a structured error handling “try” block. Next. in Visual Basic). but if a strange exception occurs. then you’re still covered. _ 'code truncated .Error.Message. if you’re sitting at the machine or have access to another machine on the same network.New Riders . then you have to come up with something else. you can declare another catch block underneath it for the generic Exception class. then you can use the Windows 2000 Event Log Viewer. 1) End Try Response. a normal runtime error would have occurred on the page. 
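The pattern in Listing 8.7, catching the specific exception you expect and optionally adding a generic catch as a safety net, can be seen in isolation in this small sketch (not from the book); the logging call is replaced with a returned string so the control flow is easy to verify:

```csharp
using System;

// Stand-alone sketch of the catch-specific-then-generic pattern from
// Listings 8.7/8.8. The "log" here is just the returned string.
public static class CatchDemo
{
    public static string Divide(int a, int b)
    {
        try
        {
            return "ok: " + (a / b);
        }
        catch (DivideByZeroException e)
        {
            // The expected failure: handled and "logged" specifically
            return "logged specific: " + e.GetType().Name;
        }
        catch (Exception e)
        {
            // Safety net for anything unexpected
            return "logged generic: " + e.GetType().Name;
        }
    }
}
```

In a real page, each catch block would call a logger like LogStuff, perhaps with different event IDs so expected and unexpected failures can be distinguished in the viewer.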
The “catch” block intercepts the exception because it was defined as the proper type.Debugging ASP. logs.Close().SelectedItem. logs. protected void Page_Load(object sender. ToString().DataSource = elArray. el.Debugging ASP. ToString(). } protected void GetLogEntries() { EventLog el = new EventLog(). This is because the EventLog class in the System. .9 and 8. Diagnostics namespace exposes a complete interface for the manipulation of your event logs. EventArgs e) { GetLogEntries(). el.MachineName = ". GetLogEntries().DataValueField = "Log". You’ll build a small event log viewer here if you follow along with Listings 8. el.10.GetEventLogs(".MachineName = ".". } } protected void getMessages_Click(object sender.9 Web-Based Event Log Viewer (C#) <%@ Page Language="C#" %> <%@ Import Namespace="System.New Riders . el. logs. EventArgs e) { EventLog el = new EventLog().DataBind(). New Riders .Error: return "Error". case EventLogEntryType. case EventLogEntryType.Warning: return "Warning".Information: return "Information". break.NET made by dotneter@teamfly messages.SuccessAudit: return "Success Audit". break.Debugging ASP.FailureAudit return "Failure Audit". default: / /EventLogEntryType.DataSource = el.DataBind(). } } <"> . case EventLogEntryType. el. break. messages.Entries. break.Close(). } protected string GetEventTypeDesc(EventLogEntryType elet) { switch(elet) { case EventLogEntryType. break. 
DataItem).Category%> </td> <td> <%#((EventLogEntry)Container.DataItem).Source%> </td> <td> <%#((EventLogEntry)Container.DataItem).null)%> </td> <td> <%#((EventLogEntry)Container.EventID%> </td> <td> <%#((EventLogEntry)Container.DataItem).MachineName%> </td> <td> <%#((EventLogEntry)Container.Message%> </td> .DataItem).NET made by dotneter@teamfly <HeaderTemplate> <table border="1" cellspacing="0" cellpadding="2"> <tr> <th>Type</th> <th>Date/Time</th> <th>Source</th> <th>Category</th> <th>Event</th> <th>User</th> <th>Computer</th> <th>Message</th> </tr> </Headertemplate> <ItemTemplate> <tr> <td> <%#GetEventTypeDesc( ((EventLogEntry)Container.TimeGenerated.EntryType)%> </td> <td> <%#((EventLogEntry)Container.DataItem). ToString("G".UserName%> </td> <td> <%#((EventLogEntry)Container.New Riders .Debugging ASP.DataItem) .DataItem). NET made by dotneter@teamfly </tr> </Itemtemplate> <FooterTemplate"> </table> </Footertemplate> </asp:repeater> </body> </html> Listing 8." el.Clear() el.SelectedItem.GetEventLogs(". e As EventArgs) GetLogEntries() End Sub Protected Sub clearLog_Click(sender As Object.Log = logs. e As EventArgs) If Not IsPostBack Then Dim elArray() As EventLog = EventLog.Diagnostics" %> <script language="VB" runat="server"> Protected Sub Page_Load(sender As Object.MachineName = ".DataTextField = "Log" .10 Web-based Event Log Viewer (Visual Basic ." .New Riders .") With logs .Close() GetLogEntries() End Sub Protected Sub GetLogEntries() Dim el As EventLog = New EventLog() With el .NET) <%@ Page Language="VB" %> <%@ Import Namespace="System. ToString() el.DataValueField = "Log" . e As EventArgs) Dim el As EventLog = New EventLog() el.DataBind() End With End If End Sub Protected Sub getMessages_Click(sender As Object.MachineName = ".Debugging ASP.DataSource = elArray . 
ToString() End With messages.NET made by dotneter@teamfly .New Riders .Debugging ASP.FailureAudit return "Failure Audit" End Select End Function <"> <HeaderTemplate> <table border="1" cellspacing="0" cellpadding="2"> <tr> <th>Type</th> <th>Date/Time</th> </form> .Entries messages.SuccessAudit return "Success Audit" Case Else 'EventLogEntryType.SelectedItem.Warning return "Warning" Case EventLogEntryType.DataSource = el.Information return "Information" Case EventLogEntryType.Log = logs.DataBind() End Sub Protected Function GetEventTypeDesc(elet As EventLogEntryType) _ As String Select Case elet Case EventLogEntryType.Error return "Error" Case EventLogEntryType. DataItem.UserName%> </td> <td> <%#Container.DataItem.Source%> </td> <td> <%#Container.DataItem.DataItem.NET made by dotneter@teamfly <th>Source</th> <th>Category</th> <th>>Event</th> <th>User</th> <th>Computer</th> <th>Message</th> </tr> </Headertemplate> <ItemTemplate> <tr> <td> <%#GetEventTypeDesc(Container.Category%> </td> <td> <%#Container.Debugging ASP.DataItem.Message%> </td> </tr> </Itemtemplate> <FooterTemplate> </table> </Footertemplate> </asp:repeater> </body> </html> .MachineName%> </td> <td> <%#Container. TimeGenerated.EventID%> </td> <td> <%#Container.DataItem. ToString("G")%> </td> <td> <%#Container.DataItem.EntryType)%> </td> <td> <%#Container.New Riders .DataItem. NET page. The GetEventLogs method returns an array of EventLog objects that you bind to the DropDownList server control. you build and display an ASP. A Clear Log Entries button also calls the clearLog_Click event. You then call the Close method. The function then calls the Close method of the EventLog object instance. The messages Repeater displays the properties of each EventLogEntry object in an HTML table.New Riders .NET DropDownList server control that contains the names of all the event logs on the local server.Debugging ASP. Finally. This event calls the GetLogEntries function that you define. 
It then binds this collection of EventLogEntry objects to the messages Repeater server control.3. Figure 8.NET made by dotneter@teamfly In this sample. The web-based Event Log Viewer with some sample results will look similar to Figure 8. Alternatively. You can programmatically re-create the Windows 2000 Event Log in the form of an ASP. when the page first loads. The GetLogEntries function gets a list of event log entries for the selected event log. Inside this event. the getMessages_Click server event is fired. If you select one of the event logs from the DropDownList control and click the Get Log Entries button. you establish a connection to the event log that was selected in the DropDownList server control and call the Clear method of the EventLog object instance. . you could specify another machine on the network. you call the GetLogEntries function to refresh the Repeater server control. wildcard character that stands for local server.3. This list is obtained by calling the Static GetEventLogs method of the EventLog class and passing in the . Debugging ASP. The EventLog class in the System.Diagnostics namespace offers a rich interface for both reading and writing to the Windows 2000 Event Log. Expected events are identified using logic structures and are logged according to the rules outlining their importance.NET made by dotneter@teamfly Summary In this chapter. Unexpected events are trapped and handled via structured error handling. “Debugging the New ASP. This interface is located in the System. you move on to Part III. you learned that Microsoft’s .NET Features 9 Debugging Server-Side Controls 10 Debugging Data-Bound Controls 11 Debugging User Controls 12 Caching Issues and Debugging .NET Features Part III Debugging the New ASP.New Riders . Another button clears all the log entries from the selected event log in the DropDownList server control.NET server controls. This includes setting up custom event logs and manipulating all the properties of an event. 
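If you want to experiment with the same EventLog calls outside a web page, the pattern from the example above can be exercised from a small console program. The following sketch is ours rather than one of the chapter's listings; it assumes a Windows machine, permission to read the Application log, and (for the write at the end) rights to create an event source. The source name Chapter8Sample is hypothetical.

```csharp
using System;
using System.Diagnostics;

class EventLogSketch
{
    static void Main()
    {
        // Enumerate the logs on the local machine ("." wildcard),
        // just as Page_Load does before binding to the DropDownList.
        foreach (EventLog log in EventLog.GetEventLogs("."))
            Console.WriteLine(log.Log);

        // Read one log's entries, as GetLogEntries does before
        // binding them to the messages Repeater.
        EventLog el = new EventLog();
        el.MachineName = ".";
        el.Log = "Application";
        foreach (EventLogEntry entry in el.Entries)
            Console.WriteLine("{0}\t{1}\t{2}",
                entry.TimeGenerated.ToString("G"),
                entry.EntryType,
                entry.Source);
        el.Close();

        // The chapter also covers writing entries: register a source
        // once, then log through it. Creating a source requires
        // administrative rights the first time this runs.
        string source = "Chapter8Sample"; // hypothetical source name
        if (!EventLog.SourceExists(source))
            EventLog.CreateEventSource(source, "Application");
        EventLog.WriteEntry(source, "Test entry from the sketch",
            EventLogEntryType.Information);
    }
}
```

The MachineName/Log/Entries/Close sequence here is exactly the shape of the GetLogEntries routine in the listings above.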
Summary

In this chapter, you learned that Microsoft's .NET Framework exposes an interface for managing the Windows 2000 Event Log. This interface is located in the System.Diagnostics namespace. The EventLog class in the System.Diagnostics namespace offers a rich interface for both reading and writing to the Windows 2000 Event Log. This includes setting up custom event logs and manipulating all the properties of an event.

You also learned that there are two different types of events: expected events and unexpected events. Expected events are identified using logic structures and are logged according to the rules outlining their importance. Unexpected events are trapped and handled via structured error handling. Specific exceptions can be trapped and logged, as can generic exceptions.

The chapter concluded by showing you how to build a web-based Event Log Viewer that enables you to select an event log from a DropDownList server control and click a button to display all the events in that event log. Another button clears all the log entries from the selected event log in the DropDownList server control.

In the next chapter, you move on to Part III, "Debugging the New ASP.NET Features," and start off with a discussion on debugging ASP.NET server controls.

Part III: Debugging the New ASP.NET Features

9 Debugging Server-Side Controls
10 Debugging Data-Bound Controls
11 Debugging User Controls
12 Caching Issues and Debugging
9. Debugging Server-Side Controls

ASP.NET SERVER CONTROLS PROVIDE AN ENORMOUS amount of power to you as a programmer in developing truly object-oriented web-based applications. If you have been using ASP.NET, you will be familiar with the <asp:></asp>-style tags that have been introduced into the language. These tags that you insert into your HTML page are really server-side controls that implement either a standard HTML control or a custom control that is only part of ASP.NET, such as a data grid. In the simplest terms, a server control is a control that is executed on the server and that provides some type of output to the user. In the traditional ASP paradigm, this can be thought of as an include file that contains the HTML or JavaScript required to implement a new type of control. These generally are used to create brand-new types of user-interface controls that are not currently available in ASP.NET. Now, however, you are actually building a real compiled component that is executed entirely on the server.

This chapter focuses on creating a server-side ASP.NET control, including navigating some common pitfalls in developing a control like this and properly debugging it both as it is being written and after it has been completed. The demonstration project used in this chapter is an extremely simple tab control. As always, both C# and Visual Basic .NET code will be provided for this project.

Creating the Project

If you plan to follow along step by step with this chapter, we recommend following some naming conventions to make compiling and debugging easier. First, create a standard ASP.NET application in the language of your choice. Name it either Chapter9CS for C# or Chapter9VB for Visual Basic .NET. Next, right-click the project name in the Solution window and choose Add and then New Component. In the dialog box that appears, choose Web Custom Control and name this component SimpleTabControl. The file will be named either SimpleTabControl.cs or SimpleTabControl.vb, depending on the language you have chosen for this example. Now you can start coding the basic tab control framework.
First, you will create the very basic input and output routines of the tab control. This will be explained a bit more as we walk through the example in the pages ahead. Listing 9.1 contains the C# version of the code, and Listing 9.2 contains the Visual Basic .NET version. Listings 9.3 and 9.4 contain the code of one of the ASP.NET pages that you will be using to test the control for the C# and Visual Basic .NET projects, respectively. You will want to create three separate ASP.NET pages, each with the same code. The only difference between the pages is having the proper code-behind file referenced from each page, as well as having each code-behind file with a unique class in it. Finally, Listing 9.5 is a listing of the code-behind file that will be used for the C# ASP.NET pages, and Listing 9.6 contains the same code in Visual Basic .NET.

Listing 9.1 Tab Control (C#—SimpleTabControl.cs)

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace Chapter9CS
{
    public class SimpleTabControl : System.Web.UI.WebControls.WebControl,
        IPostBackEventHandler
    {
        private string[] aPages;
        private string[] aNames;

        public string CurPage
        {
            get { return (string)ViewState["curPage"]; }
            set { ViewState["curPage"] = value; }
        }

        public string Pages
        {
            get { return (string)ViewState["pages"]; }
            set
            {
                ViewState["pages"] = value;
                aPages = value.Split('~');
                ViewState["aPages"] = aPages;
            }
        }

        public string Names
        {
            get { return (string)ViewState["names"]; }
            set
            {
                ViewState["names"] = value;
                aNames = value.Split('~');
                ViewState["aNames"] = aNames;
            }
        }

        public string ActiveColor
        {
            get { return (string)ViewState["activeColor"]; }
            set { ViewState["activeColor"] = value; }
        }

        public string InactiveColor
        {
            get { return (string)ViewState["inactiveColor"]; }
            set { ViewState["inactiveColor"] = value; }
        }

        public string RedirPage
        {
            get { return (string)ViewState["redirPage"]; }
            set { ViewState["redirPage"] = value; }
        }

        public void RaisePostBackEvent(string eventArgument)
        {
            ViewState["curPage"] = eventArgument;
            ViewState["redirPage"] = aPages[Convert.ToInt32(eventArgument)];
        }

        // Override for outputting text to the browser
        protected override void Render(HtmlTextWriter output)
        {
            int i;
            if (aPages.GetUpperBound(0) != aNames.GetUpperBound(0))
            {
                // toss the error here
            }
            else
            {
                output.Write("<table width='100%' border><tr>");
                for (i = 0; i <= aNames.GetUpperBound(0); i++)
                {
                    output.Write("<td bgcolor=\"");
                    if (i.ToString() == (string)ViewState["curPage"])
                        output.Write(ViewState["activeColor"]);
                    else
                        output.Write(ViewState["inactiveColor"]);
                    output.Write("\"><a id=\"" + aPages[i] + "\" href=\"javascript:" +
                        Page.GetPostBackEventReference(this, i.ToString()) +
                        "\">" + aNames[i] + "</td>\n");
                }
                output.Write("</tr></table>");
            }
        }
    }
}

Listing 9.2 Tab Control (Visual Basic .NET—SimpleTabControl.vb)

Imports System.ComponentModel
Imports System.Web.UI

Public Class SimpleTabControl
    Inherits System.Web.UI.WebControls.WebControl
    Implements IPostBackEventHandler

    Dim aPages() As String
    Dim aNames() As String

    Property CurPage() As String
        Get
            CurPage = ViewState("curPage")
        End Get
        Set(ByVal Value As String)
            ViewState("curPage") = Value
        End Set
    End Property

    Property Names() As String
        Get
            Names = ViewState("names")
        End Get
        Set(ByVal Value As String)
            ViewState("names") = Value
            aNames = Value.Split("~")
            ViewState("aNames") = aNames
        End Set
    End Property

    Property Pages() As String
        Get
            Pages = ViewState("pages")
        End Get
        Set(ByVal Value As String)
            ViewState("pages") = Value
            aPages = Value.Split("~")
            ViewState("aPages") = aPages
        End Set
    End Property

    Property ActiveColor() As String
        Get
            ActiveColor = ViewState("activeColor")
        End Get
        Set(ByVal Value As String)
            ViewState("activeColor") = Value
        End Set
    End Property

    Property InactiveColor() As String
        Get
            InactiveColor = ViewState("inactiveColor")
        End Get
        Set(ByVal Value As String)
            ViewState("inactiveColor") = Value
        End Set
    End Property

    Property RedirPage() As String
        Get
            RedirPage = ViewState("redirPage")
        End Get
        Set(ByVal Value As String)
            ViewState("redirPage") = Value
        End Set
    End Property

    Sub RaisePostBackEvent(ByVal eventArgument As String) _
            Implements IPostBackEventHandler.RaisePostBackEvent
        ViewState("curPage") = eventArgument
        ViewState("redirPage") = aPages(Convert.ToInt32(eventArgument))
    End Sub

    Protected Overrides Sub Render(ByVal output As HtmlTextWriter)
        Dim i As Integer
        If aNames.GetUpperBound(0) <> aPages.GetUpperBound(0) Then
            ' toss the error here
        Else
            output.Write("<table width='100%' border><tr>")
            For i = 0 To aNames.GetUpperBound(0)
                output.Write("<td bgcolor=""")
                If i.ToString() = ViewState("curPage") Then
                    output.Write(ViewState("activeColor"))
                Else
                    output.Write(ViewState("inactiveColor"))
                End If
                output.Write("""><a id=""" & aPages(i) & """ href=""javascript:" & _
                    Page.GetPostBackEventReference(Me, i.ToString()) & _
                    """>" & aNames(i) & "</td>" & vbCrLf)
            Next
            output.Write("</tr></table>")
        End If
    End Sub
End Class

Listing 9.3 Test ASP.NET Page (C#)

<%@ Register TagPrefix="Tab" Namespace="Chapter9CS" Assembly="Chapter9CS" %>
<%@ Page language="c#" Codebehind="page1.aspx.cs" AutoEventWireup="false" Inherits="Chapter9CS.Page1" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
<title>Chapter 9 - C#</title>
</HEAD>
<body>
<form id="Form1" method="post" runat="server">
<Tab:SimpleTabControl id="tab" runat="server"
    Pages="page1.aspx~page2.aspx~page3.aspx"
    Names="Page 1~Page 2~Page 3"
    activeColor="#ff0000" inactiveColor="#0000ff" />
</form>
This is PAGE 1.
</body>
</HTML>

Listing 9.4 Test ASP.NET Page (Visual Basic .NET)

<%@ Register TagPrefix="Tab" Namespace="Chapter9VB" Assembly="Chapter9VB" %>
<%@ Page Language="vb" Codebehind="page1.aspx.vb" AutoEventWireup="false" Inherits="Chapter9VB.Page1" %>
<HTML>
<HEAD>
<title>Chapter 9 - Visual Basic</title>
</HEAD>
<body>
<form id="Form1" method="post" runat="server">
<Tab:SimpleTabControl id="tab" runat="server"
    Pages="page1.aspx~page2.aspx~page3.aspx"
    Names="Page 1~Page 2~Page 3"
    activeColor="#ff0000" inactiveColor="#0000ff" />
</form>
This is PAGE 1.
</body>
</HTML>

Listing 9.5 Code-Behind File (C#)

using System;

namespace Chapter9CS
{
    public class Page1 : System.Web.UI.Page
    {
        protected Chapter9CS.SimpleTabControl tab;

        public Page1()
        {
            Page.Init += new System.EventHandler(Page_Init);
        }

        private void Page_Load(object sender, System.EventArgs e)
        {
            if (IsPostBack)
                tab.CurPage = Request.QueryString.Get("curpage");
            else
                tab.CurPage = "0";
        }

        private void Page_PreRender(object sender, System.EventArgs e)
        {
            if (IsPostBack)
                Response.Redirect(tab.RedirPage + "?curpage=" + tab.CurPage);
        }

        private void Page_Init(object sender, EventArgs e)
        {
            InitializeComponent();
        }

        private void InitializeComponent()
        {
            this.Load += new System.EventHandler(this.Page_Load);
            this.PreRender += new System.EventHandler(this.Page_PreRender);
        }
    }
}

Listing 9.6 Code-Behind File (Visual Basic .NET)

Public Class Page1
    Inherits System.Web.UI.Page

    Protected tab As Chapter9VB.SimpleTabControl

    Private Sub Page_Load(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles MyBase.Load
        If IsPostBack Then
            tab.CurPage = Request.QueryString.Get("curpage")
        Else
            tab.CurPage = "0"
        End If
    End Sub

    Private Sub Page_PreRender(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles MyBase.PreRender
        If IsPostBack Then
            Response.Redirect(tab.RedirPage & "?curpage=" & tab.CurPage)
        End If
    End Sub
End Class

A detailed discussion of server-side controls is really outside the scope of this book, but a general understanding is required so that you can understand the example provided. Every server control must inherit from the System.Web.UI.Control class or one of its descendants; here, that is System.Web.UI.WebControls.WebControl, and ours is no different. To use the server control on the page, it is necessary to add the Register directive at the top of the page and specify what the tag name will be (Tab), what the namespace is (Chapter9CS or Chapter9VB), and, finally, the assembly that the control is located inside (Chapter9CS or Chapter9VB).

You will also notice several properties that we have created. These properties become attributes that can be set in the ASP.NET page after the control has been put in place. For example, the standard HTML text box has an attribute named value that can be used to set the default text that appears in the text box when the page is loaded. In the tab control are several properties: Pages, Names, CurPage, ActiveColor, InactiveColor, and RedirPage.
Pages will contain a delimited list of the ASP.NET pages that each tab will take the browser to, and the Names property contains a delimited list of the names to display to the user. In this example, the CurPage and RedirPage properties keep track of the current page and the page to move to, based on which tab the user clicked. ActiveColor and InactiveColor store the background colors of the active and inactive tabs, respectively. All these properties are stored in the ViewState dictionary, which provides a way to store state information between page requests of the same page.

In the class inherited from System.Web.UI.WebControls.WebControl, you need to override the Render method. This is the function that sends the output down to the browser; this is where you will output your HTML. In this function, you render a table with the links provided in the Pages and Names properties specified in the HTML tag. Finally, the RaisePostBackEvent method is implemented from the IPostBackEventHandler interface. This method is called whenever one of these special links is clicked, and it enables you to handle the click in a special way.
This is done so that you can create clickable links that generate form submissions instead of just regular links. In the RaisePostBackEvent method, you're simply grabbing the special parameter on your links: the index into the Pages array for the page that you will be redirecting to.

When writing this example, be sure to create ASP.NET pages named page1.aspx, page2.aspx, and page3.aspx. All three will have their own code-behind files and will contain the same code across all three pages, except for the name of the class contained in the Chapter9XX namespace. For example, page1.aspx would contain a Page1 class, while page2.aspx would contain a Page2 class.

With that directive in place, you can reference the server control using the following syntax:

<tab:SimpleTabControl/>

That creates an instance of the example tab control.

ViewState

The ViewState dictionary is a name/value pair collection that enables you to store state information across page requests. One of the important things to understand about the ViewState dictionary is that it simply is a hash table. It can be thought of as a ScriptingDictionary from VB6. One very important difference, however, is that the ScriptingDictionary was not case-sensitive in terms of the key name used, while the ViewState dictionary is.

That should be enough background information to understand the example.
Looking at the full page listed earlier, you have the attributes shown in Table 9.1.

Table 9.1. Tag Attributes and Values for the Example

    Tag Attribute    Value
    Pages            page1.aspx~page2.aspx~page3.aspx
    Names            Page 1~Page 2~Page 3
    activeColor      #ff0000
    inactiveColor    #0000ff

Both Pages and Names contain a ~-delimited list of parameters. They must match in length of elements. Pages contains the pages to redirect to when the Name link is clicked.

Debugging the Control

Now that you have created the control, let's discuss some of the common pitfalls and problems that you can fall into when creating a server-side control.

One of the first things to remember is that ViewState keys are case-sensitive and must be referred to in the same case each time. For example, the following code would not work as you expect it to:

ViewState["MyKey"] = "Hello";
Response.Write(ViewState["mykey"]);

The string "Hello" is stored into a key named MyKey. When referencing this value any subsequent time, you must refer to it as MyKey, not mykey or myKey or any other variation.

The ViewState dictionary gets turned into an encoded, hidden form variable on any page. Go ahead and choose View Source from Internet Explorer on any ASP.NET page, and you will see a hidden form field called __VIEWSTATE with a long, encoded value assigned to it similar to that in Listing 9.7.
Listing 9.7 Sample Output Showing __VIEWSTATE

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
<title>Chapter 9 - Visual Basic</title>
</HEAD>
<body>
<form name="Form1" method="post" action="page1.aspx" id="Form1">
<input type="hidden" name="__VIEWSTATE" value="dDwtMTg5MTU0NDI2Njt0PDtsPGk8MT47PjtsPHQ8O2w8aTwxPjs+O2w8dDxwPHA8bDxjdXJQYWdlOz47bDwwOz4+Oz47Oz47Pj47Pj47Pg==" />
<table width='100%' border><tr>
<td bgColor='#ff0000'><a id='page1.aspx' href="javascript:__doPostBack('tab','0')">Page 1</td>
<td bgColor='#0000ff'><a id='page2.aspx' href="javascript:__doPostBack('tab','1')">Page 2</td>
<td bgColor='#0000ff'><a id='page3.aspx' href="javascript:__doPostBack('tab','2')">Page 3</td>
</tr></table>
<input type="hidden" name="__EVENTTARGET" value="" />
<input type="hidden" name="__EVENTARGUMENT" value="" />
<script language="javascript">
<!--
    function __doPostBack(eventTarget, eventArgument) {
        var theform = document.Form1;
        theform.__EVENTTARGET.value = eventTarget;
        theform.__EVENTARGUMENT.value = eventArgument;
        theform.submit();
    }
// -->
</script>
</form>
This is PAGE 1.
</body>
</HTML>

On every form submission, this element is sent to the server, where it is decoded and thrown into the ViewState dictionary. This is how ASP.NET resets default values in text boxes, list boxes, and other elements. The __VIEWSTATE form variable is not sent to the next page, so if you are creating some state information using the ViewState dictionary and are expecting to be able to use these values on a new page through a standard <a href></a> link, you will be very disappointed. These will be accessible only through a form submission to the same page. That is why, in this example, the page that you are on is passed via a query string variable: it can't be tracked in the ViewState dictionary. In this case, that is how you keep track of your pages, both the page you are on and the page you are moving to.

Declaring the Control in the Code-Behind File

In the ASP.NET page listed earlier, the tab control is declared in the actual page with the server control tag (<tab:SimpleTabControl ... />). However, this is not enough to reference the control from your code-behind file. To do this, it is necessary to have a declaration of the tab control in your code-behind file. In C#, this would look like this:

protected Chapter9CS.SimpleTabControl tab;

In Visual Basic .NET, it would be this:

Protected tab As Chapter9VB.SimpleTabControl

The critical thing to remember about the declaration here is that the variable name (tab, in this case) must match the name that you assigned to the control in its name and id parameters on the corresponding ASP.NET page. If you do not remember to declare your controls in the code-behind file, you will see an error message similar to the following when trying to build your application:

The type or namespace name 'tab' could not be found (are you missing a using directive or an assembly reference?)

If you see a similar error in your future projects, be sure that you have added a declaration of your control in your code-behind file. We cannot stress enough how important this is. We stress this so much simply because the error messages that you will see state nothing about this being the problem.

Registration of Control on ASP.NET Page

At the top of any ASP.NET page that uses a server control, you must use the Register directive to register that specific control for use in that page. This looks like the following line:

<%@ Register TagPrefix="Tab" Namespace="Chapter9CS" Assembly="Chapter9CS" %>

In this line, you need to specify the TagPrefix, which is the name that will be used to reference the control.
In the example, the name Tab is used, so when adding the control to the page, you would write this:

<tab:SimpleTabControl ... />

The other things to specify are the namespace that contains the control, as well as the assembly where the control is located. If these are incorrect, you will see an error message because the ASP.NET parser will not be capable of finding the control to include in the page, and your page will not be displayed. If you forget either of these attributes, the parser will tell you which one is missing. Just add it in, as appropriate, and your control should work fine on that specific page.

runat=server

As with all server-side controls, it is absolutely essential to include the runat=server attribute on the control for it to work properly. If you forget it, you will not be able to reference the tab control directly, and your code will either not compile or not run as you had anticipated. What you will most likely see is a message regarding a reference to the control, or a property or method on the control being null. The message will usually be the following:

Value null was found where an instance of an object was required.

If you see an error resembling this, consider yourself lucky: the first thing to check is to be sure that you've placed the runat=server attribute on the server-side control.
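The last three sections describe parts that must agree with one another. As a quick reference, here is a minimal skeleton, condensed by us from the chapter's own listings, showing the points that most often get out of sync: the Register directive's namespace and assembly, the tag's id and runat=server attribute, and the matching field in the code-behind.

```csharp
// In the .aspx page:
//   <%@ Register TagPrefix="Tab" Namespace="Chapter9CS" Assembly="Chapter9CS" %>
//   <Tab:SimpleTabControl id="tab" runat="server" />
//
// In the code-behind, the field must line up with the tag:
// the same control type, and a name identical to the tag's id.
protected Chapter9CS.SimpleTabControl tab;
```

If any one of these three pieces is renamed without the others, you get one of the errors described above rather than a message that points at the real cause.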
Debugging in Visual Studio .NET

If you are using Visual Studio .NET, you will have unprecedented power in debugging your server-side controls in this IDE. As discussed in Chapter 7, "Visual Studio .NET Debugging Environment," the Visual Studio .NET IDE provides a lot of flexibility when debugging your code. You can set breakpoints on any line in the server-side code or any references to it in your code-behind file. Then, when execution reaches any of these points, your code will break and you can inspect exactly what is happening and see if it is what you expect. Within the IDE, you can also trace the order of events to make sure that they are being called in the order that you think they are being called, or to see if they are even being called at all. For example, you can set a breakpoint on the Render function and see if execution breaks at this spot as it is supposed to. If you forgot to use the overrides keyword when overriding the Render function on your server control, the function would never be called because it isn't the "proper" Render function.

Figure 9.1 shows a debugging session from the C# project, inspecting the contents of the curPage entry in the ViewState dictionary in the Command window. You can also see the Autos window in the lower-left pane, with a few of the variables that the debugger has automatically decided to add for inspection.

Figure 9.1. Debugging an ASP.NET server control in Visual Studio .NET.

Summary

This chapter discussed the issues that can arise when creating a server-side control in ASP.NET. We also looked at solutions to these problems and ways to avoid them from the start. The next chapter takes a look at data-bound controls and some common strategies for debugging them in the .NET environment.

Chapter 10. Debugging Data-Bound Controls

IN THIS CHAPTER, WE WILL LOOK AT the issues that surround debugging data binding, including data grids, templates, and namespace issues. In the new ASP.NET environment, Microsoft has included a list of new controls, including the data grid, the data list, and a few others. Let's start out by taking a closer look at these data-bound controls and where you might have problems. Then we'll identify some areas where you might need to implement some debugging techniques.

Data-Bound Controls

You can bind data to a limited number of controls. These controls have the capability to bind data elements or data sources to them. We will focus on some of the more common controls, such as the data grid and data list controls.

The Data Grid Control

The data grid control is a great new addition to the tool box. This control enables you to create a spreadsheet-like page of data and then manipulate the data as you need to. Listings 10.1 and 10.2 show a simple data grid control in C# and Visual Basic .NET.

Listing 10.1 Simple Data Grid Control (C#)

private void Page_Load(object sender, System.EventArgs e)
{
    // Put user code to initialize the page here
    SqlConnection conn = new SqlConnection("server=(local);database=northwind;Trusted_Connection=yes");
    conn.Open();
    SqlDataAdapter da = new SqlDataAdapter("Select * from products", conn);
    DataSet ds = new DataSet();
    da.Fill(ds, "products");
    DataGrid1.HeaderStyle.Font.Bold = true;
    DataGrid1.HeaderStyle.BackColor = Color.Beige;
    DataGrid1.AlternatingItemStyle.BackColor = Color.AliceBlue;
    DataGrid1.DataSource = ds;
    DataGrid1.DataBind();
    conn.Close();
}

Listing 10.2 Simple Data Grid Control (Visual Basic .NET)

Private Sub Page_Load(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles MyBase.Load
    Dim conn = New SqlConnection("server=(local);database=northwind;Trusted_Connection=yes")
    conn.Open()
    Dim da As SqlDataAdapter = New SqlDataAdapter()
    da.SelectCommand = New SqlCommand("Select * from products", conn)
    Dim ds = New DataSet()
    da.Fill(ds, "products")
    DataGrid1.HeaderStyle.Font.Bold = True
    DataGrid1.HeaderStyle.BackColor = Color.Beige
    DataGrid1.AlternatingItemStyle.BackColor = Color.AliceBlue
    DataGrid1.DataSource = ds
    DataGrid1.DataBind()
    conn.Close()
End Sub

This is a pretty simple example. It just makes a call to the database to get a list of products and then displays them in the data grid.

Data Grid Paging

When you start getting into some of the other capabilities of the data grid, you will undoubtedly come across the paging feature that is built into the control. It is very easy to run into problems if you are not familiar with the workings of the data grid control. Take a look at what you need to do to make the paging feature work. If you look at the following code line, you will see that it is very simple to turn on the paging feature on the data grid control:

<asp:datagrid id="DataGrid1" runat="server" AllowPaging="True"></asp:datagrid>

If you try to use the code as it appears here, you will notice buttons for Next and Previous at the bottom of the data grid, but they don't seem to do anything. A few things need to happen to really get the paging feature to work. You will also need to implement the OnPageIndexChanged handler, as shown in this next example:

<asp:datagrid id="DataGrid1" runat="server" AllowPaging="True" OnPageIndexChanged="GridChange"></asp:datagrid>

After you have changed the aspx code to include the OnPageIndexChanged attribute to point to a handler function such as GridChange, you can add the required code to handle the paging functionality. If you stop to look at what is happening here, you are just telling the data grid to call the GridChange function when the OnPageIndexChanged event is fired. You can put any additional code that you want in the GridChange function, but for the paging feature to work, a few things need to be done. Listings 10.3 and 10.4 show you how to implement the last piece of the data grid paging feature.
Listing 10.3 Grid Paging Event Handler (C#)

public void GridChange(Object sender, DataGridPageChangedEventArgs e)
{
    // Set CurrentPageIndex to the page the user clicked.
    DataGrid1.CurrentPageIndex = e.NewPageIndex;
    SqlConnection conn = new SqlConnection("server=(local);database=northwind;Trusted_Connection=yes");
    conn.Open();
    SqlDataAdapter da = new SqlDataAdapter("Select * from products", conn);
    DataSet ds = new DataSet();
    da.Fill(ds, "products");
    // Rebind the data.
    DataGrid1.DataSource = ds;
    DataGrid1.DataBind();
}

Listing 10.4 Grid Paging Event Handler (Visual Basic .NET)

Public Function GridChange(ByVal sender As Object, _
        ByVal e As DataGridPageChangedEventArgs)
    ' Set CurrentPageIndex to the page the user clicked.
    DataGrid1.CurrentPageIndex = e.NewPageIndex
    Dim conn As New SqlConnection("server=(local);database=northwind;Trusted_Connection=yes")
    conn.Open()
    Dim da As New SqlDataAdapter("Select * from products", conn)
    Dim ds As New DataSet()
    da.Fill(ds, "products")
    ' Rebind the data.
    DataGrid1.DataSource = ds
    DataGrid1.DataBind()
End Function

You might notice that the key piece to this involves assigning the NewPageIndex value to the data grid's CurrentPageIndex property. After you have done this, the data grid will display only the set of data that resides on that new index page value.

You might come across the error shown in Figure 10.1 telling you that the function you provided to handle the events is not accessible because of its protection level. Don't worry—this is a simple fix.

Figure 10.1. Protection-level compilation error listing.

You will see this error if your code does not declare the event-handling function as public. Any method or property that is defined as private is accessible only to the context of its class or object. Setting the method or property to public grants external classes or objects access to that resource. When you don't define your methods or variables with an access modifier, they might be set to private. This might differ in each language, so, to avoid guesswork, if you need to expose a method or property, be sure to use the public keyword. If you look at the following examples, you will see that the method is declared with the public keyword. Listings 10.5 and 10.6 provide examples of the proper way to create an event handler for paging through a data grid in both C# and Visual Basic .NET.

Listing 10.5 Correct Method Definition (C#)

public void GridChange(Object sender, DataGridPageChangedEventArgs e)
{
    // Set CurrentPageIndex to the page the user clicked.
    DataGrid1.CurrentPageIndex = e.NewPageIndex;
    SqlConnection conn = new SqlConnection("server=(local);database=northwind;Trusted_Connection=yes");
    conn.Open();
    SqlDataAdapter da = new SqlDataAdapter("Select * from products", conn);
    DataSet ds = new DataSet();
    da.Fill(ds, "products");
    // Rebind the data to the datasource
    DataGrid1.DataSource = ds;
    DataGrid1.DataBind();
}

Listing 10.6 Correct Method Definition (Visual Basic .NET)

Public Sub GridChange(ByVal sender As Object, _
        ByVal e As DataGridPageChangedEventArgs)
    ' Set CurrentPageIndex to the page the user clicked.
    DataGrid1.CurrentPageIndex = e.NewPageIndex
    Dim conn As New SqlConnection("server=(local);database=northwind;Trusted_Connection=yes")
    conn.Open()
    Dim da As New SqlDataAdapter("Select * from products", conn)
    Dim ds As New DataSet()
    da.Fill(ds, "products")
    ' Rebind the data.
    DataGrid1.DataSource = ds
    DataGrid1.DataBind()
End Sub

Debugging Templates

Templates provide a more flexible way of displaying the data. Most likely, if you will be doing anything with data, you will be using templates to get the job done. When you understand the basics of templates, they can be a very powerful tool in developing your site. In this section, we look at some problem areas that you may stumble over and how to work around them. So let's get started and take a look at some problems that you might run into.

Working with Data List <ItemTemplate>

When you are working with templates, you can easily run into problems that won't be caught by the compiler. This is because some aspx code is not validated or is syntactically correct but, at run time, is not valid.
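Because such mistakes surface only at run time, it can help to assert your assumptions in the code-behind before you bind. The check below is our sketch, not one of the chapter's listings; it assumes a DataSet named ds filled with a "products" table, as in Listings 10.1 and 10.2, and a column name the template is about to use.

```csharp
// Our sketch: verify that a column the template will Eval actually
// exists in the bound table, so the mistake surfaces at bind time
// with a clear message instead of deep inside the template.
DataTable table = ds.Tables["products"];
if (!table.Columns.Contains("stringvalue"))
{
    throw new InvalidOperationException(
        "Column 'stringvalue' not found in table 'products'.");
}
DataList1.DataSource = table;
DataList1.DataBind();
```

The DataColumnCollection.Contains call is cheap, and the exception it lets you throw names the missing column directly, which the template's own run-time error does not.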
Namespace Issues

You might not think much about namespaces now, but when you start creating large applications or start to reuse your code from other projects, namespaces will become an issue. So before you just blow off the namespace description, think hard about what it really means and then give it an appropriate name. This is where namespaces come into play.

Let's look at an example of where you might run into problems. When you do run into namespace problems, the first thing you will probably see is the error message shown in Figure 10.3. As the message indicates, there is an ambiguous reference. In other words, there are two classes called Class1, and the compiler does not know which to use. First, you'll take a look at how we arrived at this point; then you'll see what you need to do to fix the problem.

Figure 10.3. Ambiguous reference error message.

Listings 10.8 through 10.11 illustrate the problem and show how namespaces enter into its solution. If you want to use these snippets of code, you need to start with an existing web project or windows application. Then you can add the following snippets to your project to see how they react.

Listing 10.8 MySpace.Class1 (C#)

namespace MySpace
{
    public class Class1
    {
        public Class1()
        {
        }

        public string[] GetMyList()
        {
            string[] mylist = new string[4];
            mylist[0] = "cars";
            mylist[1] = "trucks";
            mylist[2] = "planes";
            mylist[3] = "boats";
            return mylist;
        }
    }
}
Listing 10.9 MySpace.Class1 (Visual Basic .NET)

Namespace MySpace
    Public Class Class1
        Public Function GetMyList() As String()
            Dim mylist(3) As String
            mylist(0) = "cars"
            mylist(1) = "trucks"
            mylist(2) = "planes"
            mylist(3) = "boats"
            Return mylist
        End Function
    End Class
End Namespace

Listing 10.10 YourSpace.Class1 (C#)

namespace YourSpace
{
    public class Class1
    {
        public Class1()
        {
        }

        public string[] GetMyList()
        {
            string[] mylist = new string[4];
            mylist[0] = "eggs";
            mylist[1] = "bacon";
            mylist[2] = "milk";
            mylist[3] = "bread";
            return mylist;
        }
    }
}
the namespaces have been included at the beginning of the file so that the compiler can identify the classes. Debugging ASP.4 shows an example of a problem that you might run into.Class1(). ListBox1. YourSpace. ListBox2.Class1 c2 = new YourSpace.DataBind().DataSource = c2. Figure 10. aspx output.EventArgs e) { MySpace. Next you take a look at XML bindings and some hurdles that you might run into.Class1 c1 = new MySpace. However. . If you look at how data is handled in the Data namespace. you need to identify which namespace you are using if there are naming conflicts like those with Class1.GetMyList().DataBind(). the system does not throw an exception or give you any indication that there is a problem.Class1(). such as Steve Holzner’s Inside XML (New Riders Publishing.4. ListBox1. 2001).NET architecture. ListBox1. pick up some reading material on XML.NET made by dotneter@teamfly <%@ Import Namespace="MySpace" %> <%@ Import Namespace="YourSpace" %> <script language="C#" runat=server> private void Page_Load(object sender.GetMyList(). Figure 10. So if you are not familiar with XML basics.DataSource = c1. you will see that it all revolves around XML. } </script> As you can see here. XML Binding XML is a fundamental part of the . System.New Riders . ByVal e As System. Take a look at the example code in Listings 10.xml") DataGrid1.Object.DataBind().15 and XML file in Listing 10.ReadXml("c:\\portfolio.Load ' Put user code to initialize the page here Dim ds As New DataSet() ds.EventArgs e) { // Put user code to initialize the page here DataSet ds = new DataSet().NET made by dotneter@teamfly Everything looks fine on the page. System. DataGrid1. ds. DataGrid1.14 Using XML as a Data Source (C#) private void Page_Load(object sender.16 portfolio.DataBind() End Sub Listing 10.NET) Private Sub Page_Load(ByVal sender As System.DataSource = ds DataGrid1.ReadXml("c:\\portfolio.14 and 10.16 and see if you can identify the problem.840 </PRICE> . 
Listing 10.DataSource = ds.Debugging ASP.EventArgs) Handles MyBase. but it seems to be missing a few records.xml File <PRODUCT> <SYMBOL>IBM</SYMBOL><COMPANY>Int'l Business Machines</COMPANY><PRICE> $357.15 Using XML as a Data Source (Visual Basic .xml"). } Listing 10.New Riders . Figure 10. If it can’t parse the file.5 shows what happens when we tried to look at the file with IE.150</PRICE> </PRODUCT> <PRODUCT> <SYMBOL>CSTGX</SYMBOL> A</COM PANY><PRICE> $256. <COMPANY>S1 Corporation <COMPANY> Kemper Aggressive Growth A <COMPANY> AIM Constellation Fund . At first sight.5.x . Figure 10. but there is one small problem. Microsoft has done a nice job of implementing XML and data-bound controls together. You are missing the root element! One way to make sure that your file is well formed is to look at it with IE 5.960</PRICE> </PRODUCT> As you can see. it will tell you what the problem is.603</PRICE> </PRODUCT> <PRODUCT> <SYMBOL>KGGAX</SYMBOL> </COM PANY><PRICE>$121. this may look well formed. it is very simple to load an XML file into a dataset and assign it to the data grid’s data source property. IE XML view.NET made by dotneter@teamfly </PRODUCT> <PRODUCT> <SYMBOL>WEINX</SYMBOL> <COMPANY> AIM Weingarten Fund A</COMPANY><PRICE>> $318.600</PRICE> </PRODUCT> <PRODUCT> <SYMBOL>SONE</SYMBOL> </COMPANY><PRICE>$111.Debugging ASP.New Riders . as shown in Listing 10.Debugging ASP.New Riders . Summary This chapter looked at some key elements that play a vital part in data-bound controls. you will get a list of records displayed on your web page the way you intended to.17 XML File Corrected <PRODUCTS> <PRODUCT> <SYMBOL>IBM</SYMBOL><COMPANY>Int'l Business Machines</COMPANY><PRICE> $357. You also learned how to use the data list and customize it with the item template . Keep in mind that this XML file could be formatted in a number of ways to store just about any type of data you need. 
<PRODUCT>
<SYMBOL>WEINX</SYMBOL><COMPANY>AIM Weingarten Fund A</COMPANY><PRICE>$318.150</PRICE>
</PRODUCT>
<PRODUCT>
<SYMBOL>CSTGX</SYMBOL><COMPANY>AIM Constellation Fund A</COMPANY><PRICE>$256.600</PRICE>
</PRODUCT>
<PRODUCT>
<SYMBOL>SONE</SYMBOL><COMPANY>S1 Corporation</COMPANY><PRICE>$111.603</PRICE>
</PRODUCT>
<PRODUCT>
<SYMBOL>KGGAX</SYMBOL><COMPANY>Kemper Aggressive Growth A</COMPANY><PRICE>$121.960</PRICE>
</PRODUCT>

Everything looks fine on the page, but it seems to be missing a few records, and the system does not throw an exception or give you any indication that there is a problem. Figure 10.4 shows an example of the output.

Figure 10.4. aspx output.

At first sight, the file may look well formed, but there is one small problem: you are missing the root element! One way to make sure that your file is well formed is to look at it with IE 5.x. If IE can't parse the file, it will tell you what the problem is. Figure 10.5 shows what happens when we tried to look at the file with IE.

Figure 10.5. IE XML view.

If you simply add an element called PRODUCTS at the beginning and end of the file, encompassing all the XML text, you will create a root element, as shown in Listing 10.17.

Listing 10.17 XML File Corrected

<PRODUCTS>
<PRODUCT>
<SYMBOL>IBM</SYMBOL><COMPANY>Int'l Business Machines</COMPANY><PRICE>$357.840</PRICE>
</PRODUCT>
<PRODUCT>
<SYMBOL>WEINX</SYMBOL><COMPANY>AIM Weingarten Fund A</COMPANY><PRICE>$318.150</PRICE>
</PRODUCT>
</PRODUCTS>

Now that you have everything set up correctly, you will get a list of records displayed on your web page the way you intended to. Keep in mind that this XML file could be formatted in a number of ways to store just about any type of data you need.
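If you would rather automate the check than eyeball the file in the browser, the sketch below (not from the book) loads the file with XmlDocument; a missing root element surfaces as an XmlException with line and position information. The file path is the same illustrative c:\portfolio.xml used above.

```csharp
using System;
using System.Xml;

public class WellFormedCheck
{
    public static void Main()
    {
        XmlDocument doc = new XmlDocument();
        try
        {
            // Throws XmlException if the file is not well formed,
            // for example when the root element is missing.
            doc.Load("c:\\portfolio.xml");
            Console.WriteLine("Document is well formed.");
        }
        catch (XmlException ex)
        {
            Console.WriteLine("Parse error: " + ex.Message);
        }
    }
}
```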
Summary

This chapter looked at some key elements that play a vital part in data-bound controls. You looked here at how to use some of the more useful features of the data grid control, such as event handling and paging. You also learned how to use the data list and customize it with the item template tags. And don't forget to use namespaces when you are using classes with the same names; this will alleviate ambiguous names that will give you compiler errors. These components will make up the base foundation as you move forward in building new data-driven web sites.

Chapter 11. Debugging User Controls

IN PREVIOUS VERSIONS OF ASP, IF YOU wanted to encapsulate sections of user interface code, you had limited options. You could use an include file, but this approach wasted a lot of web server resources because, most of the time, your page uses only a small subsection of the code in each include file. You could use a library of functions in script file includes, but there was no way to customize each "instance" of the code section. You could package the user interface code into a COM object and call one of its methods to display it, but that meant that you had to recompile the COM object each time you wanted to make a change.

User Controls solve this dilemma. User Controls are designed to give you the best of all worlds: encapsulation, compiled code, optimized memory usage, and the capability to make changes on the fly. With any aspect of programming, however, difficulties can emerge, and this chapter deals with these difficulties. Along the way, we'll expose some of the nuances, technicalities, and omissions that crop up with User Controls. To make sure that we cover everything, the examples in this chapter follow the creation of a User Control, MyControl.ascx, and an ASP.NET page that consumes it, MyPage.aspx, from start to finish.

User Control Basics

To get started, let's build a simple User Control. Most of the time, User Controls will contain dynamic elements, but that's not always the case. For instance, you might create a User Control to display copyright information at the bottom of each page in your web application. Right now, the User Control MyControl will start off with just static content, as you can see in Listing 11.1.
Listing 11.1 MyControl Basics

<%@ Control Language="C#" %>
This control belongs to me.

Now that you have a simple User Control built, you can implement it in the ASP.NET page, MyPage, by registering it with an @Register directive and placing it on the page with a server tag:

<%@ Page Language="C#" %>
<%@ Register TagPrefix="Chapter11" TagName="MyControl" Src="MyControl.ascx" %>
<html>
<head><title>MyPage</title></head>
<body>
<Chapter11:MyControl id="myControl1" runat="server" />
</body>
</html>

Basic Gotchas

Forgetting to include either an end tag </Chapter11:MyControl> or an end-tag marker /> will generate an error like this:

Parser Error Message: Literal content ('</body> </html>') is not allowed within a 'ASP.MyControl_ascx'.

Of course, the exact syntax of the error will vary, depending on what literal content is present directly after your User Control reference. If the situation were reversed, it would have been easier to spot the issue because the ASP.NET source code surrounding the error would be shown. In this case, the solution is not always so obvious. In our example, it was the </body></html> HTML tags that triggered the message.

Unknown Server Tag

Occasionally, you'll get a message that doesn't make any sense at first glance:

Parser Error Message: Unknown server tag 'Chapter11:MyControl'.

It doesn't seem to make any sense because that is the TagPrefix and TagName that you specified. Or is it? Take another look at your @Register tag at the top of your ASP.NET page. Odds are, you misspelled either the TagPrefix or the TagName attribute.

Missing Runat Attribute

What if your User Control doesn't show up at all? This almost always means that you forgot to include the runat="server" attribute on your User Control tag. This attribute is a requirement, as it is with all other ASP.NET server controls. Without it, the ASP.NET engine does not know that it is supposed to do anything with the tag, so it completely ignores it. A quick way to verify this is to right-click the page in Internet Explorer and select View Source. You'll most likely see that your User Control tag was mistaken for a plain-old HTML tag and was sent directly to the browser.

Adding Properties and Methods

Now that you can debug the basic issues with User Controls, let's add some complexity. One of the nicest things about User Controls is that they can expose properties, making their underlying content more dynamic. They can also expose methods. Although it's not a requirement, it is good design practice to implement methods in User Controls only when they are needed to influence the appearance of the content of the User Control itself. Use code-behind methods of your ASP.NET page, ASP.NET components, or Web Services if you need more utilitarian functionality.

Add a Property

Listings 11.3 and 11.4 show your MyControl User Control with a Name property implemented.

Listing 11.3 MyControl with Property (C#)

<%@ Control Language="C#" %>
<script language="C#" runat="server">
private string _name;

public string Name
{
    get
    {
        return _name;
    }
    set
    {
        _name = value;
    }
}
</script>
This control belongs to <%= Name %>.
Listing 11.4 MyControl with Property (Visual Basic .NET)

<%@ Control Language="VB" %>
<script language="VB" runat="server">
Private _name As String

Public Property Name As String
    Get
        Return _name
    End Get
    Set
        _name = Value
    End Set
End Property
</script>
This control belongs to <%= Name %>.

With the property in place, you can set it declaratively in the server tag, like this:

<Chapter11:MyControl id="myControl1" Name="Brad" runat="server" />

Property Gotchas

You'll face two primary gotchas when it comes to dealing with User Control properties: the scope modifier and the order in which properties are assigned.

Improper Scope Modifier

When testing your User Control with properties, be sure to use public scope. In Visual Basic .NET, if you omit the scope modifier, the default is public, so your property will appear just fine. In C#, the default is private, so your property will not appear. Worse yet, it won't even throw an error, because you are setting the property in your server tag definition and are not doing so programmatically in a script block (discussed next).

If you wanted to assign your User Control properties programmatically, you could add a script block to your ASP.NET page, as shown in Listings 11.5 and 11.6.

Listing 11.5 Assign User Control Properties Programmatically (C#)

<%@ Page Language="C#" %>
<%@ Register TagPrefix="Chapter11" TagName="MyControl" Src="MyControl_cs.ascx" %>
<script language="C#" runat="server">
protected void Page_Load(object sender, EventArgs e)
{
    myControl1.Name = "Brad";
}
</script>
<html>
<head><title>MyPage</title></head>
<body>
<Chapter11:MyControl id="myControl1" runat="server" />
</body>
</html>

Listing 11.6 Assign User Control Properties Programmatically (Visual Basic .NET)

<%@ Page Language="VB" %>
<%@ Register TagPrefix="Chapter11" TagName="MyControl" Src="MyControl_vb.ascx" %>
<script language="VB" runat="server">
Protected Sub Page_Load(sender As Object, e As EventArgs)
    myControl1.Name = "Brad"
End Sub
</script>
<html>
<head><title>MyPage</title></head>
<body>
<Chapter11:MyControl id="myControl1" runat="server" />
</body>
</html>

Note that the Name attribute of the server tag has been replaced with an id attribute. This enables you to reference the control from the code block. The error thrown for an improper scope modifier at this point is a little more intuitive:

Compiler Error Message: CS0122: 'ASP.MyControl_cs_ascx.Name' is inaccessible due to its protection level

or (Visual Basic .NET)

Compiler Error Message: BC30390: 'ASP.MyControl_vb_ascx.Private Property Name() As String' is Private, and is not accessible in this context.

You should always specify scope modifiers. It is always better if you do not rely on the default behaviors of the programming language you are using.

Order of Property Assignment

You might experience what appears to be some strange behavior with your User Control properties if you don't know the order in which events occur in your ASP.NET page. Tag attributes are evaluated and applied first, the Page_Init event fires next, and the Page_Load event fires last. Listings 11.7 and 11.8 demonstrate this sequence.

Listing 11.7 Property Assignment Order (C#)

<%@ Page Language="C#" %>
<%@ Register TagPrefix="Chapter11" TagName="MyControl" Src="MyControl_cs.ascx" %>
<script language="C#" runat="server">
protected void Page_Init(object sender, EventArgs e)
{
    myControl1.Name = myControl1.Name + "Brian";
}

protected void Page_Load(object sender, EventArgs e)
{
    myControl1.Name = myControl1.Name + "Jon";
}
</script>
<html>
<head><title>MyPage</title></head>
<body>
<Chapter11:MyControl id="myControl1" Name="Brad" runat="server" />
</body>
</html>

Listing 11.8 Property Assignment Order (Visual Basic .NET)

<%@ Page Language="VB" %>
<%@ Register TagPrefix="Chapter11" TagName="MyControl" Src="MyControl_vb.ascx" %>
<script language="VB" runat="server">
Protected Sub Page_Init(sender As Object, e As EventArgs)
    myControl1.Name = myControl1.Name & "Brian"
End Sub

Protected Sub Page_Load(sender As Object, e As EventArgs)
    myControl1.Name = myControl1.Name & "Jon"
End Sub
</script>
<html>
<head><title>MyPage</title></head>
<body>
<Chapter11:MyControl id="myControl1" Name="Brad" runat="server" />
</body>
</html>

As you can see, the Name attribute has been added back to the User Control tag. This example also added the Page_Init event handler. When you run the previous code listings in a browser, the page will display the following text:

This control belongs to BradBrianJon.

If you hadn't been concatenating the Name property in each of the event handlers, then only the value assigned in the Page_Load event would survive. You can trap a few other page-level events, but you get the idea. This exercise is meant to show the order in which code is executed in an ASP.NET page. The point to remember is that if you don't get the expected results displayed for your User Control, it sometimes helps to check the sequence in which the code is executed. This can give you clues on what happened, as well as how to fix it.

Add a Method

Now that you know some of the issues with User Control properties, let's add a method to your User Control. Listings 11.9 and 11.10 show what your User Control code looks like now.

Listing 11.9 MyControl with Method (C#)

<%@ Control Language="C#" %>
<script language="C#" runat="server">
private string _name;

public string Name
{
    get
    {
        return _name;
    }
    set
    {
        _name = value;
    }
}

public void DisplayNewDateTime()
{
    panel1.Controls.Add(
        new LiteralControl(DateTime.Now.ToString() + "<br />"));
}
</script>
This control belongs to <%= Name %>.
<asp:panel id="panel1" runat="server" />

Listing 11.10 MyControl with Method (Visual Basic .NET)

<%@ Control Language="VB" %>
<script language="VB" runat="server">
Private _name As String

Public Property Name As String
    Get
        Return _name
    End Get
    Set
        _name = Value
    End Set
End Property

Public Sub DisplayNewDateTime()
    panel1.Controls.Add( _
        New LiteralControl(DateTime.Now.ToString() + "<br />"))
End Sub
</script>
This control belongs to <%= Name %>.
<asp:panel id="panel1" runat="server" />

The code required to call this method from your ASP.NET page is shown in Listings 11.11 and 11.12.

Listing 11.11 Calling a User Control Method (C#)

<%@ Page Language="C#" %>
<%@ Register TagPrefix="Chapter11" TagName="MyControl" Src="MyControl_cs.ascx" %>
<script language="C#" runat="server">
protected void Page_Load(object sender, EventArgs e)
{
    myControl1.DisplayNewDateTime();
    myControl1.DisplayNewDateTime();
}
</script>
<html>
<head><title>MyPage</title></head>
<body>
<Chapter11:MyControl id="myControl1" Name="Brad" runat="server" />
</body>
</html>

Listing 11.12 Calling a User Control Method (Visual Basic .NET)

<%@ Page Language="VB" %>
<%@ Register TagPrefix="Chapter11" TagName="MyControl" Src="MyControl_vb.ascx" %>
<script language="VB" runat="server">
Protected Sub Page_Load(sender As Object, e As EventArgs)
    myControl1.DisplayNewDateTime()
    myControl1.DisplayNewDateTime()
End Sub
</script>
<html>
<head><title>MyPage</title></head>
<body>
<Chapter11:MyControl id="myControl1" Name="Brad" runat="server" />
</body>
All it takes is to add a className attribute to the @Control directive.NET) Compiler Error Message: BC30002: Type is not defined: 'MyControl' ..NET page. This often occurs when it is uncertain how many instances of the User Control will be required on your ASP. you’ll run into a situation in which you want to create an entire User Control dynamically. you must be able to cast them to a strong type.NET) <%@ Control Language="VB" className=="MyControl" %> Failure to specify a className attribute will yield an error that looks like this: Compiler Error Message: CS0246: The type or namespace name 'MyControl' could not be found (are you missing a using directive or an assembly reference?) or (Visual Basic . The easiest solution is to put any controls that you want your User Control to manipulate inside the User Control definition file itself.Debugging ASP. Dynamic User Controls Inevitably. but you must be aware of a few snags. This is a relatively painless process.NET) Compiler Error Message: BC30451: The name "label1" is not declared.NET page.NET made by dotneter@teamfly Compiler Error Message: CS0246: The type or namespace name "label1" could not be found (are you missing a using directive or an assembly reference?) or (Visual Basic . like this: <%@ Control Language="C#" className="MyControl"%> or (Visual Basic . the controls will be within the scope of your User Control methods. Dynamically generated User Controls can be a powerful tool. mc1.NET) <%@ Page Protected Sub Page_Load(sender As Object.Controls. </body> </html> Listing 11.Add(mc1). protected void Page_Load(object sender. Listings 11.Add(mc1) End Sub . panel1.NET made by dotneter@teamfly Your User Control is now all set up to be dynamically loaded and cast to a strong type.NET page that will dynamically load your User Control changes significantly from the code used to display User Controls using server control tags.DisplayNewDateTime().DisplayNewDateTime() panel1. mc1.ascx").Debugging ASP.New Riders . 
you don’t need the @Register directive. replaced by a Panel server control that serves as a placeholder for your dynamically loaded User Control object. The @Reference directive enables you to cast the generic Control object reference returned by the LoadControl method to the specific strong type of your User Control (in this case. Another interesting thing to point out is that the server control tag you used to declare your User Control on the page is now gone.NET) Compiler Error Message: BC30512: Option Strict disallows implicit conversions from System.MyControl.Control' to 'ASP. MyControl). if you forget to include the @Register directive. .MyControl' or (Visual Basic .UI. For instance. if you forget to cast the generic control object returned by the LoadControl method to the strong type of your User Control.UI.New Riders .NET made by dotneter@teamfly </script> <html> <head><title>MyPage</title></head> <body> <asp:panel </body> </html> Notice the use of the @Reference directive instead of the @Register directive (which is now gone). then you’ll see one of these errors: Compiler Error Message: CS0029: Cannot implicitly convert type 'System. you’ll end up with an error that looks like this: Compiler Error Message: CS0246: The type or namespace name 'MyControl' could not be found (are you missing a using directive or an assembly reference?) or (Visual Basic .Debugging ASP.Web.Web. Because you won’t be adding User Controls as server control tags. When Dynamic User Controls Go Wrong Some interesting things can go wrong if you don’t follow the rules of dynamic User Controls.NET) Compiler Error Message: BC30002: Type is not defined: 'MyControl' Likewise.Control to ASP. Web. you’ll get an error message like this: Compiler Error Message: CS0117: 'System.UI.UI.Visual Basic will implicitly cast the Control object for you. your variable still maintains a reference to it and can manipulate it. 
Otherwise.NET) Compiler Error Message: BC30456: The name 'Name' is not a member of 'System.NET web applications.NET made by dotneter@teamfly Note that the second of these two errors is the Visual Basic flavor and will occur only if you are using the Strict="True" attribute of the @Page directive. One final thing to note is that if you want to add multiple instances of your User Control. you must call the LoadControl method for each one. C# does not offer a setting to turn off option strict.Debugging ASP. Simply changing the properties of an existing User Control instance and then adding it to the Controls collection of your Panel server control will not suffice. We then covered problems encountered by User Control methods. it merely changes the properties of the original User Control that you added.Control'.Web.New Riders . We then added a property to your User Control and took a look at some of the errors that can occur if proper scope modifiers and code execution order rules are not followed. Summary In this chapter. so an explicit cast is always necessary. Some of the issues covered here included class name . To see the properties and methods specific to your User Control.Control' does not contain a definition for 'Name' or (Visual Basic . If you try to access the properties and methods of a generic Control object returned from the LoadControl method. you must cast it to the proper strong type specified in your @Reference directive. you looked at some of the idiosyncrasies of dealing with User Controls in ASP. including how to handle problems with modifying the Controls collection of the User Control and how to access out-of-scope controls. We rounded out the chapter by creating a dynamically loaded User Control. Even though you add the User Control instance to the Panel server control. We started by discussing some of the basic errors and problems that you might run into with User Controls. At that point. 
Caching Issues and Debugging ONE OF THE MOST IMPORTANT NEW FEATURES that ASP. Unfortunately. Let’s take a look at a few techniques for how to deal with both output caching and the Caching API. . Output Caching The easiest way to implement caching in your ASP. One of the things that might not be so obvious is that. by nature. so they are much better equipped to handle cached resources.Debugging ASP. For instance. That’s not a very elegant solution at all.NET makes available is a fairly extensive caching engine.NET made by dotneter@teamfly declaration. this method is also the most limiting and frustrating to work with. ASP. that content could end up trapped in cache. caching can make debugging your web application particularly troublesome.NET page or user control so that it would refresh itself. User Control references. Although caching in ASP. and explicit strong type casting to custom User Control types.NET’s caching capabilities are integrated at the ISAPI level. This removes the need for “home-grown” caching solutions that often contain memory leaks. if your web application contains a bug that displays the wrong content on a page.New Riders . Chapter 12.NET web applications is through directives at either the page or the user control level. The next chapter discusses caching issues that you might encounter while debugging your web applications. using a validation callback. there are a few intricacies and “gotchas” that we’d like to cover. which we’ll discuss a bit later. you would be forced to make a change to the underlying ASP. There is a better way to handle this issue.NET is relatively easy to implement. . VaryByParam Attribute Misuse of the VaryByParam attribute can lead to some strange behavior by your ASP. "*". Although you can use many different possible attributes (a few of which you’ll see in a bit). however. regardless of any changes that were made to the query string.NET page. Although this might appear to solve the problem. 
what happens if you are passing information on the query string that is personal to the current user—say. The following two page requests would both return the data for the product with an ID = 10 :. you must be aware that the same cached page will be served up for all requests for that page. you’ll get an error like this: Parser Error Message: The directive is missing a 'duration' attribute.com/product.com/product.aspx?ID=20 You might choose to fix this anomaly by specifying a value of * for the VaryByParam attribute.New Riders .yourdomain. Failure to include it will yield an error message like this: Parser Error Message: The directive is missing a ‘VaryByParam’ attribute. Accidentally specifying a value of 10.VaryByControl can be substituted for VaryByParam in user controls. but you must realize that the number it accepts is the number of “seconds” that you want your content to be cached for.aspx?ID=10. or a list of name/value pairs. For instance. If you forget to specify the Duration attribute. and set the VaryByParam attribute to none. would cache your data only for 10 seconds instead of 10 minutes (which is what you might expect).Debugging ASP. for instance. . if you intended to cache product information pages on your web application. two are mandatory: Duration and VaryByParam.NET made by dotneter@teamfly @OutputCache Directive The basis for output caching at both the page and the user control level is the @OutputCache directive. This is the second mandatory attribute of the @OutputCache directive. then the first product that was requested would be served up for all subsequent requests. If you specify none as a value for this attribute.yourdomain. Duration Attribute The Duration attribute is fairly straightforward. which should be set to "none". . If you need to specify more than one QueryString parameter as a caching key. 
you would likely want to place your @OutputCache directive in a user control that contains your product-rendering code so that the user QueryString parameter could be used by other sections of the page to personalize the output. This is not allowed and will generate an error like this: Parser Error Message: The 'VaryByHeader' attribute is not supported by the 'OutputCache' directive.aspx?ID=10&user=5678 You would want both of these requests to be served up from the same item in cache. Then.aspx?ID=10&user=1234. in the scenario just discussed. Use the VaryByHeader attribute only at the page level.com/product. The tricky part is that no errors are generated when you make this mistake.yourdomain. VaryByHeader Attribute The most common error that you will run into with the VaryByHeader attribute occurs if you try to use it in the @OutputCache directive of a user control.com/product.yourdomain.asax file. use ID. It is useful to note that. Take the following two page requests:. thereby causing the wrong caching behavior. In this case. user would be interpreted as one QueryString parameter.New Riders . user.. make sure that you use a semicolon as the delimiter instead of a comma.NET made by dotneter@teamfly perhaps a user ID? You would then get a cached version for each user that browsed each product. No errors will be generated if you forget to do this. Instead. but your custom parameter will not take part in the cache key. you must remember to override the GetVaryByCustomString method in your web application’s global. This is where you would want to specify exactly which QueryString parameters you want the cache to key off. only the product information itself would be served up from cache. it would be the ID parameter only. The value ID.Debugging ASP. VaryByCustom Attribute When using the VaryByCustom attribute to customize the key used to store your page or user control in cache. .2. object data. 
Manipulation of Cached User Controls

If you implement the @OutputCache directive in a user control, dynamically assigning the properties of that user control in one of the page's events (such as Page_Load) works only if a copy does not already exist in the fragment cache. If a cached copy does exist, you'll get an error like this:

System.NullReferenceException: Value null was found where an instance of an object was required.

As you can see, this error message isn't very intuitive. The exception to this is when you want to use declarative attributes. An example might look like this:

<chapter12:MyControl ... />

Remember the issue raised earlier, pertaining to data being trapped in the output cache? Here is where we'll put that issue to rest. Here you'll wire up a validation callback so that each time the page is requested from the cache, you'll get a chance to establish whether the cached item is still good. You'll do this by checking for the existence of a password QueryString parameter. Take a look at the code in Listings 12.1 and 12.2, which demonstrate the technique.

Listing 12.1 Invalidating the Output Cache Through a Validation Callback (C#)

<%@ OutputCache Duration="10000" VaryByParam="none" %>
<script runat="server">
protected void MyHttpCacheValidateHandler(HttpContext context,
    object data, ref HttpValidationStatus validationStatus)
{
    //initialize the password variable
    string password = "";

    //set the password variable if the user specified the
    //"password" QueryString parameter
    if(context.Request.QueryString["password"] != null)
    {
        password = (string)context.Request.QueryString["password"];
    }

    //if a password was specified, then determine if it is
    //correct. If it is, then validate the cached page.
    //If the password is incorrect or wasn't
    //specified, then evict the page from the cache.
    switch(password)
    {
        case "mypassword":
            validationStatus = HttpValidationStatus.Valid;
            break;
        default:
            validationStatus = HttpValidationStatus.Invalid;
            break;
    }
}

protected void Page_Load(Object sender, EventArgs e)
{
    //create an instance of the HttpCacheValidateHandler
    //delegate so that we can wire up the validation callback
    HttpCacheValidateHandler myHandler =
        new HttpCacheValidateHandler(MyHttpCacheValidateHandler);

    //wire up the callback
    Response.Cache.AddValidationCallback(myHandler, null);

    //by displaying the current date and time on the screen,
    //we can see when the cache is being invalidated
    label1.Text = DateTime.Now.ToString();
}
</script>
<html>
<head>
<title>Validation Callback</title>
</head>
<body>
<asp:label id="label1" runat="server" />
</body>
</html>

Listing 12.2 Invalidating the Output Cache Through a Validation Callback (Visual Basic .NET)

<%@ OutputCache Duration="10000" VaryByParam="none" %>
<script runat="server">
Protected Sub MyHttpCacheValidateHandler( _
    context As HttpContext, data As Object, _
    ByRef validationStatus As HttpValidationStatus)

    'initialize the password variable
    Dim password As String = ""

    'set the password variable if the user specified the
    '"password" QueryString parameter
    If context.Request.QueryString("password") <> Nothing Then
        password = CStr(context.Request.QueryString("password"))
    End If

    'if a password was specified, then determine if it is
    'correct. If it is, then validate the cached page
    Select Case password
        Case "mypassword"
            validationStatus = HttpValidationStatus.Valid
        Case Else
            validationStatus = HttpValidationStatus.Invalid
    End Select
End Sub

Protected Sub Page_Load(sender As Object, e As EventArgs)
    'create an instance of the HttpCacheValidateHandler
    'delegate so that we can wire up the validation callback
    Dim myHandler As New HttpCacheValidateHandler( _
        AddressOf MyHttpCacheValidateHandler)

    'wire up the callback
    Response.Cache.AddValidationCallback(myHandler, Nothing)

    'by displaying the current date and time on the screen,
    'we can see when the cache is being invalidated
    label1.Text = DateTime.Now.ToString()
End Sub
</script>
<html>
<head>
<title>Validation Callback</title>
</head>
<body>
<asp:label id="label1" runat="server" />
</body>
</html>

In Listings 12.1 and 12.2, you can see that we start off by specifying the @OutputCache directive with a duration of 10,000 seconds. This is plenty of time to demonstrate the validation callback. Also notice the fact that the VaryByParam attribute is set to none. Be sure not to specify a * for this attribute—if you do, the password QueryString parameter would become part of the new cache key, and you will never be able to reset the correct output cache. With the * value for VaryByParam, the first time you hit the page with the password parameter, a new copy is immediately stored under a new key. This is not the desired outcome. You can set the VaryByParam attribute to specific QueryString parameters. Just don't include the password parameter as one of them.

Below the @OutputCache directive, you define a handler for the validation callback. In it, you check for the presence of the password QueryString parameter. If it exists, then you assign it to the password variable. If the password is correct, you set a Valid status value to the validationStatus variable. Otherwise, you set an Invalid status value. Notice the use of a switch statement (Select Case statement, in Visual Basic .NET) instead of an if statement. As originally stated in Chapter 4, "Code Structure that Eases Debugging," this enhances the readability of the code and enables you to add more options if needed at a later time.

Next, you define your Page_Load event. You create an instance of the HttpCacheValidateHandler delegate, using the address of the handler function just defined. You can then wire up the callback through the AddValidationCallback method of the Cache property of the Response object.
To visually see when the cache is being invalidated, you assign the current date and time to a label server control on the page.

To see the validation callback in action, you can navigate to the page in your browser. The first time you hit the page, it is stored in cache. This is evident when you refresh the page and the same date and time appear on the page. Now add the password=mypassword QueryString parameter to the URL and hit the Enter key. You can see that the cache is refreshed with a new version of the page because the date and time change. If you now remove the password QueryString parameter and hit the Enter key again, you'll see that the newly cached page is fetched and displayed. By using this technique, you can force a reset of the output cache, eliminating the problem of invalid debugging data (or any other incorrect data, for that matter) being trapped in cache.

The Caching API

The Caching API is much more flexible than the @OutputCache directive, in that you are in direct control of inserting items into cache and specifying exactly when and how they should be removed. The trade-off is that it is not quite as convenient and it takes a bit more code to get working. For this reason, you will not run into as many issues while using the Caching API. A couple situations, however, might be a bit confusing. This section of the chapter helps you with some of the issues that you might encounter while using the Caching API.

Dependencies

Setting up dependencies for cached items is usually pretty straightforward. When you are setting up a file dependency, however, you might get an error that looks like this:

Exception Details: System.Web.HttpException: Invalid file name for monitoring: {0}

This error means that the Cache object cannot find the file that you want to base the dependency on. The cause of this error is almost always that you attempted to use a virtual path to your dependency file or that you just specified a filename, without a path. Unlike most other references in ASP.NET web applications, file dependencies require you to specify the absolute path to the file. An easy way to get the absolute path is to use the Server.MapPath() method call. The code in Listings 12.3 and 12.4 demonstrates both the incorrect and correct methods of specifying a file dependency.

Listing 12.3 Wrong and Right Ways to Use a File Dependency (C#)
<%@ Page Language="c#" %>
<script runat="server">
protected void Page_Load(object sender, EventArgs e)
{
    //this causes an error
    Cache.Insert("someKey", "someValue",
        new CacheDependency("keyfile.txt"));

    //this is the correct way
    Cache.Insert("someKey", "someValue",
        new CacheDependency(Server.MapPath("keyfile.txt")));
}
</script>

Listing 12.4 Wrong and Right Ways to Use a File Dependency (Visual Basic .NET)

<%@ Page Language="vb" %>
<script runat="server">
Protected Sub Page_Load(sender As Object, e As EventArgs)
    'this causes an error
    Cache.Insert("someKey", "someValue", _
        New CacheDependency("keyfile.txt"))

    'this is the correct way
    Cache.Insert("someKey", "someValue", _
        New CacheDependency(Server.MapPath("keyfile.txt")))
End Sub
</script>

Some tricky situations can arise when you are attempting to set up a dependency based on another item in cache through its key. To do this, you use the overloaded signature of the CacheDependency class that accepts two parameters. Leave the first parameter null (unless you want the dependency to be file-based as well). The second parameter is an array of strings correlating to the keys upon which you want this cached item to be dependent. The tricky part comes in when you try to base the dependency on a single key. Ideally, you could just specify the single string value as a parameter. If you do this, however, you will get an error like this:

Compiler Error Message: CS1502: The best overloaded method match for 'System.Web.Caching.CacheDependency.CacheDependency(string[], string[])' has some invalid arguments

or (Visual Basic .NET):

Compiler Error Message: BC30311: A value of type 'String' cannot be converted to '1-dimensional Array of String'.

Even though you are specifying only a single cache key as a dependency, it must still be in an array. Listings 12.5 and 12.6 provide examples of both incorrect and correct ways to handle a single-cache key dependency.

Listing 12.5 Wrong and Right Ways to Use a Single-Cache Key Dependency (C#)

<%@ Page Language="c#" %>
<script runat="server">
protected void Page_Load(object sender, EventArgs e)
{
    //this causes an error
    Cache.Insert("someKey", "someValue",
        new CacheDependency(null, "depKey"));

    //this is the correct way
    string[] dependency = {"depKey"};
    Cache.Insert("someKey", "someValue",
        new CacheDependency(null, dependency));
}
</script>

Listing 12.6 Wrong and Right Ways to Use a Single-Cache Key Dependency (Visual Basic .NET)

<%@ Page Language="vb" %>
<script runat="server">
Protected Sub Page_Load(sender As Object, e As EventArgs)
    'this causes an error
    Cache.Insert("someKey", "someValue", _
        New CacheDependency(Nothing, "depKey"))

    'this is the correct way
    Dim dependency As String() = {"depKey"}
    Cache.Insert("someKey", "someValue", _
        New CacheDependency(Nothing, dependency))
End Sub
</script>

A final point about dependencies that might cause you problems is that the Cache object enables you to set up a dependency on a cache key that does not exist. Interestingly, the item IS inserted into the cache and can be referenced later. If, however, you set up a file-based dependency based on a file that doesn't exist, the item that you are trying to insert into the cache is immediately invalidated. To be safe, one suggestion to avoid that situation is to check for the existence of an item in the cache before you use it as a dependency for another item.

Date and Time Expirations

Two of the overloaded signatures of the Insert method of the Cache class, as well as one signature of the Add method, enable you to specify an absolute expiration date/time for the cached item or a sliding expiration time. The key word to note in the previous sentence is or. If you attempt to specify both at the same time, you will get an error like this:

Exception Details: System.ArgumentException: absoluteExpiration must be DateTime.MaxValue or slidingExpiration must be timeSpan.Zero.

The error message tells you to use DateTime.MaxValue or TimeSpan.Zero so that only an absolute expiration or a sliding expiration is implemented. As an alternative, the Cache.NoAbsoluteExpiration constant can be used, which is equivalent to DateTime.MaxValue, and the Cache.NoSlidingExpiration constant can be used, which is equivalent to TimeSpan.Zero.

Retrieving Cached Items

Because all items returned from the cache are of the Object type, you must cast an item to the appropriate type before assigning it, like this:

string someValue = Cache["someValue"].ToString(); //C#

or

Dim someValue As String = Cache("someValue").ToString() 'VB

If you do not cast it to the proper type and you are not assigning the item from Cache to a variable of the Object type, you will get an error like this:

Compiler Error Message: CS0029: Cannot implicitly convert type 'object' to 'string'

If you attempt to retrieve an item from the cache using a key that doesn't exist, the return value is null (Nothing, in Visual Basic .NET). If this is the case, then when you attempt to cast it to the proper type (as you just saw), you will get an error like this:

Exception Details: System.NullReferenceException: Value null was found where an instance of an object was required.

This might happen more often than you think because you can never tell when a resource that the cached item is dependent on changes, thereby evicting it from the cache. Therefore, always verify that the item that you want to retrieve from the cache exists. The following code demonstrates this:

//C#
if(Cache["someValue"] != null)
{
    someValue = Cache["someValue"].ToString();
}

or

'Visual Basic .NET
If Cache("someValue") <> Nothing Then
    someValue = Cache("someValue").ToString()
End If

Removing Cached Items

You can run into trouble if you attempt to remove an item from the cache using an invalid key. Normally, the return value of the Remove method of the Cache class is the item that was removed from the cache. If the key is invalid, however, it does not generate an error. Instead, it returns a null reference (Nothing, in Visual Basic .NET). Therefore, if you plan to use the item that you are removing from the cache, you should verify that it exists first, using the same method demonstrated previously.

Summary

In this chapter, we covered some of the more common problems that you will encounter while leveraging caching in your web applications.
We started with a look at output caching, using the @OutputCache directive at both the page and user control levels. Problems associated with several of its attributes were discussed. We also covered the error generated by manipulating an output cached user control. Next, you learned how to use validation callbacks to manually evict a page from the output cache. This capability is key to being able to effectively debug web applications without having to worry about incorrect data being trapped in cache. It is also a useful feature that enables you to manually refresh an incorrect page that is in production. Issues related to the Caching API were discussed next. Specifically, you learned about the intricacies of both file- and key-based dependencies, as well as date and time expirations, and errors associated with retrieving and removing items from the cache. The next chapter discusses Web Services in the context of debugging your ASP.NET web applications.

Part IV: Debugging Related Technologies

13 Debugging Web Services
14 Debugging .NET Components and HttpHandlers
15 COM+ Issues
16 Debugging ADO.NET

Chapter 13. Debugging Web Services

WEB SERVICES PROVIDE AN AMAZING NEW STRIDE forward in distributed architecture. These new capabilities enable developers to extend the reach of web applications. They give you the capability to reach any web server. But what about debugging a Web Service? You probably have had the pleasure of debugging a DLL. Debugging a Web Service might seem similar, but there are some important differences to keep in mind while doing the debugging. This chapter identifies the key areas in building and debugging Web Services. We will show you what tools are available and different techniques to assist you in finding those bugs.

Web Services Stumbling Blocks

Microsoft has done a great deal to ease the process of building Web Services, especially in Visual Studio .NET. But let's focus on using just the .NET Framework tools here. First you'll take a look at a simple Web Service. Then we'll identify areas where you might have problems. Listings 13.1–13.4 serve as examples of the code that you will need to get started.

Listing 13.1 ASMX Code (C#)

<%@ WebService Language="c#" Codebehind="Service1.asmx.cs" Class="chap_13_c.TimeService" %>

Listing 13.2 Simple Web Service (C#)

using System;
using System.Web.Services;

namespace chap_13_c
{
    //Create a class and inherit the WebService functionality
    public class TimeService : System.Web.Services.WebService
    {
        //Constructor
        public TimeService()
        {
        }

        //This is required to make the following method SOAP enabled
        [WebMethod]
        public string GetTime()
        {
            return DateTime.Now.ToString();
        }
    }
}

Listing 13.3 ASMX Code (Visual Basic .NET)

<%@ WebService Language="vb" Codebehind="Service1.asmx.vb" Class="chap_13_vb.Service1" %>

Listing 13.4 Simple Web Service (Visual Basic .NET)

Imports System
Imports System.Web.Services

Public Class TimeService
    Inherits System.Web.Services.WebService

    <WebMethod()> Public Function GetTime() As String
        GetTime = Date.Now.ToString()
    End Function
End Class

If you look at Listings 13.1–13.4, you will notice that you have to produce very little code to create a simple Web Service. Let's take a look at where you might run into problems and how to resolve them.

Error Messages

You might run into a variety of error messages, but we'll try to focus on some of the more common areas where problems might arise.
Let's start off by looking at a couple common error messages. You might run into this one right off the bat:

System.IO.FileInfo cannot be serialized because it does not have a default public constructor.

If you manage to get past that error, you might run into the following one:

System.Exception: There was an error generating the XML document. ---> System.Exception: System.IO.FileInfo cannot be serialized because it does not have a default public constructor.
   at System.Xml.Serialization.TypeScope.ImportTypeDesc(Type type, Boolean canBePrimitive)
   at System.Xml.Serialization.TypeScope.GetTypeDesc(Type type)
   at System.Xml.Serialization.TypeScope.ImportTypeDesc(Type type, Boolean canBePrimitive)
   at System.Xml.Serialization.TypeScope.GetTypeDesc(Type type)
   at System.Xml.Serialization.XmlSerializationWriter.CreateUnknownTypeException(Type type)
   at System.Xml.Serialization.XmlSerializationWriter.WriteTypedPrimitive(String name, String ns, Object o, Boolean xsiType)
   at n2499d7d93ffa468fbd8861780677ee41.XmlSerializationWriter1.Write4_Object(String n, String ns, Object o, Boolean isNullable, Boolean needType)
   at n2499d7d93ffa468fbd8861780677ee41.XmlSerializationWriter1.Write9_Object(Object o)
   at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces)
   at System.Xml.Serialization.XmlSerializer.Serialize(TextWriter textWriter, Object o)
   at System.Web.Services.Protocols.XmlReturnWriter.Write(HttpResponse response, Stream outputStream, Object returnValue)
   at System.Web.Services.Protocols.HttpServerProtocol.WriteReturns(Object[] returnValues, Stream outputStream)
   at System.Web.Services.Protocols.WebServiceHandler.WriteReturns(Object[] returnValues)
   at System.Web.Services.Protocols.WebServiceHandler.Invoke()
   at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest()

If you get either of those messages, whether while your service is just starting or as you invoke the method that returns the data that caused the error, most likely you are trying to return a complex data set or array that the system cannot serialize. If you take a look at Listings 13.5 and 13.6, you can see that it would be easy to overlook what type of data the system can return.

Listing 13.5 Complex Return Type (C#)

[WebMethod]
public DirectoryInfo[] Dir(string dir)
{
    DirectoryInfo di = new DirectoryInfo(dir);
    DirectoryInfo[] diList = di.GetDirectories("*.*");
    return diList;
}

Listing 13.6 Complex Return Type (Visual Basic .NET)

<WebMethod()> Public Function Dir(ByVal dir As String) As DirectoryInfo()
    Dim di As New DirectoryInfo(dir)
    Dim diList As DirectoryInfo() = di.GetDirectories("*.*")
    Return diList
End Function

The code in Listings 13.5 and 13.6 looks fine, but it won't work. So now what do you do? Well, one way to work around this is to build your own array and then pass it back to the client. If you do this, you must make sure that the array structure is published as public so that the users know what they are working with. This gets accomplished in Listings 13.7 and 13.8.

Listing 13.7 Custom Return Type (C#)

using System;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Web;
using System.Web.Services;
using System.Xml.Serialization;
using System.IO;

namespace DirectoryTools
{
    /// <summary>
    /// Summary description for Service1.
    /// </summary>
    public class Service1 : System.Web.Services.WebService
    {
        public Service1()
        {
            //CODEGEN: This call is required by the ASP.NET Web Services Designer
            InitializeComponent();
        }

        #region Component Designer generated code
        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        private void InitializeComponent()
        {
        }
        #endregion

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        protected override void Dispose( bool disposing )
        {
        }

        // WEB SERVICE EXAMPLE
        // The HelloWorld() example service returns the string Hello World
        // To build, uncomment the following lines and then save and build the project
        // To test this web service, press F5
        [WebMethod]
        public string HelloWorld()
        {
            return "Hello World";
        }

        [WebMethod()]
        [XmlInclude(typeof(DirObject))]
        public DirObject Test(string sPath)
        {
            DirectoryInfo di = new DirectoryInfo(sPath);
            DirectoryInfo[] diList = di.GetDirectories("*.*");
            DirObject temp = new DirObject();
            int x = 0;
            foreach (DirectoryInfo d in diList)
            {
                temp[x] = d;
                x++;
            }
            return temp;
        }
    }

    [XmlInclude(typeof(DirItem))]
    public class DirObject
    {
        protected ArrayList data = new ArrayList();

        [XmlElement("Item")]
        public ArrayList Data
        {
            get { return data; }
            set { data = value; }
        }

        public object this[int idx]
        {
            get
            {
                if (idx > -1 && idx < data.Count)
                {
                    return data[idx];
                }
                else
                {
                    return null;
                }
            }
            set
            {
                if (idx > -1 && idx < data.Count)
                {
                    data[idx] = value;
                }
                else if (idx == data.Count)
                {
                    DirItem x = new DirItem();
                    DirectoryInfo temp = (DirectoryInfo)value;
                    x.FileName = temp.FullName;
                    data.Add(x);
                }
                else
                {
                    //Possibly throw an exception here.
                }
            }
        }
    }

    public class DirItem
    {
        protected string filename;

        public DirItem()
        {
        }

        [XmlAttribute("FileName")]
        public string FileName
        {
            get { return filename; }
            set { filename = value; }
        }
    }
} //Namespace

Listing 13.8 Custom Return Type (Visual Basic .NET)

Imports System.Web.Services
Imports System.IO
Imports System.Xml
Imports System.Xml.Serialization

<WebService(Namespace:=".../webservices/")> Public Class Service1
    Inherits System.Web.Services.WebService

#Region " Web Services Designer Generated Code "

    Public Sub New()
        MyBase.New()

        'This call is required by the Web Services Designer.
        InitializeComponent()

        'Add your own initialization code after the InitializeComponent() call
    End Sub

    'Required by the Web Services Designer
    Private components As System.ComponentModel.Container

    'Do not modify it using the code editor.
    <System.Diagnostics.DebuggerStepThrough()> Private Sub InitializeComponent()
        components = New System.ComponentModel.Container()
    End Sub

    Protected Overloads Overrides Sub Dispose(ByVal disposing As Boolean)
    End Sub

#End Region

    <WebMethod()> Public Function Test(ByVal path As String) As DirObject
        Dim di As DirectoryInfo = New DirectoryInfo(path)
        Dim diList() As DirectoryInfo
        Dim d As DirectoryInfo
        Dim temp As DirObject = New DirObject()
        Dim x As Integer = 0

        diList = di.GetDirectories("*.*")
        For Each d In diList
            temp(x) = d
            x = x + 1
        Next

        Return temp
    End Function

    <XmlInclude(GetType(DirItem)), XmlRoot("Root")> Public Class DirObject
        Private pdata As ArrayList = New ArrayList()

        <XmlElement("Item")> Public Property Data() As ArrayList
            Get
                Return pdata
            End Get
            Set(ByVal Value As ArrayList)
                pdata = Value
            End Set
        End Property

        Default Public Property base(ByVal idx As Integer) As Object
            Get
                If (idx > -1 And idx < pdata.Count) Then
                    Return pdata(idx)
                Else
                    Return Nothing
                End If
            End Get
            Set(ByVal Value As Object)
                If (idx > -1 And idx < pdata.Count) Then
                    pdata(idx) = Value
                ElseIf (idx = pdata.Count) Then
                    Dim x As DirItem = New DirItem()
                    Dim temp As DirectoryInfo
                    temp = CType(Value, DirectoryInfo)
                    x.FileName = temp.FullName
                    x.LastAccessDate = temp.LastAccessTime
                    pdata.Add(x)
                Else
                    'Possibly throw an exception here.
                End If
            End Set
        End Property
    End Class

    Public Class DirItem
        Private pfilename As String
        Private pLastAccessDate As Date

        Sub DirItem()
        End Sub

        <XmlElement("FileName")> Public Property FileName() As String
            Get
                Return pfilename
            End Get
            Set(ByVal Value As String)
                pfilename = Value
            End Set
        End Property

        <XmlElement("LastAccessDate")> Public Property LastAccessDate() As Date
            Get
                Return pLastAccessDate
            End Get
            Set(ByVal Value As Date)
                pLastAccessDate = Value
            End Set
        End Property
    End Class
End Class
In Listings 13.7 and 13.8, you will notice a few new elements that you might not have seen before: <XmlAttribute> and <XmlInclude>. These additional features define the output when this class is serialized. This is a great way to rename elements in the XML structure to something more meaningful.

Problems Working with XMLSerializer

You just saw one way to pass back data, but you might say that it involves a bit of work; there must be a simpler way to do it. Why not use the XMLSerializer class to generate the returning data in an XML valid format? Well, if you try this, you might get the following error message:

System.Exception: There was an error reflecting 'System.IO.DirectoryInfo'. ---> System.Exception: System.IO.DirectoryInfo cannot be serialized because it does not have a default public constructor.
   at System.Xml.Serialization.TypeScope.ImportTypeDesc(Type type, Boolean canBePrimitive)
   at System.Xml.Serialization.TypeScope.GetTypeDesc(Type type)
   at System.Xml.Serialization.ModelScope.GetTypeModel(Type type)
   at System.Xml.Serialization.XmlReflectionImporter.ImportTypeMapping(Type type, XmlRootAttribute root, String defaultNamespace)
   at System.Xml.Serialization.XmlReflectionImporter.ImportTypeMapping(Type type, XmlRootAttribute root, String defaultNamespace)
   at System.Xml.Serialization.XmlSerializer..ctor(Type type)
   at chapter13_c.Service1.Dir(String dir) in c:\documents and settings\brad\vswebcache\digital-laptop\chapter13_c\service1.asmx.cs:line 110

Let's look at the line of code where the error occurred and see why this is happening. Listings 13.9 and 13.10 provide a fix.

Listing 13.9 Fix for the XMLSerializer problem (C#)

[WebMethod]
public Object[] Dir(string dir)
{
    XmlSerializer serializer = new XmlSerializer(typeof(DirectoryInfo));
    ...
}

Listing 13.10 Fix for the XMLSerializer problem (Visual Basic .NET)

<WebMethod()> Public Function Dir(ByVal dirname As String) As Object()
    Dim serializer = New XmlSerializer(GetType(DirectoryInfo))
    ...
End Function

The XmlSerializer cannot serialize the DirectoryInfo structure into XML. That is why you must define the different classes to mimic the DirectoryInfo object, as shown earlier.

Working with Errors in SOAP

While working with Web Services, you eventually will need to handle errors in a more constructive way. The following error message is an example of what you would see if you did not have any mechanism for handling errors:

System.IndexOutOfRangeException: Exception of type System.IndexOutOfRangeException was thrown.
   at chap_13_c.TimeService.ErrorString() in c:\inetpub\wwwroot\chap_13_c\service1.asmx.cs:line 38

This error message is typical of what you would see if your Web Service ran into an error. The question is, how do you want to handle the errors that pop up? Listings 13.11 and 13.12 show a simple example to catch the error and then return it inside the contents of the SOAP message.

Listing 13.11 WebMethod Error (C#)

[WebMethod]
public object ReturnErrorString()
{
    // Catch the error then return it inside a SOAP message.
    string[] sret = new string[3];
    try
    {
        sret[0] = "one";
        sret[1] = "two";
        sret[3] = "three"; //Error
    }
    catch(Exception e)
    {
        sret[0] = e.Message;
    }
    return sret;
}

Listing 13.12 WebMethod Error (Visual Basic .NET)

<WebMethod()> Public Function ErrorStringCatch() As String()
    ' Catch the error then return it inside a SOAP message.
    Dim sret(3) As String
    Try
        sret(0) = "Hello"
        sret(1) = "Bye"
        sret(13) = "Testing" 'This will generate an error
    Catch e As Exception
        sret(0) = e.Message
    End Try
    ErrorStringCatch = sret
End Function

If you look closely at the code in Listings 13.11 and 13.12, you will notice that we intentionally added a runtime error while adding the third item to the array. Now when you run into an error, it will be returned inside the SOAP message. This is one way to approach handling errors and sending them back to the client, but this is not the preferred method for doing so. If you did this, you would need to check the first item of the array and try to identify whether it is an error message. Otherwise, if you just returned the error message inside the array, the return message might look something like Listing 13.13.

Listing 13.13 SOAP Return Values

<?xml version="1.0" encoding="utf-8" ?>
<Object n1:
    <string>Exception of type System.IndexOutOfRangeException was thrown.</string>
    <string>Bye</string>
</Object>

To keep this from happening, one of two things should be done. First, you can identify what the problem is inside the catch and try to resolve the problem there. Second, you can throw another exception that will be sent back to the client. If you look at Listings 13.14 and 13.15, you will see that you can use the try-catch method and then rethrow the error. This gives you the capability to identify the error inside your Web Service client and deal with it there.

Listing 13.14 Throw Exception (C#)

[WebMethod]
public object ReturnErrorString()
{
    // Catch the error then return it inside a SOAP message.
    string[] sret = new string[3];
    try
    {
        sret[0] = "one";
        sret[1] = "two";
        sret[3] = "three"; //Error
    }
    catch(Exception e)
    {
        throw(e); //Return the exception to the client
    }
    return sret;
}

Listing 13.15 Throw Exception (Visual Basic .NET)
Don't forget that you need to use the try-catch statement on the client side as well, to be able to handle the exception being passed back by the server. To show how the client should be implemented, take a look at the code in Listings 13.16 and 13.17.

Listing 13.16 The try-catch Statement on the Client Side (C#)

private void Button2_Click(object sender, System.EventArgs e)
{
    try
    {
        localhost.Service1 s = new localhost.Service1();
        s.ErrorStringCatch();
    }
    catch(Exception ex)
    {
        TextBox1.Text = ex.Message;
    }
}

Listing 13.17 The try-catch Statement on the Client Side (Visual Basic .NET)

Private Sub Button1_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles Button1.Click
    Try
        Dim s As New localhost.Service1()
        s.ErrorStringCatch()
    Catch ex As Exception
        TextBox1.Text = ex.Message
    End Try
End Sub

In Listings 13.16 and 13.17, the user clicks the button to invoke the method of the Web Service. If there is an error on the server side, the Web Service passes back the error information, and it can be received in the catch statement. When an error does get caught, it is displayed in the text box that is on the page. A simple change in one line of code can make a world of difference when it comes to debugging.

Error Returning Certain Types of Data

One of the errors we ran into was simple because we were not paying attention to the details of the code. The code compiled fine—we're not sure whether it was a compiler issue or, as a software developer would say, a problem "as designed," meaning that this is the way it is supposed to be. But when you try to use it, you end up getting an error. Take a look at the following error message and see if you can identify why we received this message:

System.Exception: There was an error generating the XML document.
---> System.Exception: The type System.Object[] may not be used in this context.
   at System.Xml.Serialization.XmlSerializationWriter.WriteTypedPrimitive(String name, String ns, Object o, Boolean xsiType)
   at n2499d7d93ffa468fbd8861780677ee41.XmlSerializationWriter1.Write1_Object(String n, String ns, Object o, Boolean isNullable, Boolean needType)
   at n2499d7d93ffa468fbd8861780677ee41.XmlSerializationWriter1.Write3_Object(Object o)
   at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces)
   at System.Xml.Serialization.XmlSerializer.Serialize(TextWriter textWriter, Object o)
   at System.Web.Services.Protocols.XmlReturnWriter.Write(HttpResponse response, Stream outputStream, Object returnValue)
   at System.Web.Services.Protocols.HttpServerProtocol.WriteReturns(Object[] returnValues, Stream outputStream)
   at System.Web.Services.Protocols.WebServiceHandler.WriteReturns(Object[] returnValues)
   at System.Web.Services.Protocols.WebServiceHandler.Invoke()
   at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest()

This message contains a lot of repetitive information, but it also has some very specific bits that narrow down where the problem is coming from. You'll notice that Object and Serialize are used several times, so you know from the error message at the beginning that the problem originates with serializing the System.Object class. When working with any sort of array, you must make sure to define your WebMethod as returning an array, not just a single value or object. Listings 13.18–13.21 give some examples of problems that you might run into when returning arrays of objects.

Listing 13.18 Code with Error (C#)

[WebMethod]
public object ErrorStringCatch() // Missing [] from object

Listing 13.19 Correct Syntax (C#)

[WebMethod]
public object[] ErrorStringCatch()

Listing 13.20 Code with Error (Visual Basic .NET)

<WebMethod()> Public Function ErrorStringCatch() As Object

Listing 13.21 Correct Syntax (Visual Basic .NET)

<WebMethod()> Public Function ErrorStringCatch() As Object()

This might seem like a trivial item, but don't forget that most errors that you will run into probably are simple errors that you overlooked. Passing simple types back and forth is very simple to do; passing arrays is the next step.
Working with Streams

While playing around with Web Services, you might have experimented with different types of data that could be returned by a Web Service. If you tried to do some basic functions, such as getting a directory listing or finding out some information on a file, you might have run into some problems. Listings 13.22 and 13.23 give a simple example of how you might attempt to pass back a FileStream object.

Listing 13.22 GetFile WebMethod (C#)

[WebMethod]
public FileStream GetFile()
{
    FileStream fs = new FileStream("c:\\winnt\\greenstone.bmp", FileMode.Open);
    byte[] buf = new byte[fs.Length];
    fs.Read(buf, 0, (int)fs.Length);
    return fs;
}

Listing 13.23 GetFile WebMethod (Visual Basic .NET)

<WebMethod()> Public Function GetFile() As FileStream
    Dim fs As New FileStream("c:\winnt\greenstone.bmp", FileMode.Open)
    Dim buf(CInt(fs.Length) - 1) As Byte
    fs.Read(buf, 0, CInt(fs.Length))
    Return fs
End Function

The code in Listings 13.22 and 13.23 looks like it should work, but when you try to use it, you end up getting an error. Take a look at the error shown in Figure 13.1 to see what the end result will be.

Figure 13.1. Serialization problems.

There is a simple resolution to this problem. Web Services need to pass data in one of two basic forms: text or binary. The text format, in this case, needs to be in an XML format of some sort; the binary format, in this case, is an array of bytes. Because you are already dealing with an array of bytes, just pass that array back to the client. To do this, you simply need to change the return type of the method to an array of bytes. Take a look at Listings 13.24 and 13.25 to see how this would be done in the code.

Listing 13.24 Returning Binary Data (C#)

[WebMethod]
public Byte[] GetFile()
{
    FileStream fs = new FileStream("c:\\winnt\\greenstone.bmp", FileMode.Open);
    byte[] buf = new byte[fs.Length];
    fs.Read(buf, 0, (int)fs.Length);
    fs.Close();
    return buf;
}

Listing 13.25 Returning Binary Data (Visual Basic .NET)

<WebMethod()> Public Function GetFile() As Byte()
    Dim fs As New FileStream("c:\winnt\greenstone.bmp", FileMode.Open)
    Dim buf(CInt(fs.Length) - 1) As Byte
    fs.Read(buf, 0, CInt(fs.Length))
    fs.Close()
    Return buf
End Function

As you can see, with a simple change to the way the method returns the data, the problem is solved.
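One weakness in Listings 13.24 and 13.25 is that if the Read call throws, the stream is never closed and the file stays locked. A hedged variation of the C# version (same hypothetical file path as the listings) that guarantees cleanup with try/finally:

```csharp
[WebMethod]
public byte[] GetFile()
{
    // Same hypothetical path as the listings above.
    FileStream fs = new FileStream("c:\\winnt\\greenstone.bmp", FileMode.Open);
    try
    {
        byte[] buf = new byte[fs.Length];
        fs.Read(buf, 0, (int)fs.Length);
        return buf;
    }
    finally
    {
        // Runs even if Read throws, so the file handle is always released.
        fs.Close();
    }
}
```

The same pattern applies to the Visual Basic .NET version with a Try/Finally block.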
Not all your problems will be this simple, though. If you start playing around with writing the returned array of bytes to a file on the client, as we did, you might run into the error message shown in Figure 13.2.

Figure 13.2. Writing to a binary file.

If you take a close look at Listings 13.26 and 13.27, you will notice that we were trying to change the offset of where the file started to write.

Listing 13.26 Offset Error (C#)

private void Button1_Click(object sender, System.EventArgs e)
{
    localhost.Service1 svc = new localhost.Service1();
    Byte[] ba = svc.GetPicture();
    FileStream fs = new FileStream("c:\\download\\my.bmp",
        FileMode.Create, FileAccess.Write);
    fs.Write(ba, 1, ba.Length); // Error on the offset
    fs.Close();
}

Listing 13.27 Offset Error (Visual Basic .NET)

Private Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs)
    Dim svc As New localhost.Service1()
    Dim ba As Byte()
    ba = svc.GetPicture()
    Dim fs As New FileStream("c:\download\my.bmp", _
        FileMode.Create, FileAccess.Write)
    fs.Write(ba, 1, ba.Length) 'Error on the offset
    fs.Close()
End Sub

The call asks the stream to write ba.Length bytes starting at index 1 of the array, which runs one byte past the end of the buffer and throws an exception. The offset should be 0.

Tools

Microsoft has provided some additional tools to simplify the process of consuming or building client applications for Web Services. If you are using Visual Studio .NET, some of these tools won't be necessary to use, because Visual Studio .NET has most of these command-line tools built into the IDE.

WSDL.exe

The Web Services Description Language tool generates code for ASP.NET Web Services and Web Service clients from WSDL contract files, XSD schemas, and .discomap discovery documents.

Soapsuds.exe

The Soapsuds tool helps you compile client applications that communicate with Web Services using a technique called remoting. Soapsuds.exe performs the following functions:

• It creates XML schemas describing services exposed in a common language runtime assembly.
• It creates runtime assemblies to access services described by XML schemas. A schema definition can be a local file, or it can be dynamically downloaded from the Internet.

Like WSDL.exe, the Soapsuds tool can also generate the source code to communicate with the Web Service.

What Should I Use—WSDL.exe or SoapSuds.exe?

If you are having problems using the WSDL.exe tool to generate a proxy class for you, try using the Soapsuds.exe program. Because SOAP and remoting are basically the same thing, the proxy it generates can often get you past a stubborn contract file.

Web Service Description Language

The WSDL file provides you with all the necessary interface information to consume a Web Service. One problem that you might run into is that there are different versions of WSDL floating around; the latest version is WSDL 1.1. Because this is an evolving technology, we'll take a look here at how these differences can affect your client's communication with the Web Service.

Universal Description, Discovery, and Integration (UDDI)

You will start to run into UDDI more often. This is yet another new standard being developed for SOAP-enabled users to find and use other SOAP-enabled users, or, more likely, businesses. The best way to look at UDDI is as the Yellow Pages for Web Services.

Errors While Using WSDL.exe

One of the most common problems involves trying to generate the proxy code for a remote Web Service. Many sites on the web promote Web Services; people are building Web Services and putting them on the web as fast as they can make them. This is great in some cases, because it provides developers with many consumable resources. But, on the flip side, the technology and standards for SOAP are evolving just as fast. As soon as you get a new Web Service out there, a newer version of the standard might have been released. So now there is the issue of different formats (SDC, SDL, WSDL 1.0/1.1) floating around on the web. If you try to run the WSDL.exe tool on a WSDL file and it gives you one of many errors, that is probably because the file versions are different: the WSDL tool is designed to work with WSDL 1.1. When that happens, try using the Soapsuds.exe tool to generate a proxy stub for yourself.
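For reference, a typical WSDL.exe command line looks roughly like the following; the URL, namespace, and output file name are placeholders, not values from this chapter:

```
wsdl.exe /language:CS /namespace:MyProxies /out:Service1Proxy.cs http://localhost/MyService/Service1.asmx?WSDL
```

The /language switch also accepts VB, and if the command fails with a version-related error, the Soapsuds route described earlier is the fallback.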
Common SOAP Errors

You might encounter this common error:

The underlying connection was closed: Unable to connect to the remote server.

This is typically the result of problems on the network, not necessarily in your code. But don't rule out your code: have someone else look at it to see if you have overlooked something. A second pair of eyes is always beneficial.

Another error is the SoapException error, which looks like this:

The request reached the server machine, but was not processed successfully.

If you run into this one, you might be wondering: what wasn't processed successfully? Why didn't it tell me what wasn't processed? If you are using the standard Exception class to handle errors, you could be missing out on some important information provided by the SoapException class.

Basic Web Services Debugging

As with any program, no matter what type of environment you are in, you can use some common debugging methods. The most basic of these involves logging all your debugging information to a file that can be reviewed later. Because the Web Service has very little interaction with the user interface, this is one approach to take. The other approach is to build your own exception class that you can customize; then, whenever an error occurs, you can throw your custom exception. But one of the key elements of debugging Web Services is the SoapException class. Let's take a closer look at this class in the next section.

Getting the Most Out of SoapException

Up to this point, you have looked at various problems that might arise and learned how to work around them. Not only does the SoapException class handle exceptions that were thrown, but, within the properties of the exception, there are some very useful pieces of data that can help to identify what the problem is. Take a look at Listings 13.28 and 13.29 to see how you can use the SoapException class to catch errors.

Listing 13.28 Using SoapException (C#)

private void Button2_Click(object sender, System.EventArgs e)
{
    try
    {
        localhost.Service1 s = new localhost.Service1();
        s.HelloWorld(2); // This will throw an exception
    }
    catch(SoapException ex)
    {
        // Display the main message
        TextBox2.Text = ex.Message;
        // Get any additional detailed information
        System.Xml.XmlNode xn = ex.Detail;
        // How many attributes are there
        TextBox3.Text = xn.Attributes.Count.ToString();
        // The name of the app or object that caused the problem
        TextBox4.Text = ex.Actor;
        // Display the piece of code that caused the problem
        TextBox5.Text = ex.Source;
        // Get the type of SOAP fault code
        System.Xml.XmlQualifiedName qn = ex.Code;
        // Get the XML qualified name
        TextBox6.Text = qn.Name;
        // Get the XML namespace
        TextBox7.Text = qn.Namespace;
        // Get the method that threw the exception
        System.Reflection.MethodBase r = ex.TargetSite;
        Type st = r.DeclaringType;
        // Get the assembly name where the exception came from
        TextBox8.Text = st.AssemblyQualifiedName;
    }
}

Listing 13.29 Using SoapException (Visual Basic .NET)

Private Sub Button2_Click(ByVal sender As Object, ByVal e As System.EventArgs)
    Try
        Dim s As New localhost.Service1()
        s.HelloWorld(2) 'This will throw an exception
    Catch ex As SoapException
        'Display the main message
        TextBox2.Text = ex.Message
        'Get any additional detailed information
        Dim xn As System.Xml.XmlNode = ex.Detail
        'How many attributes are there
        TextBox3.Text = xn.Attributes.Count.ToString()
        'The name of the app or object that caused the problem
        TextBox4.Text = ex.Actor
        'Display the piece of code that caused the problem
        TextBox5.Text = ex.Source
        'Get the type of SOAP fault code
        Dim qn As System.Xml.XmlQualifiedName = ex.Code
        'Get the XML qualified name
        TextBox6.Text = qn.Name
        'Get the XML namespace
        TextBox7.Text = qn.Namespace
        'Get the method that threw the exception
        Dim r As System.Reflection.MethodBase = ex.TargetSite
        Dim st As Type = r.DeclaringType
        'Get the assembly name where the exception came from
        TextBox8.Text = st.AssemblyQualifiedName
    End Try
End Sub

Listings 13.28 and 13.29 demonstrate how to extract the additional elements that are part of the SoapException class. These additional properties could help you identify the problem.
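The flip side of Listings 13.28 and 13.29 is populating the SoapException on the server so that properties such as Code and Detail actually carry useful data when the client catches the fault. The following is a hedged sketch, not code from the chapter; the method body and detail text are invented for illustration, and it assumes using directives for System.Web.Services.Protocols and System.Xml:

```csharp
[WebMethod]
public string HelloWorld(int id)
{
    try
    {
        // ... work that may fail; this failure is invented for the sketch ...
        throw new Exception("id " + id + " was rejected");
    }
    catch (Exception e)
    {
        // Build a detail node to travel inside the SOAP fault.
        XmlDocument doc = new XmlDocument();
        XmlNode detail = doc.CreateNode(XmlNodeType.Element,
            SoapException.DetailElementName.Name,
            SoapException.DetailElementName.Namespace);
        detail.InnerText = e.Message;

        // ServerFaultCode marks the fault as a server-side problem.
        throw new SoapException("Server error in HelloWorld",
            SoapException.ServerFaultCode,
            Context.Request.Url.AbsoluteUri,
            detail);
    }
}
```

With this in place, the client-side ex.Detail and ex.Code reads in Listings 13.28 and 13.29 have something meaningful to display.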
But you still might run into some problems. there is not very much that you need to do. In Chapter 14 we move on into debugging of the .29 demonstrate how to extract the additional elements that are part of the SoapException class. One of the first things that you should check is that the . If it is not on that server.NET made by dotneter@teamfly Listings 13. but as soon as you start to push the limits of the technology. but that’s not true for everything yet. This becomes more apparent when you start trying to return complex structures of data such as FileStreams and DirectoryInfo objects. Many components can be serialized into XML. After you have developed you Web Service. You have to admit that Microsoft has done a good job so far.New Riders . using System. take notice of which class was inherited.NET framework.NET made by dotneter@teamfly The Component You’ll start by creating a simple component with a method that returns the date. using System.Forms Class Composition Designer support /// </summary> container.Diagnostics. This is the minimal amount of code to create a basic component in the . using System.IContainer container) { /// <summary> /// Required for Windows. Listing 14. /// </summary> private System.ComponentModel.ComponentModel. As you look at the listings that follow.Container components = null.1 Basic Component (C#) using System.New Riders .2). this plays an important role in a component.Collections. not just a class (See Listings 14.Component { /// <summary> /// Required designer variable. namespace WebProject1 { /// <summary> /// Summary description for DateStuff.Add(this). /// </summary> public class DateStuff : System. public DateStuff(System.ComponentModel. // // TODO: Add any constructor code after InitializeComponent call // } public DateStuff() .Debugging ASP. Also take a closer look at the ComponentModel namespace—this will give you a better idea of the core pieces that make up a component.ComponentModel. InitializeComponent(). 
The Component

You'll start by creating a simple component with a method that returns the date. Listings 14.1 and 14.2 show the minimal amount of code needed to create a basic component in the .NET framework. As you look at the listings that follow, take notice of which class was inherited; this plays an important role in a component. Also take a closer look at the ComponentModel namespace, which will give you a better idea of the core pieces that make up a component, and keep your eyes open for areas in the code that use the Container class.

Listing 14.1 Basic Component (C#)

using System;
using System.Collections;
using System.ComponentModel;
using System.Diagnostics;

namespace WebProject1
{
    /// <summary>
    /// Summary description for DateStuff.
    /// </summary>
    public class DateStuff : System.ComponentModel.Component
    {
        /// <summary>
        /// Required designer variable.
        /// </summary>
        private System.ComponentModel.Container components = null;

        public DateStuff(System.ComponentModel.IContainer container)
        {
            // Required for Windows.Forms Class Composition Designer support
            container.Add(this);
            InitializeComponent();
            // TODO: Add any constructor code after InitializeComponent call
        }

        public DateStuff()
        {
            // Required for Windows.Forms Class Composition Designer support
            InitializeComponent();
            // TODO: Add any constructor code after InitializeComponent call
        }

        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        private void InitializeComponent()
        {
            components = new System.ComponentModel.Container();
        }

        public string GetToday()
        {
            return DateTime.Now.ToString();
        }
    }
}

Listing 14.2 Basic Component (Visual Basic .NET)

Public Class getdate
    Inherits System.ComponentModel.Component

    Public Overloads Sub New(ByVal Container As System.ComponentModel.IContainer)
        MyClass.New()
        'Required for Windows.Forms Class Composition Designer support
        Container.Add(Me)
    End Sub

    Public Overloads Sub New()
        MyBase.New()
        'This call is required by the Component Designer.
        InitializeComponent()
        'Add any initialization after the InitializeComponent() call
    End Sub

    'Required by the Component Designer
    Private components As System.ComponentModel.Container

    'NOTE: The following procedure is required by the Component Designer
    'It can be modified using the Component Designer.
    'Do not modify it using the code editor.
    <System.Diagnostics.DebuggerStepThrough()> Private Sub InitializeComponent()
        components = New System.ComponentModel.Container()
    End Sub

    Public Function GetDate() As String
        GetDate = DateTime.Now.ToString()
    End Function
End Class

Added to this component is a simple method that returns today's date and time in a string. It's not much, but it's a start for now. Next you'll look at how you can use this in your code.
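As a quick illustration of consuming the component, here is a minimal sketch of calling it from a page's code-behind. The namespace WebProject1 and the DateStuff class come from Listing 14.1; everything else is our own scaffolding:

```csharp
using WebProject1;

// Somewhere in a page's Page_Load, for example:
DateStuff ds = new DateStuff();
try
{
    // Writes the current date/time string produced by the component.
    Response.Write(ds.GetToday());
}
finally
{
    // The class inherits from System.ComponentModel.Component, which
    // implements IDisposable, so release it when done.
    ds.Dispose();
}
```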
You can use several different classes in the System.Diagnostics namespace to help track down bugs, including Debugger, StackTrace, and the Debug and Trace classes. Let's take a look at how you can use them.

The Debugger class enables you to see whether your process is attached to an existing debugger. If you are working on the server on which the component is running, then you can start the debugger and begin stepping through the process. If your process is already attached to an instance of a debugger, there is no need to start a new instance; just use the existing one. Otherwise, start a new instance of the debugger and then step through your code using that instance. Listings 14.3 and 14.4 show how to check whether the process is already attached to a debugger.

Listing 14.3 Check for Existing Debugger (C#)

// Check to see if this process is attached to a debugger
if (Debugger.IsAttached == false)
{
    // Now start a new debugger instance for this process
    if (Debugger.Launch() == true)
    {
        // The debugger was started successfully
    }
    else
    {
        // There was a problem launching the debugger
    }
}
else
{
    // The process is currently attached to a debugger
}

Listing 14.4 Check for Existing Debugger (Visual Basic .NET)

'Check if this process is attached to a debugger
If Not Debugger.IsAttached Then
    'Now start a new debugger instance for this process
    If Debugger.Launch() Then
        'The debugger was successfully started
    Else
        'There was an error launching the debugger
    End If
Else
    'The debugger is already attached to this process
End If

Another very useful facility is the stack trace, which enables you to look at the stack frame by frame, or by each step that has been executed. Two different stack trace mechanisms can be used. The Environment class has a StackTrace property that returns a string representing the stack; you can use it to look at the stack before an exception is thrown, or just to see what is going on. Listings 14.5 and 14.6 provide a couple of simple examples of how you can implement this in your code.

Listing 14.5 Stack Trace Dump (C#)

using System;

string stackdump;
stackdump = Environment.StackTrace;
Response.Write(stackdump);

Listing 14.6 Stack Trace Dump (Visual Basic .NET)

Imports System

Dim stackdump As String
stackdump = Environment.StackTrace
Response.Write(stackdump)

If you need more information, you can use the StackTrace class in the System.Diagnostics namespace, because it enables you to dig deeper into the stack. This class works differently than the Environment class: you get the StackTrace object and then navigate through the stack frame by frame. Take a look at Listings 14.7 and 14.8 to see how to accomplish this.

Listing 14.7 Get Each Stack Frame (C#)

try
{
    int a = 0;
    int b = 1;
    int c;
    c = b / a;
    Response.Write(c);
}
catch (Exception ex)
{
    StackFrame sf;
    int i;
    StackTrace st = new StackTrace();
    for (i = 1; i < st.FrameCount; i++)
    {
        sf = st.GetFrame(i);
        Response.Write(sf);
        Response.Write("<BR>");
    }
}

Listing 14.8 Get Each Stack Frame (Visual Basic .NET)

Try
    Dim a = 0
    Dim b = 1
    Dim c As Integer
    c = b \ a
Catch
    Dim sf As StackFrame
    Dim i As Integer
    Dim st As New StackTrace()
    For i = 1 To st.FrameCount - 1
        sf = st.GetFrame(i)
        Response.Write(sf)
        Response.Write("<BR>")
    Next
End Try

Several specific pieces of information can be extracted through the System.Diagnostics.StackTrace class, which exposes each frame of the stack along with information such as the filename, line number, offset, and method. One particular piece of information that you can extract is the method that your process is currently in. As you jump from function to function, you can keep tabs on what is being executed. Listings 14.9 and 14.10 both show examples of how this could be used.

Listing 14.9 Stack Frame GetMethod (C#)

private void Page_Load(object sender, System.EventArgs e)
{
    // Put user code to initialize the page here
    StackFrame sf = new StackFrame();
    Response.Write(sf.GetMethod());
    Response.Write("<BR>");
    Method2();
}

private void Method2()
{
    StackFrame sf = new StackFrame();
    Response.Write(sf.GetMethod());
}

Listing 14.10 Stack Frame GetMethod (Visual Basic .NET)

Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs)
    ' Put user code to initialize the page here
    Dim sf As New StackFrame()
    Response.Write(sf.GetMethod())
    Response.Write("<BR>")
    Method2()
End Sub

Private Sub Method2()
    Dim sf As New StackFrame()
    Response.Write(sf.GetMethod())
End Sub

Here's the output you would see from both listings:

Void Page_Load(System.Object, System.EventArgs)
Void Method2()

As you can see, as the system steps from one function to another, you can keep tabs on what the process is doing. The previous example shows this in the Page_Load method of the aspx page, but it doesn't have to stop there. Because StackTrace and StackFrame are part of the System.Diagnostics namespace, these tools can be used pretty much anywhere you need them: in components, in services, or in Windows applications.

A good example of where to use a stack trace is inside your component. Let's say that you created a component, and whenever you tried to instantiate it, the component threw an exception. Because the constructor does quite a bit of work, it could be having problems in several places. Let's take a look at Listing 14.11, which shows a scenario in which the stack trace would come in handy.

Listing 14.11 Pseudocode Example (C#)

public UserActivity(String userX)
{
    InitializeComponent();
    try
    {
        // Open a connection to the database
        OpenDBConnection();
        // Update the database to show userX has entered a secure area
        UpdateDB();
        // Open and read our own XML configuration file
        ReadXMLConfigFile();
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.StackTrace);
    }
}

If there was a problem during the creation of this component—in this particular case, the ReadXMLConfigFile function was incapable of opening the file to read it—this would catch it and write the stack trace to the debug listener. This can be a tremendous help, especially if you are debugging your components on a remote system, which is most likely what most people will be doing.
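Listing 14.11 writes out the exception's StackTrace string. If you need the same information frame by frame, the StackTrace class also has a constructor that accepts a caught exception. The following is a hedged sketch along the lines of Listing 14.7, not code from the chapter:

```csharp
try
{
    int a = 0;
    int b = 1;
    int c = b / a; // forces a DivideByZeroException, as in Listing 14.7
    Response.Write(c);
}
catch (Exception ex)
{
    // Build the trace from the exception itself rather than the
    // current location; true asks for file and line information.
    StackTrace st = new StackTrace(ex, true);
    for (int i = 0; i < st.FrameCount; i++)
    {
        StackFrame sf = st.GetFrame(i);
        // Line numbers are only available when debug symbols (.pdb files)
        // are present alongside the assembly.
        Response.Write(sf.GetMethod() + " line " +
            sf.GetFileLineNumber() + "<BR>");
    }
}
```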
You might have noticed that the previous example used the Debug.WriteLine method. A few more useful methods in the Debug class are Write, WriteLine, WriteIf, and WriteLineIf. These methods enable you to write to a listener, with or without conditions applied. The only difference between the Write and WriteLine methods is that WriteLine adds a line terminator, whereas Write does not. With the WriteIf and WriteLineIf methods, you have the option to write to the listener only when a condition is true. This is great for checking whether a variable falls within the required parameters. The use of these methods is illustrated in Listings 14.12 and 14.13.

Listing 14.12 Debug WriteLineIf Example (C#)

public int mydivide(int x, int y)
{
    int z;
    // x = -5
    // y = 100
    // If x < 0, I want to see the value of x
    // to get an idea of what is happening.
    Debug.WriteLineIf(x < 0, "x was less than 1 which is BAD: x=" + x);
    z = x / y;
    return z;
}

Listing 14.13 Debug WriteLineIf Example (Visual Basic .NET)

Public Function mydivide(ByVal x As Integer, ByVal y As Integer) As Integer
    'x = -5
    'y = 100
    'If x < 0, I want to see the value of x
    'to get an idea of what is happening.
    Debug.WriteLineIf(x < 0, "x was less than 1 which is BAD: x=" & x)
    Dim z As Integer
    z = x \ y
    mydivide = z
End Function

These methods are very useful if you need to check the condition or state of your variables before writing to the listener. You can also write your debug output to a file, as shown in Listings 14.14 and 14.15. This can be a tremendous help, especially if you are debugging your components on a remote system.

Listing 14.14 Trace Listener Example (C#)

using System.Diagnostics;
using System.IO;

Debug.Listeners.Add(new TextWriterTraceListener(File.Create("c:\\debug_output.txt")));
Debug.Write("Debug output on Page Load");
Debug.Flush();
Debug.Close();

Listing 14.15 Trace Listener Example (Visual Basic .NET)

Imports System.Diagnostics
Imports System.IO

Debug.Listeners.Add(New TextWriterTraceListener(File.Create("c:\debug_output.txt")))
Debug.Write("Debug output on Page Load")
Debug.Flush()
Debug.Close()

If you are writing your debug output to a file, you might come across the following message:

The process cannot access the file "c:\debug_output.txt" because it is being used by another process.

This can occur if you don't flush and close your listener. The message is displayed because the file is still open and is being used by another application or process. If you fail to flush, none of your debug write statements have been sent to the file yet; you have only accomplished creating a new file and opening it. One way to avoid this is to set the AutoFlush property to true in your configuration, as shown in Listing 14.16. This way, you don't have to call the Flush method every time you want the contents of the listener to be written to the file. However, you still need to call the Close method to close the file.

Listing 14.16 Web.config

<configuration>
  <system.diagnostics>
    <trace autoflush="true" />
  </system.diagnostics>
</configuration>

Interfaces

Interfaces provide you, the developer, with the most basic of functionality to build upon. You should consider a few items when developing your own interfaces. All interfaces in the .NET framework are identified with a capital letter I at the beginning of the interface name; if you plan to develop interfaces, you should follow this naming convention.
If you haven’t used Struts but are more familiar with ISAPI extensions. take a look at how handlers work. take a look at a few of these and see what these classes are doing.Debugging ASP. The following list includes some of the common interfaces that you might see in some of the components that you use when debugging your web page or building a component. then this is very much the same idea. Interfaces cannot instance constructors. it’s very similar. This is just a sample of what is out there. you will notice that the IComponent and ICollection interfaces are used under the hood. HttpHandlers map incoming URL requests to an HttpHandler class. First.New Riders . So now that we have gotten you all worked up. . but they can define class constructors. • • • • • • • IConfigurationSectionHandler ICollection IHttpAsyncHandler IHttpHandler IHttpHandlerFactory IComponent IServiceProvider In the next section. This is where you tell the system to map requests to a specific HttpHandler class. Let’s break down the important items here. Also. if you dig deep enough. Security permissions cannot be attached to the interface or its members. This is the final item needed to tell the server what component will handle this request. Table 14. This is great because it gives you the capability to map a different HttpHandler for every page.NET made by dotneter@teamfly First.Debugging ASP. Subtags in the HttpHandler Section Subtag <add> Specifies verb/path IHttpHandlerFactory class.config file to tell it what to do when it receives a request and which HttpHandler to use. This is where the namespace and class that will be used to handle the request are defined. <remove> Removes a verb/path mapping to an IHttpHandler class. Three subtags exist under the HttpHandler section: Add. The <remove> directive must exactly match the verb/path combination of a previous <add> directive. Listing 14. you need to add a line to the Web. 
Now you can take a closer look at what you need to do to implement your own HttpHandler class. First, you need to add a line to the Web.config file to tell it what to do when it receives a request and which HttpHandler to use. You'll look at using the <add> subtag to accomplish this first step. Let's break down the important items here.

Verb: verb list ("GET,POST,?")
The verb identifies if it is a Post or Get.

Path: path/wildcard
The path tells the system what specific page or type of page to map the HttpHandler to. Pay attention to how you configure the path. If you set it up with a wildcard, remember that it will intercept all requests for aspx pages. This is great because it gives you the capability to map a different HttpHandler for every page, if needed.

Type: [Namespace].[Class],[Assembly name]
This is the final item needed to tell the server what component will handle this request. This is where the namespace and class that will be used to handle the request are defined.

Listing 14.17 shows what the actual Web.config file would look like if you mapped all requests for the MyHandler.aspx page.

Listing 14.17 <httphandlers> section in Web.config

<configuration>
   <httphandlers>
      <add verb="*" path="MyHandler.aspx"
           type="MyHttpHandler.MyHandler" />
   </httphandlers>
</configuration>

This is a great feature for creating custom HttpHandlers to intercept specific types of requests and then handle each request with special functionality. The potential for this is huge if you step back and look at the big picture. This is also an ideal feature for debugging. Let's look at how a request to MyHandler.aspx is intercepted and handled (see Listings 14.18 and 14.19).

Listing 14.18 Simple HttpHandler class (C#)

using System;
using System.Web;

namespace MyHttpHandler
{
    public class MyHandler : IHttpHandler
    {
        // Override the ProcessRequest method.
        public void ProcessRequest(HttpContext context)
        {
            context.Response.Write("My HttpHandler");
        }

        // Override the IsReusable property.
        public bool IsReusable
        {
            get { return true; }
        }
    }
}

Listing 14.19 Simple HttpHandler class (Visual Basic .NET)

Imports System
Imports System.Web

Namespace MyHttpHandler
    Public Class MyHandler
        Implements IHttpHandler

        ' Override the ProcessRequest method.
        Public Sub ProcessRequest(context As HttpContext) _
                Implements IHttpHandler.ProcessRequest
            context.Response.Write("My HttpHandler")
        End Sub

        ' Override the IsReusable property.
        Public ReadOnly Property IsReusable() As Boolean _
                Implements IHttpHandler.IsReusable
            Get
                Return True
            End Get
        End Property
    End Class
End Namespace

Notice that the ProcessRequest method is the point at which you intercept the requests being made to MyHandler.aspx. It is a fictitious page; if you are looking for that page, it doesn't exist. The other item of importance is the HttpContext that is passed in. This gives you access to all the server components: Request, Response, Session, and Server. With access to these components, you can do anything in an HttpHandler that you could do on an aspx page. In fact, you could create a complete website without one single aspx, asp, html, htm, or any other physical page. You would just have virtual pages, in a manner of speaking.
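Building on Listing 14.18, here is a hedged sketch of a handler used purely for debugging: it echoes the incoming request's path, verb, and headers back to the browser. The DebugHandler class name and whatever path you map it to in the <httphandlers> section are assumptions for illustration, not part of the chapter's listings.

```csharp
using System;
using System.Web;

namespace MyHttpHandler
{
    // Hypothetical diagnostic handler: dumps request details
    // for any path you choose to map to it.
    public class DebugHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.Write("Path: " + context.Request.Path + "<br>");
            context.Response.Write("Method: " + context.Request.HttpMethod + "<br>");

            // Echo each incoming request header.
            foreach (string key in context.Request.Headers.AllKeys)
            {
                context.Response.Write(key + ": " +
                    context.Request.Headers[key] + "<br>");
            }
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }
}
```

Mapped temporarily over a troublesome page, a handler like this lets you inspect exactly what the server is receiving without touching the page's own code.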
What Are the Issues?

Scalability is a common issue when designing websites. Let's take a look at the issues that seem to come up repeatedly.

State-Management Issues

State management is an important issue whenever you are designing large-scale websites. Most system designs tend to shy away from maintaining any state on their websites because of the complexities involved when scaling out. You can use ASP session state on a web farm in a number of ways. Many companies use routers that use sticky IP to make sure that each request from a user is routed to the right server and to ensure that session information can be maintained. This seems like a costly method to be able to maintain session state.

The current limitations with ASP session state revolve around three areas:

• Host dependence
• Scalability limitations
• Cookie dependence

To use session state, a user must return to the same host or computer where the information is maintained because this information is maintained in memory and is directly related to the process that it is running in. This is not necessarily such a terrible problem, but it's one that should be addressed.

This brings up the next point: scalability limitations. If the session state is maintained in memory, how can you scale up to handle more traffic? You could always throw more hardware at it, but for how long and at what cost? The preferred method is to use a server farm and some sort of load-balancing appliance, such as Big 5 or Cisco's Load Director. With this type of setup, the load-balancing appliance will distribute the request to the servers with the least amount of load. Just by adding one server, you can cut the load on your website in half. Not only have you decreased the load, but you also now have fault tolerance designed into your architecture.

Now we get to the cookies issue. Without cookies enabled, session states are very difficult to implement.

ASP.NET Session State

Let's look at the three issues identified previously in the light of ASP.NET's solution. Why is Microsoft telling you that session variables are okay to use in a scalable architecture? With Microsoft's new .NET Framework, you can use session states in a server farm. How is this possible? Microsoft has added alternative session-management solutions. Microsoft has solved these problems in ASP.NET by separating the session state into two different modes: in-process and out-of-process. In-process is the old method of state management, and out-of-process is new to .NET. In addition to the existing session-state management, two more ways exist to manage session data: you can store the user's session data in memory on a shared server, and you can store the information in SQL Server 7.x or 2000 as well.

Take a look at where all this is configured. In the Web.Config file, you will have a section that looks similar to Listing 14.20.

Listing 14.20 Web.config

<configuration>
  <system.web>
    <sessionState mode="SqlServer"
        stateConnectionString="tcpip=127.0.0.1:42424"
        sqlConnectionString="data source=127.0.0.1;user id=sstate;password=" />
  </system.web>
</configuration>

If you look at the sessionState section in this listing, you will notice that the web server has been set up to maintain session information in a SQL server database. Next, you'll take a more detailed look at the parameters that are used in the Session State section. Looking at all the parameters will help you to identify some of the additional features that have been added to the .NET framework. Table 14.2 lists all the available attributes and parameters for the Session State section.

Table 14.2. Session State Attributes

mode: Specifies where to store the session state.
    Off: Indicates that session state is not enabled.
    Inproc: Indicates that session state is stored locally.
    StateServer: Indicates that session state is stored on a remote server.
    SqlServer: Indicates that session state is stored on a SQL server.

cookieless: Specifies whether sessions without cookies should be used to identify client sessions.
    true: Indicates that sessions without cookies should be used.
    false: Indicates that sessions without cookies should not be used. The default is false.

timeout: Specifies the number of minutes that a session can be idle before it is abandoned. The default is 20.

stateConnectionString: Specifies the server name and port where session state is stored remotely, for example, 127.0.0.1:42424. This attribute is required when mode is set to StateServer.

sqlConnectionString: Specifies the connection string for a SQL server, for example, data source=127.0.0.1;user id=sa;password=. This attribute is required when mode is set to SqlServer.

Performance and Reliability Considerations with ASP.NET Session-State Modes

When you're building a website, two items should always be at the top of your list: performance and reliability. If your site is slow, no one is going to want to take the time to navigate through it. If your site is down or constantly producing errors, then your customers will find what they need somewhere else. Let's look at session-state modes with these concepts in mind.

In-Process

If you have a small website and don't plan to scale up, this is your best solution. Because the session information is stored in memory, this is the fastest, most common method of maintaining a session state. This mode is best suited for a single server. If you use this mode, the client must always return to the original web server on which it made its first request.

Out-of-Process

If you are looking for a more scalable solution without losing too much performance, then out-of-process mode will be better suited for your architecture. You still get the benefit of maintaining the session-state information in memory, but it is stored on a separate process, which is shared by all your web servers in a web farm. One thing that you might want to consider, though, is what happens if the server that maintains the out-of-process session data goes down.

SQL Server 7.x and 2000

If performance is not your top priority and you're more concerned with reliability, this is your best solution. Because all the session data is maintained in a SQL server, your data is stored in a persistent location. What if the server fails? You can set up the SQL server in a clustered solution for redundancy. Granted, this solution involves a lot more overhead, but it is definitely the most reliable. (One thing to keep in mind when using the SqlServer mode is that you shouldn't use the system administrator account.)

These new options are great and open up a whole area of functionality that was once considered taboo. Just remember not to go overboard with session data. Whether it is in local memory or remote, it still has to be maintained somewhere. This is data that is either taking up memory or being marshaled back and forth between computers.

Required Location for Web.config File

One problem that you might run into is that the session state section can exist only in the Web.config file in the root of the web application. This does not apply to all the settings, but a few must be in the root file.
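For comparison, here is a hedged sketch of the same section configured for the out-of-process StateServer mode instead of SQL Server; the server address and timeout values are assumptions, not taken from the chapter:

```
<configuration>
  <system.web>
    <sessionState mode="StateServer"
        stateConnectionString="tcpip=127.0.0.1:42424"
        cookieless="false"
        timeout="20" />
  </system.web>
</configuration>
```

Every web server in the farm would point its stateConnectionString at the same shared state server, which is what frees the client from having to return to the machine that served its first request.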
.NET Components Versus Registered COM Components

So what is so different about .NET components and COM components? Does DLL Hell ring any bells? This problem is eliminated with .NET components. You no longer have to register your DLL on the server to use it; you just need to copy it to your \bin directory to be executed. COM components need to be registered in the registry, whereas .NET components only need to be placed in the \bin directory. Another notable point is that assemblies used by several applications on a machine should be stored in the global assembly cache. If an assembly will be stored in the global cache, the assembly must have strong names. Now let's look at a breakdown of the differences between .NET components and COM components.

.NET Comparison to COM

Characteristics                        .NET    COM
Does not require a Registry entry       X
Is self-describing                      X
Compiles to a binary                    X       X
Exposes interfaces                      X       X
Uses Xcopy installation                 X
Runs parallel versions                  X
Requires RegSrvr32                              X

After this comparison, it looks like .NET has all the benefits of COM, without the hassles. Some of the characteristics might not seem to be a big deal, but others, such as not having to register your managed components, take a huge burden off the developers. Granted, there are still circumstances in which you have to register your component. We'll be covering this in the next section.

Using Interop to Work with Unmanaged COM Components

.NET does provide a way for managed code to work with unmanaged COM objects. If you are looking to use a COM component in your .NET project, you must use the COM Interop utility (TlbImp.exe) for importing the COM types. This exposes the necessary interfaces to allow them to communicate with each other. Otherwise, there is no bridge to communicate from the unmanaged process to the managed process.

If you want to use your .NET components in an unmanaged project, you can do so with a few extra steps. You need to mimic a typical COM object and how it operates. To do this, use RegAsm.exe. This does two things. First, it exports the managed types into a type library. Second, it puts the information in the registry. This way, the other COM objects know how to communicate with your component. This gives your .NET component the capability to imitate a COM object.

COM Component Issues

Surely you have run across one or more of the following issues when trying to debug a new or changed COM object. We'll tackle some issues and solutions here.

Security Issues with Components

You should be asking yourself some common questions when working with any component:

• Where is the component running? Is it local or on a remote system? If it's remote, is the component configured properly to run on the remote computer?
• Is the component accessing other services of the operating system or network? Does the component need to write a file to a directory that has been secured to only a specific group or user? Are you trying to create a network connection to another computer?
• When the component is running as a service, who is the component logged in as? Does the login have all the rights to perform the needed tasks?
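One quick way to answer the "who is the component logged in as?" question is to have the component report the Windows account it is executing under. This is a minimal sketch; the class name and where you log or display the result are assumptions:

```csharp
using System;
using System.Security.Principal;

public class IdentityCheck
{
    // Returns the account the current code is executing under.
    // Useful when a component fails only under IIS or as a service.
    public static string CurrentAccount()
    {
        WindowsIdentity identity = WindowsIdentity.GetCurrent();
        return identity.Name;
    }
}
```

Writing this value to your debug listener from inside the failing component quickly shows whether the process identity actually has the file, network, or service rights you assumed it did.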
One of the annoying tasks of working with COM components is keeping track of references made to the component and making sure that you clean up and release any memory or objects that you have used. This is no longer an issue in the .NET framework because memory management is handled for you.

Garbage Collection

Garbage collection is not new to Visual Basic, but it is new to C#. Although the previous versions of garbage collection might not have been as robust as desired, it looks as if Microsoft has enhanced the process of cleaning up. You no longer have to release or dereference your components because the garbage collector will handle it for you.

The Registry

Now with .NET, there is no need to register your component in the registry to use it, with one exception. If your component is using the Interop namespace because it needs to simulate a COM object, then you will still have to register your component in the registry as you would any COM component. Likewise, if you plan to have an existing COM component or a new COM object call your .NET component, you will need to register your component with the registry. This way, the other COM objects know how to communicate with your component.

.NET Components

Microsoft has taken some of the nifty features of Visual Basic and carried them over to the .NET framework to help developers. One of these features is memory management. Just keep this in mind when you are developing your component.

Installing a .NET Component Versus a COM Component

One of our favorite things about .NET is how you install and upgrade components. Unlike installing (or, even worse, upgrading) a COM component, this process is simple and straightforward.

Uninstalling Components

Typical problems with COM components have to do with upgrading them. This involves deregistering the COM object and then deleting it to replace it with the new version. Depending on what the COM object is being used by, the COM object typically needs to be deregistered and removed from memory; otherwise, you'll end up restarting the web service or computer. Another annoying issue is that you must execute this on the computer on which the component is installed. This usually means that you have to be logged onto the system or must be using some sort of remote-control software. With .NET, you can kiss all these problems goodbye. Microsoft has finally given you a way to get around them.

So How Do You Install a New .NET Component?

You no longer need to use Regsrvr32.exe for .NET components, but you still need to use it for COM components. The only thing that you need to do to install a new or updated version of your .NET component is use Xcopy.exe or your preferred method of copying a file from one location to another. Copy the new component to your \bin directory, essentially overwriting the old component (assuming that one already exists) and replacing it with the new one. There is no need to deregister the old component and register the new one. Because a component makes a copy of itself when it is running, you can copy right over the old version without having to deal with the file being locked by another user because it is in use at the time. Because the system will detect that the component is newer, it will release the old copy and load the new version into memory. Presto, change-o: now your system is running with your new component. Wow! That wasn't very hard. It couldn't be easier. One thing to keep in mind is that this technique applies to managed components only.

Summary

Microsoft has built many cool new features into the .NET framework to help developers. In this chapter, you learned how using components can help in cleaning up your garbage when you are finished with a component.
You also learned how the stack trace classes can help you determine where a bug could be coming from, no matter how deep in the code it might be buried. Two more important sections in this chapter addressed the additional features for state management in .NET and showed how managed components are self-describing, which saves developers from having to register them in the registry. Remember, the majority of component issues revolve around containment and cleanup. We covered a lot of ground in this chapter; use some of these examples to get a jump-start on debugging your components. The .NET framework is new, and it is up to you to take it to the next level.

Chapter 15. COM+ Issues

MICROSOFT HAS CONTINUED TO ENHANCE THIS COMPONENT software technology. First there was OLE, then it evolved to COM, and now there's COM+. So much of Microsoft's platform has been built on these technologies that you always have needed to interact with some sort of COM or COM+ component. Now Microsoft is evolving again, and this time the technology is called .NET. Because the .NET framework is new, there needs to be a bridge to COM+ functionality. This bridging brings its own set of issues, and we'll look at those issues in this chapter.

Role-Based Security

Security roles can be supported at the method, class, assembly, and interface levels. Often in the development process, people look for ways to reuse components. One way to accomplish this is to use the same components but restrict access to portions of the information based on what role the users are in. As you start to implement different levels of security, you will start to see the power of role-based security.

As an example, roles would be useful in an application used by stockbrokers. The application might restrict the type of transaction being processed, depending on whether the user is a stockbroker or a manager. Stockbrokers might have authorization to purchase only 1,000 shares of stock, whereas the managers might have an unlimited amount available to them.

Role-based security also can be used when an application requires multiple actions to complete the process. One example is a purchasing system that enables a customer representative to generate a purchase request but that allows only a supervisor to authorize that request, which then becomes a purchase order.

Take a look at Listings 15.1 and 15.2, which use role-based security, to get an idea of what is involved and where you might run into problems.

Listing 15.1 Simple Role-Based Security (C#)

using System;
using System.Reflection;
using System.Windows.Forms;
using System.EnterpriseServices;

// The ApplicationName attribute specifies the name of the
// COM+ Application that will hold assembly components
[assembly: ApplicationName("RoleBasedApp")]

// The ApplicationActivation.ActivationOption attribute specifies
// where assembly components are loaded on activation
// Library : components run in the creator's process
// Server : components run in a system process, dllhost.exe
[assembly: ApplicationActivation(ActivationOption.Server)]

// AssemblyKeyFile specifies the name of the strong key
// that will be used to sign the assembly.
// The .snk file can be generated with sn.exe from the command prompt
[assembly: AssemblyKeyFile("RolebasedKey.snk")]

// ComponentAccessControl enables security checking
// at the component level. The attribute maps to the
// securities tab in a component within a COM+ application
[ComponentAccessControl]
// SecurityRole configures a role named EveryoneRole on our
// component. SetEveryoneAccess(true) indicates we want the role
// to be populated with 'Everyone' when created
[SecurityRole("EveryoneRole", SetEveryoneAccess = true)]
public class MySecurityObject : ServicedComponent
{
    public bool IsCallerInRole()
    {
        // Check if the user is in the role
        return ContextUtil.IsCallerInRole("EveryoneRole");
    }

    public string GetCallerAccountName()
    {
        string ret = "Caller Unknown";
        if (ContextUtil.IsSecurityEnabled)
        {
            SecurityCallContext sec;
            // get a handle to the context of the current caller
            sec = SecurityCallContext.CurrentCall;
            // get the current caller account name
            ret = sec.DirectCaller.AccountName;
        }
        return ret;
    }
}

Listing 15.2 Simple Role-Based Security (Visual Basic .NET)

Imports System
Imports System.Reflection
Imports System.Windows.Forms
Imports System.EnterpriseServices

' The ApplicationName attribute specifies the name of the
' COM+ Application that will hold assembly components
<Assembly: ApplicationName("RoleBasedApp")>

' The ApplicationActivation.ActivationOption attribute specifies
' where assembly components are loaded on activation
' Library : components run in the creator's process
' Server : components run in a system process, dllhost.exe
<Assembly: ApplicationActivation(ActivationOption.Server)>

' AssemblyKeyFile specifies the name of the strong key
' that will be used to sign the assembly.
' The .snk file can be generated with sn.exe
<Assembly: AssemblyKeyFile("RolebasedKey.snk")>

' ComponentAccessControl enables security checking
' at the component level. The attribute maps to the
' securities tab in a component within a COM+ application
' SecurityRole configures a role named EveryoneRole on our
' component. SetEveryoneAccess(True) indicates we want the role
' to be populated with 'Everyone' when created
<ComponentAccessControl(), SecurityRole("EveryoneRole", SetEveryoneAccess:=True)> _
Public Class RBSecurityObject
    Inherits ServicedComponent

    Public Function IsCallerInRole() As Boolean
        ' Check if the user is in the role
        Return ContextUtil.IsCallerInRole("EveryoneRole")
    End Function

    Public Function GetCallerAccountName() As String
        Dim ret As String = "Caller Unknown"
        If ContextUtil.IsSecurityEnabled Then
            Dim sec As SecurityCallContext
            ' CurrentCall is a static property which
            ' contains information about the current caller
            sec = SecurityCallContext.CurrentCall
            ' retrieve the current caller account name
            ret = sec.DirectCaller.AccountName
        End If
        Return ret
    End Function
End Class

Listings 15.1 and 15.2 should give you a feel for how .NET role-based security can be used, but now take a look at a more real-life scenario. Let's say that you are building a web-based trading system for a financial firm. One of the issues that you run into first is how you will control security for different people using the system. This is where roles are really handy.

Here is an example: A stockbroker has an assistant. The assistant needs to be able to add or modify information for clients, but the assistant is not allowed to place trades on the system; only the stockbroker is. Let's take a look at an example in which the roles are not set up correctly for the situation just described (see Listings 15.3 and 15.4).

Listing 15.3 Complex Role Configuration (C#)

namespace TradingApp
{
    // ComponentAccessControl enables security checking
    // at the component level. The attribute maps to the
    // securities tab in a component within a COM+ application
    [ComponentAccessControl]
    // SetEveryoneAccess(true) indicates we want the role
    // to be populated with 'Everyone' when created
    [SecurityRole("EveryoneRole", SetEveryoneAccess = true)]
    public class MyTradingObject : ServicedComponent
    {
        public void Buy(string symbol, int shares)
        {
            // Check if the user is in the role
            if (ContextUtil.IsCallerInRole("StockBroker"))
            {
                //Allow the trade to go through
            }
            else
            {
                //Don't allow the trade
            }
        }

        public void Sell(string symbol, int shares, string accountid)
        {
            // Check if the user is in the role and if the account is his/hers
            if (ContextUtil.IsCallerInRole("StockBroker"))
            {
                if (ContextUtil.IsSecurityEnabled)
                {
                    string acctname;
                    SecurityCallContext sec;
                    // get a handle to the context of the current caller
                    sec = SecurityCallContext.CurrentCall;
                    // get the current caller account name
                    acctname = sec.DirectCaller.AccountName;
                    //verify if this account belongs to this user
                    //Do some code to check user name against account
                    //if the account belongs to the user allow the user
                    //to sell the stock
                }
            }
            else
            {
                //Don't allow the trade
            }
        }
    }
}

Listing 15.4 Complex Role Configuration (Visual Basic .NET)

Namespace TradingApp
    ' ComponentAccessControl enables security checking
    ' at the component level. The attribute maps to the
    ' securities tab in a component within a COM+ application
    ' SetEveryoneAccess(True) indicates we want the role
    ' to be populated with 'Everyone' when created
    <ComponentAccessControl(), SecurityRole("EveryoneRole", SetEveryoneAccess:=True)> _
    Public Class MyTradingObject
        Inherits ServicedComponent

        Public Sub Buy(ByVal symbol As String, ByVal shares As Integer)
            ' Check if the user is in the role
            If (ContextUtil.IsCallerInRole("StockBroker")) Then
                'Allow the trade to go through
            Else
                'Don't allow the trade
            End If
        End Sub

        Public Sub Sell(ByVal symbol As String, ByVal shares As Integer, _
                ByVal accountid As String)
            ' Check if the user is in the role and if the account is his/hers
            If (ContextUtil.IsCallerInRole("StockBroker")) Then
                If (ContextUtil.IsSecurityEnabled) Then
                    Dim acctname As String
                    Dim sec As SecurityCallContext
                    ' get a handle to the context of the current caller
                    sec = SecurityCallContext.CurrentCall()
                    ' get the current caller account name
                    acctname = sec.DirectCaller.AccountName
                    'verify if this account belongs to this user
                    'Do some code to check user name against account
                    'if the account belongs to the user allow the user
                    'to sell the stock
                End If
            Else
                'Don't allow the trade
            End If
        End Sub
    End Class
End Namespace

You'll notice that these listings have implemented multiple levels of security based on which role the user is in, which is great. For the TradingApp, we have defined two roles that the user can fall under. This gives you a more granular approach to applying security to the components. Now let's take a look at where all this information and these configuration settings are maintained and controlled.

Component Services Microsoft Management Console

The Component Services Microsoft Management Console provides you with a way to monitor and manage your components and their properties. Figure 15.1 shows where the different roles can be managed for each component.

Figure 15.1. Component services roles for TradingApp.

The roles are basically containers and are not useful unless you add specific users to those roles. When setting up roles, make sure that the users within those roles are appropriate. In Figure 15.2, you can see that we have added Everyone to the StockBroker role, but no one has been assigned to the Manager role.

Figure 15.2. StockBroker role.

For instance, Bob the stockbroker might have an assistant who needs to get in the system and enter data for him. But Bob does not want to give his password and login ID to his assistant because this would allow his assistant to place trades, which the assistant is not licensed to do. One way to solve this problem is to create another role called Assistants. In this role, you could restrict the type of actions that assistants are allowed to perform.
If you want to exercise more control over your component, you can change the way the component behaves by simply changing the settings in the Security tab of the Properties dialog box (see Figure 15.3).

Figure 15.3. Application Properties Security tab.

To get started, you will need to make sure that the activation type is set to Server Application; this can be found under the Activation tab. If you want the component to run under a specific login, you can set the user identity in the Application Properties Identity tab, as shown in Figure 15.4. If you want to force the components to run under a specific login, you can enter that information here.

Figure 15.4. Application Properties Identity tab.

If you are having problems making changes or deleting the component, check the Advanced tab in the Properties dialog box (see Figure 15.5).

Figure 15.5. Advanced tab in the Properties dialog box.

If the Disable Deletion box is checked, you will not be able to delete the component until you uncheck it. The Disable Changes property applies only to the properties being altered: If you check the Disable Changes property and then try to add a new user to a role, you will still be able to do so. If the SetEveryoneAccess property is set to true, the role Everyone is added as a member. The default is false, which means that no users are assigned to a role; instead, you must configure them manually. This technique is best used for the role of Administrator, which has exclusive control over the system.

Transaction Issues

When you get to the point of utilizing transactions, you must make sure that an action or a series of actions is completed successfully, or that everything gets rolled back if there is a problem during the process. You'll look at some of the basics of transactions before you look at how to debug them.

Transaction Models

The .NET Framework uses two basic transaction models:

• Automatic—Automatically aborts or completes the transaction
• Manual—Requires you to call SetComplete or SetAbort

The amount of control that you want over the processing of your transaction determines which model you use. In most cases, the automatic model should suffice; however, not all transactions are automatic. Automatic transaction processing is a service provided by COM+ that enables you to configure a class at design time to participate in a transaction at runtime. To use this service, the managed class must be registered with Windows 2000 Component Services, and for a managed object to participate in an automatic transaction, the class must derive directly or indirectly from the System.EnterpriseServices.ServicedComponent class.

Imagine that you are having problems developing a component to implement transactions. Take a look at a simple example of a class that implements an automatic transaction (see Listings 15.5 and 15.6).

Listing 15.5 AutoComplete Transaction (C#)

namespace TradeComponent
{
    [Transaction(TransactionOption.Required)]
    public class StockTrade : ServicedComponent
    {
        [AutoComplete]
        public bool Buy(int NumShares, string symbol)
        {
            try
            {
                //Some code that buys x amount of shares
                //The transaction will automatically SetComplete() if the method call returns normally.
            }
            catch(Exception ex)
            {
                //If the method call throws an exception, the transaction will be aborted
                return false;
            }
            return true;
        }

        [AutoComplete]
        public bool Sell(int NumShares, string symbol)
        {
            try
            {
                //Code that Sells x amount of shares
                //The transaction will automatically SetComplete() if the method call returns normally.
            }
            catch(Exception ex)
            {
                //If the method call throws an exception, the transaction will be aborted
                return false;
            }
            return true;
        }
    }
}

Listing 15.6 AutoComplete Transaction (Visual Basic .NET)

Imports System.EnterpriseServices

Namespace TradeComponent
    <Transaction(TransactionOption.Required)> Public Class StockTrade
        Inherits ServicedComponent

        Public Function <AutoComplete()> Buy(ByVal NumShares As Integer, ByVal symbol As String) As Boolean
            'Some code that buys x amount of shares
            'The transaction will automatically SetComplete() if the method call returns normally.
            'If the method call throws an exception, the transaction will be aborted
        End Function

        Public Function <AutoComplete()> Sell(ByVal NumShares As Integer, ByVal symbol As String) As Boolean
            'Code that Sells x amount of shares
            'The transaction will automatically SetComplete() if the method call returns normally.
            'If the method call throws an exception, the transaction will be aborted
        End Function
    End Class
End Namespace

In each of the methods in the class, the AutoComplete attribute has been assigned. This ensures that the management of the transaction will be handled by the system based on whether the function executes successfully. Two critical attributes must be applied for this example to work:

• TransactionAttribute—Applied to the StockTrade class to set the transaction support to Required, which is equivalent to using the COM+ Explorer to set the transaction support on a COM+ component.
• AutoCompleteAttribute—Applied to the Buy and Sell methods. This attribute instructs the runtime to automatically call the SetAbort function on the transaction if an unhandled exception is generated during the execution of the method; otherwise, the runtime calls the SetComplete function.

Also, if you look closely, you will notice that the StockTrade class derives from the ServicedComponent class. Deriving the class from ServicedComponent ensures that the contexts of StockTrade objects are hosted in COM+. Various assembly-level attributes are also used to supply COM+ registration information. Keep two points in mind about the assemblies themselves: Assemblies that use dynamic registration cannot be placed in the global assembly cache, and a serviced component must be strong-named and should be placed in the global assembly cache (GAC) for manual registration. Now that you have looked at the different attributes of a transaction, let's take a look at some known issues with transactions.
Known Issues with Microsoft Transaction Server Components

Microsoft has made some changes in the ASP.NET security model. If you have existing Microsoft Transaction Server components that you plan to use with ASP.NET applications, you might want to try the following steps to resolve the problem:

1. Run the utility Dcomcnfg.exe.
2. In Component Services, bring up the Properties dialog box for the MTS application.
3. Select the Identity tab, and change the account under which the component runs to that of a new local machine account, created solely for this purpose.
4. Select the Default Security tab.
5. Under Default Access Permissions, click Edit Default and add the user created in Step 3.
6. Restart IIS to ensure that the changes are recognized.

One of the common exceptions seen when calling an MTS component without the necessary security permissions is "Permission denied." If you are running into this, you might need to change the security access permissions; going through all these steps should solve your permission problem.

Strong-Named Assemblies

Are you having problems getting your component to work? If so, have you implemented strong-named assemblies? You might have overlooked this requirement. Even though you can build and register your component as a COM object, you might not be able to use it. Without a key, you will not be able to generate the strong-named assembly. To create a strong-named key, use this command:

sn -k TradeComponent.snk

After you have generated a key, you can then reference it in your code to create a strong-named assembly. Listings 15.7 and 15.8 illustrate how to properly implement the attributes to make your component a strong-named component.

Listing 15.7 Strong-Named Component (C#)

// Registration details
// Supply the COM+ application name.
[assembly: ApplicationName("TradeComponent")]
// Supply a strong-named assembly.
[assembly: AssemblyKeyFileAttribute("TradeComponent.snk")]

namespace TradeComponent
{
    [Transaction(TransactionOption.Required)]
    public class StockTrade : ServicedComponent
    {
        public bool Buy(int NumShares, string symbol)
        {
            try
            {
                //Some code that buys x amount of shares
            }
            catch(Exception ex)
            {
                //If the method call throws an exception,
                //you must call SetAbort so the transaction will be aborted
                ContextUtil.SetAbort();
                return false;
            }
            //Everything has completed successfully
            ContextUtil.SetComplete();
            return true;
        }

        public bool Sell(int NumShares, string symbol)
        {
            try
            {
                //Code that Sells x amount of shares
            }
            catch(Exception ex)
            {
                //If the method call throws an exception,
                //you must call SetAbort so the transaction will be aborted
                ContextUtil.SetAbort();
                return false;
            }
            //Everything has completed successfully
            ContextUtil.SetComplete();
            return true;
        }
    }
}

Listing 15.8 Strong-Named Component (Visual Basic .NET)

Imports System.EnterpriseServices

Namespace TradeComponent
    <Transaction(TransactionOption.Required)> Public Class StockTrade
        Inherits ServicedComponent

        Public Function Buy(ByVal NumShares As Integer, ByVal symbol As String) As Boolean
            Try
                ' Some code that buys x amount of shares
            Catch ex As Exception
                ' If the method call throws an exception, the transaction will be aborted
                ContextUtil.SetAbort()
                Return False
            End Try
            ' Everything has completed successfully
            ContextUtil.SetComplete()
            Return True
        End Function

        Public Function Sell(ByVal NumShares As Integer, ByVal symbol As String) As Boolean
            Try
                ' Code that Sells x amount of shares
            Catch ex As Exception
                ' If the method call throws an exception, the transaction will be aborted
                ContextUtil.SetAbort()
                Return False
            End Try
            ' Everything has completed successfully
            ContextUtil.SetComplete()
            Return True
        End Function
    End Class
End Namespace

If you look at the method definitions in these listings, you will notice that the AutoComplete attribute is missing but that the Transaction attribute is still on the Class definition. If you want to call the SetComplete or SetAbort methods on the transaction yourself, the examples above (Listings 15.7 and 15.8) show how this can be accomplished. After you have built your component, you probably will want to make sure that it is being executed as you intended it to. To accomplish this, you will need some sort of monitoring program to monitor transactions. The next section looks at how to monitor them.
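Before reaching for an external monitoring tool, one low-tech way to confirm that a method really is running inside a COM+ transaction is to log the transaction state from inside the component. This is only a sketch: ContextUtil and the properties used here are part of System.EnterpriseServices, but the Trace destination and the helper's name are our own choices.

```csharp
using System.Diagnostics;
using System.EnterpriseServices;

public sealed class TransactionDebug
{
    // Call this from inside a serviced component method.
    public static void DumpContext(string where)
    {
        if (ContextUtil.IsInTransaction)
        {
            // The DTC transaction this context is enlisted in.
            Trace.WriteLine(where + ": transaction " + ContextUtil.TransactionId);
            // How this context will currently vote (Commit or Abort).
            Trace.WriteLine(where + ": vote is " + ContextUtil.MyTransactionVote);
        }
        else
        {
            Trace.WriteLine(where + ": no COM+ transaction is active");
        }
    }
}
```

Dropping a call to DumpContext at the top of Buy or Sell quickly shows whether the Transaction attribute took effect at registration time.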
Monitoring Your Transactions

When you start working with transactions, you will probably want to see how many are being processed by the system and whether there are any problems with any of them. Microsoft has provided a very simple, yet informative, tool for you to use to monitor transactions on the system: the Distributed Transaction Coordinator. To access this tool, you will need to drill down into the Component Services console; the statistics can be found under the Distributed Transaction Coordinator node (see Figure 15.6).

Figure 15.6. Distributed Transaction Coordinator.

The nice thing about the Distributed Transaction Coordinator is that, if you have permission on a remote server where transactional components are running, you can monitor the transactions there as well.

Summary

COM+ can be a complicated beast, and with the onset of .NET, some changes are bound to crop up in the future. In this chapter, you learned how permissions play a vital role in the COM+ design. Another great feature is role-based security: Not only do you have the capability to apply permissions to a whole component, but also you can take it down to the method level, if you need that type of granularity. This gives you the capability to group users by functionality and then administer which users should be in which roles. Finally, the .NET Framework offers some pretty nifty features that have been added to aid in developing transaction-based components. These new attributes, such as AutoComplete, are great for identifying which methods you want to take care of themselves. To wind up our look at debugging aspects of the .NET Framework, we turn to ADO.NET in Chapter 16, "Debugging ADO.NET."

Chapter 16. Debugging ADO.NET

SO WHAT IS SO DIFFERENT ABOUT ADO.NET that we need to devote a whole chapter of this book to it? Well, that's what we were thinking when we started to write this chapter. But as we dug deeper under the hood, we really began to realize its importance. It all became very clear to us—and by the end of this chapter, it will be clear to you, too. Let's first take a look at the differences between the old ADO and the new ADO.NET. Table 16.1 presents these differences; this should help you identify some of the changes up front.

Table 16.1. Differences Between ADO and ADO.NET

Feature: Memory-resident data representation
  Old ADO: Uses the RecordSet object, which looks like a single table.
  ADO.NET: Uses the DataSet object, which can contain one or more tables represented by DataTable objects.

Feature: Relationships among tables
  Old ADO: Requires the use of a JOIN query to return data from multiple tables to a RecordSet.
  ADO.NET: Supports the DataRelation object to associate rows in one DataTable object with rows in another DataTable object.

Feature: Data visitation
  Old ADO: Scans RecordSet rows sequentially.
  ADO.NET: Uses a navigation paradigm for nonsequential access to rows in a table; follows relationships to navigate from rows in one table to corresponding rows in another table.

Feature: Disconnected access
  Old ADO: Available, but connected access is the norm, represented by the Connection object; communicates to a database with calls to an OLE DB provider.
  ADO.NET: Communicates to a database with standardized calls to the DataAdapter object, which communicates to an OLE DB provider or directly to SQL Server.

Feature: Programmability
  Old ADO: Uses connection commands that address the underlying data structure of a data source.
  ADO.NET: Uses the strongly typed programming characteristic of XML. Data is self-describing because names for code items correspond to the "real-world" problem solved by the code; underlying data constructs such as tables and rows do not appear, making code easier to read and write.

Feature: Sharing disconnected data between tiers or components
  Old ADO: Uses COM marshalling to transmit a disconnected record set. This supports only those datatypes defined by the COM standard and requires type conversions, which demand system resources.
  ADO.NET: Transmits a DataSet as XML. The XML format places no restrictions on datatypes and requires no type conversions.

Feature: Transmission of data through firewalls
  Old ADO: Is problematic, because firewalls are typically configured to prevent system-level requests such as COM marshalling.
  ADO.NET: Supported, because ADO.NET DataSet objects use XML, which can pass through firewalls.

Feature: Scalability
  Old ADO: Database locks and active database connections held for long durations contend for limited database resources.
  ADO.NET: Disconnected access to database data, without retaining database locks or active database connections for lengthy periods, limits contention for limited database resources.

Understanding the System.Data Namespace

When accessing data, there are two distinct ways to retrieve information. The two entry points for data access are as follows:

• SqlClient
• OleDb

By using one or both of these classes, you can read and write data to almost any database. The key here is to identify which class will best suit your needs. We will make it very simple for you: If you are using Microsoft SQL Server, you most likely will want to use the SqlClient class for all your work. If you need to connect to any other third-party SQL databases or OLE DB–supported databases such as Oracle, you're stuck using the OleDb class. Figure 16.1 gives you a bird's-eye view of how things in the overall .NET data component are organized. Now that we have looked at the big picture, let's start looking at the details by beginning with catching SQL errors.

Catching SQL Errors

Two distinct exception classes exist for catching exceptions. When an exception is thrown, you can usually gather enough information from it to figure out what is happening. We have written a small function (see Listings 16.1 and 16.2) that will display all the
properties of the SqlException class when an exception is thrown. The SqlException class is designed to handle exceptions that are thrown while executing a SQL statement. Depending on which data class you are working with, you will obviously need to use the appropriate exception class: If you are using the OleDb class, you will need to use the corresponding OleDbException class to catch exceptions. For the examples in this section, we will be using the SqlClient class, so we will use the SqlException class to catch any exception thrown. This gives you a good idea of what is happening when the error occurs, and you might find that this function will be even more helpful during more complicated SQL errors.

Listing 16.1 GetSqlExceptionDump Function (C#)

private string GetSqlExceptionDump(SqlException Ex)
{
    string sDump;
    string nl = "<br>";
    sDump = "Class: " + Ex.Class + nl +
            "Errors: " + Ex.Errors + nl +
            "Help Link: " + Ex.HelpLink + nl +
            "InnerException: " + Ex.InnerException + nl +
            "Line#: " + Ex.LineNumber + nl +
            "Message: " + Ex.Message + nl +
            "Procedure: " + Ex.Procedure + nl +
            "Server: " + Ex.Server + nl +
            "Source: " + Ex.Source + nl +
            "Stack Trace: " + Ex.StackTrace + nl +
            "State: " + Ex.State + nl +
            "Target site: " + Ex.TargetSite + nl;
    return sDump;
}

Listing 16.2 GetSqlExceptionDump Function (Visual Basic .NET)

Private Function GetSqlExceptionDump(Ex as SqlException) as String
    Dim sDump as String
    Dim nl = "<br>"
    sDump = "Class: " + Ex.Class + nl & _
            "Errors: " + Ex.Errors + nl & _
            "Help Link: " + Ex.HelpLink + nl & _
            "InnerException: " + Ex.InnerException + nl & _
            "Line#: " + Ex.LineNumber + nl & _
            "Message: " + Ex.Message + nl & _
            "Procedure: " + Ex.Procedure + nl & _
            "Server: " + Ex.Server + nl & _
            "Source: " + Ex.Source + nl & _
            "Stack Trace: " + Ex.StackTrace + nl & _
            "State: " + Ex.State + nl & _
            "Target site: " + Ex.TargetSite + nl
    GetSqlExceptionDump = sDump
End Function

The main purpose of this function is to serve as an additional utility to help you debug SQL problems. This function was written to return a string that is formatted with some HTML tags to display on a web page. If you want to use it in a different context, such as writing to a file, you can just change the nl (short for "new line") to a carriage return and line feed. If you are having problems debugging a SQL problem, this code might shed more light on the situation. Most of the time, you will find that the error you get is the result of a simple spelling mistake.

Let's take a look at what happens when an error occurs during the execution of a web page. Listings 16.3 and 16.4 present a small sample of code to connect to the database and select a few rows of data. What could go wrong with that?

Listing 16.3 Sample Code to Connect to the Database (C#)

try
{
    SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");
    nwindConn.Open();
    SqlCommand myCommand = new SqlCommand("SELECT dCategoryID, CategoryName FROM Categories", nwindConn);
    SqlDataReader myReader = myCommand.ExecuteReader(); //This command will trigger an error
    while (myReader.Read())
        Response.Write("<br>" + myReader.GetValue(1));
    myReader.Close();
    nwindConn.Close();
}
catch (SqlException Ex)
{
    Response.Write(GetSqlExceptionDump(Ex));
}

Listing 16.4 Sample Code to Connect to the Database (Visual Basic .NET)

Try
    Dim nwindConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind")
    nwindConn.Open()
    Dim myCommand = new SqlCommand("SELECT dCategoryID, CategoryName FROM Categories", nwindConn)
    Dim myReader = myCommand.ExecuteReader() 'This command will trigger an error
    Do While (myReader.Read())
        Response.Write("<br>" + myReader.GetValue(1))
    Loop
    myReader.Close()
    nwindConn.Close()
Catch Ex as SqlException
    Response.Write(GetSqlExceptionDump(Ex))
End Try

Everything seems to look okay. Looking at the code in Listings 16.3 and 16.4, you probably don't see anything wrong. That's because syntactically it is correct—but what about runtime errors? That's what we are really talking about: Those can be the most difficult errors to track down and debug. Now that you have an example to work with, let's see what happens if this code is executed. Look at the output shown in Listing 16.5.

Listing 16.5 Sample Output of GetSqlExceptionDump

Class:16
Errors:System.Data.SqlClient.SqlErrorCollection
Help Link:
InnerException:
Line#:1
Message:Invalid column name 'dCategoryID'.
Procedure:
Server:DIGITAL-LAPTOP
Source:SQL Server Managed Provider
Stack Trace: at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream)
   at System.Data.SqlClient.SqlCommand.ExecuteReader()
   at adoerrors.WebForm1.Page_Load(Object sender, EventArgs e) in c:\inetpub\wwwroot\adoerrors\webform1.aspx.cs:line 53
State:3
Target site:System.Data.SqlClient.SqlDataReader ExecuteReader(System.Data.CommandBehavior, System.Data.SqlClient.RunBehavior, Boolean)
Line:1 Index #0 Error: System.Data.SqlClient.SqlError: Invalid column name 'dCategoryID'.

As you can see, we were trying to retrieve data from an invalid column, called dCategoryID. This would not necessarily stand out to another developer debugging your code, so how you organize your error handling is important. It is not necessarily going the extra mile—it's just taking that extra little step to ensure that your code won't blow up in the client's face. You can still have an error occur, but you can present it in a pleasant way that won't alarm the user but that instead will inform him of the current situation.

New Connection Components

First let's look at the differences in the two connection components. We're not going to cover the basics; we'll focus instead on just the differences and where you might run into problems. Whether you are using the SqlConnection component or the OleDbConnection class, keep one thing in mind when developing your application: Are you using integrated security? If so, you might run into some problems when deploying your web page.
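To make the two authentication styles concrete, here is a sketch with placeholder server and login names. With integrated security, the database sees whatever Windows account the page runs under; an explicit SQL login behaves the same no matter which account is hosting the page, which makes problems easier to reproduce.

```csharp
using System.Data.SqlClient;

// Integrated security: SQL Server authenticates the Windows account
// that the ASP.NET page happens to run as.
SqlConnection trusted = new SqlConnection(
    "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");

// Explicit SQL login (hypothetical user): the same credentials are used
// regardless of the process account, so behavior is predictable.
SqlConnection explicitLogin = new SqlConnection(
    "Data Source=localhost;User ID=webuser;Password=secret;Initial Catalog=northwind");
```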
Don't forget that when your web page is running, the trusted user will be IUSR_COMPUTERNAME because this is the default account for IIS. If you specify a user account, it will be much easier for you to debug because you already know which user account you are using. If you use a separate account to log into the database, you will also have more control over the amount of security that user account has.

SqlClient.SqlConnection

This component was designed to be used specifically with Microsoft SQL Server and nothing else. One of the benefits that you get by using it is speed: SqlClient uses its own protocol to communicate with SQL Server, which eliminates the overhead and layers of OleDbConnection. Keep that in mind if you want to accomplish any specific task related to SQL Server, such as using a remote server.

One of the features available on SqlConnection that you don't have in OleDbConnection is the PacketSize property. This can be very useful if you need to adjust the size of the network packets being sent. This is not something that you would typically change, but if you were sending large amounts of text or even images, you might want to increase the packet size. On the flip side, let's say that you are developing a wireless site that will use very small chunks of data; you might want to adjust the packet size to a more efficient size because the default value is 8192 bytes. You might not think that this is that important, but when your web site starts to scale up and your traffic increases, attention to details like this will start to add up and make a big difference.

If you are having a problem connecting to SQL Server 6.5, that is because the SqlClient namespace does not support it; it supports only SQL Server 7.0 and higher. You will need to use the OleDb namespace to connect to earlier versions of SQL Server.

OleDb.OleDbConnection

One of the first problems you might run into is trying to use a data source name (DSN). This option is no longer supported in the .NET Framework, so if you want to transition your code over to .NET, keep in mind that you will have to replace all your connection strings that contain a DSN. If you compare the OleDbConnection string to its counterpart, it is almost identical, except for a few noticeable modifications in the use of keywords and properties. The major difference between the two is that the OleDbConnection component is designed to be backward compatible and to work with all databases that have an OLE DB driver. The following list contains these exceptions:

• You must use the Provider keyword.
• The URL keyword is not supported.
• The Remote Provider keyword is not supported.
• The Remote Server keyword is not supported.

Also, if you take a close look at the properties, you will notice that the PacketSize property is not available on the OleDbConnection object. If you happen to be using it with the SqlConnection object, it will not port over to the OleDbConnection side.

Issues with the DataReader Class

The sole purpose of this class is to read a forward-only stream of data from the SQL Server database. The biggest benefit of the data reader is speed. One of the common problems that we have come across is that developers try to open another command on the same connection while the data reader is still reading from the database. You might run into this problem if you are trying to make nested database calls: You cannot do this because the connection object is directly connected to the data reader. While the data reader is in use, no other operation can be performed on the connection object except closing it. This might seem like a bug, but Microsoft designed it this way.
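Here is a short sketch of the pitfall just described (the connection string and queries are placeholders). The commented-out lines show the nested call that fails while the reader holds the connection:

```csharp
using System.Data.SqlClient;

SqlConnection conn = new SqlConnection(
    "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");
conn.Open();
SqlCommand outer = new SqlCommand("SELECT CategoryID FROM Categories", conn);
SqlDataReader reader = outer.ExecuteReader();
while (reader.Read())
{
    // WRONG: conn is still serving the open reader, so executing a second
    // command on it throws an InvalidOperationException.
    // SqlCommand inner = new SqlCommand("SELECT ...", conn);
    // inner.ExecuteReader();

    // Instead, run the nested query on a second connection, or load the
    // data into a disconnected DataSet before looping.
}
reader.Close();   // Only now is the connection free for other commands.
conn.Close();
```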
In high-volume sites, it might be better to use DataSets or DataTables because they are disconnected and they release the connection back to the pool more quickly. This also gives you the capability to work with nested database calls.

Working with Transactions

When we first started working with transactions, we ran into a few steps that we needed to incorporate into our code to get a transaction to run. There are a few steps that you need to initiate to process an insert, update, or delete in a transaction. Here are the steps to complete a successful transaction:

1. The first thing you will come across is the SqlTransaction class. You can get this only by calling the BeginTransaction method on the Connection object.
2. The connection object passes back a SqlTransaction object with all the necessary information to complete your transaction.
3. Now you can process your database update just as you would normally do.
4. Finally, when you have completed the database operation, you can call Commit, or, if there is a problem, you can roll back the transaction.

These points are illustrated in Listings 16.6 and 16.7.

Listing 16.6 Transactions Example (C#)

SqlConnection myConnection = new SqlConnection("Data Source=localhost;initial catalog=Pubs;persist security info=False;user id=sa;workstation id=DIGITAL-LAPTOP;packet size=4096");
myConnection.Open();
SqlCommand myCommand = new SqlCommand();
SqlTransaction myTrans = null;
try
{
    myTrans = myConnection.BeginTransaction("JobTransaction");
    // ... process the insert commands here ...
    myTrans.Commit();
}
catch (Exception ex)
{
    myTrans.Rollback("JobTransaction");
    Response.Write("There was an error during the Insert process.<br>");
    Response.Write(ex.ToString());
}
finally
{
    myConnection.Close();
}

Listing 16.7 Transactions Example (Visual Basic .NET)

Dim myConnection = new SqlConnection("Data Source=localhost;initial catalog=Pubs;persist security info=False;user id=sa;workstation id=DIGITAL-LAPTOP;packet size=4096")
myConnection.Open()
Dim myCommand = new SqlCommand()
Dim myTrans as SqlTransaction
Try
    myTrans = myConnection.BeginTransaction("JobTransaction")
    ' ... process the insert commands here ...
    myTrans.Commit()
Catch Ex as Exception
    myTrans.Rollback("JobTransaction")
    Response.Write("There was an error during the Insert process.<br>")
    Response.Write(Ex.ToString())
Finally
    myConnection.Close()
End Try

You might notice a few things in these examples. First, you might be wondering about the statement "SET IDENTITY_INSERT Jobs ON". We did this because the job_id field is an Identity field and because we already have a unique identity; we needed to make sure that the unique identity stayed in sync throughout the database. We must use the "SET IDENTITY_INSERT" statement to accomplish this; otherwise, the insert will throw an exception.

Error Codes and How to Debug Them

In the .NET Framework, error handling has been completely revamped. There is such an enormous amount of errors to cover that you could dedicate a whole book to just this subject. Instead, we have tried to focus on the more common areas where you might run into problems and explain the errors that you will probably encounter.

Access Denied

One of the most common errors could be this one:

Access Denied: [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied.

This message seems vague: Is it a security issue or a networking issue? You would think they could have figured this out by now and identified whether it is one or the other. First, you know that the database exists: You are trying to connect to a database, you have all the proper permissions to use it, and you still get this error message. So what's wrong? Here are the steps you should take to debug this problem:

1. Make sure that the server name and IP address are correct.
2. Ping the server you are trying to connect to. This will help you identify if there is a network problem.
3. Verify that the login ID and password are valid by logging in through a different source.
4. Make sure that the initial catalog or database is spelled correctly.

Let's take a close look at the connection string. Microsoft has been meddling quite a lot with the ADO portion of .NET, so you need to pay extra attention to details here. Table 16.2 gives you a list of acceptable keywords.

Table 16.2. Acceptable Keywords

Keyword: Data Source / Server / Address / Addr / Network Address — Explanation: The hostname, computer name, or network address of the SQL Server
Keyword: User ID — Explanation: The SQL Server login account
Keyword: Initial Catalog — Explanation: The name of the database
Keyword: Password / Pwd — Explanation: The SQL Server password

Here are a couple of examples of what a basic connection string could look like:

SqlConnection("server=digital-laptop;uid=sa;pwd=;database=northwind")
SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind")

SELECT Permission Denied

Another common error is exemplified here:

SELECT permission denied on object 'Product', database 'Northwind', owner 'dbo'.

This error is typically easy to fix. One of the first things you should do when setting up a new database-driven web site is create a SQL user account for your web site and not use the system administrator account. Because the system administrator account has privileges to do anything that the owner wants, there are really no restrictions, and this causes serious problems if you try to change the user connecting to the database. With a dedicated account, you can control what that user has access to right from the beginning, and you will avoid major headaches if you try to switch the user account that is logging into the database.

If you take a look at Listings 16.8 and 16.9, you will notice that we're trying to connect to the database with a user named testuser. In this case, we were trying to perform a SELECT statement on the product table with the different account.

Listing 16.8 Connecting to the Database (C#)

SqlConnection Conn = new SqlConnection();
Conn.ConnectionString = "server=digital-world;initial catalog = northwind;user id= testuser;pwd=password";
Conn.Open();
DataSet ds = new DataSet();

Listing 16.9 Connecting to the Database (Visual Basic .NET)

Dim Conn as SqlConnection = new SqlConnection()
Conn.ConnectionString = "server=digital-world;initial catalog = northwind;user id= testuser;pwd=password"
Conn.Open()
Dim ds as DataSet = new DataSet()
Dim dr as SqlDataAdapter = new SqlDataAdapter("Select * from product", Conn)

To see who has permissions on the product table, you will need to do the following steps:

1. Open the SQL Enterprise Manager.
2. Select the Northwind database.
3. Expand the tables.
4. Right-click the Product table.
5. Select Permissions.

If you take a look at Figure 16.2, you will notice that there are no permissions set for any user.

Figure 16.2. SQL Enterprise Manager table permissions.

What you need to do is check the testuser SELECT column and save the changes. Then everything will work fine. Now that the problem has been identified, let's look at how to solve it. There are several ways to fix this type of problem, including the following:

• Give the testuser select permissions on the product table.
• Create a new role called WebUsers in SQL Server, and then add the testuser to that role. Grant the WebUsers role select permissions on the product table.
• Call a stored procedure and grant execute permission to testuser on that stored procedure.

Column-Level Security

One other point that you might want to consider is column-level security. In SQL Server 7.x and 2000, you can grant access down to the column level. This is very useful if you want to restrict access to sensitive information in a table, such as Social Security numbers. This need would arise only in rare cases, but you should be aware of this feature in SQL Server.

IndexOutOfRangeException

Here is an error message that you might run into if you are not careful:

An exception of type System.IndexOutOfRangeException was thrown.

A statement such as the following triggers it when the reader returns fewer fields than the index asks for:

Response.Write("<br>" + myReader.GetValue(2));

This is a prime example of stepping outside the boundaries of an array. So how can you prevent this from happening? Let's take a look at your options for this situation. If you use the GetValue method to retrieve the data from a field, you can use the FieldCount property to identify how many fields were returned. Using that value, you can initiate a loop to iterate through the fields and read them. This is exemplified in Listings 16.10 and 16.11.

Listing 16.10 Reading Values from the SqlDataReader (C#)

SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");
nwindConn.Open();
SqlCommand myCommand2 = new SqlCommand("SELECT CategoryID, CategoryName FROM Categories", nwindConn);
SqlDataReader myReader = myCommand2.ExecuteReader();
int i;
while (myReader.Read())
    for (i = 0; i < myReader.FieldCount; i++)
    {
        Response.Write("<br>" + myReader.GetValue(i));
    }
myReader.Close();
nwindConn.Close();

Listing 16.11 Reading Values from the SqlDataReader (Visual Basic .NET)

Dim nwindConn = New SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind")
nwindConn.Open()
Dim myCommand2 = New SqlCommand("SELECT CategoryID, CategoryName FROM Categories",
This is a prime example of stepping outside the boundaries of an array. So how can you prevent this from happening? Let's take a look at your options for this situation. If you will use the GetValue method to retrieve the data from a field, you can use the FieldCount property to identify how many fields were returned. Using that value, you can initiate a loop to iterate through the fields and read them. This is exemplified in Listings 16.10 and 16.11.

Listing 16.10 Reading Values from the SqlDataReader (C#)

SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");
nwindConn.Open();
SqlCommand myCommand2 = new SqlCommand("SELECT CategoryID, CategoryName FROM Categories", nwindConn);
SqlDataReader myReader = myCommand2.ExecuteReader();
int i;
while (myReader.Read())
{
    for (i = 0; i < myReader.FieldCount; i++)
    {
        Response.Write("<br>" + myReader.GetValue(i));
    }
}
myReader.Close();
nwindConn.Close();

Listing 16.11 Reading Values from the SqlDataReader (Visual Basic .NET)

Dim nwindConn = New SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind")
nwindConn.Open()
Dim myCommand2 = New SqlCommand("SELECT CategoryID, CategoryName FROM Categories", nwindConn)
Dim dsCustomer = New DataSet()
Dim myReader = myCommand2.ExecuteReader()
Dim i As Int32
While myReader.Read()
    For i = 0 To myReader.FieldCount - 1
        Response.Write("<br>" + myReader.GetValue(i))
    Next
End While
myReader.Close()
nwindConn.Close()
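A related safeguard, not shown in the book's listings, is to resolve a column name to its ordinal once with GetOrdinal before the read loop. A bad name then fails immediately, in one obvious place, instead of on some later Response.Write.

```
int idxName;
try
{
    // GetOrdinal throws IndexOutOfRangeException right away if the
    // column name is misspelled or missing from the SELECT list.
    idxName = myReader.GetOrdinal("CategoryName");
}
catch (IndexOutOfRangeException)
{
    Response.Write("CategoryName is not in the result set");
    return;
}
while (myReader.Read())
{
    Response.Write("<br>" + myReader.GetValue(idxName));
}
```

This gives you the readability of column names with the speed of index-based access inside the loop.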
Using the FieldCount approach, you avoid trying to access an array element outside the valid range. Another method is to access each field by name. This could require more code than the last example, but it gives you that extra level of manipulation and control. Listings 16.12 and 16.13 show how you would use the column name instead of the index number.

Listing 16.12 Accessing Data by Column Name (C#)

while (myReader.Read())
{
    Response.Write(myReader["CategoryID"].ToString());
    Response.Write(myReader["CategoryName"].ToString());
    Response.Write("<br>");
}

Listing 16.13 Accessing Data by Column Name (Visual Basic .NET)

Do While myReader.Read()
    Response.Write(myReader("CategoryID").ToString())
    Response.Write(myReader("CategoryName").ToString())
    Response.Write("<br>")
Loop

You still might run into an error if you do not spell the column name correctly or provide an invalid column. If this happens, you should get an error that looks similar to the message shown in Figure 16.3. Make sure that if you use the column names, you have the spelling correct. And if you change the name of a column in the database schema, you will have to make sure that you reflect that change in your code as well.

Figure 16.3. Column error.

Invalid Data Source

You can assign a data source to your data control in a couple different ways. You can pass it a dataset or various types of lists or collections. This section focuses on just working with the data grid control. While we were working with data grids, we were amazed at how flexible the data source property was. This opens up the possibilities of assigning an incorrect data source if you are not familiar with the process.

When setting up a data grid, you can define the data source property in the aspx code, but you don't have to. If you look at the following code, you will see that it is trying to assign the sqlcommand1 object as the data source:

<asp:DataGrid id="Datagrid2" style="Z-INDEX: 101; LEFT: 25px; POSITION: absolute; TOP: 59px" runat="server" DataSource="<%# sqlCommand1 %>"></asp:DataGrid>

Because the data source is looking for a dataset or a list object, you would need to create a dataset and assign that to the DataSource property instead.

Another option is to set the data source in the code behind the page where you do all the database work and initialize your page components. This enables you to add additional debugging code, such as writing to a debug listener or a file, to help you with any problems that you might run across.
Listings 16.14 and 16.15 illustrate that you make the call to the database during the load process and then bind the results to the data grid.

Listing 16.14 Setting the data source from behind the aspx page (C#)

private void Page_Load(object sender, System.EventArgs e)
{
    SqlConnection Conn = new SqlConnection();
    Conn.ConnectionString = "server=localhost;initial catalog=northwind;user id=sa;pwd=";
    Conn.Open();
    DataSet ds2 = new DataSet();
    SqlDataAdapter dA2 = new SqlDataAdapter("Select * from products", Conn);
    //Populate the dataset
    dA2.Fill(ds2, "Products");
    //Create the association between the data grid and the dataset
    DataGrid1.DataSource = ds2;
    DataGrid1.DataBind();
    DataGrid1.AllowSorting = true;
    DataGrid1.BackColor = System.Drawing.Color.AliceBlue;
    //Make sure to close the connection and free unused resources
    Conn.Close();
}

Listing 16.15 Setting the data source from behind the aspx page (Visual Basic .NET)

Private Sub Page_Load(sender as object, e as System.EventArgs)
    Dim Conn = new SqlConnection()
    Conn.ConnectionString = "server=localhost;initial catalog=northwind;user id=sa;pwd="
    Conn.Open()
    Dim ds2 = new DataSet()
    Dim dA2 = new SqlDataAdapter("Select * from products", Conn)
    'Populate the dataset
    dA2.Fill(ds2, "Products")
    'Create the association between the data grid and the dataset
    DataGrid1.DataSource = ds2
    DataGrid1.DataBind()
    DataGrid1.AllowSorting = true
    DataGrid1.BackColor = System.Drawing.Color.AliceBlue
    'Make sure to close the connection and free unused resources
    Conn.Close()
End Sub
No Data in the Data Grid

What if your data does not show up in your data grid and you don't get any error messages? If this is the case, you might want to look at your code and make sure that you are using the DataBind method after you set your data source, as shown in the aspx page and code in Listings 16.16 through 16.18.

Listing 16.16 The aspx page hosting the data grid

<HTML>
  <HEAD>
  </HEAD>
  <body MS_POSITIONING="GridLayout">
    <form id="WebForm1" method="post" runat="server">
      <asp:DataGrid id="DataGrid1" runat="server"></asp:DataGrid>
    </form>
  </body>
</HTML>

Listing 16.17 Databind method (C#)

private void Page_Load(object sender, System.EventArgs e)
{
    // Put user code to initialize the page here
    SqlConnection Conn = new SqlConnection();
    Conn.ConnectionString = "server=localhost;initial catalog=northwind;user id=sa;pwd=";
    Conn.Open();
    DataSet ds = new DataSet();
    SqlDataAdapter dA = new SqlDataAdapter("Select * from products", Conn);
    dA.Fill(ds, "Products");
    DataGrid1.DataSource = ds;
    DataGrid1.DataBind();
    DataGrid1.AllowSorting = true;
    DataGrid1.BackColor = System.Drawing.Color.AliceBlue;
    Conn.Close();
}

Listing 16.18 Databind method (Visual Basic .NET)

Private Sub Page_Load(sender As Object, e As System.EventArgs)
    ' Put user code to initialize the page here
    Dim Conn = new SqlConnection()
    Conn.ConnectionString = "server=localhost;initial catalog=northwind;user id=sa;pwd="
    ' Open the connection
    Conn.Open()
    'Create a new dataset to hold the records
    Dim ds = new DataSet()
    Dim dA = new SqlDataAdapter("Select * from products", Conn)
    'Populate the dataset by using the dataAdapters Fill method
    dA.Fill(ds, "Products")
    DataGrid1.DataSource = ds
    DataGrid1.BackColor = System.Drawing.Color.AliceBlue
    DataGrid1.AllowSorting = true
    DataGrid1.DataBind()
    Conn.Close()
End Sub

This applies only to aspx pages, not to data grid controls within Windows applications.

Problems with Connections

One of the problems that you might run into first involves working with connections. This might occur when you are trying to read from one table and then trying to perform an operation on another table. Figure 16.4 provides an example of an error that you might get when trying to work with multiple connections.

Figure 16.4. Error that can result when working with multiple connections.

If you are getting this error, check to see if you have closed the DataReader object before executing another operation on the same connection. This is a good sign that your connection is in use and that you need to either complete the current database operation or close the connection. Microsoft has designed the .NET framework to function a little differently than previous versions of ADO. Solutions include closing the DataReader before continuing or creating a second connection object and running your process against that.

You will notice in Listings 16.19 and 16.20 that one data reader is open and that we have read the first row of information from it. Then a second command object is created and more data is read from a different table. As soon as the command is executed against the database, we get an exception. This is because the connection is right in the middle of reading data from the first SELECT statement.
Listing 16.19 DataReader problem (C#)

SqlConnection Conn = new SqlConnection();
Conn.ConnectionString = "server=localhost;initial catalog=northwind;user id=sa;pwd=";
Conn.Open();
//Create a SqlCommand using the connection that was just created
SqlCommand com1 = new SqlCommand("select * from products", Conn);
//Create a datareader
SqlDataReader dr;
dr = com1.ExecuteReader();
//Now start reading a row at a time
dr.Read();
//Now create a second command that uses the same connection
SqlCommand com2 = new SqlCommand("select * from jobs", Conn);
//Create a second datareader
SqlDataReader dr2;
//now try to execute this command on the existing connection in use
dr2 = com2.ExecuteReader();
//This line will throw an exception!
dr2.Read();

Listing 16.20 DataReader problem (Visual Basic .NET)

Dim Conn = new SqlConnection()
Conn.ConnectionString = "server=localhost;initial catalog=northwind;user id=sa;pwd="
Conn.Open()
'Create a SqlCommand using the connection that was just created
Dim com1 = new SqlCommand("select * from products", Conn)
'Create a datareader
Dim dr as SqlDataReader
dr = com1.ExecuteReader()
'Now start reading a row at a time
dr.Read()
'Now create a second command that uses the same connection
Dim com2 = new SqlCommand("select * from Orders", Conn)
'Create a second datareader
Dim dr2 as SqlDataReader
'now try to execute this command on the existing connection in use
dr2 = com2.ExecuteReader()
'This line will throw an exception!
dr2.Read()

At this point, the only thing that can be done is to finish reading the data or close the DataReader object.
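A corrected version of the C# listing might look like the following sketch: finish with (or close) the first reader before issuing the second command on the same connection. Opening a second SqlConnection for com2 would work just as well.

```
SqlCommand com1 = new SqlCommand("select * from products", Conn);
SqlDataReader dr = com1.ExecuteReader();
dr.Read();
// Close the first reader so the connection is free again
dr.Close();

SqlCommand com2 = new SqlCommand("select * from jobs", Conn);
SqlDataReader dr2 = com2.ExecuteReader();  // no exception this time
dr2.Read();
dr2.Close();
```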
If you need to persist the data to work with it, you will need to pull it into a dataset or some other form. That way you can keep it in memory while you make another connection to the database.

Working with Multiple Connections and Using Connection Pooling

Connection pooling is built into the .NET Framework. If you create all your connections with the same connection string, the system automatically pools the connections for you. But if the connection strings differ, you will get a new nonpooled connection. Keep this in mind when you are developing. If you create connections all over your code and they all point to the same location, keep a global connection string around so that you don't start creating unmeaningful database connections and wasting resources. A good place to store your connection string would be in the Web.Config file under a custom section called appsettings. Listings 16.21 and 16.22 illustrate when a new connection will be made and when an existing connection will be used or pooled.

Listing 16.21 Connection Pooling (C#)

SqlConnection conn = new SqlConnection();
conn.ConnectionString = "Integrated Security=SSPI;Initial Catalog=Store";
conn.Open();    // Pool 1 is created.

SqlConnection conn = new SqlConnection();
conn.ConnectionString = "Integrated Security=SSPI;Initial Catalog=Orders";
conn.Open();    // Pool 2 is created
// The second pool is created due to the differences in connection strings

SqlConnection conn = new SqlConnection();
conn.ConnectionString = "Integrated Security=SSPI;Initial Catalog=Store";
conn.Open();    // Pool 1 is used.

Listing 16.22 Connection Pooling (Visual Basic .NET)

Dim conn = new SqlConnection()
conn.ConnectionString = "Integrated Security=SSPI;Initial Catalog=Store"
conn.Open()    ' Pool 1 is created.

Dim conn = new SqlConnection()
conn.ConnectionString = "Integrated Security=SSPI;Initial Catalog=Orders"
conn.Open()    ' Pool 2 is created
' The second pool is created due to the differences in connection strings

Dim conn = new SqlConnection()
conn.ConnectionString = "Integrated Security=SSPI;Initial Catalog=Store"
conn.Open()    ' Pool 1 is used.
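One way to keep that single, global connection string in Web.Config is an appSettings entry; every page that builds its connection from it ends up sharing one pool. This is a sketch, and the key name ConnString is just an illustration, not from the book.

```
// Web.Config (custom appSettings section):
//   <configuration>
//     <appSettings>
//       <add key="ConnString"
//            value="server=localhost;initial catalog=northwind;Integrated Security=SSPI" />
//     </appSettings>
//   </configuration>

// Any page can then do this; identical strings share one pool.
string sConn =
    System.Configuration.ConfigurationSettings.AppSettings["ConnString"];
SqlConnection conn = new SqlConnection(sConn);
conn.Open();
```

Changing the server or credentials then means editing one file instead of hunting down every hard-coded string.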
Common Pitfalls

This section looks at some common issues that you might encounter when developing and tells how to work around them.

Should I Use a DataReader or DataAdapter?

You should ask yourself a few basic questions before you start grabbing data from the database. First, what do you need to do with the data? Do you need to change it, read it, add to it, or display it in a data grid? These are important questions to consider because they determine which component you can use, as opposed to which one you want to use. If you want to display data in a data grid, your only option is to use the DataAdapter. This is because the DataAdapter is used to populate a dataset, which is the preferred method of populating a data grid.

In some cases, you will need to create two separate connections to perform an operation. This might occur when you are in the middle of reading information from a data reader and want to make changes to the database as you iterate the data.
If you just need to read the data row by row, use a DataReader object. The DataReader is strictly for reading data (hence the name). Also note that this is a forward, read-only mechanism, so if you want to edit data as you are navigating through the rows, you won't be able to.

How Many Times Do You Really Need to Talk to the Database?

So what is the big deal about calling the database a bazillion times? You don't see any impact while developing it, so it must be okay to do. And some of the stress testing seemed to be fine. But what happens when you start to receive a lot of hits on your web site? I have seen this happen literally overnight! The next day traffic doubles and continues to climb day by day. These are the situations that you thought you saw only in TV commercials. But they are real, trust me!

Let's say that you have a web store, and you sell books. On the front page, you want to rotate books from a preselected group of books on sale. Currently all the information that you need is contained in a database, so you figure that you will just query the database every time someone requests the page. It doesn't matter if it is a very small amount of data that is being retrieved from the database; if it is in a high-traffic page, the number of trips that you make to the database can have a significant impact on the performance, not to mention the scalability, of your web site. If you stop to look at this situation, you might be only rotating 30 different books. So, after the 30th time to the database, you start doing redundant work. In my experience, this can, and usually will, come back to bite you later.

One solution is to keep on the web server an XML file of the books that are currently on sale. Then your server only needs to look on its own hard drive to retrieve the data that it needs. Now just write some custom code behind the web page to do the rotating and reading the data from the XML file. If you are worried about new books going on sale and having to update the XML file, don't worry. On MS SQL Server, you can use a trigger or schedule a task to check for new books on sale. If there are new items, the system can export the results to an XML file, and, presto, your web page automatically picks up the new XML file. This is just one creative way to accomplish this task, but there are many other ways to accomplish the same thing.
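The XML-file idea can be sketched with a DataSet, which can read XML straight off the local disk. The file name books.xml and the Title column are assumptions for the example, not from the book.

```
// Serve the sale list from the local disk instead of SQL Server.
DataSet dsBooks = new DataSet();
dsBooks.ReadXml(Server.MapPath("books.xml"));

// Rotate: pick one of the ~30 rows for this request.
DataTable tbl = dsBooks.Tables[0];
Random rnd = new Random();
DataRow featured = tbl.Rows[rnd.Next(tbl.Rows.Count)];
Response.Write(featured["Title"].ToString());
```

Every page request now costs a file read (or, better, a cached DataSet) instead of a round-trip to the database server.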
Just bear in mind the following suggestions:

- Keep it simple.
- Persist common data elements in memory or to a local file.
- Keep the number of trips to the database minimal.
- Keep the number of connections to a minimum.

As much as you might want to jump in and start coding, it is always beneficial to have a plan and think through what you are trying to accomplish.

Using Parameters with SqlCommand

A SQL statement can be called in a few different ways without using the parameter components. We could never understand why people used the parameter object to add parameters to their query when they could simply format a text string to do the same thing and use less code to accomplish the task. So what is the benefit of using parameters? Let's take a look at the type of problems you might run into and how to avoid them. Here we focus on the issues with the command object and parameter collections.

When using parameters with SqlCommand, the names of the parameters must match the names of the parameter placeholders in the stored procedure. The SQL Server .NET Data Provider treats these as named parameters and searches for the matching parameter placeholders. The SQLCommand class does not support the question mark (?) placeholder for passing parameters to a SQL statement or a stored procedure call. If you accidentally use the question mark, you will probably get the error shown in Figure 16.5.

Figure 16.5. Error generated by use of a question mark.
In this case, named parameters must be used. Take a look at Listings 16.23 and 16.24 to see how you would properly implement a parameter using the SqlCommand class.

Listing 16.23 Implementing a Parameter Using the SqlCommand Class (C#)

SqlConnection conn = new SqlConnection("server=(local);database=northwind;Trusted_Connection=yes");
conn.Open();
//Build the Command using a named parameter
SqlCommand myCommand2 = new SqlCommand("SELECT CategoryID, CategoryName FROM Categories where CategoryID > @param1", conn);
//Add the parameter and value to the command object
myCommand2.Parameters.Add("@param1", 3);
SqlDataReader myReader = myCommand2.ExecuteReader();

Listing 16.24 Implementing a Parameter Using the SqlCommand Class (Visual Basic .NET)

Dim conn = New SqlConnection("server=(local);database=northwind;Trusted_Connection=yes")
conn.Open()
'Build the Command using a named parameter
Dim myCommand2 = New SqlCommand("SELECT CategoryID, CategoryName FROM Categories where CategoryID > @param1", conn)
'Add the parameter and value to the command object
myCommand2.Parameters.Add("@param1", 3)
Dim myReader = myCommand2.ExecuteReader()

In these listings, you will notice that when we created SqlCommand, we included a named parameter called @param1. The next line of code added the parameter value to the command statement by adding it to the parameter collection and specifying the parameter name and value that should replace it. Just remember that the OleDbCommand object does not operate the same way. Let's take a look at the differences in the next section.

Using Parameters with OleDbCommand
When using parameters with OleDbCommand, the names of the parameters added to OleDbParameterCollection must match the names of the parameters in the stored procedure. However, the OLE DB .NET Data Provider does not support named parameters for passing parameters to a SQL statement or a stored procedure called by a command object. In this case, the question mark (?) placeholder must be used. Take a look at the following example:

SELECT * FROM Products WHERE ProductID = ? and price = ?

It is important that the order in which parameter objects are added to the ParametersCollection directly corresponds to the position of the question marks in the SQL statement. Unlike the SqlCommand object, in which the parameters have names associated with them, you don't have that option here. When debugging, you need to make sure that you are formatting the string correctly; if you are working with parameters, this is where you might run into some problems.

SQL ADO.NET Objects Versus OleDb ADO.NET Objects

So what is the big difference? And why are there two different components to manipulate data? The biggest difference is not apparent on the outside. Even though most of the features look and act alike, it is inside where the differences are tremendous. For instance, SQLClient is native to .NET and MS SQL Server. It communicates to SQL using its own protocol. This enables it to work more quickly and to avoid having to use the ODBC layer or OleDB to communicate to legacy database drivers. If you look at how the two components differ, you will also notice that the connection strings differ slightly. When connecting to a Microsoft SQL Server database, you need to keep in mind the difference between the two namespaces.
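Returning to the question-mark query shown earlier, here is a sketch of the positional style. The parameter names P1 and P2 are arbitrary (the OLE DB provider binds by position for SQL text), and the SQLOLEDB connection string is an assumption for the example.

```
OleDbConnection conn = new OleDbConnection(
    "Provider=SQLOLEDB;Data Source=localhost;" +
    "Initial Catalog=northwind;Integrated Security=SSPI");
conn.Open();
OleDbCommand cmd = new OleDbCommand(
    "SELECT * FROM Products WHERE ProductID = ? and price = ?", conn);
// The ORDER of these Add calls must match the order of the ?s.
cmd.Parameters.Add("P1", 5);      // first ?  -> ProductID
cmd.Parameters.Add("P2", 19.95);  // second ? -> price
OleDbDataReader rdr = cmd.ExecuteReader();
```

Swap the two Add calls and the query silently binds the wrong values, which is exactly the kind of bug that is hard to spot in the debugger.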
Data Connection Performance Note

If you are concerned about performance, you might want to increase your overall performance by using SqlDataAdapter along with its associated SqlCommand and SqlConnection. To get the best performance, keep in mind the type of connection you will be using. When you are developing a database component, you might think about keeping it flexible enough to work on, say, both Oracle and SQL Server. In that case, you could detect which database you are using and then use the classes that best suit your needs. This would fall under more of a commercial software product feature, but that is one bit of information to think about.

Summary

Let's take a look at what we have covered so far. First we looked at some of the new features Microsoft has added and showed where you might stumble in your transition to .NET. Then the chapter moved into possible problems that you might run into with the SQLConnection class and showed how to avoid common mistakes. Next we dug into how the system uses the SQLException class to handle errors that are thrown by the system. This is a very powerful tool for debugging, so get used to using it in your code. We also covered some typical error messages and how to fix the errors that they represent. Remember, those data grid controls are a powerful tool, so take advantage of them to do as much work for you as possible.

Appendix A. Issues that Arise When Migrating from ASP to ASP.NET

SO NOW THAT YOU HAVE DECIDED TO start writing all your web-based applications in ASP.NET, you must be wondering what issues you will encounter while migrating from ASP to ASP.NET. If you are porting an existing application to the ASP.NET framework, a variety of changes will need to be made to your existing VBScript code. Keep in mind that this is not an exhaustive list of changes between the two versions. This appendix attempts to look at only features that existed in the previous version that have been migrated to the new version. Features in ASP.NET that were not in a previous version will not be discussed here. We will also take a look at C# and why you might want to use it for your server-side programming language.
Moving from ASP to ASP.NET

Although ASP.NET is based on the ASP technology, a lot of the fundamentals have changed. Here we explore some of the basic changes that you will encounter when migrating from ASP to ASP.NET. In the next few sections, we discuss the syntactical changes that have been made to Visual Basic. If you are starting a project from scratch, you will need to remember only the language and logic changes that are now in the Visual Basic language.

<%%> Versus <script>

In ASP, all server-side code is written between <% and %> tags. This tells the ASP interpreter that everything between the two tags is program code and should be executed on the server. In ASP.NET, the rules have changed a bit: all variable and function declarations are placed between <script> and </script> tags, while implementation logic is contained in between the <% and %> tags. Listing A.1 shows an example ASP page, and Listing A.2 and Listing A.3 show the same page written in ASP.NET, in C# and Visual Basic .NET, respectively.

Listing A.1 ASP Page (VBScript)

<%
Sub MyFunction(psString)
    If psString <> "" then
        Response.Write psString
    End If
End Sub

MyFunction Request.Form("txtText")
%>
<html>
<body>
    <form action="test.asp" method="POST">
        <input type="text" name="txtText">
        <input type="submit">
    </form>
</body>
</html>
Listing A.2 ASP.NET Page (C#)

<%@ Page Language="c#" %>
<script language="c#" runat="server">
void MyFunction(String psString)
{
    if(psString != "")
        Response.Write(psString);
}
</script>
<% MyFunction(Request.Form.Get("txtText")); %>
<html>
<body>
    <form action="test.aspx" method="POST">
        <input type="text" name="txtText">
        <input type="submit">
    </form>
</body>
</html>

Listing A.3 ASP.NET Page (Visual Basic .NET)

<%@ Page Language="vb" %>
<script language="vb" runat="server">
Sub MyFunction(psString as String)
    If psString <> "" then
        Response.Write(psString)
    End If
End Sub
</script>
<% MyFunction(Request.Form.Get("txtText")) %>
<html>
<body>
    <form action="test.aspx" method="POST">
        <input type="text" name="txtText">
        <input type="submit">
    </form>
</body>
</html>

You should notice a few things here. First, note the format of the <script> tag. You will want to use the language parameter to specify this, depending on which language you are using as your server-side code. For Visual Basic .NET, you set the language parameter to vb; in C#, you set it to c#. Second, notice that the function definition is placed between the <script> tags. In ASP.NET, you must define all functions within <script> tags; otherwise, your code will not compile. However, calls to the function must be placed within the standard <%%> tags. Finally, notice that instead of calling Request.Form("txtText") to get the value of the text box upon submission, you use a method of the Form object called Get. This method exists off the QueryString and Cookie collections also. Be sure to explicitly specify this in your code when trying to access any of the members of these collections.

Page Directives

The page directives that you are familiar with in ASP are still available in ASP.NET. However, a few new ones are worthy of mention. Table A.1 lists the new directives and what they can do for you.

Table A.1 ASP.NET Page Directives

Directive       Description
@ Page          Specifies page-specific attributes
@ Control       Specifies control-specific attributes
@ Import        Imports a namespace into the page
@ Register      Associates aliases with namespaces and class names for concise notation in Custom Server Control Syntax
@ Assembly      Links an assembly with the current page
@ OutputCache   Controls the caching of the page
Response.Redirect Versus Server.Transfer Versus Page.Navigate

To send the browser to a new page in ASP, the Redirect method of the Response object is used as shown:

Response.Redirect "OtherPage.asp"

At this point in the code, whatever script was being executed stops and then starts at the top of OtherPage.asp. In ASP.NET, you have two alternatives to this method. The first is the Navigate method of the Page object. The function takes the same parameter as Response.Redirect: the URL to redirect to. The second method, Server.Transfer, terminates execution of the current ASP.NET page and then begins execution of the request on the page specified. It takes the following form:

Server.Transfer "OtherPage.aspx"

The difference between the two methods is that Page.Navigate unloads all controls in the tree, discarding the results of the current page, and then calls Response.Redirect. You should use Page.Navigate when you want to completely stop execution of one page and immediately move to the other. The Server.Transfer method should be used when you want to start execution on another page without discarding any of the previously computed information.

Cookies

The use of cookies is very different in ASP.NET. Instead of a cookie collection being part of the Response and Request objects, cookies are now objects that are manipulated individually and tossed into a master collection. Previously, to add a cookie to the client machine, your code would look similar to Listing A.4.

Listing A.4 ASP Cookie Code

<%
Response.Cookies("MyCookie") = "Brian"
%>
<html>
<body>
The cookie is:
<% Response.Write(Request.Cookies("MyCookie")) %>
</body>
</html>
The same code in ASP.NET is quite different. What makes it different is that cookies are now treated as objects. You create and instantiate a cookie object, set its value, and then append it onto the cookie collection. Requesting it back out of the collection is very similar to doing so in ASP; just remember to use the Get method, as was shown in the previous section. Listing A.5 illustrates the procedure in Visual Basic .NET, and Listing A.6 illustrates the sequence in C#.

Listing A.5 Cookies (Visual Basic .NET)

<%@ Page Language="vb"%>
<%
Dim oCookie As HttpCookie
oCookie = new HttpCookie("MyCookie")
oCookie.Values.Add("Name", "Brian")
Response.AppendCookie(oCookie)
%>
<html>
<body>
<% Response.Write(Request.Cookies.Get("MyCookie").Value) %>
</body>
</html>

Listing A.6 Cookies (C#)

<%@ Page Language="c#"%>
<%
HttpCookie oCookie;
oCookie = new HttpCookie("MyCookie");
oCookie.Values.Add("Name", "Brian");
Response.AppendCookie(oCookie);
%>
<html>
<body>
<%
Response.Write(Request.Cookies.Get("MyCookie").Value);
%>
</body>
</html>

Events

Another major change to the way ASP.NET works is that it is based on an event model much like a typical Visual Basic program. Instead of your ASP script being executed from top to bottom, you can respond to events such as button clicks, text box changes, and so on. Of course, all these events will occur on the server side, so you will not be able to hook into every possible event, such as a mouse move or a key-down event.

Moving from VBScript to Visual Basic

You should be aware of quite a few syntactical changes to the Visual Basic programming language before you start any project in this language. This section looks at most of the changes in Visual Basic .NET.

Set

Let's start out with the keyword Set. In short, it is gone. The standard object instantiation in Visual Basic is shown here:

Set objMyObj = objSomeOtherObj

The same code in Visual Basic .NET looks like this:

objMyObj = objSomeOtherObj

Properties

Properties have been greatly simplified in the world of the new Visual Basic. Previously, a series of properties looked like the code in Listing A.7.

Listing A.7 Visual Basic Property Code

Private gsString As String
Private goObject as Object

Public Property Let StringProp(ByVal psData as String)
    gsString = psData
End Property

Public Property Get StringProp() As String
    StringProp = gsString
End Property

Public Property Set ObjectProp(ByVal poObj as Object)
    Set goObject = poObj
End Property

In Visual Basic .NET, there is no longer a distinction between a Set and a Let because of the change mentioned in the last section; an example is given in Listing A.8.

Listing A.8 Visual Basic .NET Property Code

Private gsString as String
Private goObject as Object

Public Property StringProp as String
    Get
        StringProp = gsString
    End Get
    Set
        gsString = StringProp
    End Set
End Property

As you can see, this is much shorter.

Calling Subs

Calls of all types (function, method, and sub) must use parentheses around the parameters, regardless of whether you are doing something with the return value. For example, the code in Listing A.9 would work in Visual Basic or VBScript.

Listing A.9 Visual Basic Function Calls

dtDate = Date
MyFunction "Value1", 2, plVal
In Visual Basic .NET, you would need to change these same calls to the code shown in Listing A.10.

Listing A.10 Visual Basic .NET Function Calls

dtDate = Date()
MyFunction("Value1", 2, plVal)

Parameters

A major change to the ways parameters are passed has been introduced into Visual Basic .NET. Previously, all parameters were passed ByRef if no method was specified. Now, in Visual Basic .NET, all intrinsic types are passed ByVal. Therefore, the function in Listing A.11 would no longer work in Visual Basic .NET.

Listing A.11 Visual Basic Function with ByRef Parameters

Sub MyFunction(plLng1 As Long, plLng2 As Long, plLng3 As Long)
    plLng3 = plLng1 + plLng2
End Sub

In Visual Basic .NET, this subroutine would have to be rewritten as shown in Listing A.12.

Listing A.12 Visual Basic .NET Function with ByRef Parameters

Sub MyFunction(plLng1 As Long, plLng2 As Long, ByRef plLng3 as Long)
    plLng3 = plLng1 + plLng2
End Sub

Datatypes

Unlike VBScript, in which all variables were declared as type Variant, Visual Basic supports a wide range of variable types. With your server-side code written in Visual Basic .NET, it is highly recommended that, when you're declaring variables, you define them appropriately so that they will not use excess memory and will be far more efficient. Listing A.13 shows a few examples of how to declare variables of specific types.
Dim psString As String
Dim plLong As Long
Dim bByte As Byte

Also be aware that the Currency datatype has been removed. Something else to note is that the sizes of certain intrinsic datatypes have changed: Long is now 64 bits instead of 32 bits, and Integer is now 32 bits rather than 16 bits. Refer to Table A.2 for more information.

Table A.2 Visual Basic .NET Intrinsic Datatypes

Datatype    Size
Byte        1 byte (8 bits)
Short       2 bytes (16 bits)
Integer     4 bytes (32 bits)
Long        8 bytes (64 bits)
Single      4 bytes (32 bits)
Double      8 bytes (64 bits)
Decimal     12 bytes (96 bits)

Note here that the sizes for the Integer and Long datatypes have changed.

Variant

The Variant datatype no longer exists in Visual Basic .NET. It has been replaced with the universal Object type. Also removed from Visual Basic .NET is the VarType function. Now, to get the type of a specific variable, you can use the following property, which is a member of all the intrinsic datatypes:

SomeObj.GetType

Declarations

A new feature of Visual Basic .NET is the capability to initialize variables and arrays at the time of declaration, as shown in Listing A.14.

Listing A.14 Visual Basic .NET Variable Initializations

Dim psString As String = "Hello!"
Dim piInt As Integer = 123
Const cSTRING = "Goodbye!"
Dim psArray(2) As String = ("Brian", "Jon")
Note, however, that the capability to declare strings of a predefined length is missing in Visual Basic .NET. Therefore, the statement

Dim psString As String * 5

is no longer valid.

Shorthand Syntax

Visual Basic .NET now supports shorthand assignment much like C, C++, and Java. Listing A.15 illustrates a few examples of the shorthand notation.

Listing A.15 Shorthand Assignments in Visual Basic .NET

plVal = 100
plVal += 10   ' plVal now equals 110
plVal -= 10   ' plVal now equals 100
plVal *= 5    ' plVal now equals 500
plVal /= 5    ' plVal now equals 100

Error Handling

Although the standard On Error GoTo XXX and On Error Resume Next exist in Visual Basic .NET, you also might want to take advantage of its built-in structured error handling, which is similar to that of languages such as C++ and Java. Listing A.16 shows an example of structured error handling in Visual Basic .NET.

Listing A.16 Structured Error Handling in Visual Basic .NET

Try
    ' Some code
Catch
    ' What to run when an error occurs
Finally
    ' Code that always executes after try or catch
End Try

Structure Declaration

In Visual Basic, structures were defined using Type…End Type, as shown in Listing A.17.

Listing A.17 Visual Basic Structure

Type Employee
    EmpName As String
    EmpNumber As Long
    EmpAge As Integer
End Type

In the new Visual Basic .NET, this same structure would be declared using the Structure…End Structure keywords, as shown in Listing A.18.

Listing A.18 Visual Basic .NET Structure

Structure Employee
    EmpName As String
    EmpNumber As Long
    EmpAge As Integer
End Structure

Variable Scope

The scope of variables in Visual Basic .NET is slightly different from that of Visual Basic. In Visual Basic, the code in Listing A.19 would be valid.

Listing A.19 Variable Scope in Visual Basic .NET

For plCount = 0 To 10
    Dim plVal As Long
    plVal = plVal + plCount
Next
plVal2 = 2 ^ plVal

In Visual Basic .NET, however, because the variable plVal is declared inside the For…Next loop, its scope is inside that block; it cannot be seen outside the loop in the previous example.

Object Creation

In Visual Basic, the following statement would declare an object and set it to Nothing until it was used. At that point, it would reference a new instance of MyObject.

Dim poObject As New MyObject

In Visual Basic .NET, this statement is actually shorthand for the following:

Dim poObject As MyObject = New MyObject

In this statement, the object is created and references a new instance of the object.

IsMissing

Ironically, IsMissing is, well, missing. This means that all optional parameters to a function must be declared with a default value, as follows:

Sub MySub(Optional psStr = "Default value!")

Control-of-Flow Statements

Certain control-of-flow statements have been removed from Visual Basic .NET, including these:

• GoSub
• On…GoSub
• On…GoTo (Error…GoTo is still valid)

A change also has been made to the While loop. Previously, While…Wend was valid; now the syntax has changed to While…End While.

And the Rest

To wrap this all up, here is a list of everything that has been removed from the Visual Basic .NET programming language:

• As Any keyword phrase
• Atn function
• Calendar property
• Circle statement
• Currency datatype
• Date function and statement
• Debug.Assert method
• Debug.Print method
• Deftype statements
• DoEvents function
• Empty keyword
• Eqv operator
• GoSub statement
• Imp operator
• Initialize event
• Instancing property
• IsEmpty function
• IsMissing function
• IsNull function
• IsObject function
• Let statement
• Line statement
• LSet statement
• MsgBox function
• Null keyword
• On…GoSub construction
• On…GoTo construction
• Option Base statement
• Option Private Module statement
• Property Get, Property Let, and Property Set statements
• PSet method
• Rnd function
• Round function
• RSet statement
• Scale method
• Set statement
• Sgn function
• Sqr function
• String function
• Terminate event
• Time function and statement
• Type statement
• Variant datatype
• VarType function
• Wend keyword

Opting for C#

If you are more familiar with the syntax of C, C++, or Java, you might be interested in using C# (pronounced "C sharp") as your server-side programming language. If you are currently a Visual Basic programmer wanting to move to C#, you should be aware of a few syntactic changes. However, be aware that this is not a comprehensive look at the entire C# programming language; it's merely a guide to ease you into the C# language. We will look at some of these differences in the next sections.
Functions Functions are declared very differently in C# than in Visual Basic.26 illustrates a typical function implementation in Visual Basic.NET) Public Function MyFunction(psStr As String) As String MyFunction = psStr End Function Now let’s look at that same function in C#. you will see that the Visual Basic function is closed with the End Function statement.24 Use of Brackets (Visual Basic . Listing A. note the case difference in each of the languages.25. you will notice that.27 Function (C#) public String MyFunction(String psStr) { return psStr. Third. Listing A. the type of the function is noted at the end of the function line with the words As String.25 Use of Brackets (C#) MyArray[0] = "Brian". however. however.. Finally. Listing A. the variables are typed before the variable name. } else { . Else . If Statement In Visual Basic.. to return a value.NET) If plVal < 10 Then . just like the function.New Riders . In C#. this format is somewhat modified. End If In C#.28. Listing A. In this section.. In C#.NET made by dotneter@teamfly In C#. the standard If statement takes the form of the code in Listing A.. you should pay attention to the parentheses around the test statement and the use of curly braces to enclose the code . all statements of a function are enclosed around curly braces—{ and }. MyFunction. Control-of-Flow Statements All the standard flow-control statements that you are familiar with in Visual Basic exist in C# and have the same functionality. Listing A. Fourth.Debugging ASP.29 shows the same statement in C#... you use the return statement.29 if Statement in (C#) if (plVal < 10) { .. the Visual Basic version of the function returns the value of psStr by assigning it to the name of the function. again like the type of the function.28 If Statement (Visual Basic . their syntax is different.. you’ll see that the parameters passed in the Visual Basic version are typed after the variable name. 
we look at all the flow-control statements and their syntax in C#. which does the same thing. } Although it isn’t drastically different. however. .New Riders . Listing A.33 shows a typical while loop in Visual Basic. For Statement The For loop is written very differently in C# than it is in Visual Basic. Generically. Also you will see that the End If clause is missing.33 While Loop (Visual Basic .NET made by dotneter@teamfly statements.30 shows a Visual Basic for loop.. they each have a very different syntax.NET) While plVal1 < plVal2 plVal1 = plVal1 + 1 Loop .. and Listing A.30 For Statement (Visual Basic . statements. the C# version of the for loop is written as shown in Listing A.Debugging ASP. Listing A. } Both of these loops perform the exact same function. } While Loop The while loop is very similar in both programming languages. Listing A. plCount++) { plVal = plVal + 1.NET) For plCount = 0 to 100 plVal = plVal + 1 Next Listing A. Finally. . Listing A.31 shows the same loop written in the C# programming language. test. increment) { ..32 Generic C#for Loop for(initialization. plCount < 100. note the case of the words if and else. Listing A.31 For Statement (C#) for(plCount = 0.32. however. The two things to remember are the parentheses around the test statement and the curly braces in place of the Loop statement to enclose the statements to be executed while the conditional is true. It looks like the code in Listing A..36 switch Statement (C#) switch(plVal) { case 1: psStr = "Brian".35 Select…Case…End Select Statement (Visual Basic . break. Listing A.End Select statement.36.NET) Select Case plVal Case 1 psStr = "Brian" Case 2 psStr = "Jon" Case Else psStr = "Else!" End Select In C#. Listing A.. break. case 2: psStr = "Jon".34.New Riders . this same loop would be written as shown in Listing A.Debugging ASP. default: . this same statement is called a switch and is written like the code in Listing A. one of the most useful flow-control statements is the Select. 
Listing A.Case. Switch Versus Select In the world of Visual Basic.NET made by dotneter@teamfly In the C# programming language..34 While Loop (C#) while(plVal1 < plVal2) { plVal1 = plVal1 + 1 } This syntax is very similar to that of the for loop discussed previously.35. for better or worse.New Riders .NET—or even potentially into the C# programming language. This will certainly help you when you’re trying to figure out why your new programs aren’t compiling properly.Debugging ASP.NET made by dotneter@teamfly psStr = "Default!". With this information.NET or C# code from scratch without having to battle with the new syntax changes inherent in both languages. You will also be ready to start writing Visual Basic . . It also discussed some of the very basic syntactical changes that exist between the Visual Basic and C# programming languages. you will be prepared to convert any existing Visual Basic code to Visual Basic . } Summary This appendix looked at the ways in which the Visual Basic programming language has changed from the language that you know and love. break.
https://www.scribd.com/doc/61876596/Debug-ASP
An Intro: Manipulate Data the MXNet Way with NDArray

Overview

This guide will introduce you to how data is handled with MXNet. You will learn the basics about MXNet's multi-dimensional array format, ndarray. This content was extracted and simplified from the gluon tutorials in Dive Into Deep Learning.

Prerequisites

- MXNet installed in a Python environment.
- Python 2.7.x or Python 3.x.

To get started, let's import mxnet. We'll also import ndarray from mxnet for convenience. We'll make a habit of setting a random seed so that you always get the same results that we do.

import mxnet as mx
from mxnet import nd

Let's start with a very simple 1-dimensional array with a Python list.

x = nd.array([1,2,3])
print(x)

Now a 2-dimensional array.

y = nd.array([[1,2,3,4], [1,2,3,4], [1,2,3,4]])
print(y)

Next, let's see how to create an NDArray without any values initialized. Specifically, we'll create a 2D array (also called a matrix) with 3 rows and 4 columns using the .empty function. We'll also try out .full, which takes an additional parameter for the value you want to fill in the array.

x = nd.empty((3, 4))
print(x)
x = nd.full((3,3), 7)
print(x)

empty just grabs some memory and hands us back a matrix without setting the values of any of its entries. This means that the entries can have any form of values, including very big ones! Typically, we'll want our matrices initialized, and very often we want a matrix of all zeros, so we can use the .zeros function.

x = nd.zeros((3, 10))
print(x)

Similarly, ndarray has a function to create a matrix of all ones, aptly named ones.

x = nd.ones((3, 4))
print(x)

Often, we'll want to create arrays whose values are sampled randomly. This is especially common when we intend to use the array as a parameter in a neural network. In this snippet, we initialize with values drawn from a standard normal distribution with zero mean and unit variance using random_normal.
y = nd.random_normal(0, 1, shape=(3, 4))
print(y)

Sometimes you will want to copy an array's shape but not its contents. You can do this with .zeros_like.

z = nd.zeros_like(y)
print(z)

As in NumPy, the dimensions of each NDArray are accessible via the .shape attribute.

y.shape

We can also query its .size, which is equal to the product of the components of the shape. Together with the precision of the stored values, this tells us how much memory the array occupies.

y.size

We can query the data type using .dtype.

y.dtype

float32 is the default data type. Performance can be improved with less precision, or you might want to use a different data type. You can force the data type when you create the array using a NumPy type. This requires you to import numpy first.

import numpy as np
a = nd.array([1,2,3])
b = nd.array([1,2,3], dtype=np.int32)
c = nd.array([1.2, 2.3], dtype=np.float16)
(a.dtype, b.dtype, c.dtype)

As you will come to learn in detail later, operations and memory storage will happen on specific devices that you can set. You can compute on CPU(s), GPU(s), a specific GPU, or all of the above, depending on your situation and preference. Using .context reveals the location of the variable.

y.context
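Since NDArray deliberately mirrors NumPy's ndarray API, the shape/size/dtype relationships above can be sanity-checked even without MXNet installed. Below is a rough sketch using NumPy as a stand-in (swap np for nd to run it against MXNet; one difference to keep in mind is that NumPy's default float dtype is float64, while MXNet's is float32):

```python
import numpy as np

# size is always the product of the shape components
y = np.random.normal(0, 1, size=(3, 4))
assert y.size == y.shape[0] * y.shape[1]      # 3 * 4 = 12 elements

# zeros_like copies the shape (and dtype) but not the contents
z = np.zeros_like(y)
print(z.shape)                                # (3, 4)

# forcing a data type at creation time, as in the MXNet snippet above
a = np.array([1, 2, 3], dtype=np.float32)
b = np.array([1, 2, 3], dtype=np.int32)
c = np.array([1.2, 2.3], dtype=np.float16)
print(a.dtype, b.dtype, c.dtype)              # float32 int32 float16

# lower precision means less memory per element
print(a.itemsize, c.itemsize)                 # 4 bytes vs. 2 bytes
```

The same attribute names (shape, size, dtype) work on MXNet NDArrays, which is exactly why the tutorial says "as in NumPy."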
https://mxnet.apache.org/versions/1.6/api/python/docs/tutorials/packages/ndarray/01-ndarray-intro.html
It's been a slow week due to the holidays. In the next week or two, 0.9 is being released. It's an exciting release, but in subtler ways than the previous 3: many small details, especially around the runtime and linking, have changed that make Rust faster and more flexible without necessarily being a breaking change. As always, the detailed changelog will have the nitty-gritties.

What's cooking on master?

36 pull requests were merged this week. bors was feeling unwell for a bit, due to a deadlock in a scheduler test that was fixed today and a deadlock in (incorrect usage of) LLVM.

Breaking changes

- The comm primitives are never Freeze anymore.
- The link attribute is now forbidden on crates. All hail crate_id!
- All of our C++ dependencies have been removed. This is only breaking because it changes the debugging experience; rust_begin_unwind is gone and catch throw doesn't work because we don't use C++ exceptions anymore. To set a breakpoint on task failure, break _Unwind_RaiseException.
- The underbelly of the runtime has been completely overhauled. Alex wrote an email to the list about the practical implications of this.
- std::result::collect now uses an iterator.
- ClonableIterator has been renamed to CloneableIterator.

Other Changes

- libnative has process and TCP implementations.
- Coercion of types into trait objects is now supported, which means as ~SomeTrait and as &Reader can be left out.
- I normally wouldn't mention this since it's internal to the compiler, but Patrick made a heroic effort to remove @mut from all the places.
- rustdoc can now test doc comments. See the pull request for details on how and what is tested (also in the rustdoc manual).

New contributors

- Sébastien Paolacci

Meeting

There was no meeting this week due to the holiday.

This Week in Servo

Servo is a web browser engine written in Rust and is one of the primary test cases for the Rust language. Mozilla is on an extended holiday break until January 2nd, but we still landed 2 PRs this week.
Notable additions

- Jack Moffitt re-enabled building with make to enable work on cross-targeting ARM in #1441.
- ms2ger cleaned up how we handle namespaces in DOM elements in #1438.

Announcements, etc.

- rust-openssl has been formed from the union of sfackler's rust-ssl and erickt's rustcrypto.
- Concurrency models, Rust, and Servo.
- Rust is surprisingly expressive.
- irust, a basic REPL written in Ruby.
http://cmr.github.io/blog/2013/12/30/this-week-in-rust/
Today I needed to use Fibonacci numbers to solve a problem at work. Fibonacci numbers are great fun, but I don't recall needing them in an applied problem before.

I needed to compute a series of integrals of the form

    ∫₀¹ ∫₀¹ x^a (1 − x)^b y^c (1 − y)^d p(x, y) dx dy

over the unit square for a statistical application. The function p(x, y) is a little complicated, but its specific form is not important here. If the constants a, b, c, and d are all positive, as they usually are in my application, the integrand can be extended to a continuous periodic function in the plane. Lattice rules are efficient for such integration problems, and the optimal lattice rules for two-variable integration are given by Fibonacci lattice rules.

If F_k is the kth Fibonacci number and {x} is the remainder when subtracting off the greatest integer less than x, the kth Fibonacci lattice rule approximates the integral of f by

    (1/F_k) Σ_{j=0}^{F_k − 1} f( {j/F_k}, {j F_{k−1}/F_k} )

This rule worked well. I compared the results to those using the two-variable trapezoid rule, and the lattice rule always gave more accuracy for the same number of integration points. (Normally the trapezoid rule is not very efficient. However, it can be surprisingly efficient for periodic integrands. See point #2 here. But the Fibonacci lattice rule was even more efficient.)

To implement this integration rule, I first had to write a function to compute Fibonacci numbers. This is such a common academic exercise that it is almost a cliché. I was surprised to have a practical need for such a function. My implementation was

int Fibonacci(int k)
{
    double phi = 0.5*(1.0 + sqrt(5.0));
    return int( pow(phi, k)/sqrt(5.0) + 0.5 );
}

The reason this works is that

    F_k = (φ^k − (−φ)^{−k}) / √5

where φ = (1 + √5)/2 is the golden ratio. The second term in the formula for F_k is smaller than 1, and F_k is an integer, so by rounding the first term to the nearest integer we get the exact result. Adding 0.5 before casting the result to an integer causes the result to be rounded to the nearest integer.
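To make the lattice rule concrete, here is a small Python sketch of it (the post's own code is C++; the test integrand below is my choice, a smooth periodic function whose exact integral over the unit square is 1):

```python
import math

def fibonacci(k):
    # closed-form (Binet) evaluation, as in the post: F_k is the
    # nearest integer to phi^k / sqrt(5)
    phi = 0.5 * (1.0 + math.sqrt(5.0))
    return int(phi ** k / math.sqrt(5.0) + 0.5)

def fib_lattice_integrate(f, k):
    # (1/F_k) * sum_{j=0}^{F_k - 1} f({j/F_k}, {j*F_{k-1}/F_k})
    fk, fk1 = fibonacci(k), fibonacci(k - 1)
    total = 0.0
    for j in range(fk):
        x = j / fk                    # already in [0, 1)
        y = (j * fk1 % fk) / fk       # fractional part of j*F_{k-1}/F_k
        total += f(x, y)
    return total / fk

# smooth periodic test integrand; its exact integral over [0,1]^2 is 1
def f(x, y):
    return (1 + 0.5 * math.cos(2 * math.pi * x)) * \
           (1 + 0.5 * math.cos(2 * math.pi * y))

approx = fib_lattice_integrate(f, 20)   # F_20 = 6765 lattice points
print(abs(approx - 1.0))                # error is tiny for periodic integrands
```

For this low-frequency integrand the rule is essentially exact, which illustrates why it beat the trapezoid rule in the post's application.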
If you’re interested in technical details of the numerical integration method, see Lattice Methods for Multiple Integration by I. H. Sloan and S. Joe. 12 thoughts on “Fibonacci numbers at work” One interesting thing about these classic sequences is that they grow very quickly. So if you’re programming in C++ with fixed width integers, int Fibonacci(int) might as well use a simple lookup table. For a 4 byte int, you’d need fewer than 50 entries. And int Factorial(int) would only need 13 entries! Of course you’d still use a program to calculate these entries. Never seen the formula written like ((GoldenRatio)^n – (-GoldenRatio)^-n)/Sqrt[5] as you do, but it is correct and I liked it ( but mathematicians should be conservative to preserve past work ). The most efficient way of calculating the Fibonacci numbers I have found thus far is f(n_):=Floor[(GoldenRatio)^n/Sqrt[5]+0.5]. (Using Mathematica syntax) I like your blog and have added it to Math Blogs on There really is something about seeing the old-school examples pop up in the real world…I think it speaks volumes about the way we are all, at some level, verschulert to the point that even the most avid learner starts to believe that the things of education are part of a mythical world only loosely related to reality. Interesting post. It is possible to generate a Fibonacci sequence for a 3D function ? I was thinking as something like: j/Fk(1,Fk-1,Fk-2) Thanks. Paul, I have only seen Fibonacci points defined for 2D. For 3D, you could use a quasi-random sequence such as the Sobol sequence. Numerical Recipes has a description of these sequences. Thanks, I will definitely check the Sobol sequence. I’ve seen the golden ratio used in a virtual machine’s garbage collection code before… for no discernible reason other than nerdery, I’d expect that 3/2 would have yielded no meaningfully different behavior. 
If I didn't know better I'd say you were solving a "crack" and/or "spark" option.

Fibonacci 71 by your method gives 308061521170130, and the actual value is one less than this. For less than 71, your method beats the traditional method in Python (2.7.6). Here is my Python code:

def method2(n):
    return int(math.pow(0.5*(1+math.sqrt(5.0)), n)/math.sqrt(5.0)+0.5)

Did I do something wrong?

msk2: Just the limitations of floating point accuracy.
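msk2's observation is easy to reproduce: the rounded closed form is exact only while φ^n/√5 stays well within double precision. A quick sketch (the exact crossover can vary slightly with the platform's pow implementation, but on typical IEEE-754 doubles the first failure is around n = 71, matching the comment):

```python
import math

def fib_exact(n):
    # iterative computation with Python's arbitrary-precision integers
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_float(n):
    # the rounded closed form from the post (and msk2's method2)
    phi = 0.5 * (1.0 + math.sqrt(5.0))
    return int(math.pow(phi, n) / math.sqrt(5.0) + 0.5)

# find the first n where the closed form rounds to the wrong integer
n = 1
while fib_float(n) == fib_exact(n):
    n += 1
print(n, fib_float(n) - fib_exact(n))
```

Past that point an exact method (iteration, or fast doubling with integer arithmetic) is the safe choice.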
https://www.johndcook.com/blog/2008/04/23/fibonacci-numbers-at-work/
Lake City Reporter (preceded by: Lake City Reporter and Columbia Gazette)
Tuesday, January 3, 2006 | Vol. 131, No. 294 | 50 cents

WEATHER (Inside, 2A): Partly cloudy.

BOWL SCORES
Capital One Bowl: Wisconsin 24, Auburn 10
Gator Bowl: Virginia Tech 35, Louisville 24
Ohio State 34, Notre Dame 20

GAS PAINS CONTINUE

A taxing time

LINDSAY DOWNEY / Lake City Reporter: Lake City resident Donna Larson, 50, fills up her gas tank at a Hess gas station on U.S. 90, which was charging $2.31 for regular unleaded fuel on Monday morning.

Resident: Gas prices are 'getting outrageous'

Consumers feel pinch as fuel costs stay up after holiday weekend.
By LINDSAY DOWNEY
ldowney@lakecityreporter.com

Columbia County's Second Local Option Gas Tax ended at midnight Saturday, lowering gas prices by five cents, but many area residents still are upset about the price of gasoline.

Just two days before the tax was dropped, prices at several area stations rose by as much as 13 cents.

"It's a little strange that the prices went up just before the tax ended," said Lake City resident Paul Anschuotz, 38. "I'm a little disgusted. It left me scratching my head a little bit."

Anschuotz pumped gas Monday at a Stop and Go Food Store on SW Main Street, which was charging $2.33 for regular unleaded fuel. Prices in the area Monday ranged from about $2.31 to about $2.34 per gallon.

The local option tax, which was implemented in Columbia County on Jan. 1, 2001, collected about $10 million to complete the Bascom Norris connector project.

Some fuel consumers said they think stations may have increased prices so they would not lose money when the tax ended. Several local gas stations declined to comment on the situation.

"We're not as stupid as they think we are," said 68-year-old Glen Doan of Lake City. "They're playing games that we have no control over."

Lake City resident Donna Larson, 50, filled up her tank for $2.31 per gallon Monday at a Hess station on U.S. 90. Larson said she noticed a price hike just before the local tax ended last week.

"They were supposed to go down, I thought," Larson said of the prices. "I know they (the gas stations) need to make money too, but this is getting way out of hand. Somebody's making a lot of money off of this."

Larson, who works in Gainesville, said she will try to fill up in Alachua County whenever she can because gasoline prices are cheaper there.

The lowest gasoline price in the state Monday was $2.06 per gallon for regular unleaded fuel in Pensacola, according to the Florida Gas Prices Web site.
Two Shell stations in Key West were consumer-ranked as the highest in the state at $2.62 per gallon.

Because local gasoline prices increased before the sunset of the tax Saturday, many motorists said they don't feel the relief of the five-cent decline, and they are hoping prices will go down soon.

"But I guess that's hoping in vain," Lake City resident J.J. Jones, 60, said. "They need to do something about the prices. Poor people just can't afford it; it's getting outrageous."

CALL US: (386) 752-1293
SUBSCRIBE TO THE REPORTER: Voice 755-5445, Fax 752-9400

INSIDE: Classified 7B, Comics 6B, Health 8A, Obituaries 6A, Opinion 4A, Puzzles 5B, State & Nation 5A, World

TODAY IN HEALTH (8A). COMING WEDNESDAY: Fiesta Bowl.

DAILY BRIEFING

Lottery: Monday: 3-7-2-7. Monday: 7-9-7. Sunday: 5-8-12-13-34.

MEET YOUR NEIGHBOR

Bobbie Misner, Lake City; sales associate at Haven Hospice Attic
Age: 57
Family: Husband, dog and cat.
Hobbies: Shopping
Favorite pastimes: Playing on computer
What would you most like to see improved in your town? "We need more retail stores."
Who is your hero or inspiration, and why? "My husband is my inspiration because he is always there for me. He has made me what I am today."
Page Editor: Joseph DeAngelis, 754-0424
AARON; DAUGHTERS, MARIAN &r "PFACHFS", SON-IN-LAWS, GRANDCHrI.DREN &' GREAT GRANDCHILDREN S *I 1 .J-I: l:NlMIVAK(-e illLrl' ii'V likAM 'D-Wii 1*i 4 :Jlmi-1 .rmM I*"Soft-Touch" Initial Exam ADA 001101 i $ oI.00 I *Panoramic X-Ray (ADA-00330) only ty e *Diagnosis (if needed) with this ad. Reg. $103 Coupon #0oo08 A savings of $69.00! - -_-_-_-_- __---- ---------------- -- - 1788 SW Barnett Way (Hwy. 47 South) 752-2336 Mon., Tue., Thurs. 10AM-7PM, Wed. 9AM-6PM, Fri. 8AM-5PM, Sat. 8AM-2PM I flT Ii ~ 'I~i' I, I -- '---,, ------------------ N,,,,.... HOMETOWN (9)mOAS :07A I D LAK CIY RPORER STATE & N RATION USDY ANAY3,20 Page Editor: Chris Bednar, 754-0404 rim AL lillm * r OPINION Tuesday, January 3, 2006 * * om ptqbIl I i11 f or SI) lavoft ill cOlklg football Available COMMENTARY If it hp rwtdl. Hi I )1a %rn6r a4t Il t T Vdatatbw " M I* AiAld 3< U 0* Q -CD a -qtr. ab -- - - a .~ HI G HLI G HTS IN HISTORY Today is Tuesday, Jan. 3, the third day of 2006. There are 362 days left in the year. On Jan. 3, 1777, Gen. George Washington's army routed the British in the Battle of Princeton, N.J. In 1521, Martin Luther was excommunicated from the Roman Catholic Church. In 1938, the March of Dimes campaign to fight polio was organized. lb - 2 0 0- >22:C S.. - CD a : - C5, -m U Man- Ina a 0 * -fashioned War (o am a -. 0-- 0. -EIL 9:-E S (D--- .0 0 3.1 MII CD S =I'- - 0-.p CD* .5 -U- A ve -a - a -* am = a a .5 a-a. - a a - S - aa - --* a. - . 1<> CD .--U 0 5 ~C) ;0: :2 32 --CD U S. - '4 (D qb-C O5 Cl) 0* CD. <-^ 00 Q-0 - - -' 0 rII CD MII srmP (Dt. - UA a * a - -. a. - ~.atrnalimnaround the htAsdav% - 4ib - 40 40 -. - a a * -ma =m - d - -: a. S a - a a --a.lone "-. a a a- - a - a -~ a- - a..- a. - -up ~- 0 - - -- - a ~ a. ~ - -a- -~ a -~ - 0 - S.- 0-a -- Wa. - 0 a -a.' -- -a S a Oe - 0- a - - * a o- - a a up - 4A "Copyrighted Material Syndicated Content from Commercial News Providers". w -S a .~ - W- - -. - .. -. a - 0 a -- I I -I I I II I I 4 ** . * 4 A 6 o D . .. . - . . 
Study: Children no safer in SUVs than cars

COMPUTER: sales, repairs, parts, upgrades. Custom-built computers at discount store prices. In-store repairs.

The General Store: 30-75% OFF ENTIRE STORE. 248 N. Marion Ave., Lake City, FL. 386-752-2001. Mon.-Sat. 10-5. Frank & Patricia Albury.

Prescription Drug Sign-Up Has Begun. Do you have questions about the new Medicare prescription plan? Baya Pharmacy will have insurance specialists at each location to sign up beneficiaries for the new Medicare Part D drug coverage. Call to schedule an appointment or to get more information. Coverage starts January 1, 2006. Baya East, 780 SE Baya Dr., Lake City, 755-6677. Baya West, 1465 US 90 W, Lake City, 755-2233. Jasper location, 1150 US 41 NW, Jasper, 792-3355.

LAKE CITY REPORTER LOCAL & NATION TUESDAY, JANUARY 3, 2006

COMMUNITY CALENDAR

To submit your Community Calendar item, contact S. Michael Manley at 754-0429 or by email at smanley@lakecityreporter.com.

... call Mona or Ray at (904) 269-5871.

Project Hope extends help

Olustee Festival Pageant

... Healthcare's Dr. Dawn Snipes ... Call 374-5600 ext. 8309, Corlis Duncan-Nelson at 374-5615 ext. 8280 or (800) 330-5615 ext.
8013 for Shanna or Barbie.

Newcomers to host monthly meeting

... Part D prescription drugs, from 9-11 a.m. on Thursday in the Lake City Community College Foundation Board Room, downtown Lake City. For more information contact Mike Lee, executive director of the LCCC Foundation, at 754-4392 or 754-4433.

Columbia County science fair coming in January

Lake City Community College will host the 2006 Columbia County Science Fair. The annual fair will be Jan. 18 and 19 in the Howard Gym on the LCCC campus. Approximately 250 student projects will be on display. Judging will take place from 8 a.m. ... Feb. 22. The awards ceremony will be 10 a.m., Feb. 23 in the Alfonso Levy Performing Arts Center.

Classes

Pottery classes coming to Stephen Foster

WHITE SPRINGS - Spend Monday nights working at the potter's wheel. To register, call Heyward Christie at 758-5448.

Historical museum to host volunteer class

The Lake City/Columbia County Historical Museum is forming a volunteer training class. For more information, contact Glenda Reed by email, or call the museum at 755-9096.

OBITUARIES

Mrs. Linda Michelle Kirkland

Mrs. Linda Michelle Kirkland, 23, of Lake City, died early Sunday morning as a result of injuries sustained in an automobile accident. A native of Plant City, Florida, Mrs. Kirkland had been a resident of Lake City since 1998, having moved here from Plant City. The daughter of Burley & Juanita McNutt Kirkland, Mrs. Kirkland had been employed with McDonald's. She was of the Baptist faith and spent all of her spare time with her children. Mrs. Kirkland was preceded in death by a brother, Ralph Frier. Mrs. Kirkland is survived by two daughters, Hannah Marie David and Rebecca Lynn Kirkland, both of Lake City; her parents, Burley & Juanita Kirkland of Lake City; five brothers,
Roy Frier, Mulberry, Florida, John Henry Kirkland, High Springs, Florida, and William Frier, Burley Kirkland and Torin Ray Kirkland, all of Lake City; three sisters, Juanita Frantz, Plant City, Florida, and Sue Ann Frier and Bobbie Frier, both of Lake City; and her former husband and friend, Robert Troidl of Plant City, Florida. Numerous nieces, nephews, aunts, uncles and other family members also survive.

Graveside funeral services for Mrs. Kirkland will be conducted at 11:00 A.M. Wednesday, January 4, 2006, in Memorial Cemetery with the Rev. Russell Woodard officiating. Interment will follow. The family will receive friends at the funeral home Tuesday evening from 5-7. Arrangements are under the direction of the DEES FAMILY FUNERAL HOME & CREMATION SERVICES, 768 West Duval Street, Lake City (961-9500).

Jack Berry Hollis

Jack Berry Hollis, a resident of Dublin, Georgia, died December 31, 2005, at the Meadows Regional Hospital, Vidalia, Georgia, after an extended illness. Mr. Hollis was a native of Whitfield County, Georgia, having lived in Dublin for the past forty years. He is the son of the late John Henry and Mildred Louise Huggins Hollis.

Survivors include his wife, Shirley Hollis, Dublin, Ga.; two sons, Tim Moss, Sumertown, Ga., and Mike Poumell of Kite, Ga.; three daughters, Jana Gay Hollis, Savannah, Ga., Wendy Caraway, East Dublin, Ga., and Angie Johnson, East Dublin, Ga.; four sisters, Mary Ann Coty, Colorado, Barbara Davis, East Dublin, Ga., and Shirley Crews and Betty Rogers, both of Lake City, Florida; and one brother, Cecil John Hollis, Lake City, Florida. Eight grandchildren also survive.

Direct Cremation: $995* Complete. *(Basic services of funeral director and staff, removal from place of death to funeral home, cremation, alternative container.) GATEWAY-FOREST LAWN FUNERAL HOME, Ted L. Guerry Sr., L.F.D. & Brad Wheeler, L.F.D., owners. 3596 South Hwy 441, Lake City, Florida 32025. (386) 752-1954.
Graveside funeral services will be conducted Wednesday, January 4, 2006, at 11:00 A.M. at the Corinth Cemetery, Lake City, Florida. The family will receive friends Tuesday, January 3, 2006, from 6:00-8:00 P.M. at the funeral home. GUERRY FUNERAL HOME, 2659 S.W. Main Blvd., Lake City, Florida, is in charge of arrangements. 386-752-2414.

Mrs. Esther E. North

Mrs. Esther E. North, 83, of Lake City, died early Monday morning, January 2, 2006, in the Lake City Medical Center following a brief illness. A native of Branford, Florida, Mrs. North was the daughter of the late Martin Albert & Rosa Lee Hardee Tompkins. She had been a resident of Lake City for the past 67 years. Mrs. North had been an Avon representative for the past twenty years and had received the President's Award for sales every year. Mrs. North was a member of the Pine Grove Baptist Church and was very active and very much enjoyed participating in her Sunday School class. In her spare time Mrs. North was an excellent seamstress and loved spending time with her family.

Mrs. North is survived by a daughter, Carolyn Terry (Bill) of Callahan, Florida; four sons, David Richard Williams, Greeley, Colorado, James M. "Jimmy" North, Lake City, John H. "Johnny" North (Claire), Geneva, Florida, and Alston "Dut" North, Jr., of Lake City; and a brother, Jack Tompkins of Jacksonville, Florida. Eight grandchildren and nine great-grandchildren also survive.

Funeral services for Mrs. North will be conducted at 11:00 A.M. Thursday, January 5, 2006, in the Pine Grove Baptist Church with Rev. James Roberts & Rev. Jerry Tyre officiating. Interment will follow in the Corinth Cemetery (441 North).

Great Computer Buys for the entire family! We will provide computer delivery, installation, software, technical support and more. Call today: (386) 719-6902. Sandy Lyon Services.
Visitation with the family will be held at the Pine Grove Baptist Church for one hour prior to the service on Thursday. Arrangements are under the direction of the DEES FAMILY FUNERAL HOME & CREMATION SERVICES, 768 West Duval Street, Lake City (961-9500).

Obituaries are paid advertisements. For details, call the Lake City Reporter's classified department at 752-1293.

Owned and operated by Deran Parish Dees, 768 W. Duval Street, Lake City, Florida.

Tropical Storm Zeta drifts in Atlantic

Plush Pillow Top: Queen set $499; Twin set $349; Full set $479; King (3 pc.) set $699.
Plush: Queen set $699; Twin set $499; Full set $659; King (3 pc.) set $999.
Cushion Firm: Queen set $599; Twin set $398; Full set $559; King (3 pc.) set $849.
FURNITURE SHOWPLACE, Wholesale Sleep Distributors, US 90 West (next to 84 Lumber), Lake City. 386-752-9303.

4.39% APY for 24 months ... Now that's banking!

Page Editor: S. Michael Manley, 754-0429

LAKE CITY REPORTER HEALTH TUESDAY, JANUARY 3, 2006

New drug beats tamoxifen in preventing breast cancer recurrence

Shammi Bali, M.D., Internal Medicine, Board Certified, is pleased to announce the opening of his new primary care medical practice. Each visit you will be seen by Dr.
Bali, M.D. Taking care of adult medical needs: cardiac, preventive and geriatric care; routine physicals and women's health. 334 SW Commerce Dr., Ste. 2, Lake City (inside Senior United Bldg.). Accepting Medicare, most major insurances & private pay. For appt.: 386-755-1703.

... of North Florida: General Eye Care & Surgery. 917 W. Duval Street, Lake City, FL 32055. (386) 755-7595.

Diogenes F. Duarte, M.D., P.A. Board Certified in: Pulmonary (breathing problems), Sleep Medicine. Accepting Medicare and most private insurance. 334 SW Commerce Drive, Suite 1, Lake City, FL. 386-754-1711.

NORTH FLORIDA CANCER CENTER: Bobby E. Harrison, M.D.; Purenda P. Sinha, M.D. IMRT, 3-D treatment planning, personalized cancer care. 795 SW Hwy 47, Lake City, Florida 32025. 386-758-7822.

MEDIPLEX: Now accepting new patients. Family practice, internal medicine, women's health, lab, X-ray, ultrasound, CT scan, nuclear bone scans, same-day surgery. Come visit our new website! 404 NW Hall of Fame Drive, Lake City, FL. Surgical and medical therapies. All patients are given personal and confidential attention.

Preventive & curative medicine; routine health maintenance; gynecological exams; physical exams; others. Jean F..., M.D., MPH. Now accepting new patients. Call for an appointment: 719-6843.

Comprehensive Women's Health: Specializing in obstetrics, laparoscopic surgery, women's primary health care. O.B. nurse practitioner on staff. Delivery in Lake City. FREE PREGNANCY
TEST. 755-9190. 440 SW Perimeter Glen (off SW 47), Lake City, FL 32025. AvMed, BC/BS, Cigna, Medicare, Medicaid & many more insurances accepted.

Comprehensive Pain Management of North Florida: Work-related injuries; motor vehicle accidents; comprehensive evaluation and treatment of other pain conditions. Gateway Center, 1037 Highway 90 West, Suite 140, Lake City, FL 32055. 386-719-9663; fax 386-719-9662.

We Need Your Help TODAY! Items needed: gently used furniture, clothing and household items. Call today to schedule a free pick-up. Volunteers are also needed to sort donations. HOSPICE ATTIC: Open 10 a.m.-6 p.m. Monday-Saturday, 2133 US Hwy 90 West. 386-752-0230. Your support adds life to someone's days.

PRIMARY CARE MEDICINE: You will be seen by a Board Certified M.D. each visit. Most appointments within 48 hours. We are now a provider for Av-Med and BCBS Health Options. Geriatric care, preventive care and women's health. Board Certified Internal Medicine. (386) 754-DOCS (3627). 861 NW Eadie St. (next to Children's Medical Center). Dr. Minesh Patel.

SOUTHERN MEDIPLEX: 404 NW Hall of Fame Drive, Lake City. 755-0421. Most insurance accepted.

PULMONARY CLINIC: Treats all respiratory diseases. New patients welcome. M. Choudhury, M.D. 155 NW Enterprise Way, Suite A, Lake City.

HEARING CARE FOR YOUR ENTIRE FAMILY: Comprehensive hearing evaluation for adults & children; digital hearing aids; repair on all brands; batteries and supplies; assistive listening devices. Call for information or to get an appointment: 386-758-3222 Lake City; 386-330-2904 Live Oak.

THE ORTHOPAEDIC CENTER: Edward J. Sambey, M.D. Sports medicine, non-surgical orthopaedics, occupational medicine. Worker's compensation & most insurance plans accepted. (386) 755-9215; 1-888-860-7050. 4367 NW American Lane. Same-day or next-day appointments.

WE LISTEN. WE CARE. WE HELP! Nahed Sobhy, M.D.
MERCY MEDICAL URGENT CARE, 305 East Duval Street, Lake City, FL. 386-758-2944.

Physicians Billing & Consulting Services: Complete medical billing services; certified coders on staff. Internal medicine, OB/GYN, pediatrics, urology, mental health, ENT. 20 years' experience. Evelyn Padgett, owner. 752-2396.

LAKE CITY REPORTER WORLD TUESDAY, JANUARY 3, 2006

One Convenient Location! New location. All Christmas 75% off. SW Deputy J. Davis Lane (formerly Pinemount Rd.). 752-3910. Monday-Saturday 8:00 AM-5:30 PM; closed Sunday.
In the end, everyone adjusted. Meyer's play-calling adapted to the defensive speed of the SEC game and the players in camp gradually embraced the wide-open attack. Meanwhile, a strong early season defense got stronger as the year progressed. ON THE GATORS Todd Wilson Phone: (386) 754-0428 twilson@Oakecityreportercom The coach that was scheduled to win with 50 points a game pleased the fans with low-scoring thrillers. But they were victories - a 9-3 finish overall - including Monday's 31-24 victory against the Iowa Hawkeyes. The program needed victories this season. A lot of big victories. Florida beat its Big 3 rivals - Tennessee, Georgia and Florida State and won its bowl game, the first time the four things have. been accom- plished in the same season. "No one's ever done it," Meyer said. "We're getting our, seniors a game ball and putting that on it. They can keep it forever." The Gators never trailed Iowa and put seven points on the board before the offense touched the field, thanks to Te quick hands of Jemalle Cornelius blocking an Iowa punt. Tremaine McCollum scooped up the ball and covered six yards for the touchdown. I The defense came through as well. Vernell Brown, the defensive back Meyer called "the face of Florida football," intercepted a Drew Tate pass and returned it 60 yards for a touchdown with 1:57 left before halftime to put the Gators ahead 17-0. Brown broke his leg during the first half of the Vanderbilt game on Nov. 5 and vowed to return for the Outback Bowl. Brown said he was playing with some pain at about 85 percent of his normal ability. He broke to the outside, stepped in front of the pass and turned on the jets. "I had a lot of grass in front of me," Brown said. "I knew I had to put it in the end zone." The offense had its moments Monday, as quar- terback Chris Leak turned in a solid performance, complet- ing 25-40 passes for 278 yards and two touchdowns both of which went to the Outback Bowl's MVP, Dallas Baker. 
"I couldn't have done it

GATORS continued on 3B

SPORTS

LAKE CITY REPORTER SPORTS TUESDAY, JANUARY 3, 2006
Page Editor: Mario Sarmento, 754-0420

SCOREBOARD

TELEVISION
TV Sports Today
COLLEGE FOOTBALL: 8 p.m., ABC, Orange Bowl, Penn St. vs. Florida St., at Miami
MEN'S COLLEGE BASKETBALL: 10:30 p.m., FSN, Oklahoma St. at Pepperdine
NHL HOCKEY: 7:30 p.m., OLN, Minnesota at Detroit

FOOTBALL
NFL standings

AMERICAN CONFERENCE
East: y-New England 10-6 (.625, 379 PF); Miami 9-7 (.563, 318); Buffalo 5-11 (.313, 271); N.Y. Jets 4-12 (.250, 240)
South: x-Indianapolis 14-2 (.875, 439); z-Jacksonville 12-4 (.750, 361); Tennessee 4-12 (.250, 299); Houston 2-14 (.125, 260)
North: y-Cincinnati 11-5 (.688, 421); z-Pittsburgh 11-5 (.688, 389); Baltimore 6-10 (.375, 265); Cleveland 6-10 (.375, 232)
West: y-Denver 13-3 (.813, 395); Kansas City 10-6 (.625, 403); San Diego 9-7 (.563, 418); Oakland 4-12 (.250, 290)

NATIONAL CONFERENCE
East: y-N.Y. Giants 11-5 (.688, 422 PF, 314 PA); z-Washington 10-6 (.625, 359, 293); Dallas 9-7 (.563, 325, 308); Philadelphia 6-10 (.375, 310, 388)
South: y-Tampa Bay 11-5 (.688, 300, 274); z-Carolina 11-5 (.688, 391, 259); Atlanta 8-8 (.500, 351, 341); New Orleans 3-13 (.188, 235, 398)
North: y-Chicago 11-5 (.688, 260, 202); Minnesota 9-7 (.563, 306, 344); Detroit 5-11 (.313, 254, 345); Green Bay 4-12 (.250, 298, 344)
West: x-Seattle 13-3 (.813, 452, 271); St. Louis 6-10 (.375, 363, 429); Arizona 5-11 (.313, 311, 387); San Francisco 4-12 (.250, 239, 428)

x-clinched conference; y-clinched division; z-clinched wild card

Saturday: Denver 23, San Diego 7; N.Y.
Giants 30, Oakland 21.

Sunday's Games: N.Y. Jets 30, Buffalo 26; Carolina 44, Atlanta 11; Pittsburgh 35, Detroit 21; Indianapolis 17, Arizona 13; Green Bay 23, Seattle 17; Miami 28, New England 26; Kansas City 37, Cincinnati 3; Cleveland 20, Baltimore 16; Tampa Bay 27, New Orleans 13; San Francisco 20, Houston 17, OT; Jacksonville 40, Tennessee 13; Minnesota 34, Chicago 10; Washington 31, Philadelphia 20; St. Louis 20, Dallas 10. End Regular Season.

NFL Playoffs
Wild-card Playoffs
Saturday, Jan. 7: Washington at Tampa Bay, 4:30 p.m. (ABC); Jacksonville at New England, 8 p.m. (ABC)
Sunday, Jan. 8: Carolina at New York Giants, 1 p.m. (FOX); Pittsburgh at Cincinnati, 4:30 p.m. (CBS)

Hawaii Bowl: Nevada 49, Central Florida 48, OT
Motor City Bowl: Memphis 38, Akron 31
Champs Sports Bowl: Clemson 19, Colorado 10
Insight Bowl: Arizona State 45, Rutgers 40
MPC Computers Bowl: Boston College 27, Boise State 21
Alamo Bowl: Nebraska 32, Michigan 28
Emerald Bowl: Utah 38, Georgia Tech 10
Holiday Bowl: Oklahoma 17, Oregon 14
Music City Bowl: Virginia 34, Minnesota 31
Sun Bowl: UCLA 50, Northwestern 38
Independence Bowl: Missouri 38, South Carolina 31
Peach Bowl: LSU 40, Miami 3
Saturday - Meineke Bowl: North Carolina State 14, South Florida 0; Liberty Bowl: Tulsa 31, Fresno State 24; Houston Bowl: TCU 27, Iowa State 24
Monday: Georgia vs. West Virginia (n)
Today - Orange Bowl, at Miami: Penn State (10-1) vs. Florida State (8-4), 8 p.m. (ABC)
Wednesday - Rose Bowl, at Pasadena, Calif.: Texas (12-0) vs. Southern Cal (12-0), 8 p.m. (ABC)

Outback Bowl
No. 16 Florida 31, No. 25 Iowa 24
Iowa 0 7 0 17 - 24
Florida 7 17 7 0 - 31

First Quarter
Fla-T.McCollum 6 blocked punt return (Hetland kick), 13:25.
Second Quarter
Fla-FG Hetland 21, 9:31.
Fla-V.Brown 60 interception return (Hetland kick), 1:57.
Iowa-Solomon 20 pass from Tate (Schlicher kick), 1:10.
Fla-D.Baker 24 pass from Leak (Hetland kick), :01.
Third Quarter
Fla-D.Baker 38 pass from Leak (Hetland kick), 5:23.
Fourth Quarter
Iowa-Hinkel 4 pass from Tate (Schlicher kick), 13:51.
Iowa-Hinkel 14 pass from Tate (Schlicher kick), 6:59.
Iowa-FG Schlicher 45, 1:24.
A-65,881.

Team statistics (Iowa / Florida): First downs 23 / 26; Rushes-yards 20-64 / 42-169; Passing 346 / 278; Comp-Att-Int 32-55-1 / 25-40-0; Return yards 2 / 92; Punts-Avg. 6-30.7 / 3-46.3; Fumbles-Lost 0-0 / 1-1; Penalties-Yards 8-60 / 5-30; Time of Poss. 24:35 / 35:25.

INDIVIDUAL STATISTICS
RUSHING-Iowa, Young 13-34, Tate 3-24, Sims 4-6. Florida, Moore 13-88, Wynn 8-34, Leak 13-31, M.Manson 4-13, Latsko 1-5, C.Jackson 1-2, Team 2-(minus 4).
PASSING-Iowa, Tate 32-55-1-346. Florida, Leak 25-40-0-278.
RECEIVING-Iowa, Hinkel 9-87, Solomon 7-96, Chandler 7-89, Young 3-12, Grigsby 2-22, Sims 1-12, Busch 1-10, Davis 1-10, Davis Jr. 1-8. Florida, D.Baker 10-147, C.Jackson 7-76, Casey 2-27, Tookes Jr. 2-14, Moore 2-4, Cornelius 1-8, Wynn 1-2.

BASKETBALL
NBA standings

EASTERN CONFERENCE
Atlantic Division: New Jersey 17-12 (.586); Philadelphia 15-15 (.500, 2 1/2 GB); Boston 12-17 (.414, 5); Toronto 8-22 (.267, 9 1/2); New York 7-21 (.250, 9 1/2)
Southeast Division: Miami 19-13 (.594); Orlando 12-15 (.444, 4 1/2); Washington 12-16 (.429, 5); Charlotte 10-20 (.333, 8); Atlanta 7-21 (.250, 10)
Central Division: Detroit 24-4 (.857); Cleveland 18-10 (.643, 6); Milwaukee 16-11 (.593, 7 1/2); Indiana 16-12 (.571, 8); Chicago 12-17 (.414, 12 1/2)

WESTERN CONFERENCE
Southwest Division: San Antonio 24-7 (.774); Dallas 22-8 (.733, 1 1/2); Memphis 19-10 (.655, 4); New Orleans 12-17 (.414, 11); Houston 10-18 (.357, 12 1/2)
Northwest Division: Minnesota 14-14 (.500); Utah 15-16 (.484, 1/2); Denver 14-17 (.452, 1 1/2); Seattle 13-17 (.433, 2); Portland 10-20 (.333, 5)
Pacific Division: Phoenix 19-10 (.655); L.A. Clippers 17-12 (.586, 2); Golden State 17-14 (.548, 3); L.A. Lakers 15-15 (.500, 4 1/2); Sacramento 12-17 (.414, 7)

Sunday's Games: Miami 97, Minnesota 70; L.A. Clippers 100, Portland 94; Utah 98, L.A. Lakers 94
Monday's Games (late games not included): Indiana 115, Seattle 96; Phoenix at New York (n); Charlotte vs.
New Orleans (n); Milwaukee at Chicago (n); Boston at Denver (n)

Today's Games: Toronto at Atlanta, 7 p.m.; Houston at Washington, 7 p.m.; Orlando at Detroit, 7:30 p.m.; Golden State at Memphis, 8 p.m.; Portland at Dallas, 8:30 p.m.; L.A. Lakers at Utah, 9 p.m.; Philadelphia at Sacramento, 10 p.m.

Wednesday's Games: Orlando at Toronto, 7 p.m.; Charlotte at Boston, 7:30 p.m.; Miami vs. New Orleans at Oklahoma City, 8 p.m.; Cleveland at Milwaukee, 8 p.m.; Dallas at Minnesota, 8 p.m.; Seattle at Chicago, 8:30 p.m.; Portland at San Antonio, 8:30 p.m.; Indiana at Denver, 9 p.m.; Philadelphia at Phoenix, 9 p.m.

College scores, Saturday
EAST: Ohio St. 78, LSU 76; S. Illinois 64, Drake 50
SOUTHWEST: Houston 78, McNeese St. 67; Oklahoma 68, Alabama 56; Oklahoma St. 84, Arkansas St. 64; Texas A&M 73, Northwestern St. 61
FAR WEST: Air Force 77, IPFW 42; Arizona 96, Washington 95, 2OT; California 68, UCLA 61; Colorado 83, Dartmouth 65; Gonzaga 102, Saint Joseph's 94

Top 25 schedule
Monday's Games: No. 1 Duke 84, Bucknell 50; No. 15 Texas 69, No. 4 Memphis 58; No. 10 Washington vs. Cornell (n)
Today's Games: No. 2 Connecticut at Marquette, 8 p.m.; No. 5 Florida vs. Morgan State, 6 p.m.; No. 11 Boston College vs. Massachusetts, 7 p.m.; No. 12 Oklahoma at SMU, 8 p.m.; No. 13 N.C. State vs. North Carolina-Greensboro, 7 p.m.; No. 16 Indiana vs. Michigan, 6 p.m.; No. 19 Kentucky vs. UCF, 7 p.m.; No. 23 Wake Forest vs. East Carolina, 9 p.m.; No. 25 North Carolina vs. Davidson, 7 p.m.
Wednesday-Friday's Games: No games scheduled
Saturday's Games: No. 2 Connecticut vs. LSU at the Hartford Civic Center, 4 p.m.; No. 19 Kentucky at Kansas, noon; No. 20 George Washington at Marshall, 7 p.m.; No. 21 Arizona vs. Southern California, 2 p.m.

AP Top 25
The top 25 teams in The Associated Press' men's college basketball poll, with first-place votes in parentheses, records through Jan.
1, total points based on 25 points for a first-place vote through one point for a 25th-place vote, and last week's ranking:

(Record, Pts, Pvs)
1. Duke (63) 12-0, 1,789, 1
2. Connecticut (7) 11-0, 1,727, 2
3. Villanova (2) 9-0, 1,664, 3
4. Memphis 11-1, 1,554, 4
5. Florida 12-0, 1,475, 5
6. Illinois 14-0, 1,449, 6
7. Michigan St. 12-2, 1,305, 9
8. Gonzaga 10-3, 1,235, 8
9. Louisville 11-1, 1,144, 10
10. Washington 11-1, 1,057, 7
11. Boston College 10-2, 999, 13
12. Oklahoma 8-2, 826, 14
13. N.C. State 11-1, 812, 19
14. Maryland 10-2, 793, 16
15. Texas 10-2, 778, 15
16. Indiana 8-2, 679, 17
17. UCLA 11-2, 615, 11
18. Ohio St. 10-0, 585, 21
19. Kentucky 9-3, 572, 18
20. George Washington 8-1, 512, 12
21. Arizona 9-3, 367, NR
22. Pittsburgh 11-0, 360, NR
23. Wake Forest 10-2, 298, 22
24. West Virginia 8-3, 195, 24
25. North Carolina 7-2, 171, 23

Others receiving votes: Nevada 114, Syracuse 52, Wisconsin 48, Iowa 47, Cincinnati 38, Tennessee 36, Oklahoma St. 14, N. Iowa 13, Vanderbilt 13, Bucknell 11, Michigan 9, Wichita St. 7, Air Force 6, Buffalo 6, Colorado 6, Arkansas 4, Texas A&M 4, Indiana St. 3, Iowa St. 3, California 2, Xavier 2, UAB 1.
Urban Meyer becomes the first Gators head coach to win his first bowl game at Florida since Charley Pell led UF to victory against Maryland (35-20) in the Tangerine Bowl on Dec. 20, 1980. Florida head coaches are now 4-5 in their first bowl game with the Gators. Florida finished the sea- son with a 9-3 overall record and ended the year with back-to-back victories. The Gators won nine games in a season for the first time since 2001 and closed out a year with back-to-back wins for the first time since 1997. Florida .head coach Urban Meyer led the Gators to nine victories this season to match Ray Graves (1960) and Steve Spurrier (1990) for the most wins by a first-year Florida head coach. Four of the Gators' nine wins came against ranked opponents (No. 4 Tennessee, 16-7; No. 4 Georgia, 14-10; No. 23.Florida State, 34-7; No. 25 Iowa, 31-24.) Before Urban Meyer, no first-year UF head coach had ever defeated more than three ranked oppo- nents in his initial campaign. Jemalle Cornelius, blocked punt in the first quar- ter of the game marked UF's fifth blocked kick this season. It was also the sixth blocked punt for a touchdown in school history and the sec- ond this year. Prior to Monday's Outback Bowl, UF never had blocked a punt for a touchdown in a bowl game. Tremaine McCollum,s recovery of Jemalle Cornelius' blocked punt with 13:25 remaining in t he first quarter marked the quickest the Gators have scored in any game this season. Vernell Brown's 60-yard interception return for a touchdown with 1:57 remain- ing before the half was the third interception of his career and his first career touchdown. It was the first interception returned for a touchdown by a Gator since Keiwan Ratliff did against Vanderbilt in a 52-yard romp on Nov. 8, 2003. Brown's touchdown return Monday was the first by a Gator in a bowl game since Lawrence Wright's 52-yard score against West Virginia on Jan. 1, 1994, in the Sugar Bowl. 
Dallas Baker's 38-yard touchdown reception with 5:23 left in the third quarter marked his eighth reception of the game, a career high. His two touchdown receptions matched the best effort of his career (vs. Arkansas, 2004). Baker eclipsed 100 yards receiving in a game for the third time in his career, all of which have come this season.

Gators center Mike Degory made the 50th start of his career to match the UF career record (Larry Kennedy, CB, 1991-94). Degory started every game of his Gators career and finished by equaling the fourth-longest streak in SEC history.

Compiled by Todd Wilson from UF and Outback Bowl sources.

OUTBACK: Meyer's first year a success
Continued From Page 1B.

Notebook: Fun facts and figures from Florida's season

LAKE CITY REPORTER OUTBACK BOWL TUESDAY, JANUARY 3, 2006

ABOVE: Florida quarterback Chris Leak scrambles out of the pocket during the Gators' 31-24 win against Iowa. Leak threw for 278 yards and two touchdowns on the day.

Florida coach Urban Meyer tries to get his point across with a referee about a questionable call at the Outback Bowl. Meyer matched Ray Graves and Steve Spurrier for most victories in his first season at Florida with nine.

Florida senior center Mike Degory manhandles Iowa linebacker Chad Greenway at the Outback Bowl. Degory anchored an offense that gained 447 yards of total offense and built a 31-7 fourth quarter lead against the Hawkeyes.
A day at the Outback
Photos by JENNIFER CHASTEEN/Lake City Reporter

LEFT: Florida Gators Kestahn Moore (front left) and Nyan Boateng celebrate with Albert and other fans in the north end zone after their 31-24 victory against the Iowa Hawkeyes in the 2006 Outback Bowl at Raymond James Stadium in Tampa Monday.

BELOW RIGHT: Florida wide receiver Dallas Baker leaps for a pass over an Iowa defender. Baker caught 10 passes for 147 yards and two touchdowns in the Gators' victory.

No. 1 Blue Devils rout Bucknell

LAKE CITY REPORTER ADVICE & COMICS TUESDAY, JANUARY 3, 2006
a non-refundable rate. $ 1 25 4 lines 6 days One Item per ad R5te applies to private Individuals selling Each additional peRasonlomerchandlise totalnli SIOO or less. line $10i -- Each Item nmust .nc.lde a pr... This Is S. ^ y line $1.05 non-rofundablo Trase. S2 24 lines 6 days Ono item per ad 2 0 Ratoe applieo to private Individuals soiling Each additional P rota p nchI n a IV prco a. i a adda5 peronrl merchandise tottalln 100e or lOeB . i0nes135 non-refundablo rate. $2 2 4 lines 6 days One Item per ad 2 2 Rate applies to private Individuals selling Each additional 00.01 na eehsde tsifln $1a0orlSe. Each additional $1.45 Each Iteorn must Include a prc. This Is line $45 non-rofundable rate. S2850 S Each additional , i n '. line $1.55 4 lines 6 days One ltem por ad Rate applies to private individuals selling personal mIarchndie totliH $100a lss. Each Item must Include a prico. This Is a non-refundablo rate. LAKE CITY REPORTER CLASSIFIED TUESDAY, JANUARY 3, 2006 Lake City Reporter Take ADvantage of the Reporter C lassitieds! 755-5440 ADvantage I I 4 line minimumS2.55 per line Add an additional $1.00 per ad for each Wednesday insertion. Number of Insertions Per line Rate 3 ................................. a1.65 4-6 .................. .......... S1.50 7 -13 . .. .. .. . . .. ... .. 4 5 14-23 ...................... . 1.20 24 or more ..................... 990 Add an additional $1.00 per ad for each Wednesday insertion. Limited to service type advertising only. 4 lines, one month ................... 170.00 $9.50 each additional line Add an additional $1.00 per ad for each Wednesday insertion. $61 0 120 1 -- >, and Lie w w ?:-- *.-< i'Mr'h .001 Computer Services Services mnsg. TIME TO MULCH Make your flower beds look like new. Delivered & spread or just delivery. 386-935-6595 CLEAN FREAKS Mobile Auto Detailing at your home Sor aw email. 010 Announcements Is Stress Ruining Your Life? Read DIANETICS by Ron L. Hubbard Call (813) 872-0722 or send $7.99 to Dianetics, 3102 N. 
Habana Ave., Tampa FL 33607 030 Personals 05509167 Lonely? Young at Heart? Over 65? Looking for a great companion? If so, we would be great together. 386-961-8453 060 Services ARRESTED NEED A LAWYER? All Criminal Defense. Felonies eMisdemCtstablished 1977. 100 Job 1o0 Opportunities !! LOOK! LOOK !! You Too Can Sell Real Estate! BIG BUCKS! Call 386-466-1104 DUMP TRUCK DRIVER. Experienced w/mrin. 2 yrs clean MVR & Class A CDL. Starting Pay $10.50/ph Drug Free Workplace 386-623-2853 a,.,eiiing. 100 Job: SDave Kimler 180 E. Duval St. Lake City, FL 32055 email: dkimler@falakecityreporter.com Accounting Clerk Experience in G/L, A/R, A/P & P/R Salary Open. Fax resume to: 386-397-1130 Our awaseU cruise SaM Evil'r Rates Aviastns - 0i Ca-[rnival NABORS OFFSHORE CORPORATIOh DOUBLE YOUR INVESTMENT IN ONLY 1 YEAR! Builders Lots Available in the Fastest Growing Areas in Florida a WHLSL PRICING too Job SOpportunities, comp.ii'. b.enerf s -.- ill. REPORTER Classifieds In Print and On Line CLASSIFIED ROCYMUNT*24 Log Home Packages To Be Offered At Public Auction. Rogers Realty & Auction Co. Saturday Jan. 14th FL License #AU2922 11:00 A.M. 336.789.2926 or Orlando, FL , (Port of Sanford) r Llr Urll 1 r. For More Information! 1.888.562.2246 Or Log Onto: SAdvertisement Do you need a loan? If you are searching for the best home equity loan, ask these three questions: 1) Will you guarantee the lowest rate in writing? We promise the lowest rate in writing. We won't merely match your lowest rate If we can't beat it-even after you've gone through the entire loan process within us- we will pay vou $250 just for applying with us. 2) Will my interest rate increase, if I have a low credit score? To other loan companies, you are just a faceless credit score. The lower your score, the higher your interest rates. Hoe aeHm oasi lcne b h At Honey Mae Home Loans, we don't let a computer tell us what to do. We can aive you a loan when others say no even if you have a low credit score. 
3) What are the chances my loan will beapproved? We approve 6 out of 7 applications. And some of these people have credit scores below 530. We can give you a quote over the phone, in complete privacy, without obligation-no matter your financial situation. 1-800-700-1242 ext. 258 Call J.G. Wentworth's Annuity Purchase Program J.G.WENTWORTH. 866-FUND-549. ANNUITY PURCHASE PROGRAM Honey Mae Home ~~~ LoansII is license by Me I londa Liepartment oi t-in1ncia t -emces. EE 100 Job Opportunities (055(9132 Immediate Job Openings. Some positions require experience, some available for training. We offer competitive compensation plan. Excellent fringe benefit package, which includes paid vacation, holidays, group health insurance, and a 401K Plan. Some hand tools required. Please apply in person at Hunter Marine on Highway 441 in Alachua, Fl.. for the following jobs: Autobody Technician Spray Painter-Night Shift Best Western Inn is looking for FT & PT Front Desk Clerk. Must be able to work Weekends, Nights & Holidays. Apply at 1-75 & US 90;W BLUE JEAN JOB $ Money $ Seeking sharp go getters, Able to TRAVEL USA. Demo chemical products. Good people skills & enjoy working in a Rock in Roll evir. Call Kelly 1-800-201-3293. 9-6. Must start immed. FLAT BED DRIVERS Atlantic Truck Lines $4,000.00 Sign on Bonus Class A, in state & home every night. $600-$750/wk. Yearly $1,000 safety bonus. 3 yrs. exp. Paid vacation, health/dental. Call 1-800-577-4723 Monday-Friday LAKE CITY REPORTER CLASSIFIED TUESDAY, JANUARY 3, 2006 100 Job 100 Opportunities Bookkeeper Office Manager Local manufacturing company seeks full-time bookkeeper/office manager. Computer skills necessary. Accounting knowledge preferred. Insurance & 401 K benefits. Send resume & salary requirements to: Send reply to Box 05005, C/O The Lake City Reporter. P.O. Box 1709, Lake City, FL, 32056 Cashier Needed. IOPM 6 AM Texaco in Ellisville, 1-75 & Hwy 44-1 S. 
Apply in person ONLY Drug Free Workplace COUNTRY INN AND SUITES Housekeepers! Applicants who are mature, serious & seeking long term employment & have cleaning experience. Apply at Country Inn and Suites, Florida Gateway Dr. 1-75 & Hwy 90. Excellent working environment, competitive pay, benefits incl. vacation & holiday. CYPRESS TRUCK LINES, INC Driver Designed Dispatch. FLA ONLY/Flat Bed Students welcome. Home Every WeekEnd Most Nights (800)545-1351 045012014 TECHNICIANS/MECHAINICS NEEDED Seeking technicians/mechanics 3-5 years exp. repairing Heavy Equip. Must have own hand tools. Apply in person at Ring Power, 390 SW Ring Ct., Lake City, FL 32025 or online at. EOE DENTAL ASSISTANT Highly Experienced Dental Assistant needed for busy quality general practice. $17.00 hr plus paid insurance, vacation & bonuses. Fax resume to: 386-752-7681 or call 386-752-8531. Experienced Tandem Dump Truck Driver. Asphalt, Milling Exp. Class B CDL & clean driving -ecord. PDOE 386-5911-0783 FORKLIFT TECH Manufacturing Firm has full time position for positive enthusiastic tech. Must be experienced. Excellent pay & benefits. Call 904-275-2833 M-F Fronl Desk Emploiee needed to[.- bu,,, PKc airic ,,fllie Medical experience helpful. Call 386-758-0003 FT Food Service Workers for . co, i, hi t.! :1iir"1 Bencfh]. .flier 90 days. 4-1 K. St,-iLl. B,-,,i I , .acaji.in NiJ,. :,rino .i rcol'id Food .S !c E pieliinc' I !dpful Apply i n pel .on i'' CC L. e C1 'I CI. *-. .5-3379 *ext 2251 EOE/M/F/D/V. GRADER OPERA TOR n:-.cd-J n.nnidijiel_, T. ,p pa. toir Inith production operator. Complete benefits package available. Apply in Ipel I, i[ i \ .l -. C .:- ,AIr', li,[ ri C all '. i' -l '"- l,:a l 'l i ll.:.. HLINGRY HO( lies r; hiring dcli, eir. dri- Musthave car I. t iij .nce- & 2 r.., lii. iIn cI .p. Fle' -.:he.hiic F.T a P.T a.1l.. ..CASH PAID DAILY!, Earn $8. $15./ hour. Apply in person at 857 S\\ M.un BI id." Kaaim Iran.,mission needs ep Auto Tech, or R&R Mechanic with experience. 
Must have own tools. Apply in person 125 NE Jonesway Lake City, 32055 or 386-758-8436 Legal Secretary Phone & Computer skills required. Send reply to Box 05007, C/O The Lake City Reporte;r, OTR DRll ERS NEEDED l-leau1 Haul.CI-,. A CDL. 2 week (urnarounid good pds, Call Southern Specialized,LLC 386-752-9754 * Sheet Intal roolers needed No cirmniln ba,. grourind. C all .(,- i-. -17 i Short Term & Long Term Temp In Perm Mar d.iltjitra, pos .h:,rI :' Jil'ble!! S -all \VaI-Salat ersol el 386-755-1991 or 386-755-7911 Small dealership homking lor pars person ,:irJ ,_:'ii[,;,de iile- I'oI lIe,' lerl il'l t ilp, -il' n il e i. ',I c r. (C all .I r pp. liaim.:I r i _.i)l ?55 .--5 - TE ANIS! 10110 ign on bonus ea. App,, i I[IIII'..l. 2. i O( TR, N,. DUI D\\ I N...rII.-in FL .,re:, Excellent Equipment, E'c:ellenic Lanes Great Benefits Home \ cc .cil.J. '- i 2i 1* -lil a. '* L lb.p h. :,cL n'l WELDERS/LABORERS N MACHINE SHOP EXP. ppl, t I', 'por Grinzzly Mfg. I 1 N'E Com[ez Terrace Lake City, FL (Across from airport) YOUNG ENERGETIC Person 'for M.iMaiifactured FHine ..ales. Business degree a plu., \\ ll train right person. Call 386-364-1340. Ask for Mr. Selph or Mr. Corbet 100 Job 100 1012,12 (55(19271 Baya Pointe Nursing Center Has the following Open Positions: AFT LPN/ RN 3:00 pm-I 1:00 pm ACNA 3:00 pm- 11:00 pmi PT Weekend LPN/RN 7:00 am-3:00 pmi 2AFront Office Receptionist Mon-Fri 10:00am-6:00 pm Sat-Sun 9:00am -5:00(pm Apply in Person to: 587 SE Ermine Ave Lake City, Fl 32025 CNA/ MA Needed for LK City Medical Office. Experienced preferred. Fax resume to: 386-754-1712. Experienced Medical Assistant Needed for fast paced Doctors Office. Fax resume to: 386-758-5987 Medical Office Help Wanted 6 months exp required Please bring resume to: 155 NW Enterprise Way, Lake City, Fl 32055 or contact us at (386) 755-9457 RN NEEDED, Part-Time, 3-lip & llp-7a. Please apply at The Health Center of Lake City, 560 SW McFarlane Avenue, Lake City. 
Equal Opportunity Employer/ Drug Free Work Place/Americans with Disabilities Act. 170 Business Opportunities All Cash Candy Route Do you earn $800/day? 30 Machines, Free Candy : All for $9,995. (888) 629-9968 B02000033. CALL US: We will not be undersold! Vending Route: Local, All brands. Soda, Juice, Water, Pastries, Snacks, Candies. Great Equipment, & Locations. Financing Available with $7,500 down. (877)843-8726. B02002-037.! ".Sh67.1 '3 Livestock & JJ Supplies BULLS FOR SALE 386-755-3500 403 Auctions 24 Log Home Packages to be Offered at Public Auction, Saturday, January 14, 11:00 AM, Orlando, FL (Port ofSanford), Rogers Realty & ALc lion, Li .e ic WALi ... Free Brochure, Buttal:, Log Homes, (888)562-2246"oir. Bankruptcy Auction Sells regardless of price! Luxury cars, * planes, more. January 19, 11AM, 10%BP, Call for details! I 'S.'-4.uc,077 Tranzon Dril-ers, Walt Driggei1, #ABI123. . 408 Furniture NOW OPEN!! CRAZY JOHNS Furn &. STEEL BUILDINGS, Factory Clearance. New, never erected 30x40, 40x60, 50x100, and 60x100. Will sell for balance Call Frank (800)803-7982 463 Building 463 Materials Metal Roofing Save $$$ Buy Direct from Manufacturer. 20 colors in stock with all Accessories. Quick turn around! Delivery Available Toll Free (888) 393-0335 630 Mobile Homes 3U 0for Rent 16X80 3/2 & 14X70 3/2 in Clean Quiet Country Park. No Pets $550/mo.or $500/mo, plus Deposit & Ref. Req. Call 386-758-2280 . Call 386-961-0017. S Will Finance. Call Buddy 386-364-1340 C \SH DEALS. We Love Em! We 'II ll .e ,:,,i tHe very best pricing in North Florida on new or used manufactured homes! 800-769-0952 iF Yi_1 OU\\ N L -ND OR H \\E A LARGE DOWN PAYMENT. I Nl -Y BE WILLING TO OWNER FIN ANCE A NEW M NLiFFACTURREl) 650 Mobile Home 0 & Clean 1560 sf 3/2 1993 DW, private wooded acre, all lino, deck, new metal roof. $63,900. Cash Only Call 386-961-9181 LAND HOME Packages, while they last! Call Ron Now!' 386-397-4960 710 Unfurnished Apt. 70 For Rent 1 & 2 Bedroom Apartments All very nice. 
Convenient location. Call 386-755-2423 2/1 Fresh Paint & New Carpet Starting at $600/mth. Plus security. Pets allowed w/fee. Call Lea.386-752-9626 710 FUnfurnished Apt. 710 For Rent 2BR/1BA w/ Garage $700 + Sec. Pets w/lee. 730l 4BR/2BA on 2 acres w/garage & utility room, $1000/mth, Dep & Ref. required. 397-3500 or 755-2235 or 752-9144 BRAND NEW 4 & 3 Bedroom Homes with 2 Car Attached Garage on Huge Lots. $995 mo, $995 sec. Call (904)317-4511 Mayfair Subdivision 3BR/2BA Brick Home Quiet Neighborhood Call 386-961-9959 7c0 Business & 15Ave. 8201 Farms & Acreage 04501315 REDUSED Horse Farm: Beautiful rolling 46 .ic i h *-c.,etered trees. Lots of Ro.j. Fri... with Board Fence. Large barn, Corral,Additional Facilites, Paddocks, Pasutres, Hay Fields plus Two Mobile Homes. Call Jiarc S Usher Lic. Real Estate Broker 386-755-3500 or386-365-1352 5 .('IIES .. ithii B.:.110om Columbia City Area 5 ac.wooded homesite $89,900 owner finance 352-472-3660 820 Farms & SAcreage 840f Out of Town 840 Property ASHEVILLE, NC AREA Peaceful gated community. Incredible riverfront and mountain view homesites. 1- EALESTATE.COM OR. COM. MURPHY, NORTH CAROLINA AAH COOL SUMMERS MILD WINTERS Affordable Homes & Mountain Cabins Land CALL FOR FREE BROCHURE (877)837-2288 EXIT REALTY NlOULNTAIN VIEW PROPERTIES NC MOUNTAINS Log cabin $89,900. Easy to finish cabin on secluded site. Million $$$ Views Available on 1-7 acre parcels $29,900-579,900. Free Info Available! (828)256-1004. NC MOUNTAINS 10.51 acres on mountain top in gated community. view, trees,' ,atertall & iarge pubic lake nearby, pai ed pnt ate access. $119,500 owner (866)789-8535. 850 Waterfront 8O5 Properly Coastal Southeast Georgia Large ,v. wooded v.jacr access. marsh i lew, lake front. and golf oriented homesites from the mid $SO's Li.e oaks. pool. tennis, golf. (877)266-7376 w.'. v. .cooper[poin[.corr North Carolina Gated Lakefront Community 1.5 acres plus, 90 miles of shoreline. Never before offered '. 
ih 21.1 pie-de elopmeni discounts-. 9011 financing. Call (800)709-5253 Riverfront! Lovely Withlacoochee river home just west of Live Oak. Pristine old Florida setting. 2BR/2BA, built 2004 on Cul-de-sac backing up to preserve. Secluded! S$200.900. Call Debbie Zeller at Coldwell Banker M.M. Parrish at -352-538-2857 or 386-454-3442 850 Waterfront 0 Property, private slips (limited). Don't miss out. Call (866)292-5769 Tennessee Waterfront Land Sale! Direct Waterfront parcels from only $9,900! Cabin Package from $64,900! 4.5 acres suitable for 4 homes and docks only $99,900! All properties are new to the market! Call toll-free (866)7.70-5263 ext. 8 940 Trucks 1994 Dodge Dakota Sport Runs good, $900. 6 Cyclinder Make good work truck Call 386-752-1682 FOR SALE: 1988 GMC Custom Low Rider. New stereo & speakers. $3,500 OBO. Call 386-755-2476 i. l.1-3512 951 Recreational S Vehicles 2005 ELITE Travel Trailer, 33ft, Super slide out. Washer/Dryer, CA/H. Asking $17,900. Trailer is local. (228)343-2701 cell. 952 Vans & Sport Util. Vehicles 1972 JEEP CJ5 Hard Top, Restored, Good Condition. $5,700 Cash. Call 386-362-4987! 1999 Chevy Z71 4x4 Sportside '8,995 OBO Reg. Cab Gall 386-755-3179 SPACE AVAILABLE NOW! q, Classified Department: 755-5440 Contact Us | Permissions | Preferences | Technical Aspects | Statistics | Internal | Privacy Policy © 2004 - 2011 University of Florida George A. Smathers Libraries.All rights reserved. Acceptable Use, Copyright, and Disclaimer Statement Powered by SobekCM
http://ufdc.ufl.edu/UF00028308/00256
Compile TypeScript Project

In this chapter, you will learn about compiling a TypeScript project and about the config file tsconfig.json.

As you know, individual TypeScript files can be compiled using the tsc <file name>.ts command. It would be tedious to compile multiple .ts files one by one in a large project, so TypeScript provides an option to compile all or certain .ts files of a project at once.

tsconfig.json

TypeScript supports compiling a whole project at once by including a tsconfig.json file in the root directory. The tsconfig.json file is a simple file in JSON format in which we can specify various options that tell the compiler how to compile the current project.

Consider a simple project which includes two module files, one namespace file, tsconfig.json and an HTML file.

If tsconfig.json contains only empty curly brackets { } and no options, the tsc command will use the default values for the compiler options and compile all the .ts files in the root directory and its sub-directories, generating a .js file for each .ts file.

When using the tsc command to compile files, if a path to tsconfig.json is not specified, the compiler will look for the file in the current directory. If it is not found in the current directory, it will search for the tsconfig.json file in the parent directory. The compiler will not compile a project if a tsconfig file is absent.

If the tsconfig.json file is not in the current directory, you can specify its path using the --project (or -p) option, e.g. tsc --project <path to tsconfig.json>.
{ "compilerOptions": { "module": "amd", "noImplicitAny": true, "removeComments": true, "preserveConstEnums": true, "sourceMap": true } } In the above sample tsconfig.json file, the compilerOptions specifies the custom options for the TypeScript compiler to use when compiling a project. These are the tsc command options, which you may use while compiling a file. When compiling individual files using the tsc command, you must specify a -- or - option, e.g. --module or -m. The same options can be specified without -- or - in the "compilerOptions" section of the tsconfig.json file. You can also specify specific files to be compiled by using the "files" option. The files property provides a list of all files to be compiled. { "compilerOptions": { "module": "amd", "noImplicitAny": true, "removeComments": true, "preserveConstEnums": true, "sourceMap": true }, "files": { "Employee.ts" } } The above files option includes the file names to be compiled. Here, the compiler will only compile the Employee.ts file. There are two additional properties that can be used to include or omit certain files: include and exclude. All files specified in include will be compiled, except the ones specified in the exclude property. All files specified in the exclude option are excluded by the compiler. Note that if a file in include has a dependency on another file, that file cannot be specified in the exclude property. { "compilerOptions": { "module": "amd", "noImplicitAny": true, "removeComments": true, "preserveConstEnums": true, "outFile": "../../built/local/tsc.js", "sourceMap": true }, "include": [ "src/**/*" ], "exclude": [ "node_modules", "**/*.spec.ts" ] } Thus, the tsconfig.json file includes all the options to indicate the compiler how to compile a project. Learn more about tsconfig.json here.
https://www.tutorialsteacher.com/typescript/typescript-compiling-project-and-tsconfig
Processes download.

(Note that printing a message to the MobiLink message log might be useful at development time but would slow down a production server.)

package ExamplePackage;

public class ExampleClass
{
    String _curUser = null;

    public String beginDownloadTable( String user, String table )
    {
        java.lang.System.out.println( "Beginning to process download for: " + table );
        return ( null );
    }
}

(Note that printing a message to the MobiLink message log might be useful at development time but would slow down a production server.)

namespace TestScripts
{
    public class Test
    {
        string _curUser = null;

        public string BeginTableDownload( string user, string table )
        {
            System.Console.WriteLine( "Beginning to process download for: " + table );
            return ( null );
        }
    }
}
https://dcx.sap.com/1201/en/mlserver/begin-download-table-syncref.html
Re: EFS Certs in AD or local PC?

- From: "Steve" <wonderlan1@xxxxxxxxxxxxxxxxxxxxx>
- Date: Thu, 29 May 2008 19:14:02 -0500

If by cert you mean a .pfx file then, if you can send it to the user, he could import it into his user account profile and use it. .pfx files that contain the user certificate and private key are password protected, so he would also need the password to unlock that file. EFS files are decrypted by a user's private key. The public key certificate is used to encrypt the EFS files.

Steve

"Quilnux" <Quilnux@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:33239C56-B678-4634-961C-3112724B6BA8@xxxxxxxxxxxxxxxx

If his profile is in AD and we import his cert, will he be able to decrypt the files under his account?

"Steve" wrote:

Just to add that EFS files cannot be copied by anyone other than a user that can decrypt them, but a user can use NTBackup to back them up to be restored on another computer, such as one where the RA certificate/private key exists, for attempted decryption. Also, the RA certificate/private key can be imported via a password-protected .pfx file to a computer for attempted recovery of EFS files. The user's EFS private key is stored in the user's profile, but not in a way that can normally be exported. There are third-party tools that can scan a computer to look for EFS private keys [such as from a profile restored to a computer other than the original OS] that can possibly decrypt EFS files if the user's password is known. If there are no correct EFS private keys [user or RA with matching thumbprint] available to decrypt a user's EFS files, then it will not be possible to recover the EFS files.
Steve

--- a free trial can be used to search for and unlock EFS private keys, but if found, the free version will only decrypt a couple of bits of an EFS file, just enough to let you know the full version should work.

"Steve" <wonderlan1@xxxxxxxxxxxxxxxxxxxxx> wrote in message news:Ojo7H3YwIHA.4876@xxxxxxxxxxxxxxxxxxxxxxx

While there are ways to archive EFS certificates/private keys, I believe that requires W2003 Enterprise, and in your case his certificate/private key was on the local computer. See if he possibly exported it for backup at some point in time, to see if he can import it back into his computer via a .pfx file. If the domain security policy has a Recovery Agent configured, then the RA [usually the built-in domain administrator account] could log on to a computer that contains the RA EFS certificate/private key [usually the domain controller] and decrypt the files. Note that ANY EFS certificate used to attempt to decrypt files MUST also have the matching private key - a .cer file does NOT. Though he/you may not be able to access the files right now, you can view the advanced properties/details of them to see if an RA is included as a user that can decrypt.

Steve

"Quilnux" <Quilnux@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:95785FD5-839E-4A23-B4C9-974A3E6884B2@xxxxxxxxxxxxxxxx

Hello,

We have a user who was using a desktop with an EFS folder. Recently the OS drive failed and we had to reload the system from a new HDD. The EFS folder is on a secondary drive, which is OK, but I need to know if he will be able to access the folder when he logs in next Wednesday from his account in AD, or if I need to get his EFS cert from archives. It takes archives a week to get us the disks we need, so if it is saved in his AD account I may not need to contact them.

Thanks,
Quilnux

- References:
  - Re: EFS Certs in AD or local PC? - From: Steve
  - Re: EFS Certs in AD or local PC? - From: Steve
  - Re: EFS Certs in AD or local PC?
  - From: Quilnux
- Prev by Date: 1 Notebook unable to log in
- Next by Date: Re: Frequent disconnects from domain -- Why?
- Previous by thread: Re: EFS Certs in AD or local PC?
- Next by thread: Re: Printing from Laptops Connected Remotely
http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.sbs/2008-05/msg03893.html
This chapter describes two global naming systems (DNS and X.500/LDAP) and how to federate them under FNS.

- "FNS and Global Naming Systems"
- "Obtaining the Root Reference"
- "Federating Under X.500/LDAP"

See "Global Naming Services" for overview and background information on the relationship between FNS and global naming services.

FNS and Global Naming Systems

FNS supports federation of enterprise naming systems into the global naming systems DNS and X.500/LDAP. This chapter describes the procedures for federating NIS+ with DNS and X.500. In general, the procedures involve:

- Determining the NIS+ root reference for your NIS+ hierarchy
- Adding this information in the format required by the global naming system

You can only federate a global naming service if your enterprise-level name service is NIS+ or NIS. If you are using a files-based name service for your enterprise, you cannot federate either DNS or X.500/LDAP.

Obtaining the Root Reference

To federate an enterprise naming service under DNS or X.500/LDAP, information must be added to these naming systems to enable access to and from the enterprise and the global Internet outside of the enterprise. This information is the root reference, which consists of network address information describing how to reach the top of a particular enterprise namespace. The root reference consists of a single address which contains a single, XDR-encoded string. The address type and content vary according to the enterprise-level name service you are using: NIS+ or NIS.

When your enterprise-level name service is NIS+, the root reference address type is onc_fn_nisplus_root. There are two required, and one optional, elements in a root reference network address. The elements are separated by white space (Table 26-1, NIS+ Root Reference).

For example, suppose that the NIS+ root domain is doc.com. (notice the trailing dot), and that it can be reached using the host nism.

When your enterprise-level name service is NIS, the root reference address type is onc_fn_nis_root.
There are two required, and one optional, elements in a root reference network address. The elements are separated by white space (Table 26-2, NIS Root Reference).

For example, suppose that the NIS domain is doc.com, and that it can be reached using the host ypm.

Federating Under X.500/LDAP

In order to federate a subordinate naming system (either NIS+ or NIS) in X.500/LDAP:

- Root reference information must be added into X.500 describing how to reach the subordinate naming system.
- An X.500 client API must be specified.

1. Obtain the NIS+ root reference for your NIS+ hierarchy. See "Obtaining the Root Reference".

2. Create an X.500 entry that supports XFN reference attributes. For example, the following command creates a new X.500 entry called c=us/o=doc with the object classes top, organization, and XFN-supplement (1.2.840.113536.25). The XFN-supplement object class allows the c=us/o=doc entry to store reference information for a subordinate naming system. If the X.500 entry already existed and was not defined with the XFN-supplement object class, it must be removed and re-created with the additional object class. Otherwise, it will not be able to hold reference information about the subordinate naming system.

3. Add the reference information about the subordinate system to the entry. After creating the X.500 entry, you can add information about the subordinate system by binding the appropriate root reference to the named entry. For example, if your subordinate naming system is NIS+, and the NIS+ server you want to use is nismaster, you would enter the corresponding fnbind command. If your subordinate naming system is NIS, and the NIS server you want to use is ypmaster, you would enter the corresponding fnbind command for NIS.

These examples bind the reference for the NIS+ or NIS hierarchy with the root domain name doc.com., to the next naming system pointer (NNSP) of the X.500 entry c=us/o=doc, thus linking the X.500 namespace with the doc.com. namespace. The address format used is that of the root reference described in "Obtaining the Root Reference".

Note the use of the trailing slash in the name argument to fnbind, .../c=us/o=doc/, to signify that the reference is being bound to the NNSP of the entry, rather than to the entry itself. For further information on X.500 entries and XFN references, see "X.500 Attribute Syntax for XFN References".
Note the use of the trailing slash in the name argument to fnbind, .../c=us/o=doc/, to signify that the reference is being bound to the NNSP of the entry, rather than to the entry itself. For further information on X.500 entries and XFN references, see "X.500 Attribute Syntax for XFN References". An X.500 client API is required in order to access X.500 using FNS. You can use one of two different clients: XDS/XOM API. The XDS/XOM API must be installed. It is exported from the /opt/SUNWxds/lib/libxomxds.so shared object. Consult "Getting started with the SunLink X.500 Client Toolkit" for details on the X.500 product. LDAP (Lightweight Directory Access Protocol) API. The LDAP API is automatically installed as part of Solaris Release 2.6. The API that you use is specified in each machine's /etc/fn/x500.conf file. This file contains configuration information for X.500 and LDAP. This file can be edited directly. The default x500.conf file contains two entries: Where localhost and ldap are the IP addresses or hostnames of one or more LDAP servers. The first entry specifies the order in which X.500 accesses APIs. In the example above, X.500 will first try to use XDS/XOM. If XDS/XOM is not available, it will default to using LDAP. If the entry read: x500-access: ldap xds, X.500 would use LDAP and only fall back on XDS if LDAP were not available. The second entry lists the IP addresses or hostnames of servers running LDAP. Each server is tried in turn until a successful LDAP connection is achieved. In the example above, the localhost is tried first. If LDAP is not available on that server, the next one is tried.
http://docs.oracle.com/cd/E19455-01/806-1387/6jam692ej/index.html
14. Neural Networks, Structure, Weights and Matrices

By Bernd Klein. Last modified: 10 Jan 2022.

Introduction

We introduced the basic ideas about neural networks in the previous chapter of our machine learning tutorial. We pointed out the similarity between neurons and neural networks in biology. We also introduced very small artificial neural networks and introduced decision boundaries and the XOR problem.

In the simple examples we introduced so far, we saw that the weights are the essential parts of a neural network. Before we start to write a neural network with multiple layers, we need to have a closer look at the weights. We have to see how to initialize the weights and how to efficiently multiply the weights with the input values.

In the following chapters we will design a neural network in Python, which consists of three layers, i.e. the input layer, a hidden layer and an output layer. You can see this neural network structure in the following diagram. We have an input layer with three nodes $i_1, i_2, i_3$. These nodes get the corresponding input values $x_1, x_2, x_3$. The middle or hidden layer has four nodes $h_1, h_2, h_3, h_4$. The input of this layer stems from the input layer. We will discuss the mechanism soon. Finally, our output layer consists of the two nodes $o_1, o_2$.

The input layer is different from the other layers. The nodes of the input layer are passive. This means that the input neurons do not change the data, i.e. there are no weights used in this case. They receive a single value and duplicate this value to their many outputs.

The input layer consists of the nodes $i_1$, $i_2$ and $i_3$. In principle the input is a one-dimensional vector, like (2, 4, 11). A one-dimensional vector is represented in numpy like this:

import numpy as np

input_vector = np.array([2, 4, 11])
print(input_vector)

OUTPUT:
[ 2  4 11]

In the algorithm, which we will write later, we will have to transpose it into a column vector, i.e.
a two-dimensional array with just one column:

import numpy as np

input_vector = np.array([2, 4, 11])
input_vector = np.array(input_vector, ndmin=2).T
print("The input vector:\n", input_vector)
print("The shape of this vector: ", input_vector.shape)

OUTPUT:
The input vector:
 [[ 2]
 [ 4]
 [11]]
The shape of this vector:  (3, 1)

Weights and Matrices

Each of the arrows in our network diagram has an associated weight value. We will only look at the arrows between the input and the hidden layer now. The value $x_1$ going into the node $i_1$ will be distributed according to the values of the weights. In the following diagram we have added some example values. Using these values, the input values ($Ih_1, Ih_2, Ih_3, Ih_4$) into the nodes ($h_1, h_2, h_3, h_4$) of the hidden layer can be calculated like this:

$Ih_1 = 0.81 * 0.5 + 0.12 * 1 + 0.92 * 0.8$

$Ih_2 = 0.33 * 0.5 + 0.44 * 1 + 0.72 * 0.8$

$Ih_3 = 0.29 * 0.5 + 0.22 * 1 + 0.53 * 0.8$

$Ih_4 = 0.37 * 0.5 + 0.12 * 1 + 0.27 * 0.8$

Those familiar with matrices and matrix multiplication will see what this is boiling down to. We will redraw our network and denote the weights with $w_{ij}$:

In order to efficiently execute all the necessary calculations, we will arrange the weights into a weight matrix. The weights in our diagram above build an array, which we will call 'weights_in_hidden' in our Neural Network class. The name should indicate that the weights are connecting the input and the hidden nodes, i.e. they are between the input and the hidden layer. We will also abbreviate the name as 'wih'. The weight matrix between the hidden and the output layer will be denoted as 'who'.

Now that we have defined our weight matrices, we have to take the next step: we have to multiply the matrix wih by the input vector. By the way, this is exactly what we have manually done in our previous example.

$$\left(\begin{array}{cc} y_1\\y_2\\y_3\\y_4\end{array}\right)=\left(\begin{array}{cc} w_{11} & w_{12} & w_{13}\\w_{21} & w_{22} & w_{23}\\w_{31} & w_{32} & w_{33}\\w_{41} &w_{42}& w_{43}\end{array}\right)\left(\begin{array}{cc} x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{cc} w_{11} \cdot x_1 + w_{12} \cdot x_2 + w_{13} \cdot x_3\\w_{21} \cdot x_1 + w_{22} \cdot x_2 + w_{23} \cdot x_3\\w_{31} \cdot x_1 + w_{32} \cdot x_2 + w_{33}\cdot x_3\\w_{41} \cdot x_1 + w_{42} \cdot x_2 + w_{43} \cdot x_3\end{array}\right)$$
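As a quick check on this matrix formulation, the following numpy sketch multiplies the example weight matrix (the twelve values from the worked $Ih$ calculations above) by the input column vector (0.5, 1, 0.8):

```python
import numpy as np

# Weight matrix 'wih': row j holds the weights leading into hidden node h_j,
# using the example values from the calculations above.
wih = np.array([[0.81, 0.12, 0.92],
                [0.33, 0.44, 0.72],
                [0.29, 0.22, 0.53],
                [0.37, 0.12, 0.27]])

# Input vector (x1, x2, x3) = (0.5, 1, 0.8) as a column vector
x = np.array([[0.5], [1.0], [0.8]])

# One matrix multiplication computes Ih_1 ... Ih_4 at once
hidden_in = wih @ x
print(hidden_in)   # approximately [[1.261], [1.181], [0.789], [0.521]]
```

The first component, 1.261, is exactly 0.81 * 0.5 + 0.12 * 1 + 0.92 * 0.8, matching the hand calculation of $Ih_1$ above.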
$$\left(\begin{array}{cc} y_1\\y_2\\y_3\\y_4\end{array}\right)=\left(\begin{array}{cc} w_{11} & w_{12} & w_{13}\\w_{21} & w_{22} & w_{23}\\w_{31} & w_{32} & w_{33}\\w_{41} &w_{42}& w_{43}\end{array}\right)\left(\begin{array}{cc} x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{cc} w_{11} \cdot x_1 + w_{12} \cdot x_2 + w_{13} \cdot x_3\\w_{21} \cdot x_1 + w_{22} \cdot x_2 + w_{23} \cdot x_3\\w_{31} \cdot x_1 + w_{32} \cdot x_2 + w_{33}\cdot x_3\\w_{41} \cdot x_1 + w_{42} \cdot x_2 + w_{43} \cdot x_3\end{array}\right)$$ We have a similar situation for the 'who' matrix between hidden and output layer. So the output $z_1$ and $z_2$ from the nodes $o_1$ and $o_2$ can also be calculated with matrix multiplications: $$ \left(\begin{array}{cc} z_1\\z_2\end{array}\right)=\left(\begin{array}{cc} wh_{11} & wh_{12} & wh_{13} & wh_{14}\\wh_{21} & wh_{22} & wh_{23} & wh_{24}\end{array}\right)\left(\begin{array}{cc} y_1\\y_2\\y_3\\y_4\end{array}\right)=\left(\begin{array}{cc} wh_{11} \cdot y_1 + wh_{12} \cdot y_2 + wh_{13} \cdot y_3 + wh_{14} \cdot y_4\\wh_{21} \cdot y_1 + wh_{22} \cdot y_2 + wh_{23} \cdot y_3 + wh_{24} \cdot y_4\end{array}\right)$$ You might have noticed that something is missing in our previous calculations. We showed in our introductory chapter Neural Networks from Scratch in Python that we have to apply an activation or step function $\Phi$ on each of these sums. The following picture depicts the whole flow of calculation, i.e. the matrix multiplication and the succeeding application of the activation function. The matrix multiplication between the matrix wih and the matrix of the values of the input nodes $x_1, x_2, x_3$ calculates the output which will be passed to the activation function. 
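The general formula above reproduces the hand-computed values $Ih_1, \dots, Ih_4$ when carried out with numpy; the matrix below just collects the example weights shown earlier in the chapter:

```python
import numpy as np

# Example weights from the diagram above, arranged as the 4x3 matrix wih
wih = np.array([[0.81, 0.12, 0.92],
                [0.33, 0.44, 0.72],
                [0.29, 0.22, 0.53],
                [0.37, 0.12, 0.27]])

# Input values as a column vector
x = np.array([[0.5], [1.0], [0.8]])

# One matrix multiplication replaces the four hand-written sums
hidden_input = wih @ x
print(hidden_input)
```

Each row of the result matches one of the sums $Ih_1, \dots, Ih_4$ computed by hand above.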
The final output $y_1, y_2, y_3, y_4$ is the input of the weight matrix who:

Even though the treatment is completely analogous, we will also have a detailed look at what is going on between our hidden layer and the output layer:

Initializing the weight matrices

One of the important choices which has to be made before training a neural network is the initialization of the weight matrices. We don't know anything about the possible weights when we start. So, could we start with arbitrary values?

As we have seen, the input to all the nodes except the input nodes is calculated by applying the activation function to the following sum:

$$y_j = \sum_{i=1}^{n} w_{ji} \cdot x_i$$

(with n being the number of nodes in the previous layer and $y_j$ the input to a node of the next layer)

We can easily see that it would not be a good idea to set all the weight values to 0, because in this case the result of this summation will always be zero. This means that our network will be incapable of learning. This is the worst choice, but initializing a weight matrix to ones is also a bad choice.

The values for the weight matrices should be chosen randomly and not arbitrarily. By choosing values from a random distribution we break possible symmetric situations, which can be and often are bad for the learning process.

There are various ways to initialize the weight matrices randomly. The first one we will introduce is the uniform function from numpy.random. It creates samples which are uniformly distributed over the half-open interval [low, high), which means that low is included and high is excluded. Each value within the given interval is equally likely to be drawn by 'uniform'.
import numpy as np

number_of_samples = 1200
low = -1
high = 0
s = np.random.uniform(low, high, number_of_samples)

# all values of s are within the half open interval [-1, 0):
print(np.all(s >= -1) and np.all(s < 0))

OUTPUT:
True

The histogram of the samples, created with the uniform function in our previous example, looks like this:

import matplotlib.pyplot as plt
plt.hist(s)
plt.show()

The next function we will look at is 'binomial' from numpy.random:

binomial(n, p, size=None)

It draws samples from a binomial distribution with specified parameters: n trials and probability p of success, where n is an integer >= 0 and p is a float in the interval [0, 1]. (n may be input as a float, but it is truncated to an integer in use.)

s = np.random.binomial(100, 0.5, 1200)
plt.hist(s)
plt.show()

We like to create random numbers with a normal distribution, but the numbers have to be bounded. This is not the case with np.random.normal(), because it doesn't offer any bound parameter. We can use truncnorm from scipy.stats for this purpose.

from scipy.stats import truncnorm

s = truncnorm(a=-2/3., b=2/3., scale=1, loc=0).rvs(size=1000)
plt.hist(s)
plt.show()

The function 'truncnorm' is difficult to use. To make life easier, we define a function truncated_normal in the following to facilitate this task:

def truncated_normal(mean=0, sd=1, low=0, upp=10):
    return truncnorm(
        (low - mean) / sd, (upp - mean) / sd, loc=mean, scale=sd)

X = truncated_normal(mean=0, sd=0.4, low=-0.5, upp=0.5)
s = X.rvs(10000)
plt.hist(s)
plt.show()

Further examples:

X1 = truncated_normal(mean=2, sd=1, low=1, upp=10)
X2 = truncated_normal(mean=5.5, sd=1, low=1, upp=10)
X3 = truncated_normal(mean=8, sd=1, low=1, upp=10)

import matplotlib.pyplot as plt
fig, ax = plt.subplots(3, sharex=True)
ax[0].hist(X1.rvs(10000), density=True)
ax[1].hist(X2.rvs(10000), density=True)
ax[2].hist(X3.rvs(10000), density=True)
plt.show()

We will create the link weights matrix now.
truncated_normal is ideal for this purpose. It is a good idea to choose random values from within the interval

$$(-\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}})$$

where n denotes the number of input nodes.

So we can create our "wih" matrix with:

no_of_input_nodes = 3
no_of_hidden_nodes = 4
rad = 1 / np.sqrt(no_of_input_nodes)

X = truncated_normal(mean=2, sd=1, low=-rad, upp=rad)
wih = X.rvs((no_of_hidden_nodes, no_of_input_nodes))
wih

OUTPUT:
array([[-0.3053808 ,  0.5030283 ,  0.33723148],
       [-0.56672167, -0.35983275,  0.22429119],
       [ 0.29448907,  0.23346339,  0.42599121],
       [ 0.30590101,  0.47121411,  0.07944389]])

Similarly, we can now define the "who" weight matrix:

no_of_hidden_nodes = 4
no_of_output_nodes = 2
rad = 1 / np.sqrt(no_of_hidden_nodes)  # this is the input in this layer!

X = truncated_normal(mean=2, sd=1, low=-rad, upp=rad)
who = X.rvs((no_of_output_nodes, no_of_hidden_nodes))
who

OUTPUT:
array([[-0.31382676,  0.28733613, -0.11836658,  0.29367762],
       [ 0.45613032,  0.43512081,  0.30355432,  0.43769041]])
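With both matrices in place, a complete forward pass is just two matrix multiplications, each followed by an activation function. The sketch below uses fixed example weights (rounded from the randomly drawn arrays above) and the sigmoid function; the exact numbers are illustrative only, not part of the tutorial's final network class:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Rounded example weights with the shapes derived above: wih is 4x3, who is 2x4
wih = np.array([[-0.31,  0.50,  0.34],
                [-0.57, -0.36,  0.22],
                [ 0.29,  0.23,  0.43],
                [ 0.31,  0.47,  0.08]])
who = np.array([[-0.31,  0.29, -0.12,  0.29],
                [ 0.46,  0.44,  0.30,  0.44]])

# Input as a column vector, as prepared at the start of the chapter
x = np.array([[0.5], [1.0], [0.8]])

hidden_out = sigmoid(wih @ x)        # 4x1: activations of the hidden layer
output = sigmoid(who @ hidden_out)   # 2x1: activations of the output layer
print(output)
```

Because sigmoid squashes its input, every activation ends up strictly between 0 and 1, whatever the weights are.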
https://python-course.eu/machine-learning/neural-networks-structure-weights-and-matrices.php
#include <genesis/placement/pquery/name.hpp>

A name of a Pquery and its multiplicity. This class is modeled after the jplace standard, which allows for multiple names for a Pquery. This is useful if there are identical sequences in the original data for which the phylogenetic placement was carried out. The placements of those sequences can then be treated as one entity, i.e., one Pquery, while still maintaining all their identifiers (names). Furthermore, each such name can have a multiplicity, which can be used to store, e.g., the number of replicates of the original sequence. It is used as a factor for the weights of PqueryPlacements in some calculations.

Definition at line 55 of file name.hpp.

Default constructor. Initializes the name to an empty string and the multiplicity to 1.0.

Constructor that takes a name and optionally a multiplicity.

Definition at line 72 of file name.hpp.

Multiplicity of the name. This property is defined by the jplace standard. It is used as a count for, e.g., the abundance of this Pquery (respectively this name). For some calculations, this value is used as a factor for the placement weights (see PqueryPlacement::like_weight_ratio). Thus, by default, the value is initialized to 1.0. If a Pquery has multiple names, all their multiplicities are added when being used as a weight factor.

Definition at line 131 of file name.hpp.
http://doc.genesis-lib.org/classgenesis_1_1placement_1_1_pquery_name.html
Solaris sockets, past and present Prior to Solaris 2.6, sockets were an abstraction that existed at the library level. That is, much of the socket state and socket semantics support were provided within the libsocket library. The kernel's view of a process's socket connection entailed a file descriptor and linkage to a Stream head, which provided the path to the underlying transport. The disparity between the library socket state and the kernel's view was one of several reasons a new implementation was introduced in Solaris 2.6. To provide a relevant basis for comparison, we'll start by looking at what happens in the pre-Solaris 2.6 release (that is, releases up to and including Solaris 2.5.1) when a socket is created. The major software layers are shown in Figure 1 for reference. The primary software components are the socket library and the sockmod Streams module. The specfs layer is shown for completeness and is part of the layering, due to the use of pseudodevices as an entry point into the networking layers. To digress for a moment, the special filesystem, specfs, came out of SVR4 Unix as a means of addressing the issue of device special files that exist on Unix on-disk filesystems (e.g., UFS). Unix systems have always abstracted I/O (input/output) devices through device special files. The /dev directory namespace stores files that represent physical devices and pseudodevices on the system. Using device major numbers, those device files provide an entry point into the appropriate device driver, and using minor numbers, they are able to uniquely identify one of potentially many devices of the same type. (That is something of an oversimplification, but is sufficient for our purpose here in describing specfs.) The /dev directory resides on the root filesystem, which is an instance of UFS. As such, references to the filesystem and its files and directories are handled using the UFS filesystem operations and UFS file operations. 
That is usually sufficient, but is not desired behavior for device special files. I/O to a device special file requires entry into a device driver. That is, issuing an open(2) system call on /dev/rmt/0 means someone wishes to open the tape device represented by /dev/rmt/0, thereby entering the appropriate driver's xx_open() routine. As a file on a UFS filesystem, the typical open routine called would be the ufs_open() code, but that's not what we want for devices. The specfs filesystem was designed to address such situations; it provides a straightforward mechanism for linking the underlying structures for file support in the kernel to the required device driver interfaces. Like all filesystems in Solaris (and any SVR4-based Unix) it's based on the VFS/vnode infrastructure. (See Solaris Internals and UNIX Internals in the Resources section for detailed information on VFS.) Getting back to sockets in Solaris 2.5.1, the specfs layer!
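The major/minor mechanics described above are easy to observe from a shell. The commands below are shown on Linux for convenience (Solaris output and its stat utility differ in detail; the -c format flags here are GNU extensions): the leading 'c' in the mode string marks a character device special file, and the device numbers appear where a regular file would show its size.

```shell
# List a device special file: the leading 'c' marks a character device,
# and "1, 3" are the major and minor numbers of Linux's null device.
ls -lL /dev/null

# GNU stat can print the file type and the device numbers (in hex) directly.
stat -c 'type=%F major=%t minor=%T' /dev/null
```

Opening such a file enters the driver selected by the major number, exactly as described for /dev/rmt/0 above.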
http://www.itworld.com/swol-0309-insidesolaris
Details

Description

Bigtop has a lot of redundancy in the way it names its directories:

$ ls trunk/
bigtop-deploy  bigtop-packages  bigtop-test-framework  CHANGES.txt  DEVNOTES  docs  Makefile  package.mk  README  test
bigtop.mk  bigtop-repos  bigtop-tests  check-env.sh  DISCLAIMER  LICENSE  NOTICE  pom.xml  src

It would be nice to remove the prefix "bigtop-" in all these directories.

Activity

- What value does that bring? So far I see more value in removing this redundancy than keeping them.
- I see more projects not using that standard rather than using it. I would rather say Hadoop is the exception.
- +1 for nuking bigtop from the toplevel naming. It's painful working on packaging. Tab completion is difficult. Thank god for mc to move around the tree quickly.
- Still seems like a good idea? Anyone?
- +1 for cleaning up namespace on top level. However, it seems a big task to do this change in only one commit, since I think almost nobody is fluent in all the different aspects of bigtop involved. I would propose to postpone it to 1.1.0 and do it gradually. (Adding subtasks)
- Absolutely. Moving to 1.1.0.
- Some cleanup is necessary in terms of nuking redundant directories - most notably the top level test dir, and possibly the top level docs dir. But I think the "bigtop-" prefix is reasonable to keep. It's become reasonably standard in open source projects to use a project prefix on directories like this - see, e.g. It helps make it easy to tell what project you're in, etc.
https://issues.apache.org/jira/browse/BIGTOP-245
Adding delays around the SPI control signal changes reduced the OLED glitch rate from maybe a few a week to once a week, but didn’t completely solve the problem. However, (nearly) all the remaining glitches seem to occur while writing a single row of pixels, which trashes the rest of the display and resolves on the next track update. That suggests slowing the timing during the initial hardware setup did change the results. Another look at the Luma code showed I missed the Chip Enable (a.k.a. Chip Select in the SH1106 doc) change in serial.py: def _write_bytes(self, data): gpio = self._gpio if self._CE: time.sleep(1.0e-3) gpio.output(self._CE, gpio.LOW) # Active low time.sleep(1.0e-3) for byte in data: for _ in range(8): gpio.output(self._SDA, byte & 0x80) gpio.output(self._SCLK, gpio.HIGH) byte <<= 1 gpio.output(self._SCLK, gpio.LOW) if self._CE: time.sleep(1.0e-3) gpio.output(self._CE, gpio.HIGH) What remains unclear (to me, anyway) is how the code in Luma's bitbang class interacts with the hardware-based SPI code in Python’s underlying spidev library. I think what I just changed shouldn’t make any difference, because the code should be using the hardware driver, but the failure rate is now low enough I can’t be sure for another few weeks (and maybe not even then). All this boils down to the Pi’s SPI hardware interface, which changes the CS output with setup / hold times measured in a few “core clock cycles”, which is way too fast for the SH1106. It seems there’s no control over CS timing, other than by changing the kernel’s bcm2708 driver code, which ain’t happening. The Python library includes a no_cs option, with the caveat it will “disable use of the chip select (although the driver may still own the CS pin)”. Running vcgencmd measure_clock core (usage and some commands) returns frequency(1)=250000000, which says a “core clock cycle” amounts to a whopping 4 ns. 
Forcibly insisting on using Luma’s bitbang routine may be the only way to make this work, but I don’t yet know how to do that. Obviously, I should code up a testcase to hammer the OLED and peer at the results on the oscilloscope: one careful observation outweighs a thousand opinions. One thought on “Streaming Radio Player: CE Timing Tweak”
https://softsolder.com/2018/04/16/streaming-radio-player-ce-timing-tweak/
The PhpStorm 7 EAP introduces a new refactoring: Extract Interface. The Extract Interface refactoring allows users to quickly create a new interface based on a selected interface or class. Imagine we have a PersonRepository which features several methods for retrieving and storing data. Chances are that similar functions to getById(), getAll(), save() and delete() will be used in other classes as well. An ideal candidate for an interface! Let’s give the Extract Interface refactoring a go! When placing the caret on the PersonRepository class name, we can use the context menu and find the Refactor | Extract | Interface action. A dialog will pop up in which we can select different options for the refactoring. First of all, we can give the extracted interface a name. In this example, IRepository would make sense. We can select if we want to replace class references with interface references where possible. This is incredibly useful if we want to generalize function parameters and type hints throughout our application. We can also select the namespace for our new interface. PhpStorm will add the namespace as well as imports where needed in the application. Next, we can select the members that will form the interface. Finally, we can also choose if we want to keep PhpDoc blocks, copy or move them. Once the refactoring completes, we have a fresh interface present in our project: The PersonRepository will now also implement the IRepository interface: Give PhpStorm 7 EAP a try and let us hear your thoughts in the issue tracker, the comments below or in our forums! Develop with pleasure! – JetBrains PhpStorm Team
https://blog.jetbrains.com/phpstorm/2013/08/extract-interface-refactoring-for-php/
What Tutorials Don’t Tell You: How to Approach Projects I often hear that people who follow tutorials find themselves unable to approach JavaScript projects on their own. One reason this happens is that tutorials give you a neat set of steps rather than the actual process of figuring out those steps on your own. Another reason people struggle with projects is that they compare their intermediate steps with someone else’s finished product and get discouraged. The truth of approaching a project isn’t as neat as the tutorials (mine included) make it seem. The reality is that rather than belting out lines of perfect code, projects are done in small pieces with plenty of trial and error and a healthy dose of searching through reference materials. In this article, you’ll learn how to approach JavaScript projects on your own. Important note: As you go through this article, you’ll see some code examples. If any of them seem new or unfamiliar, it’s okay to skim over them for now. The purpose of this article is to have you understand the overall process of approaching a project rather than getting distracted by technical details. First Get Comfortable With the Basics At a minimum, you’ll want to get familiar with some of the basics of JavaScript (and programming in general). This might include variables, functions, if statements, loops, arrays, objects, DOM manipulation methods, such as getElementById, querySelectorAll, and innerHTML. You can Google these or look them up on MDN when you’re done with this article. Once you’re comfortable with these concepts, you’ll move much faster because you can focus on creating your project instead of worrying about how to write an if statement. A lot of people rush past this step, and everything takes longer as a result. It’s like attempting to play Level 3 of a video game without getting comfortable with the controls back in Level 1. Lots of avoidable frustration. 
Make a Plan

Instead of jumping in and trying to do your project in a series of linear steps, take some time to look at the big picture first. Make a general plan. What sorts of things need to happen? For example, if you're trying to make a countdown clock, you might need a way to measure time, a place to hold the data, somewhere to display the numbers, and maybe a way to control the clock.

At this stage, you don't want to get bogged down in technical details because you're still thinking through the general ideas of what you want. As long as you have an overall plan, you'll have guideposts that will prevent you from getting too badly lost. In software design, this technique is often referred to as a use-case analysis.

Write It Without Code

Now that you have your plan, you'll want to figure out the details. My favorite way to do this is to write specifically what you want each part of your project to do. The key is to write it not in code but in plain language. (This is called pseudocode.) That way, you can think clearly about what your project is doing without getting distracted by syntax details.

For a countdown clock, your notes might look something like this:

- Get current time
- Specify end time
- Find difference between current time and end time to get remaining time
- Repeatedly get the remaining time for each step of the countdown
- Show remaining time on screen at each step of the countdown

You can break individual parts into smaller pieces like so:

- Show remaining time on screen at each step of the countdown
  - Divide time into hours, minutes, seconds
  - Show hours in one container
  - Do the same for minutes and seconds

Once you have your logic written out, you'll have a much easier time writing code.
This is because it's simpler to write the code for a concrete step such as "subtract current time from end time" than it is to write the code for a whole project like "build a countdown clock." Also note that you won't need to have a perfect series of steps written out at the beginning. This is a fluid process where it's okay to add things, remove things, get things wrong, learn, and improve.

Build Small Pieces

Once you have your steps written out, you can start writing small pieces of code. For a countdown clock, you might start by getting the current time:

const currentTime = new Date().getTime();
console.log(currentTime);

Once you're satisfied, you might then get the countdown's end time:

const endTime = new Date(2017, 4, 4, 7, 30).getTime();
console.log(endTime);

When you're making your own clock, you can pick a specific end date as in the code sample above, but since I don't want the code in this article to stop working after a certain date, I'm going to set the end time to 10 days from now instead (note the conversion of 10 days to milliseconds since those are the units JavaScript uses):

const endTime = new Date().getTime() + 10*24*60*60*1000;
console.log(endTime);

Here are some benefits of writing your code in small pieces:

- You get a chance to make sure the individual pieces of functionality work before moving on to the next steps.
- It's easier to think through what you're doing when you're not distracted by too many moving parts at a time.
- You'll move faster because you're not trying to keep track of a million things at once.
- It's a lot easier to spot and prevent errors this way.
- You can experiment and learn as needed.
- You'll often end up writing helpful pieces of code you can use elsewhere.

Put the Pieces Together

With your individual pieces ready, you can start putting your project together. For this stage, the key challenge is to make sure the pieces that worked on their own will still work once they're connected.
This might require some small changes. For example, here's how you might put together the start time and end time to calculate the remaining time in a countdown clock:

// set our end time
const endTime = new Date().getTime() + 10*24*60*60*1000;

// calculate remaining time from now until deadline
function getRemainingTime(deadline){
  const currentTime = new Date().getTime();
  return deadline - currentTime;
}

// plug endTime into function to output remaining time
console.log(getRemainingTime(endTime));

This method of putting smaller pieces together is much easier than trying to make an entire project all at once because this way, you don't need to keep track of everything in your head at the same time.

Now that we have a function to get the remaining time, we can run the function repeatedly to keep the time display updated.

The HTML:

<div id="clock"></div>

The JavaScript:

// set our end time
const endTime = new Date().getTime() + 10*24*60*60*1000;

// calculate remaining time from now until deadline
function getRemainingTime(deadline){
  const currentTime = new Date().getTime();
  return deadline - currentTime;
}

// store clock div to avoid repeatedly querying the DOM
const clock = document.getElementById('clock');

// show time repeatedly
function showTime(){
  const remainingTime = getRemainingTime(endTime);
  clock.innerHTML = remainingTime;
  requestAnimationFrame(showTime);
}

requestAnimationFrame(showTime);

In the above example, we've added a showTime function that displays the remaining time on the screen. At the end of the function, we include requestAnimationFrame(showTime), which basically says run showTime again as soon as the browser is ready. This allows us to keep updating the time display in a highly performant manner.

You'll notice the countdown is entirely in milliseconds. The next step will be to convert everything into days, hours, minutes, and seconds.
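Before wiring the conversion into showTime, it can help to sanity-check the unit math in isolation. The helper and test value below are mine, not from the article; 90061000 ms should come out as exactly 1 day, 1 hour, 1 minute, and 1 second:

```javascript
// Break a millisecond count into days/hours/minutes/seconds
function breakdown(ms) {
  return {
    days: Math.floor(ms / (24 * 60 * 60 * 1000)),
    hours: Math.floor((ms / (60 * 60 * 1000)) % 24),
    minutes: Math.floor((ms / (60 * 1000)) % 60),
    seconds: Math.floor((ms / 1000) % 60)
  };
}

console.log(breakdown(90061000)); // { days: 1, hours: 1, minutes: 1, seconds: 1 }
```

Once each unit checks out on a known value, plugging the same expressions into showTime is low-risk.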
Using the approaches you've learned so far (small steps, etc.), you could first convert milliseconds to seconds, see how that looks, and put it in your function. Then you can repeat this process to calculate minutes, hours, and days. The end result might look something like this:

function showTime(){
  const remainingTime = getRemainingTime(endTime);
  const seconds = Math.floor((remainingTime/1000) % 60);
  const minutes = Math.floor((remainingTime/(60*1000)) % 60);
  const hours = Math.floor((remainingTime/(60*60*1000)) % 24);
  const days = Math.floor(remainingTime/(24*60*60*1000));
  clock.innerHTML = `${days}:${hours}:${minutes}:${seconds}`;
  requestAnimationFrame(showTime);
}

requestAnimationFrame(showTime);

Experiment and Test

By this point in your project, you will have done plenty of experimenting and testing to make sure everything works. Once it seems to work, see if you can break it. For example, what if the user clicks here or there? What if one of the inputs is unexpected? What if the screen size is narrow? Does everything work in the browsers you expect? Is there a more efficient approach to any part of this project?

Going back to our countdown clock example, what happens if the timer reaches zero? We can add an if statement to make sure the clock stops at zero:

function showTime(){
  ...

  // ensure clock only updates if a second or more is remaining
  if(remainingTime >= 1000){
    requestAnimationFrame(showTime);
  }
}

Note that the reason we used 1000 milliseconds (1 second) in this case is that if we used zero, the clock would overshoot and end up at -1. If your clock is using smaller units than seconds, then make the ending condition smaller than one second. A good friend pointed out this issue as I was working on this article, and it's just another example of how code might not come out perfect the first time. This leads perfectly into the next point.

Get Outside Help

Getting outside help can be an important step at any point when doing a project.
This help can come from reference materials or other people. The reason I bring this up is that there's a common myth that developers sit down and write perfect code without having to look anything up or ask anyone for advice. I've often heard that newer developers are surprised to know how frequently an experienced developer will look things up. In fact, since it's impossible to know everything, being able to look up information is one of the most valuable skills you can have. Tools and techniques change, but the skill of learning doesn't go away.

Refactor Your Code

Before you finish your project, you'll want to refactor your code. Here are some questions you can ask yourself in order to improve your project:

Is your code concise and readable? If you have to make a choice between conciseness and readability, you'll usually want to pick readability unless there's a huge performance reason. Readability makes your code easier to maintain, update, and fix.

Is your code efficient? For example, if you're searching your document for the same element over and over, you could store the element in a variable instead, to make your code do less work. We've already done this in our countdown clock example with the following piece:

// store clock div to avoid repeatedly querying the DOM
const clock = document.getElementById('clock');

Have you used clear naming for your functions and variables? For example, a function name like showTime would be much clearer than st. This is important because people often name things thinking they make sense, and then they get lost later because they forgot what their abbreviations meant. A good test for clarity is whether you'd have to explain a name too much to someone who is unfamiliar with the code.

Are there any potential naming collisions? For example, are you using names like "container" that are highly likely to be used elsewhere?

Are you polluting global scope with too many variables?
One easy way to protect global scope is to throw your countdown clock code into an IIFE (immediately-invoked function expression). That way, the clock can access all its variables, but nothing else can.

(function(){
  // code goes here
})();

Has the editing process caused any errors? For instance, have you changed a variable name in one place without changing it everywhere else? Have you added something to an object but forgotten to put in an extra comma?

Does the output need to be polished? In our countdown clock example, it would be nice to see leading zeroes (so 10:09 instead of just 10:9). One way would be to see if a number is less than 9, and then put a '0' in front of it, but that's kind of long. I saw a code sample once that had a neat trick which was to add a '0' in the front and then use slice(-2) to just take the last two digits no matter what. The changes would make our code look like this:

function showTime(){
  const remainingTime = getRemainingTime(endTime);
  const seconds = ('0' + Math.floor((remainingTime/1000) % 60)).slice(-2);
  const minutes = ('0' + Math.floor((remainingTime/(60*1000)) % 60)).slice(-2);
  const hours = ('0' + Math.floor((remainingTime/(60*60*1000)) % 24)).slice(-2);
  const days = ('0' + Math.floor(remainingTime/(24*60*60*1000))).slice(-2);
  clock.innerHTML = `${days}:${hours}:${minutes}:${seconds}`;

  // ensure clock only updates if a second or more is remaining
  if(remainingTime >= 1000){
    requestAnimationFrame(showTime);
  }
}

requestAnimationFrame(showTime);

Is your code unnecessarily redundant? Are you repeating code that could be in a function or a loop instead? With reference to the above, we could move the code to add an extra zero to the output into its own function. This reduces duplication and makes things easier to read.

function pad(value){
  return ('0' + Math.floor(value)).slice(-2);
}

const seconds = pad((remainingTime/1000) % 60);

Would it help to look at this project with fresh eyes? Try coming back to your code after a few days.
With a fresh perspective, you’ll start to see which parts can be made cleaner and more efficient. As you refactor, your code will start to seem more and more elegant. Then you’ll put out the finished product and people will wonder how you wrote such perfect code. A few months later, you’ll look back on it and realize you could have made it so much better. As a friend of mine wisely said, that’s a good thing; it means you’re making progress. In case you’re curious, here’s a live demo of the clock example (with some added styles): See the Pen requestAnimationFrame Countdown by SitePoint (@SitePoint) on CodePen. Recap A coding project is rarely a linear process. If there’s one thing I’d like you to take away from this article, it’s that small pieces and experimentation will take you farther than trying to do everything at once. If you’ve struggled with JavaScript projects in the past, I hope this article has been helpful, and if you have any other ideas that have helped you approach projects, I’d love to hear them in the comments. This article was peer reviewed by Vildan Softic and Matt Burnett. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
https://www.sitepoint.com/how-to-approach-javascript-projects-what-the-tutorials-dont-tell-you/?utm_source=rss
I use this code to do a count for the variable "counting". As seen in the code, the first for-loop does this counting and writes it to "MainFile" as it is written, but without the // before it, of course. Below I have a second for loop. As seen, I am counting from 1535 until (20 + 1). If I put the // back before MainFile in the first for loop and try to use the second output for MainFile, nothing is written to the file. So the problem is: why does the first MainFile write to the file but not the other one?

Code:
#include "stdafx.h"
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
using namespace std;

int main ()
{
    int Combinations = 0;
    int counting = 0;

    ofstream MainFile;
    MainFile.open ("Main.txt");
    ifstream StockFile ("CCE.txt");

    for (int Number = 1535; Number < 1555; Number++)
    {
        if ((Number >= 1535) && (Number <= 1555))
        {
            counting = (counting + 1);
            //MainFile << counting << "\n";
        }
    }

    for (int Number2 = 1535; Number2 < (20 + 1); Number2++)
    {
        if ((Number2 >= 1535) && (Number2 <= 1555))
        {
            MainFile << Number2 << "\n";
        }
    }
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/97802-2-loops.html
Static Blog with Angular 10, Scully and JAMstack

Until now, Angular didn't have a static site generator, unlike the two other popular libraries, React and Vue.js, but the HeroDevs team has recently released a static site generator for Angular projects named Scully. Nowadays, static site generators have become popular, especially among developers, for creating blogs and server-rendering their JavaScript SPAs for performance and SEO reasons. Scully can render your Angular 9+ app on the server side, but it can also be used as a blog generator with Markdown support, making it an alternative to popular generators such as Jekyll or Gatsby.

What is Scully?

Scully is a JAMstack solution for Angular 9+ developers, like Gatsby for React or VuePress for Vue.js. It's a static site generator that renders your Angular 10 app to HTML and CSS, and since it supports Markdown you can use it for blogging without worrying about SEO and search engine discoverability. According to their official website: Scully makes building, testing and deploying Jamstack apps extremely simple.

Create your Blog with Angular 10 and Scully

Now that we have seen some theory, let's see how to use Angular 10 and Scully to create a blog. Let's get started with the prerequisites:

- Scully uses Chromium. Therefore, your operating system, as well as its administrator rights, must allow its installation and execution.

Step 1 — Installing Angular CLI 10

Let's start by installing the latest Angular CLI v10 version. Angular CLI is the official tool for initializing and working with Angular projects. To install it, open a new terminal and run the following command:

$ npm install -g @angular/cli

At the time of this blog post, angular/cli v10 will be installed on your system.

Step 2 — Creating a New Angular 10 App

Let's now create our project. Head back to your terminal and run the following commands:

$ cd ~
$ ng new angular10blog

The CLI will ask you a couple of questions — Would you like to add Angular routing?
Type y for Yes, and for Which stylesheet format would you like to use? choose CSS. Scully depends on the app's router module in order to generate the website's pages.

Step 3 — Installing Scully

Now, navigate into your project's folder and install Scully using the following commands:

$ cd angular10blog
$ ng add @scullyio/init

The @scullyio/init schematic will automatically make all the required changes in your Angular 10 project. This is the kind of output you'll get in your terminal if everything is ok:

Installing packages for tooling via npm.
Installed packages for tooling via npm.
Install ng-lib for Angular v9
✅️ Added dependency
UPDATE src/app/app.module.ts (466 bytes)
UPDATE src/polyfills.ts (3028 bytes)
UPDATE package.json (1376 bytes)
✔ Packages installed successfully.
✅️ Update package.json
✅️ Created scully configuration file in scully.angular10blog.config.ts
CREATE scully.angular10blog.config.ts (188 bytes)
UPDATE package.json (1436 bytes)

The command will generate a Scully configuration file named scully.<projectName>.config.ts, where projectName is the name of your Angular project. This file will be used to configure various aspects of your static app. This is the initial configuration file:

import { ScullyConfig } from '@scullyio/scully';

export const config: ScullyConfig = {
  projectRoot: './src',
  projectName: '<projectName>',
  outDir: './dist/static',
  routes: {},
};

You can now render your Angular 10 app statically using Scully. Please note that any routes in your Angular 10 project that contain route parameters will not be pre-rendered until you specify the required parameters in the Scully configuration file.

Step 5 — Generating your Static Blog

Before you can run Scully, you'll need to build your Angular 10 project using the following command:

$ ng build

Once your Angular 10 project is built, you can run Scully to build your static blog using the following command:

$ npm run scully

That's it! You now have a fast pre-rendered Angular 10 static site.
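As a concrete illustration of that note about route parameters, here is the shape of a routes entry for a parameterized route such as /blog/:slug. This is a sketch only: the contentFolder plugin name and the ./blog folder follow Scully's documented conventions at the time of writing and should be checked against your Scully version:

```typescript
// Sketch: the `routes` field of the ScullyConfig in
// scully.<projectName>.config.ts, telling Scully where the
// concrete values for :slug come from.
const routes: Record<string, unknown> = {
  '/blog/:slug': {
    type: 'contentFolder',       // route plugin that scans a folder
    slug: { folder: './blog' },  // one pre-rendered page per file in ./blog
  },
};

// Each key is a route pattern; Scully pre-renders one page per
// discovered parameter value.
console.log(Object.keys(routes).length); // 1
```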
You can find the generated static files under the ./dist/static folder. It contains all the static pages of your application.

Step 6 — Serving your Angular 10 Blog

You can access the statically rendered app in the ./dist/static folder, where you can find an index.html file for each route in your Angular 10 app. Scully also provides its own server that enables you to run your JAMstack site. You can start Scully's server by running the following command:

$ npm run scully:serve

The command will start two servers, one for the ng build output and another for the static build, which allows you to run both versions of your Angular app.

Conclusion

In this tutorial, we've seen how to use Angular 10 with Scully to build a static JAMstack app that can be used as a blog, with all the performance and SEO benefits of typical blogs.
https://www.techiediaries.com/angular-10-static-blog-scully/
Query Expressions

Django supports negation, addition, subtraction, multiplication, division, modulo arithmetic, and the power operator on query expressions, using Python constants, variables, and even other expressions.

from django.db.models import Count, F, Value
from django.db.models.functions import Length, Upper
from django.db.models.lookups import GreaterThan

# Lookup expressions can also be used directly in filters
Company.objects.filter(GreaterThan(F('num_employees'), F('num_chairs')))
# or annotations.
Company.objects.annotate(
    need_chairs=GreaterThan(F('num_employees'), F('num_chairs')),
)

Note: These expressions are defined in django.db.models.expressions and django.db.models.aggregates, but for convenience they're available and usually imported from django.db.models.

F() expressions

An F() object represents the value of a model field, a transformed value of a model field, or an annotated column. It makes it possible to refer to model field values and perform database operations using them without actually having to pull them out of the database into Python memory. To access the new value saved this way, the object must be reloaded:

reporter = Reporters.objects.get(pk=reporter.pk)
# Or, more succinctly:
reporter.refresh_from_db()

F() therefore can offer performance advantages by: getting the database, rather than Python, to do work; reducing the number of queries some operations require. Support for transforms of the field was added.

Func() expressions

as_sql() generates the SQL fragment for the database function. Returns a tuple (sql, params), where sql is the SQL string, and params is the list or tuple of query parameters.

Aggregate attributes:

template: A class attribute, as a format string, that describes the SQL that is generated for this aggregate. Defaults to '%(function)s(%(distinct)s%(expressions)s)'.

function: A class attribute describing the aggregate function that will be generated. Specifically, the function will be interpolated as the function placeholder within template. Defaults to None.

window_compatible: Defaults to True since most aggregate functions can be used as the source expression in Window.

allow_distinct: A class attribute determining whether or not this aggregate function allows passing a distinct keyword argument. If set to False (default), TypeError is raised if distinct=True is passed.
empty_result_set_value: Defaults to None since most aggregate functions result in NULL when applied to an empty result set.

The expressions positional arguments can include expressions or transforms of the model field. The default argument takes a value that will be passed along with the aggregate to Coalesce. This is useful for specifying a value to be returned other than None when the queryset (or grouping) contains no entries. The **extra kwargs are key=value pairs that can be interpolated into the template attribute.

Support for transforms of the field was added. The default argument was added.

Sometimes database expressions can't easily express a complex WHERE clause. In these edge cases, use the RawSQL expression. For example:

>>> from django.db.models.expressions import RawSQL
>>> queryset.annotate(val=RawSQL("select col from sometable where othercol = %s", (param,)))

These extra lookups may not be portable to different database engines (because you're explicitly writing SQL code) and violate the DRY principle, so you should avoid them if possible.

RawSQL expressions can also be used as the target of __in filters:

>>> queryset.filter(id__in=RawSQL("select id from sometable where col = %s", (param,)))

For Window, the filterable attribute defaults to False. The SQL standard disallows referencing window functions in the WHERE clause, and Django raises an exception when constructing a QuerySet that would do that. partition_by accepts an expression or a sequence of expressions. For ValueRange, the frame_type attribute is set to 'RANGE'. PostgreSQL has limited support for ValueRange and only supports use of the standard start and end points, such as CURRENT ROW and UNBOUNDED FOLLOWING.
>>> from django.db.models import Avg, F, ValueRange, Window
>>> Movie.objects.annotate(
...     avg_rating=Window(
...         expression=Avg('rating'),
...         partition_by=[F('studio'), F('genre')],
...         order_by=F('released').asc(),
...         frame=ValueRange(start=-12, end=12),
...     ),
... )

filterable: Tells Django that this expression can be referenced in QuerySet.filter(). Defaults to True.

window_compatible: Tells Django that this expression can be used as the source expression in Window. Defaults to False.

empty_result_set_value: Tells Django which value should be returned when the expression is used to apply a function over an empty result set. Defaults to NotImplemented, which forces the expression to be computed on the database.

In resolve_expression(), summarize is a boolean that, when True, signals that the query being computed is a terminal aggregate query. for_save is a boolean that, when True, signals that the query being executed is performing a create or update.

get_source_expressions(): Returns an ordered list of inner expressions. For example:

>>> Sum(F('foo')).get_source_expressions()
[F('foo')]

set_source_expressions(): Takes a list of expressions and stores them such that get_source_expressions() can return them.

relabeled_clone(): Returns a clone (copy) of self, with any column aliases relabeled. Column aliases are renamed when subqueries are created. relabeled_clone() should also be called on any nested expressions and assigned to the clone. change_map is a dictionary mapping old aliases to new aliases. Example:

def relabeled_clone(self, change_map):
    clone = copy.copy(self)
    clone.expression = self.expression.relabeled_clone(change_map)
    return clone

convert_value(): A hook allowing the expression to coerce value into a more appropriate type. expression is the same as self.

asc(): Returns the expression ready to be sorted in ascending order. nulls_first and nulls_last define how null values are sorted. See Using F() to sort null values for example usage.

desc(): Returns the expression ready to be sorted in descending order. nulls_first and nulls_last define how null values are sorted. See Using F() to sort null values for example usage.
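The composability described above (F('a') + F('b'), expressions nesting inside aggregates, get_source_expressions() walking the tree) all falls out of one idea: expressions are nodes in a tree that render themselves to SQL text. This toy sketch (not Django's actual implementation) shows the mechanism with operator overloading:

```python
class F:
    """Toy stand-in for django.db.models.F: a reference to a column."""
    def __init__(self, name):
        self.name = name

    def __add__(self, other):
        return Combined(self, '+', other)

    def __mul__(self, other):
        return Combined(self, '*', other)

    def as_sql(self):
        return f'"{self.name}"'


class Combined:
    """A binary arithmetic node combining two expressions."""
    def __init__(self, lhs, op, rhs):
        self.lhs, self.op, self.rhs = lhs, op, rhs

    def as_sql(self):
        # Constants render as literals; expressions render recursively.
        rhs = self.rhs.as_sql() if hasattr(self.rhs, 'as_sql') else str(self.rhs)
        return f'{self.lhs.as_sql()} {self.op} {rhs}'

    def get_source_expressions(self):
        return [self.lhs, self.rhs]


expr = F('num_employees') + F('num_chairs') * 2
print(expr.as_sql())  # "num_employees" + "num_chairs" * 2
```

In real Django the tree also carries output field types and placeholder parameters, but the recursive as_sql walk is the same shape.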
https://django.readthedocs.io/en/stable-4.0.x/ref/models/expressions.html
example explanation - Java Beginners example explanation can i have some explanation regarding...; FileInputStream fis = null; ObjectInputStream in = null; try...(); FileOutputStream fos = null; ObjectOutputStream out = null; try Nested try : java Demo 2 3, it will give output 0 , the try block of method is called which... is converted to int and gives 0. java Demo 2 2, it will give output 1 , the try.... java Demo 3 2, it will give output 0 , the try block of method is called which co - Java Beginners Can you give me some good factory pattern examples? Can you give me some good factory pattern examples? HI, I am looking for some factory pattern examples to learn about them, if you can point me towards some of the examples that would we very helpful. Thanks Hello Give difference between LinkedList and ArrayList - Java Beginners . For more information, visit the following links: difference between LinkedList and ArrayList Hi, What explanation - Java Beginners java application. Hi Friend, Please visit the following links to know about the Garbage collection. I have create small java appication. I don't know about Good tutorials for beginners in Java Good tutorials for beginners in Java Hi, I am beginners in Java... in details about good tutorials for beginners in Java with example? Thanks.  ... the various beginners tutorials related to Java SQL tutorial with examples SQL tutorial with examples Hi, I am looking for good SQL tutorial with examples code. Please give me good urls for SQL tutorial with example code..., Codes and Tutorials page. Above tutorial is for beginners and more circles - Java Beginners more circles Write an application that uses Circle class you created in the previous assignment. ? The program includes a static method... in the object. 
Hi Friend, Try the following code: import examples - Java Beginners input boolean bool=false; do{ try huffman code give the explanation for this code huffman code give the explanation for this code package bitcompress; class Compress extends BitFilter { private boolean encode_flag = false; private BitFilter next_filter = null; private int value; private even more circles - Java Beginners even more circles Write an application that compares two circle objects. ? You need to include a new method equals(Circle c) in Circle class... is printed. Hi Friend, Try the following code: import java.util. Ajax Examples of good examples like Google Suggest, GMail and alike, so I decided to cut a long... Ajax Examples There are a few AJAX demos and examples on the web right now. While (); } } } ------------------------------- read for more information, help me to give code Write a program that reads a file named famous.txt and prints out the line with the longest length. In the case HOW TO BECOME A GOOD PROGRAMMER HOW TO BECOME A GOOD PROGRAMMER I want to know how to become good...: CoreJava Tutorials Here you will get lot of examples with illustration where you can learn java easily and make a command over core java to proceed further. Thanks Can you give few examples of final classes defined in Java API? Can you give few examples of final classes defined in Java API? Hi, Can you give few examples of final classes defined in Java API? thanks please help me to give code - Java Beginners please help me to give code Write a program that prints an n-level...(); } } } ---------------------------------- read for more information, Java examples for beginners of examples. Java examples for beginners: Comparing Two Numbers The tutorial provide you the simple form of examples of Java, which will highlight you the method...In this tutorial, you will be briefed about the word of Java examples, which More than 1 preparedStatement object - Java Beginners java... 
Thanks Hi Friend, You can use more than one prepared Statement object in the java class.Try the following code: import java.sql....More than 1 preparedStatement object Hey but I want to use more than display co-occurrence words in a file display co-occurrence words in a file how to write java program for counting co occurred words in the file Pleae help me to give logic and code for this program - Java Beginners ); } } ----------------------------- read for more information, help me to give logic and code for this program Write examples - Java Beginners examples as am new to java can you please help me with basic programs on java with their examples Hi Friend, Please visit the following link: Hope Java tutorials for beginners with examples Java Video tutorials with examples are being provided to help the beginners..., methods, interface, variables, etc. Java tutorials for beginners with examples... programs are accompanied with Java video that displays Java examples need ENUM examples need ENUM examples i need enum sample examples Hi Friend, Try the following code: public class EnumExample { public enum Languages{ C, Java, DOTNET, PERL } public static void main(String[] args){ int Java try, catch, and finally Java try, catch, and finally The try, catch, and finally keywords are Java keywords... exceptions in Java is achieved through the use of the try and catch blocks. Catch can u plz try this program - Java Beginners can u plz try this program Write a small record management application for a school. Tasks will be Add Record, Edit Record, Delete Record, List.... --------------------- <%@ page language="java Plz give the answers - Java Interview Questions information,Examples and tutorials on java visit to : http...Plz give the answers 1.Computing cos(x) using the cosine series...), is the number itself. 
Write a JAVA program to find all perfect numbers between 2 JSP Tutorial For Beginners With Examples JSP Tutorial For Beginners With Examples In this section we will learn about... presentation logic. For more tutorial/examples you may go through the link http... with an example wherever it is required. JSP also called Java Server Pages is used examples more connectivity examples with different queries from the following links...examples Hi sir...... please send me the some of the examples... questions . Hello Friend, Which type of connectivity examples do you Real time examples - Java Beginners ,constructor overloading concept in java and explain with real time examples? .../java/master-java/method_overloading.shtml Ajax examples Ajax examples Hi, I am Java programmer and I have done programming in Java. Now I am learning ajax from scratch. Tell me the good examples of Ajax. Thanks Hi, Since you have already experience in development more doubts sir. - Java Beginners more doubts sir. Hello sir, Sir i have executed your code... in the bottom of the page.sir i also need to add some more buttons as in internet exoplorer such as search bar and some more buttons.Sir help me try-with-resource Statement The try-with-resource Statement In this section, you will learn about newly added try-with-resource statement in Java SE 7. The try-with-resource... or work is finished. After the release of Java SE 7, the try-with-resource Java Tutorial with examples Java Tutorial with examples What is the good urls of java tutorial with examples on your website? Thanks Hi, We have many java tutorial with examples codes. You can view all these at Java Example Codes Java Code Explanation Java Code Explanation Can you please Explain the following code : import java.util.*; class Difference { public static void main(String args[]) { int n,i,res=0,sum=0,dif give the code for this ques/// give the code for this ques/// write a program in java in which there is a clss readline. 
the functionality of this class is to read a string from... can use some symbol/character as a terminator/// Hi Friend, Try Examples of iText in java. In this tutorial you will know more about the landscape portrait... Examples of iText  ... have a table and we want to give a title to it.   JSP Simple Examples ; Using Protected Access in JSP In java there are three types... access specifiers to be more accessible.  ...; EL and Complex Java Beans Java Beans give the code for this ques/// give the code for this ques/// write a program in java which contains a class simple. this class contains 4 members variables namely a,b,c,and d...// Hi Friend, Try the following code: class Simple{ int a,b,c,d explanation the explanation in internet,but not very clear about it. Thank you Jboss 3.2 EJB Examples this problem for good, we now proceed to develop examples for all the types... explains the method of developing various types of Enterprise Java Beans... in console-mode.. While, the EJB tutorials gave examples of accessing try catch method in java try catch method in java try catch method in java - when and how should i use the try and catch method in Java ? Please visit the following links: http Struts Layout Examples - Struts be clicked on. Im not able to find simple examples/explanation on it. Any help... sending you a link. This link will help you . Visit for more information. http java try catch java try catch try{ return 1; }catch(exception e){ return 2; } finally{ Return 3; } What is the out put if any exception occurred Please give me the code for the below problem - Java Interview Questions Please give me the code for the below problem PROBLEM : SALES TAXES... Vidya Hi Friend, Try the following code: import java.util....; tax.calculateSalesTax(); list.add(tax); no++; System.out.print("Add More Products [y Free PHP Books ; PHP 5 Power Programming In this book, PHP 5's co-creator and two... 
insights and realistic examples illuminate PHP 5's new object model, powerful design patterns, improved XML Web services support, and much more. Whether you're Hi good afternoon Hi good afternoon write a java program that Implement an array ADT with following operations: - a. Insert b. Delete c. Number of elements d. Display all elements e. Is Empty try and finally block try and finally block hello, If I write System.exit (0); at the end of the try block, will the finally block still execute? hii, if we use System.exit (0); statement any where in our java program Java guide for beginners and understand it completely. Here is more tutorials for Java coding for beginners...Java guide provided at RoseIndia for beginners is considered best to learn... topic, Videos with voice overs on Java programs and hundreds of Java examples give idea give idea how to plot graph in java Please visit the following link: JSP Simple Examples We can have more than one try/catch block. The most specific... JSP Simple Examples Index 1. Creating... in a java. In jsp we can declare it inside the declaration directive java: try finally blocks execution java: try finally blocks execution java: try finally blocks execution try Java Keyword try Java Keyword The try is a keyword defined in the java programming language. Keywords... : -- The try keyword in java programming language is used to wrap the code in a block Good Looking Java Charts and Graphs Good Looking Java Charts and Graphs Is there a java chart library that will generate charts and graphs with the quality of visifire or fusion charts? The JFreeChart graph quality is not professional looking. Unless it can Nested Try-Catch Blocks Nested Try-Catch Blocks In Java we can have nested try and catch blocks. It means that, a try statement can be inside the block of another try. 
If an inner try sir, please give me answer - Java Beginners hello sir, please give me answer Write a program in Java that calculates the sum of digits of an input number, prints... ways in java? so , sir please tell me full solution of this program Here is your complete Extending thread - Java Beginners ("DONE! " + getName()); } } For more information,Tutorials and Examples...Extending thread what is a thread & give me the programm of exeucte...()); try { sleep((int)(Math.random() * 1000 Java Thread - Java Beginners are the links where you can find very good examples of wait(), notify(), currentThread... and simple examples of "Multithreading". 1. Thread hii i feel confusion in tread. i want to know about 1 again java - Java Beginners is stored in database it is not the good idea. my requirement is the image is stored... = null; PreparedStatement psmnt = null; try { Class.forName...://localhost:8080/examples/page.jsp [Here examples is out web-application folder Java Exercises for Beginners database connectivity Java Methods Java Examples More Java Exercises... for the beginners of Java tutorials. The tutorial highlights the facts, which... beginners and these exercises will help you in doing so. Java Basics Java JSF Examples JSF Examples In this section we will discuss about various examples of JSF. This section describes some important examples of JSF which will help you... examples, I have tried to list these examples in a sequence that will help you Nested try catch be written in the try block. If the exceptions occurs at that particular block then it will be catch by the catch block. We can have more than one try/catch...Nested try catch   give the code for servlets session give the code for servlets session k give the code of total sample examples of servlet session Flex Examples the for each loop in other languages like C#, Java etc. For more Examples...Flex Examples In this section we will see some examples in Flex. This section... 
the various examples in Flex which will help you in to understand that how Multiple try catch be written in the try block. If the exceptions occurs at that particular block then it will be catch by the catch block. We can have more than one try...Multiple try catch   Java Util Examples List Java Util Examples List - Util Tutorials  ... examples that demonstrate the syntax and example code of java util package.... Java remove() In this section, you will get the detailed explanation about Java Program - Java Beginners for the change. Hi friend, Please give full details and source code to solve the problem For more information,Tutorials and examples on Java visit...Java Program Hi! pls. help me to solve this problem.........Allow Struts 2 tutorial for beginners with examples Struts 2 tutorial for beginners with examples Where is the Struts 2 tutorial for beginners with examples on your website. Thanks Hi... for beginners with examples Thanks Java Exceptions Tutorials With Examples Java Exceptions Tutorials With Examples  ... in Java is to use try and catch block. How.... Nested Try-Catch Blocks In Java we how to give link from jsp to jsp page how to give link from jsp to jsp page hi this is my following code... file is Modify but here i have to give modifyUser.jsp file but i don't khow how... ="root"; String password="root"; int sumcount=0; Statement st; try jQuery - jQuery Tutorials and examples jQuery - jQuery Tutorials and examples The largest collection of jQuery examples... piece of code that provides very good support for ajax. jQuery can be used JPA Examples In Eclipse Subquery and many more. So, let's get started with JPA Examples using Eclipse IDE... JPA Examples In Eclipse In the JPA Examples section we will provide you almost all Java beginners tutorial The beginners tutorial in Java helps you learn the basic of Java... these complete online class of Java for the beginners. The Java professional... can take help of these examples and even post your query. 
Java professionals Java - Java Beginners friend, Plz give full details and source code where you having the problem. For more information,Tutorials and Examples on Excel in JSP visit plz give me answer plz give me answer Discuss & Give brief description about string class methods Java string methods php video tutorial for beginners with examples php video tutorial for beginners with examples php video tutorial for beginners with examples PHP: Hypertext Preprocessor PHP is an open source server side scripting language. One can use PHP to create dynamic web Learn Hibernate programming with Examples with the examples detailing the use of Hibernate API to solve the persistence problems In this tutorial I will provide you the examples of Hibernate API which teaches you... in creating the database and tables in the database You have good Java Final Project - Java Beginners not try and do more than you are capable of. Think, plan, design and code YOUR... in the Message Area at the bottom. The Customer Display will give a brief explanation...Java Final Project I am having a real hard time with this java plz give me answer plz give me answer description about string class methods Java string methods Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/1251
For the learners: you should know that doing something like what the setup for this challenge inclines you to do is a bad practice. Never do the following:

def f():
    if condition:
        return True
    else:
        return False

This is just dumb. You are returning a boolean, so why even use if blocks in the first place? The correct way of doing this would be:

def f():
    return condition

Because this already evaluates as a boolean. So in this challenge, forget about ifs and elses, and that leap variable, and just do the following:

def is_leap(year):
    return year % 4 == 0 and (year % 400 == 0 or year % 100 != 0)

Don't be redundant, be DRY.

The problem's explanation is not clear. I am not an English speaker and the bullet points were not clear enough for me. Can that be improved? A suggested explanation is below. In the Gregorian calendar, three criteria must be taken into account to identify leap years:

- The year can be evenly divided by 4;
- If the year can be evenly divided by 100, it is not a leap year, unless
- the year is also evenly divisible by 400, in which case it is a leap year.

Method 1

def is_leap(year):
    return year % 4 == 0 and (year % 400 == 0 or year % 100 != 0)

Method 2

def is_leap(y):
    return (y % 400 == 0) or (y % 100 != 0 and y % 4 == 0)

Method 3

def is_leap(year):
    leap = False
    if year % 4 == 0:
        leap = True
    if year % 100 == 0:
        leap = False
    if year % 400 == 0:
        leap = True
    return leap
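The three methods above are equivalent; a quick sketch that cross-checks them against each other over a range of years (which includes the known leap years 1996 and 2000 and the non-leap years 1900 and 2019):

```python
def is_leap_a(year):
    return year % 4 == 0 and (year % 400 == 0 or year % 100 != 0)

def is_leap_b(y):
    return (y % 400 == 0) or (y % 100 != 0 and y % 4 == 0)

def is_leap_c(year):
    # The step-by-step version: later rules override earlier ones.
    leap = False
    if year % 4 == 0:
        leap = True
    if year % 100 == 0:
        leap = False
    if year % 400 == 0:
        leap = True
    return leap

# All three agree on every year in the range.
assert all(is_leap_a(y) == is_leap_b(y) == is_leap_c(y)
           for y in range(1800, 2401))
print(is_leap_a(2000), is_leap_a(1900))  # True False
```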
https://www.hackerrank.com/challenges/write-a-function/forum
04 March 2009 18:17 [Source: ICIS news]

By Mailini Hariharan

MUMBAI (ICIS news)--Delays to project schedules are commonplace in the Middle East. The industry is still waiting for projects due in 2008 to be commissioned. Indeed, companies have struggled since 2005 to execute plans in the midst of rapidly rising construction costs and a severe shortage of skilled manpower.

But there appears to be some relief in sight for those that intend to build refinery and petrochemical plants in the region. The economic crisis has spread to this part of the world, slowing down construction activity and easing the pressure on materials and manpower. As a result, companies are now willingly deferring schedules to take full advantage of falling costs.

Negotiations have already started with engineering and construction (E&C) companies for a 15-20% reduction in construction costs. Talks are getting tougher this year as clients have become more demanding, admitted one E&C source.

The trend was set by Saudi Aramco late last year when it decided to renegotiate contracts for development of its Manifa oilfield. This was followed by delays in the bidding process for its joint-venture refinery projects with Total in Jubail and ConocoPhillips in Yanbu.

The entire process of renegotiating or re-tendering contracts can easily result in a six-month delay to schedules. But companies are not too worried, as any reduction in costs will only help improve project economics overall.

E&C sources protest that it is still too early to look for a 20% reduction. There has certainly been a softening in the cost of steel and cement, but it is not to the extent that allows for such a heavy cut, they say. Additionally, equipment costs have yet to ease as workshops still have full order books. But there are not many who dispute that costs are heading south and that the 20% figure would be achievable in just a few more months.
It might be tempting for companies to wait longer in expectation of even lower numbers. The road to global economic recovery promises to be long, and the daily dose of bad news has yet to stop. But this could be risky, believes one E&C source, as the situation could easily change with the award of a few big contracts by companies such as Aramco.

"If you believe your project is good, then do it now. Don't waste time; enjoy this period of low prices," he advises.

But while costs are getting lower, finding money is still a major headache, and this could be another reason to look for postponements. Project financing started getting difficult from the second quarter of 2008 and the situation has not eased since then. The number of banks lending in the region has fallen. International banks are no longer queuing up for cross-border financing, and those that are still active are scrutinising projects closely.

When they do lend, banks are demanding a higher spread, and this is justified by tight liquidity and heightened credit risk, said a source from a regional bank. And if it is any consolation, it is a difficult time for all industries. "Any project coming to market faces a tough sell. Plus lenders may not want historical debt equity ratios. They will be asking for more equity," says a source from another regional bank.

Putting together finances for mega projects such as the Aramco-Dow Chemical joint venture at Ras Tanura will be extremely challenging. Front end engineering and design (FEED) preparatory work for Ras Tanura kicked off in early January, according to a source close to the project. The end of FEED and tendering of engineering, procurement and construction (EPC) contracts is not likely before early next year.

There are many who doubt if Ras Tanura will be able to maintain its 2012-14 completion schedule given the problems at Dow. Rumours abound in the region of Aramco increasing its stake in the venture or another company stepping in to take the project forward.
According to a recent local media report, Aramco and Dow are planning to drop the Royal Bank of Scotland (RBS), the lead financier for the project, as the bank is facing a severe cash crunch.

Ras Tanura might be delayed, but the backing of Aramco and the Saudi government should help the project, as lenders are still supportive of ventures with state sponsors.

There are now two sets of projects in the region, according to a source from an engineering company. “The first set is of projects where financing can’t be done. So these are dead. The second category is of those that can be financed and where the plants are needed. But here clients are waiting for prices to fall,” he explained.

It is probably time to factor in delays of a year or more.
http://www.icis.com/Articles/2009/03/04/9197659/INSIGHT-Middle-East-project-delays-set-to-continue.html
This tutorial will give an overview of how to use JSON data and messages. Converting to and from JSON and translating to and from GIS will be covered, as well as how to parse and update JSON. The exercises in this tutorial will require FME Desktop version 2017.0 or higher in order to leverage recent improvements in JSON support.

JSON is a common data interchange format that has become one of the leading choices for supporting web sites and mobile device applications. JSON is more lightweight than XML and does not have a separate schema document (XSD), making it an ideal messaging and exchange format. Complex schemas are one aspect that makes XML challenging to work with. JSON is easier to read than XML, which makes it user-friendly. For example, it does not have namespaces the way XML does. Since JSON doesn’t reference an external schema, it is easier to update and store data without restrictions. On the other hand, there is no automatic way to validate JSON, which is one of the reasons why XML/GML is chosen for standards-based exchange formats.

JSON is the data format of NoSQL databases, another database option in addition to relational databases such as Oracle, MySQL, PostgreSQL, MS SQL Server and others. JSON is an encoding mechanism for describing structured data, such as that used for messaging and web services. With JSON, it is possible to create a highly functional yet fast web application, for example web messaging on social media sites. Many websites provide web feeds, some with coordinate information, some without. These feeds are easy to import and use on the server side.

- JSON - JavaScript Object Notation
- XML - Extensible Markup Language
- Array - a series of items: [0,1,2,3,4]
- Object - a unit of related values in a data structure: {“name” : “Bob”}
- JQuery - the path to extract a feature from the JSON data structure

Relational databases have a schema and store attributes that are divided into flat database tables.
On the other hand, JSON is highly nested, which means that in comparison to relational data models, values are structured as nested objects instead of tables. JSON values can be an array, an object or a primitive (strings or numbers). A big challenge with JSON data translation to / from GIS is the fundamental differences in data modelling. GIS typically has relational structures, while JSON typically is object oriented. Practically speaking, this means that JSON usually has a highly nested tree structure, and contains series / list features and multiple geometries. So it may take several tables and rows to model a given JSON object in a relational system. FME can model complex data objects, that in JSON are called by different names. While FME can convert any JSON object into an FME feature, the specific file structure of JSON has to be taken into consideration. The first example in the tutorial will use the JSON reader to read in a file with Data Inspector (DI).

JSON text is either an object with syntax { "a" : 5 }, or an array with syntax [1,2,3]. JSON is a nested format that contains more properties the deeper we go into a file. For example, in the code below the name : "SFO" of the type : "Airport" is nested one level below name : "JSONFeature" with type : "FeatureCollection". The features are listed in an array with geometry and properties as types.

{
  "name" : "JSONFeature",
  "type" : "FeatureCollection",
  "features" : [
    {
      "ID" : "1001",
      "geometry" : {
        "type" : "Point",
        "coordinates" : [ -122.4194155, 37.7749295 ]
      },
      "properties" : {
        "name" : "SFO",
        "type" : "Airport"
      }
    }
  ]
}

JSON object example

This JSON object has a structure or schema and can also be represented in a table when queried from the features array:

ID   | name | type    | geometry
1001 | SFO  | Airport | Point (-122.4194155, 37.7749295)

JSON object example represented in a table

In order to read JSON we need to tell FME which nested objects should become features. The FME approach to JSON reading offers two modes for doing this in the JSON Reader: Auto and JSON Query.
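To make the extraction and flattening ideas concrete before looking at the two reader modes, here is how the airport object above could be processed in plain Python. This is an illustration of the concept only, not FME's implementation:

```python
import json

doc = json.loads("""
{ "name": "JSONFeature",
  "type": "FeatureCollection",
  "features": [
    { "ID": "1001",
      "geometry": { "type": "Point",
                    "coordinates": [-122.4194155, 37.7749295] },
      "properties": { "name": "SFO", "type": "Airport" } } ] }
""")

def flatten(obj, prefix=""):
    """Flatten nested objects into parent.child attribute names."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            # Arrays (such as coordinates) stay as lists.
            flat[name] = value
    return flat

# "Query": treat each element of the "features" array as one feature.
features = [flatten(f) for f in doc["features"]]
print(features[0]["properties.name"])        # SFO
print(features[0]["geometry.coordinates"])
```

Note how the nested geometry and properties objects become flat `parent.child` attributes, which is the relational-style record a GIS feature expects.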
Auto mode works fine for simpler, relatively flat JSON structures. You can always try the Auto mode first to see if FME can extract features from your JSON automatically. JSON Query offers a more powerful mode that allows you to specify a filter query which extracts feature data from more complex nested structures. The challenge is that the JSON query syntax can be a bit tricky to construct. To help build these JSON query extraction expressions we added the tree selection tool. This can be accessed in the JSON reader settings within the "JSON query for feature object" parameter. The user can select a portion of the tree by clicking in the nested data structure. In the case above, we could simply select the "properties" node in the tree to get the SFO Airport data and ignore the parent data above.

Example of the JSON Reader Tree View

Once we know which objects should become features, we have the option to flatten any nested properties within those objects into feature attributes and geometry for a more relational structure typical of GIS. Flattening typically involves taking nested parent-child JSON objects and creating parent.child field names on the FME features. Parent/child ids can also be recorded to support associations between features. Arrays become lists unless part of a geometry.

There are other ways to process JSON within a workspace. For example, the JSONFlattener transformer can flatten JSON objects, extract the object keys and values into FME feature attributes (JSON category) for one feature. Fragmenting is used for extracting portions of JSON formatted text into multiple new FME features using the JSONFragmenter. These transformers are often used to support messaging applications, which are more dynamic in nature than a file translation workspace.

To go the other way and write FME features to a JSON document, we use the JSONTemplater transformer to merge FME attributes into the required JSON structure.
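The templating direction — merging flat, relational-style attributes back into a nested JSON document — can also be sketched in plain Python. Again, this is only an illustration of the idea; the JSONTemplater uses its own template language:

```python
import json

# One flat FME-style feature: attribute names encode the nesting.
feature = {
    "ID": "1001",
    "properties.name": "SFO",
    "properties.type": "Airport",
    "geometry.type": "Point",
    "geometry.coordinates": [-122.4194155, 37.7749295],
}

def unflatten(flat):
    """Rebuild nested objects from parent.child attribute names."""
    nested = {}
    for name, value in flat.items():
        parts = name.split(".")
        node = nested
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested

# Assemble the full document around the rebuilt feature.
document = {"name": "JSONFeature",
            "type": "FeatureCollection",
            "features": [unflatten(feature)]}
print(json.dumps(document, indent=2))
```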
Each feature type typically is written into its own sub-template. Once the complete JSON document is generated, the Text File Writer is then used to write it to a file. Note that we don't typically use the FME JSON writer unless we want to generate a very simple flat JSON structure with no nesting. So in short, FME covers the full scope of JSON data transformations, both from JSON to GIS, and from GIS back to JSON.

This overview covered the background to JSON and its challenges. The FME approach to JSON included the terminology and airport object example. The following article contains an example that shows how JSON can be used to feed messages to users who want to be updated on the weather and status at US airports.

- JSON Reader Configuration
- Converting from JSON to a spatial format (GIS)
- JSON Writing with JSONTemplater
- JSON Transformations: Extraction with JSONExtractor, JSONFlattener and JSONFragmenter
- Connect to Anything: Web Services, FME, and JSON
- Box.com - 2. Download a File
- Create Three.js from SketchUp
- Box.com - 1. Get Content Information for a Folder
- Flood Notification Scenario
https://knowledge.safe.com/articles/39188/tutorial-getting-started-with-json.html
Good evening,

I’m starting to study and use the Nengo simulator and I have a question about the spiking activity of neurons. Can I know when a neuron (presynaptic or postsynaptic) spikes? I’ve tried to understand the spikes2events method of the utils/neurons.py library, but I cannot understand which parameters I have to give, or whether there is another method that would solve my problem.

If it’s possible to know the temporal activity of each neuron, is it possible to use it in order to implement a new model of STDP?

P.S.: Sorry if there is another topic with this issue; I have not found it.

Thank you in advance, kind regards and happy new year,
Emanuele

> I cannot understand which parameters I have to give or if there is another method in order to solve my problem.

Yeah, that could be documented a bit better. Here’s an example:

import nengo
from nengo.utils.neurons import spikes2events
import numpy as np

with nengo.Network() as model:
    sin_input = nengo.Node(np.sin)
    sin_ens = nengo.Ensemble(100, 1)
    nengo.Connection(sin_input, sin_ens)
    spikes_probe = nengo.Probe(sin_ens.neurons, synapse=None)

with nengo.Simulator(model) as sim:
    sim.run(5.0)

spikes = sim.data[spikes_probe]
spike_times = spikes2events(sim.trange(), spikes)

# print the first 10 spike times of the first neuron
print(spike_times[0][:10])

If you’re looking to implement a custom learning rule, you should check out my example using a nengo.Node.
Thank you @Seanny123! Rather than open another topic, I reply here in the hope of some more help, if possible. Of course, you can close this topic or move it to a new one.

I have to implement this kind of learning rule (Arena et al., 2009) where:

I’m looking at your repository and it has been very helpful! In any case, I have to manage the presynaptic and postsynaptic spike times in order to update the weights, according to (1). How can I do that? Are there some attributes that I can use?

If it can be useful, this is the class I’ve implemented, but I have some problems implementing the class’s function properly.

class STDP(object):
    def __init__(self, tau=0.005, tau_plus=0.020, tau_minus=0.010,
                 learning_rate=1e-6, in_neurons=1, out_neurons=1,
                 A_plus=0.2, A_minus=-0.2, sample_every=0.1,
                 start_weights=weights):
        self.up_weights = start_weights.copy()
        # Parameters of the synapse
        self.A_plus = A_plus
        self.A_minus = A_minus
        self.tau_plus = tau_plus
        self.tau_minus = tau_minus
        self.tau = tau
        self.in_nrns = in_neurons
        # Impulse response of the synapse
        self.epsilon = np.exp(1)*nengo.Lowpass(tau).make_step(in_neurons, in_neurons, dt, None)
        self.weight_history = []

I’m really sick right now, so I won’t be able to give you a detailed answer, but I don’t think you actually need to manage the spike times, because of how synaptic filtering works in the NEF. @tbekolay will be able to tell you more about this if he has time.
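For readers following the thread: the pair-based STDP update being discussed — potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, with exponential windows — can be sketched in plain Python. The parameter names mirror the class above (A_plus, A_minus, tau_plus, tau_minus); this is an illustrative sketch, not Nengo's API:

```python
import math

def stdp_dw(t_pre, t_post, A_plus=0.2, A_minus=-0.2,
            tau_plus=0.020, tau_minus=0.010):
    """Weight change for a single pre/post spike pair.

    dt > 0 means the presynaptic spike arrived first (causal pairing,
    potentiation); dt < 0 means it arrived second (depression).
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return A_minus * math.exp(dt / tau_minus)
    return 0.0

def apply_stdp(w, pre_times, post_times, lr=1e-6, **kw):
    """Accumulate the pairwise updates into a single weight."""
    for t_pre in pre_times:
        for t_post in post_times:
            w += lr * stdp_dw(t_pre, t_post, **kw)
    return w
```

With spike times extracted via spikes2events as in the earlier reply, causal pairs strengthen the weight and anti-causal pairs weaken it.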
https://forum.nengo.ai/t/spike-time-of-neurons/452
A Beginner's Introduction to POE
by Jeffrey Goff
January 17, 2001
By Dennis Taylor, with Jeff Goff

What Is POE, And Why Should I Use It?

Most.

POE Design:

- States - The basic building block of the POE program is the state, which is a piece of code that gets executed when some event occurs -- when incoming data arrives, for instance, or when a session runs out of things to do, or when one session sends a message to another. Everything in POE is based around receiving and handling these events.

- The Kernel - POE's kernel is much like an operating system's kernel: it keeps track of all your processes and data behind the scenes, and schedules when each piece of your code gets to run. You can use the kernel to set alarms for your POE processes, queue up states that you want to run, and perform various other low-level services, but most of the time you don't interact with it directly.

- Sessions - Sessions are the POE equivalent to processes in a real operating system. A session is just a POE program which switches from state to state as it runs. It can create ``child'' sessions, send POE events to other sessions, and so on. Each session can store session-specific data in a hash called the heap, which is accessible from every state in that session. POE has a very simple cooperative multitasking model; every session executes in the same OS process without threads or forking. For this reason, you should beware of using blocking system calls in POE programs.

Those are the basic pieces of the Perl Object Environment, although there are a few slightly more advanced parts that we ought to explain before we go on to the actual code:

- Drivers - Drivers are the lowest level of POE's I/O layer. Currently, there's only one driver included with the POE distribution -- POE::Driver::SysRW, which reads and writes data from a filehandle -- so there's not much to say about them. You'll never actually use a driver directly, anyhow.
- Filters - Filters, on the other hand, are inordinately useful. A filter is a simple interface for converting chunks of formatted data into another format. For example, POE::Filter::HTTPD converts HTTP 1.0 requests into HTTP::Request objects and back, and POE::Filter::Line converts a raw stream of data into a series of lines (much like Perl's <> operator).

- Wheels - Wheels contain reusable pieces of high-level logic for accomplishing everyday tasks. They're the POE way to encapsulate useful code. Common things you'll do with wheels in POE include handling event-driven input and output and easily creating network connections. Wheels often use Filters and Drivers to massage and send off data. I know this is a vague description, but the code below will provide some concrete examples.

- Components - A Component is a session that's designed to be controlled by other sessions. Your sessions can issue commands to and receive events from them, much like processes communicating via IPC in a real operating system. Some examples of Components include POE::Component::IRC, an interface for creating POE-based IRC clients, or POE::Component::Client::HTTP, an event-driven HTTP user agent in Perl. We won't be using any Components in this article, but they're a very useful part of POE nevertheless.

A Simple Example

For this simple example, we're going to make a server daemon which accepts TCP connections and prints the answers to simple arithmetic problems posed by its clients. When someone connects to it on port 31008, it will print ``Hello, client!''. The client can then send it an arithmetic expression, terminated by a newline (such as ``6 + 3\n'' or ``50 / (7 - 2)\n''), and the server will send back the answer. Easy enough, right?

Writing such a program in POE isn't terribly different from the traditional method of writing daemons in Unix. We'll have a server session which listens for incoming TCP connections on port 31008.
Each time a connection arrives, it'll create a new child session to handle the connection. Each child session will interact with the user, and then quietly die when the connection is closed. And best of all, it'll only take 74 lines of modular, simple Perl. The program begins innocently enough:

 1 #!/usr/bin/perl -w
 2 use strict;
 3 use Socket qw(inet_ntoa);
 4 use POE qw( Wheel::SocketFactory Wheel::ReadWrite
 5             Filter::Line Driver::SysRW );
 6 use constant PORT => 31008;

Here, we import the modules and functions which the script will use, and define a constant value for the listening port. The odd-looking qw() statement after the ``use POE'' is just POE's shorthand way for pulling in a lot of POE:: modules at once. It's equivalent to the more verbose:

use POE;
use POE::Wheel::SocketFactory;
use POE::Wheel::ReadWrite;
use POE::Filter::Line;
use POE::Driver::SysRW;

Now for a truly cool part:

 7 new POE::Session (
 8   _start => \&server_start,
 9   _stop  => \&server_stop,
10 );
11 $poe_kernel->run();
12 exit;

That's the entire program! We set up the main server session, tell the POE kernel to start processing events, and then exit when it's done. (The kernel is considered ``done'' when it has no more sessions left to manage, but since we're going to put the server session in an infinite loop, it'll never actually exit that way in this script.) POE automatically exports the $poe_kernel variable into your namespace when you write ``use POE;''.

The new POE::Session call needs a word of explanation. When you create a session, you give the kernel a list of the events it will accept. In the code above, we're saying that the new session will handle the _start and _stop events by calling the &server_start and &server_stop functions. Any other events which this session receives will be ignored.
_start and _stop are special events to a POE session: the _start state is the first thing the session executes when it's created, and the session is put into the _stop state by the kernel when it's about to be destroyed. Basically, they're a constructor and a destructor.

Now that we've written the entire program, we have to write the code for the states which our sessions will execute while it runs. Let's start with (appropriately enough) &server_start, which is called when the main server session is created at the beginning of the program:

13 sub server_start {
14   $_[HEAP]->{listener} = new POE::Wheel::SocketFactory
15     ( BindPort     => PORT,
16       Reuse        => 'yes',
17       SuccessState => \&accept_new_client,
18       FailureState => \&accept_failed
19     );
20   print "SERVER: Started listening on port ", PORT, ".\n";
21 }

This is a good example of a POE state. First things first: Note the variable called $_[HEAP]? POE has a special way of passing arguments around. The @_ array is packed with lots of extra arguments -- a reference to the current kernel and session, the state name, a reference to the heap, and other goodies. To access them, you index the @_ array with various special constants which POE exports, such as HEAP, SESSION, KERNEL, STATE, and ARG0 through ARG9 to access the state's user-supplied arguments. Like most design decisions in POE, the point of this scheme is to maximize backwards compatibility without sacrificing speed.

The example above is storing a SocketFactory wheel in the heap under the key 'listener'. The POE::Wheel::SocketFactory wheel is one of the coolest things about POE. You can use it to create any sort of stream socket (sorry, no UDP sockets yet) without worrying about the details. The statement above will create a SocketFactory that listens on the specified TCP port (with the SO_REUSE option set) for new connections.
When a connection is established, it will call the &accept_new_client state to pass on the new client socket; if something goes wrong, it'll call the &accept_failed state instead to let us handle the error. That's all there is to networking in POE! We store the wheel in the heap to keep Perl from accidentally garbage-collecting it at the end of the state -- this way, it's persistent across all states in this session.

Now, onto the &server_stop state:

22 sub server_stop {
23   print "SERVER: Stopped.\n";
24 }

Not much to it. I just put this state here to illustrate the flow of the program when you run it. We could just as easily have had no _stop state for the session at all, but it's more instructive (and easier to debug) this way.

Here's where we create new sessions to handle each incoming connection:

25 sub accept_new_client {
26   my ($socket, $peeraddr, $peerport) = @_[ARG0 .. ARG2];
27   $peeraddr = inet_ntoa($peeraddr);
28   new POE::Session (
29     _start => \&child_start,
30     _stop  => \&child_stop,
31     main   => [ 'child_input', 'child_done', 'child_error' ],
32     [ $socket, $peeraddr, $peerport ]
33   );
34   print "SERVER: Got connection from $peeraddr:$peerport.\n";
35 }

Our POE::Wheel::SocketFactory will call this subroutine whenever it successfully establishes a connection to a client. We convert the socket's address into a human-readable IP address (line 27) and then set up a new session which will talk to the client. It's somewhat similar to the previous POE::Session constructor we've seen, but a couple things bear explaining:

@_[ARG0 .. ARG2] is shorthand for ($_[ARG0], $_[ARG1], $_[ARG2]). You'll see array slices used like this a lot in POE programs.

What does line 31 mean? It's not like any other event_name => state pair that we've seen yet. Actually, it's another clever abbreviation. If we were to write it out the long way, it would be:

new POE::Session (
  ...
  child_input => \&main::child_input,
  child_done  => \&main::child_done,
  child_error => \&main::child_error,
  ...
);

It's a handy way to write out a lot of state names when the state name is the same as the event name -- you just pass a package name or object as the key, and an array reference full of subroutine or method names, and POE will just do the right thing. See the POE::Session docs for more useful tricks like that.

Finally, the array reference at the end of the POE::Session constructor's argument list (on line 32) is the list of arguments which we're going to manually supply to the session's _start state.

If the POE::Wheel::SocketFactory had problems creating the listening socket or accepting a connection, this happens:

36 sub accept_failed {
37   my ($function, $error) = @_[ARG0, ARG2];
38   delete $_[HEAP]->{listener};
39   print "SERVER: call to $function() failed: $error.\n";
40 }

Printing the error message is normal enough, but why do we delete the SocketFactory wheel from the heap? The answer lies in the way POE manages session resources. Each session is considered ``alive'' so long as it has some way of generating or receiving events. If it has no wheels and no aliases (a nifty POE feature which we won't cover in this article), the POE kernel realizes that the session is dead and garbage-collects it. The only way the server session can get events is from its SocketFactory wheel -- if that's destroyed, the POE kernel will wait until all its child sessions have finished, and then garbage-collect the session. At this point, since there are no remaining sessions to execute, the POE kernel will run out of things to do and exit. So, basically, this is just the normal way of getting rid of unwanted POE sessions: dispose of all the session's resources and let the kernel clean up.
Now, onto the details of the child sessions:

41 sub child_start {
42   my ($heap, $socket) = @_[HEAP, ARG0];
43   $heap->{readwrite} = new POE::Wheel::ReadWrite
44     ( Handle     => $socket,
45       Driver     => new POE::Driver::SysRW (),
46       Filter     => new POE::Filter::Line (),
47       InputState => 'child_input',
48       ErrorState => 'child_error',
49     );
50   $heap->{readwrite}->put( "Hello, client!" );
51   $heap->{peername} = join ':', @_[ARG1, ARG2];
52   print "CHILD: Connected to $heap->{peername}.\n";
53 }

This gets called every time a new child session is created to handle a newly connected client. We'll introduce a new sort of POE wheel here: the ReadWrite wheel, which is an event-driven way to handle I/O tasks. We pass it a filehandle, a driver which it'll use for I/O calls, and a filter that it'll munge incoming and outgoing data with (in this case, turning a raw stream of socket data into separate lines and vice versa). In return, the wheel will send this session a child_input event whenever new data arrives on the filehandle, and a child_error event if any errors occur.

We immediately use the new wheel to output the string ``Hello, client!'' to the socket. (When you try out the code, note that the POE::Filter::Line filter takes care of adding a line terminator to the string for us.) Finally, we store the address and port of the client in the heap, and print a success message.

We will omit discussion of the child_stop state, since it's only one line long. Now for the real meat of the program: the child_input state!

57 sub child_input {
58   my $data = $_[ARG0];
59   $data =~ tr{0-9+*/()-}{}cd;
60   return unless length $data;
61   my $result = eval $data;
62   chomp $@;
63   $_[HEAP]->{readwrite}->put( $@ || $result );
64   print "CHILD: Got input from peer: \"$data\" = $result.\n";
65 }

When the client sends us a line of data, we strip it down to a simple arithmetic expression and eval it, sending either the result or an error message back to the client.
Normally, passing untrusted user data straight to eval() is a horribly dangerous thing to do, so we have to make sure we remove every non-arithmetic character from the string before it's evaled (line 59). The child session will happily keep accepting new data until the client closes the connection. Run the code yourself and give it a try!

The child_done and child_error states should be fairly self-explanatory by now -- they each delete the child session's ReadWrite wheel, thus causing the session to be garbage-collected, and print an expository message explaining what happened. Easy enough.

That's All For Today

And that's all there is to it! The longest subroutine in the entire program is only 12 lines, and all the complicated parts of the server-writing process have been offloaded to POE. Now, you could make the argument that it could be done more easily as a procedural-style program, like the examples in man perlipc. For a simple example program like this, that would probably be true. But the beauty of POE is that, as your program scales, it stays easy to modify. It's easier to organize your program into discrete elements, and POE will provide all the features you would otherwise have had to hackishly reinvent yourself when the need arose.

So give POE a try on your next project. Anything that would ordinarily use an event loop would be a good place to start using POE. Have fun!

Related Links

- The POE home page. All good things stem from here.
http://www.perl.com/pub/a/2001/01/poe.html
Using GoLang to manage secrets in AWS

You want to keep your secrets safe, don’t you? But how do you go about this in the cloud? And how do you do it without having to deploy and maintain complex infrastructure?

Why?

There are a number of awesome solutions for managing secrets available, but they tend to require you to deploy and maintain servers to host them. How about a solution that does not require additional infrastructure? This is where AWS comes in handy. It provides the KMS (Key Management Service) for encrypting your secrets and the S3 service for storing these encrypted secrets.

What about interacting with these services? Maybe you need to support multiple operating systems? GoLang works beautifully in this situation as it can build binaries, which contain all dependencies, for each platform.

So. Let’s do this thing!

Getting set up

Before you start taking a look at the GoLang code, you need to set up the AWS services first. You are going to need a KMS key. This can be created using the AWS console by following these steps. Keep a note of the KMS key ARN as you will need it for the encryption method. Next, create an S3 bucket. Follow these steps if you don’t already know how.

Now you have your key and bucket, you need to add the below imports to your GoLang project. These are required to use the AWS GoLang SDK.

import (
    "os"
    "io/ioutil"
    "strings"
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/kms"
    "github.com/aws/aws-sdk-go/service/s3"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

How do you encrypt and upload your secrets?

Ok. Code time. The method below uses the key you created to encrypt the secret and save it to a temporary file. The encrypt method returns the name of a file that holds your encrypted secret. You can now upload this secret file to your S3 bucket using the below. Now you have your encrypted secret stored in S3, you need to do a little more to be able to access it.
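The original post embedded the code as gists, which are not preserved in this text. A reconstruction of the encrypt and upload steps might look like the following. Function and variable names are my own, it assumes aws-sdk-go v1, and it needs valid AWS credentials at run time — treat it as a sketch, not the author's exact code:

```go
package main

import (
	"io/ioutil"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// encrypt uses the KMS key to encrypt the secret and writes the
// ciphertext to a temporary file, returning the file's name.
func encrypt(sess *session.Session, keyArn, secret string) (string, error) {
	svc := kms.New(sess)
	out, err := svc.Encrypt(&kms.EncryptInput{
		KeyId:     aws.String(keyArn),
		Plaintext: []byte(secret),
	})
	if err != nil {
		return "", err
	}
	f, err := ioutil.TempFile("", "secret")
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := f.Write(out.CiphertextBlob); err != nil {
		return "", err
	}
	return f.Name(), nil
}

// upload pushes the encrypted file into the S3 bucket.
func upload(sess *session.Session, bucket, key, file string) error {
	f, err := os.Open(file)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = s3manager.NewUploader(sess).Upload(&s3manager.UploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   f,
	})
	return err
}
```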
How do you decrypt and download your secrets?

Let’s download your encrypted secret. You have your encrypted file. Now you want to extract your secret. Let’s pass the name of the encrypted file to a decryption method.

So, what’s next?

The above methods manage your secrets. Wrap them up in a GoLang CLI for ease of use. Remember to take extra time in naming your application, because that’s half the fun.
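Again reconstructing the missing gists, the download and decrypt steps might look like this (names are illustrative; aws-sdk-go v1 and live AWS credentials are assumed):

```go
package main

import (
	"io/ioutil"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// download fetches the encrypted object from S3 into a temporary file
// and returns the file's name.
func download(sess *session.Session, bucket, key string) (string, error) {
	f, err := ioutil.TempFile("", "encrypted")
	if err != nil {
		return "", err
	}
	defer f.Close()
	_, err = s3manager.NewDownloader(sess).Download(f, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		os.Remove(f.Name())
		return "", err
	}
	return f.Name(), nil
}

// decrypt reads the ciphertext file and asks KMS for the plaintext.
// No key ARN is needed here: KMS identifies the key from the ciphertext.
func decrypt(sess *session.Session, file string) (string, error) {
	blob, err := ioutil.ReadFile(file)
	if err != nil {
		return "", err
	}
	out, err := kms.New(sess).Decrypt(&kms.DecryptInput{
		CiphertextBlob: blob,
	})
	if err != nil {
		return "", err
	}
	return string(out.Plaintext), nil
}
```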
https://medium.com/comparethemarket/using-golang-to-manage-secrets-in-aws-4b0da646b002?utm_source=golangweekly&utm_medium=email
Mark Reinhold
jpms-spec-comments@openjdk.java.net

This is an informal overview of enhancements to the Java SE Platform prototyped in Project Jigsaw and proposed for JSR 376: The Java Platform Module System. A related document describes enhancements to JDK-specific tools and APIs, which are outside the scope of the JSR.

As described in the JSR, the specific goals of the module system are to provide

- Reliable configuration, to replace the brittle, error-prone class-path mechanism with a means for program components to declare explicit dependences upon one another, along with
- Strong encapsulation, to allow a component to declare which of its public types are accessible to other components, and which are not.

These features will benefit application developers, library developers, and implementors of the Java SE Platform itself directly and, also, indirectly, since they will enable a scalable platform, greater platform integrity, and improved performance.

This is the second edition of this document. Relative to the initial edition, this edition introduces material on compatibility and migration, revises the description of reflective readability, reorders the text to improve the flow of the narrative, and is organized into a two-level hierarchy of sections and subsections for easier navigation. There are still many open issues in the design, the resolutions of which will be reflected in future versions of this document.

In order to provide reliable configuration and strong encapsulation in a way that is both approachable to developers and supportable by existing tool chains, we treat modules as a fundamental new kind of Java program component.

A module’s self-description is expressed in its module declaration, a new construct of the Java programming language. The simplest possible module declaration merely specifies the name of its module:

module com.foo.bar { }

One or more requires clauses can be added to declare that the module depends, by name, upon some other modules, at both compile time and run time:

module com.foo.bar {
    requires org.baz.qux;
}

Finally, exports clauses can be added to declare that the module makes all, and only, the public types in specific packages available for use by other modules:

module com.foo.bar {
    requires org.baz.qux;
    exports com.foo.bar.alpha;
    exports com.foo.bar.beta;
}
The simplest possible module declaration merely specifies the name of its module: module com.foo.bar { } One or more requires clauses can be added to declare that the module depends, by name, upon some other modules, at both compile time and run time: module com.foo.bar { requires org.baz.qux; } Finally, exports clauses can be added to declare that the module makes all, and only, the public types in specific packages available for use by other modules: module com.foo.bar { requires org.baz.qux; exports com.foo.bar.alpha; exports com.foo.bar.beta; } If a module’s declaration contains no exports clauses then it will not export any types at all to any other modules. The source code for a module declaration is, by convention, placed in a file named module-info.java at the root of the module’s source-file hierarchy. The source files for the com.foo.bar module, e.g., might include: module-info.java com/foo/bar/alpha/AlphaFactory.java com/foo/bar/alpha/Alpha.java ... A module declaration is compiled, by convention, into a file named module-info.class, placed similarly in the class-file output directory.. Module declarations are part of the Java programming language, rather than a language or notation of their own, for several reasons. One of the most important is that module information must be available at both compile time and run time in order to achieve fidelity across phases, i.e., to ensure that the module system works in the same way at both compile time and run time. This, in turn, allows many kinds of errors to be prevented or, at least, reported earlier—at compile time—when they are easier to diagnose and repair. Expressing module declarations in a source file which is compiled, along with the other source files in a module, into a class file for consumption by the Java virtual machine is the natural way in which to establish fidelity. This approach will be immediately familiar to developers, and not difficult for IDEs and build tools to support. 
An IDE, in particular, could suggest an initial module declaration for an existing component by synthesizing requires clauses from information already available in the component’s project description. Existing tools can already create, manipulate, and consume JAR files, so for ease of adoption and migration we define modular JAR files. A modular JAR file is like an ordinary JAR file in all possible ways, except that it also includes a module-info.class file in its root directory. A modular JAR file for the above com.foo.bar module, e.g., might have the content:

    META-INF/
    META-INF/MANIFEST.MF
    module-info.class
    com/foo/bar/alpha/AlphaFactory.class
    com/foo/bar/alpha/Alpha.class
    ...

A modular JAR file can be used as a module, in which case its module-info.class file is taken to contain the module’s declaration. It can, alternatively, be placed on the ordinary class path, in which case its module-info.class file is ignored. Modular JAR files allow the maintainer of a library to ship a single artifact that works both as a module, on Java SE 9 and later, and as a regular JAR file on the class path, on all releases. We expect that implementations of Java SE 9 which include a jar tool will enhance that tool to make it easy to create modular JAR files. For the purpose of modularizing the Java SE Platform’s reference implementation, the JDK, we will introduce a new artifact format that goes beyond JAR files to accommodate native code, configuration files, and other kinds of data that do not fit naturally, if at all, into JAR files. This format leverages another advantage of expressing module declarations in source files and compiling them into class files, namely that class files are independent of any particular artifact format. Whether this new format, provisionally named “JMOD,” should be standardized is an open question. A final advantage of compiling module declarations into class files is that class files already have a precisely-defined and extensible format.
We can thus consider module-info.class files in a more general light, as module descriptors which include the compiled forms of source-level module declarations but also additional kinds of information recorded in class-file attributes which are inserted after the declaration is initially compiled. An IDE or a build-time packaging tool, e.g., can insert attributes containing documentary information such as a module’s version, title, description, and license. This information can be read at compile time and run time via the module system’s reflection facilities for use in documentation, diagnosis, and debugging. It can also be used by downstream tools in the construction of OS-specific package artifacts. A specific set of attributes will be standardized but, since the Java class-file format is extensible, other tools and frameworks will be able to define additional attributes as needed. Non-standard attributes will have no effect upon the behavior of the module system itself. The Java SE 9 Platform Specification will use the module system to divide the platform into a set of modules. An implementation of the Java SE 9 Platform might contain all of the platform modules or, possibly, just some of them. The only module known specifically to the module system, in any case, is the base module, which is named java.base. The base module defines and exports all of the platform’s core packages, including the module system itself:

    module java.base {
        exports java.io;
        exports java.lang;
        exports java.lang.annotation;
        exports java.lang.invoke;
        exports java.lang.module;
        exports java.lang.ref;
        exports java.lang.reflect;
        exports java.math;
        exports java.net;
        ...
    }

The base module is always present. Every other module depends implicitly upon the base module, while the base module depends upon no other modules.
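The presence of the base module, and of the other modules linked into a run-time image, can be verified programmatically. The following sketch uses the java.lang.module.ModuleFinder API; note that in the JDK as eventually shipped, the ofSystem() factory is the way to observe the built-in platform modules:

```java
import java.lang.module.ModuleFinder;

public class PlatformModules {
    public static void main(String[] args) {
        // ModuleFinder.ofSystem() observes the modules linked into the
        // run-time image, i.e. the built-in platform modules.
        System.out.println(ModuleFinder.ofSystem().find("java.base").isPresent());
        // The base module's descriptor can be inspected directly; it
        // exports java.lang, among many other core packages.
        boolean exportsJavaLang = ModuleFinder.ofSystem()
            .find("java.base").get().descriptor().exports().stream()
            .anyMatch(e -> e.source().equals("java.lang"));
        System.out.println(exportsJavaLang);
    }
}
```

Both lines print true on any conforming implementation, since the base module is always present and always exports its core packages.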
The remaining platform modules will share the “java.” name prefix and are likely to include, e.g., java.sql for database connectivity, java.xml for XML processing, and java.logging for logging. Modules that are not defined in the Java SE 9 Platform Specification but instead specific to the JDK will, by convention, share the “jdk.” name prefix. Individual modules can be defined in module artifacts, or else built-in to the compile-time or run-time environment. To make use of them in either phase the module system must locate them and, then, determine how they relate to each other so as to provide reliable configuration and strong encapsulation. In order to locate modules defined in artifacts the module system searches the module path, which is defined by the host system. The module path is a sequence, each element of which is either a module artifact or a directory containing module artifacts. The elements of the module path are searched, in order, for the first artifact that defines a suitable module. The module path is materially different from the class path, and more robust. The inherent brittleness of the class path is due to the fact that it is a means to locate individual types in all the artifacts on the path, making no distinction amongst the artifacts themselves. This makes it impossible to tell, in advance, when an artifact is missing. It also allows different artifacts to define types in the same packages, even if those artifacts represent different versions of the same logical program component, or different components entirely. The module path, by contrast, is a means to locate whole modules rather than individual types. If the module system cannot fulfill a particular dependence with an artifact from the module path, or if it encounters two artifacts in the same directory that define modules of the same name, then the compiler or virtual machine will report an error and exit.
The modules built-in to the compile-time or run-time environment, together with those defined by artifacts on the module path, are collectively referred to as the universe of observable modules. Suppose we have an application that uses the above com.foo.bar module and also the platform’s java.sql module. The module that contains the core of the application is declared as follows:

    module com.foo.app {
        requires com.foo.bar;
        requires java.sql;
    }

Given this initial application module, the module system resolves the dependences expressed in its requires clauses by locating additional observable modules to fulfill those dependences, and then resolves the dependences of those modules, and so forth, until every dependence of every module is fulfilled. The result of this transitive-closure computation is a module graph which, for each module with a dependence that is fulfilled by some other module, contains a directed edge from the first module to the second. To construct a module graph for the com.foo.app module, the module system inspects the declaration of the java.sql module, which is:

    module java.sql {
        requires java.logging;
        requires java.xml;
        exports java.sql;
        exports javax.sql;
        exports javax.transaction.xa;
    }

It also inspects the declaration of the com.foo.bar module, already shown above, and also those of the org.baz.qux, java.logging, and java.xml modules; for brevity, these last three are not shown here since they do not declare dependences upon any other modules. Based upon all of these module declarations, the graph computed for the com.foo.app module contains the following nodes and edges: In this figure the dark blue lines represent explicit dependence relationships, as expressed in requires clauses, while the light blue lines represent the implicit dependences of every module upon the base module. When one module depends upon another in a module graph then the first module is said to read the second or, equivalently, the second module is said to be readable by the first. Thus, in the above graph, the com.foo.app module reads the com.foo.bar and java.sql modules but not the org.baz.qux, java.xml, or java.logging modules. The readability relationships defined in a module graph are, in fact, the basis of reliable configuration: The module system ensures that every dependence is fulfilled by precisely one other module, that the module graph is acyclic, that every module reads at most one module defining a given package, and that modules defining identically-named packages do not interfere with each other.
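The compiled declarations that the resolver consumes can themselves be inspected at run time through the java.lang.module.ModuleDescriptor API. A minimal sketch, using the base module (whose descriptor is always available):

```java
import java.lang.module.ModuleDescriptor;

public class BaseRequires {
    public static void main(String[] args) {
        // Each module's compiled declaration is exposed as a descriptor.
        // The base module anchors every module graph: it declares no
        // dependences of its own, while every other module implicitly
        // requires it.
        ModuleDescriptor base = Object.class.getModule().getDescriptor();
        System.out.println(base.name());
        System.out.println(base.requires().isEmpty());
    }
}
```

This prints java.base followed by true, confirming the statement above that the base module depends upon no other modules.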
The readability relationships defined in a module graph, combined with the exports clauses in module declarations, are the basis of strong encapsulation: The Java compiler and virtual machine consider the public types in a package in one module to be accessible by code in some other module only when the first module is readable by the second module, in the sense defined above, and the first module exports that package. That is, if two types S and T are defined in different modules, and T is public, then code in S can access T if: S’s module reads T’s module, and T’s module exports T’s package. A type referenced across module boundaries that is not accessible in this way is unusable in the same way that a private method or field is unusable: Any attempt to use it will cause an error to be reported by the compiler, or an IllegalAccessError to be thrown by the Java virtual machine, or an IllegalAccessException to be thrown by the reflective run-time APIs. Thus, even when a type is declared public, if its package is not exported in the declaration of its module then it will only be accessible to code in that module. A method or field referenced across module boundaries is accessible if its enclosing type is accessible, in this sense, and if the declaration of the member itself also allows access. To see how strong encapsulation works in the case of the above module graph, we label each module with the packages that it exports: Code in the com.foo.app module can access public types declared in the com.foo.bar.alpha package because com.foo.app depends upon, and therefore reads, the com.foo.bar module, and because com.foo.bar exports the com.foo.bar.alpha package. If com.foo.bar contains an internal package com.foo.bar.internal then code in com.foo.app cannot access any types in that package, since com.foo.bar does not export it. 
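These accessibility rules can be observed against the platform itself, using the reflective API described later in this document. A minimal sketch (note: in the JDK as eventually shipped, the Module class lives in java.lang rather than java.lang.reflect, so no import is needed; the internal package name jdk.internal.misc is used here as an illustrative example of a non-exported package in shipped JDK releases):

```java
public class EncapsulationDemo {
    public static void main(String[] args) {
        Module base = Object.class.getModule();
        // java.lang is exported by java.base, so its public types are
        // accessible to any module that reads java.base, which is all of them.
        System.out.println(base.isExported("java.lang"));
        // jdk.internal.misc is an internal package of java.base; it is not
        // exported unconditionally, so its types are inaccessible from
        // other modules even though many of them are declared public.
        System.out.println(base.isExported("jdk.internal.misc"));
    }
}
```

The first line prints true and the second prints false: a public type in a non-exported package is just as unusable from outside its module as a private member.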
Code in com.foo.app cannot refer to types in the org.baz.qux package since com.foo.app does not depend upon the org.baz.qux module, and therefore does not read it. If one module reads another then, in some situations, it should logically also read some other modules. The platform’s java.sql module, e.g., depends upon the java.logging and java.xml modules, not only because it contains implementation code that uses types in those modules but also because it defines types whose signatures refer to types in those modules. The java.sql.Driver interface, in particular, declares the public method

    public Logger getParentLogger();

where Logger is a type declared in the exported java.util.logging package of the java.logging module. Suppose, e.g., that code in the com.foo.app module invokes this method in order to acquire a logger and then log a message:

    String url = ...;
    Properties props = ...;
    Driver d = DriverManager.getDriver(url);
    Connection c = d.connect(url, props);
    d.getParentLogger().info("Connection acquired");

If the com.foo.app module is declared as above then this will not work: The getParentLogger method returns a Logger, which is a type declared in the java.logging module, which is not readable by the com.foo.app module, and so the invocation of the info method in the Logger class will fail at both compile time and run time because that class, and thus that method, is inaccessible. One solution to this problem is to hope that every author of every module that both depends upon the java.sql module and contains code that uses Logger objects returned by the getParentLogger method remembers also to declare a dependence upon the java.logging module.
This approach is unreliable, of course, since it violates the principle of least surprise: If one module depends upon a second module then it is natural to expect that every type needed to use the first module, even if the type is defined in the second module, will immediately be accessible to a module that depends only upon the first module. We therefore extend module declarations so that one module can grant readability to additional modules, upon which it depends, to any module that depends upon it. Such implied readability is expressed by including the public modifier in a requires clause. The declaration of the java.sql module actually reads:

    module java.sql {
        requires public java.logging;
        requires public java.xml;
        exports java.sql;
        exports javax.sql;
        exports javax.transaction.xa;
    }

The public modifiers mean that any module that depends upon the java.sql module will read not only the java.sql module but also the java.logging and java.xml modules. The module graph for the com.foo.app module, shown above, thus contains two additional dark-blue edges, linked by green edges to the java.sql module since they are implied by that module: The com.foo.app module can now include code that accesses all of the public types in the exported packages of the java.logging and java.xml modules, even though its declaration does not mention those modules.. Thus far we have seen how to define modules from scratch, package them into module artifacts, and use them together with other modules that are either built-in to the platform or also defined in artifacts. Most Java code was, of course, written prior to the introduction of the module system and must continue to work just as it does today, without change. The module system can, therefore, compile and run applications composed of JAR files on the class path even though the platform itself is composed of modules. It also allows existing applications to be migrated to modular form in a flexible and gradual manner.
If a request is made to load a type whose package is not defined in any known module then the module system will attempt to load it from the class path. If this succeeds then the type is considered to be a member of a special module known as the unnamed module, so as to ensure that every type is associated with some module. The unnamed module is, at a high level, akin to the existing concept of the unnamed package. All other modules have names, of course, so we will henceforth refer to those as named modules. The unnamed module reads every other module. Code in any type loaded from the class path will thus be able to access the exported types of all other readable modules, which by default will include all of the named, built-in platform modules. An existing class-path application that compiles and runs on Java SE 8 will, thus, compile and run in exactly the same way on Java SE 9, so long as it only uses standard, non-deprecated Java SE APIs. The unnamed module exports all of its packages. This enables flexible migration, as we shall see below. It does not, however, mean that code in a named module can access types in the unnamed module. A named module cannot, in fact, even declare a dependence upon the unnamed module. This restriction is intentional, since allowing named modules to depend upon the arbitrary content of the class path would make reliable configuration impossible. If a package is defined in both a named module and the unnamed module then the package in the unnamed module is ignored. This preserves reliable configuration even in the face of the chaos of the class path, ensuring that every module still reads at most one module defining a given package. If, in our example above, a JAR file on the class path contains a class file named com/foo/bar/alpha/AlphaFactory.class then that file will never be loaded, since the com.foo.bar.alpha package is exported by the com.foo.bar module. 
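The unnamed module can be observed directly from any class launched from the class path. A minimal sketch (assuming the program is compiled and run in the ordinary class-path fashion, with no module declaration of its own):

```java
public class UnnamedDemo {
    public static void main(String[] args) {
        Module m = UnnamedDemo.class.getModule();
        // A class loaded from the class path lands in the unnamed module,
        // which has no name of its own:
        System.out.println(m.isNamed());
        System.out.println(m.getName());
        // The unnamed module reads every other module, including java.base:
        System.out.println(m.canRead(Object.class.getModule()));
    }
}
```

Run from the class path, this prints false, null, and true, matching the description above: the unnamed module is nameless yet reads everything.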
The treatment of types loaded from the class path as members of the unnamed module allows us to migrate the components of an existing application from JAR files to modules in an incremental, bottom-up fashion. Suppose, e.g., that the application shown above had originally been built for Java SE 8, as a set of similarly-named JAR files placed on the class path. If we run it as-is on Java SE 9 then the types in the JAR files will be defined in the unnamed module. That module will read every other module, including all of the built-in platform modules; for simplicity, assume those are limited to the java.sql, java.xml, java.logging, and java.base modules shown earlier. Thus we obtain the module graph We can immediately convert org-baz-qux.jar into a named module because we know that it does not refer to any types in the other two JAR files, so as a named module it will not refer to any of the types that will be left behind in the unnamed module. (We happen to know this from the original example, but if we did not already know it then we could discover it with the help of a tool such as jdeps.) We write a module declaration for org.baz.qux, add it to the source code for the module, compile that, and package the result as a modular JAR file. If we then place that JAR file on the module path and leave the others on the class path we obtain the improved module graph The code in com-foo-bar.jar and com-foo-app.jar continues to work because the unnamed module reads every named module, which now includes the new org.baz.qux module. We can proceed similarly to modularize com-foo-bar.jar, and then com-foo-app.jar, eventually winding up with the intended module graph, shown previously: Knowing what we do about the types in the original JAR files we could, of course, modularize all three of them in a single step. 
If, however, org-baz-qux.jar is maintained independently, perhaps by an entirely different team or organization, then it can be modularized before the other two components, and likewise com-foo-bar.jar can be modularized before com-foo-app.jar. Bottom-up migration is straightforward, but it is not always possible. Even if the maintainer of org-baz-qux.jar has not yet converted it into a proper module—or perhaps never will—we might still want to modularize our com-foo-app.jar and com-foo-bar.jar components. We already know that code in com-foo-bar.jar refers to types in org-baz-qux.jar. If we convert com-foo-bar.jar into the named module com.foo.bar but leave org-baz-qux.jar on the class path, however, then that code will no longer work: Types in org-baz-qux.jar will continue to be defined in the unnamed module but com.foo.bar, which is a named module, cannot declare a dependence upon the unnamed module. We must, then, somehow arrange for org-baz-qux.jar to appear as a named module so that com.foo.bar can depend upon it. We could fork the source code of org.baz.qux and modularize it ourselves, but if the maintainer is unwilling to merge that change into the upstream repository then we would have to maintain the fork for as long as we might need it. We can, instead, treat org-baz-qux.jar as an automatic module by placing it, unmodified, on the module path rather than the class path. This will define an observable module whose name, org.baz.qux, is derived from that of the JAR file so that other, non-automatic modules can depend upon it in the usual way: An automatic module is a named module that is defined implicitly, since it does not have a module declaration. An ordinary named module, by contrast, is defined explicitly, with a module declaration; we will henceforth refer to those as explicit modules. There is no practical way to tell, in advance, which other modules an automatic module might depend upon. 
After a module graph is resolved, therefore, an automatic module is made to read every other named module, whether automatic or explicit: (These new readability edges do create cycles in the module graph, which makes it somewhat more difficult to reason about, but we view these as a tolerable and, usually, temporary consequence of enabling more-flexible migration.) There is, similarly, no practical way to tell which of the packages in an automatic module are intended for use by other modules, or by classes still on the class path. Every package in an automatic module is, therefore, considered to be exported even if it might actually be intended only for internal use: There is, finally, no practical way to tell whether one of the exported packages in an automatic module contains a type whose signature refers to a type defined in some other automatic module. If, e.g., we modularize com.foo.app first, and treat both com.foo.bar and org.baz.qux as automatic modules, then we have the graph It is impossible to know, without reading all of the class files in both of the corresponding JAR files, whether a public type in com.foo.bar declares a public method whose return type is defined in org.baz.qux. An automatic module therefore grants implied readability to all other automatic modules: Now code in com.foo.app can access types in org.baz.qux, although we know that it does not actually do so. Automatic modules offer a middle ground between the chaos of the class path and the discipline of explicit modules. They allow an existing application composed of JAR files to be migrated to modules from the top down, as shown above, or in a combination of top-down and bottom-up approaches. 
We can, in general, start with an arbitrary set of JAR-file components on the class path, use a tool such as jdeps to analyze their interdependencies, convert the components whose source code we control into explicit modules, and place those along with the remaining JAR files, as-is, on the module path. The JAR files for components whose source code we do not control will be treated as automatic modules until such time as they, too, are converted into explicit modules. Many existing JAR files can be used as automatic modules, but some cannot. If two or more JAR files on the class path contain types in the same package then at most one of them can be used as an automatic module, since the module system still guarantees that every named module reads at most one named module defining a given package and that named modules defining identically-named packages do not interfere with each other. In such situations it often turns out that only one of the JAR files is actually needed. If the others are duplicates or near-duplicates, somehow placed on the class path by mistake, then one can be used as an automatic module and the others can be discarded. If, however, multiple JAR files on the class path intentionally contain types in the same package then on the class path they must remain. To enable migration even when some JAR files cannot be used as automatic modules we enable automatic modules to act as bridges between code in explicit modules and code still on the class path: In addition to reading every other named module, an automatic module is also made to read the unnamed module. If our application’s original class path had, e.g., also contained the JAR files org-baz-fiz.jar and org-baz-fuz.jar, then we would have the graph The unnamed module exports all of its packages, as mentioned earlier, so code in the automatic modules will be able to access any public type loaded from the class path. 
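The name-derivation rule for automatic modules can be demonstrated with the ModuleFinder API. The sketch below creates an empty plain JAR file, named like the artifact in the running example, in a temporary directory and observes the automatic module that the module system derives from it (an empty JAR is assumed to be sufficient here, since the derivation depends only on the file name):

```java
import java.io.IOException;
import java.lang.module.ModuleDescriptor;
import java.lang.module.ModuleFinder;
import java.lang.module.ModuleReference;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class AutomaticModuleDemo {
    public static void main(String[] args) throws IOException {
        // Build a plain JAR -- no module-info.class -- named like the
        // artifact in the text, and scan its directory as a module path.
        Path dir = Files.createTempDirectory("mods");
        Path jar = dir.resolve("org-baz-qux-1.0.jar");
        try (JarOutputStream out =
                 new JarOutputStream(Files.newOutputStream(jar), new Manifest())) {
            // Intentionally empty: the JAR's file name alone determines
            // the derived module name and version.
        }
        for (ModuleReference ref : ModuleFinder.of(dir).findAll()) {
            ModuleDescriptor d = ref.descriptor();
            // "org-baz-qux-1.0.jar" -> module org.baz.qux, marked automatic:
            System.out.println(d.name() + " " + d.isAutomatic());
        }
    }
}
```

The version suffix is stripped and the remaining hyphens become dots, so this prints org.baz.qux true.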
An automatic module that makes use of types from the class path must not expose those types to the explicit modules that depend upon it, since explicit modules cannot declare dependences upon the unnamed module. If code in the explicit module com.foo.app refers to a public type in com.foo.bar, e.g., and the signature of that type refers to a type in one of the JAR files still on the class path, then the code in com.foo.app will not be able to access that type since com.foo.app cannot depend upon the unnamed module. This can be remedied by treating com.foo.app as an automatic module temporarily, so that its code can access types from the class path, until such time as the relevant JAR file on the class path can be treated as an automatic module or converted into an explicit module. The loose coupling of program components via service interfaces and service providers is a powerful tool in the construction of large software systems. Java has long supported services via the java.util.ServiceLoader class, which locates service providers at run time by searching the class path. For service providers defined in modules we must consider how to locate those modules amongst the set of observable modules, resolve their dependences, and make the providers available to the code that uses the corresponding services. Suppose, e.g., that our com.foo.app module uses a MySQL database, and that a MySQL JDBC driver is provided in an observable module which has the declaration

    module com.mysql.jdbc {
        requires java.sql;
        requires org.slf4j;
        exports com.mysql.jdbc;
    }

where org.slf4j is a logging library used by the driver and com.mysql.jdbc is the package that contains the implementation of the java.sql.Driver service interface. (It is not actually necessary to export the driver package, but we do so here for clarity.)
In order for the java.sql module to make use of this driver, the ServiceLoader class must be able to instantiate the driver class via reflection; for that to happen, the module system must add the driver module to the module graph and resolve its dependences, thus: To achieve this the module system must be able to identify any uses of services by previously-resolved modules and, then, locate and resolve providers from within the set of observable modules. The module system could identify uses of services by scanning the class files in module artifacts for invocations of the ServiceLoader::load methods, but that would be both slow and unreliable. That a module uses a particular service is a fundamental aspect of that module’s definition, so for both efficiency and clarity we express that in the module’s declaration with a uses clause:

    module java.sql {
        requires public java.logging;
        requires public java.xml;
        exports java.sql;
        exports javax.sql;
        exports javax.transaction.xa;
        uses java.sql.Driver;
    }

The module system could identify service providers by scanning module artifacts for META-INF/services resource entries, as the ServiceLoader class does today. That a module provides an implementation of a particular service is equally fundamental, however, so we express that in the module’s declaration with a provides clause:

    module com.mysql.jdbc {
        requires java.sql;
        requires org.slf4j;
        exports com.mysql.jdbc;
        provides java.sql.Driver with com.mysql.jdbc.Driver;
    }

Now it is very easy to see, simply by reading these modules’ declarations, that one of them uses a service that is provided by the other. Declaring service-provision and service-use relationships in module declarations has advantages beyond improved efficiency and clarity. Service declarations of both kinds can be interpreted at compile time to ensure that the service interface (e.g., java.sql.Driver) is accessible to both the providers and the users of a service.
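On the consuming side, the lookup itself is unchanged: code that declares uses java.sql.Driver obtains providers through the ServiceLoader API exactly as class-path code always has. A minimal sketch (the number of providers found depends, of course, on which driver modules or JAR files are present at run time):

```java
import java.sql.Driver;
import java.util.ServiceLoader;

public class DriverScan {
    public static void main(String[] args) {
        // ServiceLoader realizes the uses/provides relationship at run
        // time: it returns providers declared by provides clauses in
        // resolved modules as well as providers found on the class path.
        ServiceLoader<Driver> loader = ServiceLoader.load(Driver.class);
        int n = 0;
        for (Driver d : loader) {
            System.out.println("provider: " + d.getClass().getName());
            n++;
        }
        System.out.println("providers found: " + n);
    }
}
```

With the com.mysql.jdbc module resolved as above, this loop would instantiate and report com.mysql.jdbc.Driver; on a bare run-time image it simply reports a count of zero.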
Service-provider declarations can be further interpreted to ensure that providers (e.g., com.mysql.jdbc.Driver) actually do implement their declared service interfaces. Service-use declarations can, finally, be interpreted by ahead-of-time compilation and linking tools to ensure that observable providers are appropriately compiled and linked prior to run time. For migration purposes, if a JAR file that defines an automatic module contains META-INF/services resource entries then each such entry is treated as if it were a corresponding provides clause in a hypothetical declaration of that module. An automatic module is considered to use every available service. The remainder of this document addresses advanced topics which, while important, may not be of interest to most developers. To make the module graph available via reflection at run time we define a Module class in the java.lang.reflect package and some related types in a new package, java.lang.module. An instance of the Module class represents a single module at run time. Every type is in a module, so every Class object has an associated Module object, which is returned by the new Class::getModule method. The essential operations on a Module object are:

    package java.lang.reflect;

    public final class Module {
        public String getName();
        public ModuleDescriptor getDescriptor();
        public ClassLoader getClassLoader();
        public boolean canRead(Module target);
        public boolean isExported(String packageName);
    }

where ModuleDescriptor is a class in the java.lang.module package, instances of which represent module descriptors; the getClassLoader method returns the module’s class loader; the canRead method tells whether the module can read the target module; and the isExported method tells whether the module exports the given package. The java.lang.reflect package is not the only reflection facility in the platform.
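These operations can be exercised directly. A minimal sketch (note: in the JDK as eventually shipped, the Module class ended up in java.lang rather than java.lang.reflect as proposed here, so the sketch compiles without any import):

```java
public class ModuleApiDemo {
    public static void main(String[] args) {
        Module base = Object.class.getModule();
        Module self = ModuleApiDemo.class.getModule();
        System.out.println(base.getName());                      // the base module's name
        System.out.println(base.getDescriptor().name());         // same name, via the descriptor
        System.out.println(base.getClassLoader());               // null: the bootstrap loader
        System.out.println(self.canRead(base));                  // every module reads java.base
        System.out.println(base.isExported("java.lang.module")); // exported by java.base
    }
}
```

This prints java.base twice, then null, then true twice; the null line reflects the long-standing convention that the bootstrap class loader is represented by null.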
Similar additions will be made to the compile-time javax.lang.model package in order to support annotation processors and documentation tools. A framework is a facility that uses reflection to load, inspect, and instantiate other classes at run time. Examples of frameworks in the Java SE Platform itself are service loaders, resource bundles, dynamic proxies, and serialization, and of course there are many popular external framework libraries for purposes as diverse as database persistence, dependency injection, and testing. Given a class discovered at run time, a framework must be able to access one of its constructors in order to instantiate it. As things stand, however, that will usually not be the case. The platform’s streaming XML parser, e.g., loads and instantiates the implementation of the XMLInputFactory service named by the system property javax.xml.stream.XMLInputFactory, if defined, in preference to any provider discoverable via the ServiceLoader class. Ignoring exception handling and security checks the code reads, roughly:

    String providerName =
        System.getProperty("javax.xml.stream.XMLInputFactory");
    if (providerName != null) {
        Class<?> providerClass = Class.forName(providerName, false,
            Thread.currentThread().getContextClassLoader());
        Object ob = providerClass.newInstance();
        return (XMLInputFactory)ob;
    }
    // Otherwise use ServiceLoader
    ...

In a modular setting the invocation of Class::forName will continue to work so long as the package containing the provider class is known to the context class loader. The invocation of the provider class’s constructor via the reflective newInstance method, however, will not work: The provider might be loaded from the class path, in which case it will be in the unnamed module, or it might be in some named module, but in either case the framework itself is in the java.xml module. That module only depends upon, and therefore reads, the base module, and so a provider class in any other module will not be accessible to the framework.
To make the provider class accessible to the framework we need to make the provider’s module readable by the framework’s module. We could mandate that every framework explicitly add the necessary readability edge to the module graph at run time, as in an earlier version of this document, but experience showed that approach to be cumbersome and a barrier to migration. We therefore, instead, revise the reflection API simply to assume that any code that reflects upon some type is in a module that can read the module that defines that type. This enables the above example, and other code like it, to work without change. This approach does not weaken strong encapsulation: A public type must still be in an exported package in order to be accessed from outside its defining module, whether from compiled code or via reflection. Every type is in a module, and at run time every module has a class loader, but does a class loader load just one module? The module system, in fact, places few restrictions on the relationships between modules and class loaders. A class loader can load types from one module or from many modules, so long as the modules do not interfere with each other and all of the types in any particular module are loaded by just one loader. This flexibility is critical to compatibility, since it allows us to retain the platform’s existing hierarchy of built-in class loaders. The bootstrap and extension class loaders still exist, and are used to load types from platform modules. The application class loader also still exists, and is used to load types from artifacts found on the module path. This flexibility will also make it easier to modularize existing applications which already construct sophisticated hierarchies or even graphs of custom class loaders, since such loaders can be upgraded to load types in modules without necessarily changing their delegation patterns. 
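The retained class-loader hierarchy can be observed directly. A minimal sketch:

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // Platform classes keep their historical loaders: types in
        // java.base come from the bootstrap loader, which is represented
        // as null, exactly as in earlier releases.
        System.out.println(Object.class.getClassLoader());
        // A class launched from the class path is still loaded by the
        // application class loader, i.e. the system class loader.
        System.out.println(LoaderDemo.class.getClassLoader() ==
                           ClassLoader.getSystemClassLoader());
    }
}
```

Run from the class path, this prints null followed by true: code that inspects class loaders in these conventional ways continues to behave as it did before modules.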
We previously learned that if a type is not defined in a named, observable module then it is considered to be a member of the unnamed module, but with which class loader is the unnamed module associated? Every class loader, it turns out, has its own unique unnamed module, which is returned by the new ClassLoader::getUnnamedModule method. If a class loader loads a type that is not defined in a named module then that type is considered to be in that loader's unnamed module, i.e., the getModule method of the type's Class object will return its loader's unnamed module. The module colloquially referred to as "the unnamed module" is, then, simply the unnamed module of the application class loader, which loads types from the class path when they are in packages not defined by any known module.

The module system does not dictate the relationships between modules and class loaders, but in order to load a particular type it must somehow be able to find the appropriate loader. At run time, therefore, the instantiation of a module graph produces a layer, which maps each module in the graph to the unique class loader responsible for loading the types defined in that module. The boot layer is created by the Java virtual machine at startup, by resolving the application's initial module against the observable modules, as discussed earlier.

Most applications, and certainly all existing applications, will never use a layer other than the boot layer. Multiple layers can, however, be of use in sophisticated applications with plug-in or container architectures such as application servers, IDEs, and test harnesses. Such applications can use dynamic class loading and the reflective module-system API, thus far described, to load and run hosted applications that consist of one or more modules. Two additional kinds of flexibility are, however, often required: A hosted application might require a different version of a module that is already present.
A Java EE web application, e.g., may require a different version of the JAX-WS stack, which is in the java.xml.ws module, than the one that is built in to the run-time environment.

A hosted application might require service providers other than the providers that have already been discovered. A hosted application might even embed its own preferred providers. A web application, e.g., might include a copy of its preferred version of the Woodstox streaming XML parser, in which case the ServiceLoader class should return that provider in preference to any others.

A container application can create a new layer for a hosted application on top of an existing layer by resolving that application's initial module against a different universe of observable modules. Such a universe can contain alternate versions of upgradeable platform modules and other, non-platform modules already present in the lower layer; the resolver will give these alternate modules priority. Such a universe can also contain different service providers than those already discovered in the lower layer; the ServiceLoader class will load and return these providers before it returns providers from the lower layer. A layer's module graph can hence be considered to include, by reference, the module graphs of every layer below it.

It is occasionally necessary to arrange for some types to be accessible amongst a set of modules yet remain inaccessible to all other modules. Code in the JDK's implementations of the standard java.sql and java.xml modules, e.g., makes use of types defined in the internal sun.reflect package, which is in the java.base module. In order for this code to access types in the sun.reflect package we could simply export that package from the java.base module:

    module java.base {
        ...
        exports sun.reflect;
    }

This would, however, make every type in the sun.reflect package accessible to every module, since every module reads java.base, and that is undesirable because some of the classes in that package define privileged, security-sensitive methods. We therefore extend module declarations to allow a package to be exported to one or more specifically-named modules, and to no others. The declaration of the java.base module actually exports the sun.reflect package only to a specific set of JDK modules:

    module java.base {
        ...
        exports sun.reflect to
            java.corba,
            java.logging,
            java.sql,
            java.sql.rowset,
            jdk.scripting.nashorn;
    }

These qualified exports can be visualized in a module graph by adding another type of edge, here colored gold, from packages to the specific modules to which they are exported.

The accessibility rules stated earlier are refined as follows: If two types S and T are defined in different modules, and T is public, then code in S can access T if:

- S's module reads T's module, and
- T's module exports T's package, either directly to S's module or to all modules.

We also extend the reflective Module class with a method to tell whether a package is exported to a specific module, rather than to all modules:

    public final class Module {
        ...
        public boolean isExported(String packageName, Module target);
    }

Qualified exports can inadvertently make internal types accessible to modules other than those intended, so they must be used with care. An adversary could, e.g., name a module java.corba in order to access types in the sun.reflect package. To prevent this we can analyze a set of related modules at build time and record, in each module's descriptor, hashes of the content of the modules that are allowed to depend upon it and use its qualified exports. During resolution we verify, for any module named in a qualified export of some other module, that the hash of its content matches the hash recorded for that module name in the second module.
Qualified exports are safe to use in an untrusted environment so long as the modules that declare and use them are tied together in this way.

The module system described here has many facets, but most developers will only need to use some of them on a regular basis. We expect the basic concepts of module declarations, modular JAR files, the module path, readability, accessibility, the unnamed module, automatic modules, and modular services to become reasonably familiar to most Java developers in the coming years. The more advanced features of reflective readability, layers, and qualified exports will, by contrast, be needed by relatively few.

This document includes contributions from Alan Bateman, Alex Buckley, Mandy Chung, Jonathan Gibbons, Chris Hegarty, Karen Kinnear, and Paul Sandoz.
http://openjdk.java.net/projects/jigsaw/spec/sotms/
IN THIS CHAPTER, you'll look at some of the basic data types that are built into C++ and that you're likely to use in all your programs. You'll also investigate how to carry out simple numerical computations. All of C++'s object-oriented capability is founded on the basic data types built into the language, because all the data types that you'll create are ultimately defined in terms of the basic types. It's therefore important to get a good grasp of using them. By the end of the chapter, you'll be able to write a simple C++ program of the traditional form: input – process – output.

In this chapter, you'll learn about

- Data types in C++
- What literals are and how you define them in a program
- Binary and hexadecimal representation for integers
- How you declare and initialize variables in your program
- How calculations using integers work
- Programming with values that aren't integers—that is, floating-point calculations
- How you can prevent the value stored in a variable from being modified
- How to create variables that can store characters

Data and Data Types

C++ is a strongly typed language. In other words, every data item in your program has a type associated with it that defines what it is, and your C++ compiler will make extensive checks to ensure that, as far as possible, you use the right data type in any given context and that when you combine different types, they're made to be compatible. Because of this type checking, the compiler is able to detect and report most errors that would arise from the accidental interpretation of one type of data as another or from attempts to combine data items of types that are mutually incompatible.

The numerical values that you can work with in C++ fall into two broad categories: integers (in other words, whole numbers) and floating-point values, which can be fractional. You can't conclude from this that there are just two numerical data types, however.
There are actually several data types in each of these categories, and each type has its own permitted range of values that it can store. Before I get into numerical types in detail, let's look at how you carry out arithmetic calculations in C++, starting with how you can calculate using integers.

Performing Simple Calculations

To begin with, let's get some bits of terminology out of the way. An operation (such as a mathematical calculation) is defined by an operator—+ for addition, for example, or * for multiplication. The values that an operator acts upon are called operands, so in an expression such as 2*3, the operands are 2 and 3.

Because the multiplication operator requires two operands, it is called a binary operator. Some other operators only require one operand, and these are called unary operators. An example of a unary operator is the minus sign in -2. The minus sign acts on one operand—the value 2—and changes its sign. This contrasts with the binary subtraction operator in expressions such as 4 - 2, which acts on two operands, the 4 and the 2.

Introducing Literals

In C++, fixed values of any kind, such as 42, or 2.71828, or "Mark Twain", are referred to as literals. In Chapter 1, when you were outputting text strings to the screen, you used a string literal—a constant defined by a series of characters between a pair of double quotes, of which "Mark Twain" is an example. Now you'll investigate the types of literals that are numeric constants.
These are the ordinary numbers you meet every day: your shoe size, the boiling point of lead, the number of angels that can sit on a pin—in fact, any defined number. There are two broad classifications of numeric constants that you can use in C++:

- Integer literals are whole numbers and are written without a decimal point.
- Floating-point literals (commonly referred to as floating-point numbers) are numbers that can be nonintegral values and are always written with a decimal point, or an exponent, or both. (You'll look into exponents a little later on.)

You use an integer when you're dealing with what is evidently a whole number: the number of players on a team, for example, or the number of pages in a book. You use a floating-point value when the values aren't integral: the circumference of a circle divided by its diameter, for example, or the exchange rate of the UK£ against the US$. Floating-point numbers are particularly helpful when you're dealing with very small or very large quantities: the weight of an electron, the diameter of the galaxy, or the velocity of a bat out of hell, perhaps.

The term "floating-point number" is used because while these values are represented by a fixed number of digits, called the precision, the decimal point "floats" and can be moved in either direction in relation to the fixed set of digits.

Letting the Point Float

Look at these two numbers:

    0.000000000000000000001234567    1.234567×10^-21
    123456700000000000000000000.0    1.234567×10^26

Both numbers have seven digits of precision, but they're very different numbers, the first being an extremely small number and the second being very large. A floating-point representation of each number on the left is shown to its right. Multiplying the number by a power of 10 shifts the decimal point in the base number, 1.234567.
This flexibility in positioning the decimal point allows a huge range of numbers to be represented and stored, from the very small to the very large, in a modest amount of memory. You'll look at how to use integers first, as they're the simpler of the two. You'll come back to working with floating-point values as soon as you're done with integers.

Integer Literals

You can write integer literals in a very straightforward way. Here are some examples:

    -123    +123    123    22333

Here, the + and - signs in the first two examples are examples of the unary operators I mentioned earlier. You could omit the + in the second example, as it's implied by default, but if you think putting it in makes things clearer, that's not a problem. The literal +123 is the same as 123. The fourth example is the number that you would normally write as 22,333, but you must not use commas within an integer literal. If you include a comma, the compiler is likely to treat your number as two numbers separated by the comma.

You can't write just any old integer value that you want, either. To take an extreme example, an integer with 100 digits won't be accepted. There are upper and lower limits on integer literals, and these are determined by the amount of memory that's devoted to storing each type of integer value on the computer that you're using. I come back to this point a little later in the chapter when I discuss integer variables, and I also cover some further options for specifying integer literals.

Of course, although I've written the examples of integer literals as decimal values, inside your computer they're stored as binary numbers. Understanding binary arithmetic is quite important in programming, so in case you're a little rusty on how binary numbers work, I've included a brief overview in Appendix E. If you don't feel comfortable with binary and hexadecimal numbers, I suggest you take a look at the overview in Appendix E before continuing with the next section.
Hexadecimal Integer Literals

The previous examples of integer literals were decimal integers, but you can also write integers as hexadecimal values. To indicate that you're writing a hexadecimal value, you prefix the number with 0x or 0X, so if you write 0x999, you're writing a hexadecimal number with three hexadecimal digits. Plain old 999, on the other hand, is a decimal value with decimal digits, so the value will be completely different. Here are some more examples of integer literals written as hexadecimal values:

You'll remember that in Chapter 1 you saw hexadecimal notation being used in escape sequences that defined characters. What you're looking at here is different—you're defining integers. You'll come back to defining character literals later in this chapter. The major use for hexadecimal literals is when you want to define a particular pattern of bits. Because each hexadecimal digit corresponds to 4 bits in the binary value, it's easy to express a particular pattern of bits as a hexadecimal literal. You'll explore this further in the next chapter.

Octal Integer Literals

You can also write integers as octal values—that is, using base 8. You identify a number as octal by writing it with a leading zero. Here are some examples of octal values:

    Octal values                      0123    077    010101
    Corresponding decimal integers      83     63      4161

Of course, octal numbers can only have digit values from 0 to 7. Octal is used very infrequently these days, and it survives in C++ largely for historical reasons from the time when there were computers around with a word length that was a multiple of 3 bits. However, it's important to be aware of the existence of octal numbers, because if you accidentally write a decimal number with a leading zero, the compiler will try to interpret it as octal.

CAUTION Don't write decimal integer values with a leading zero. The compiler will interpret such values as octal (base 8), so a value written as 065 will be equivalent to 53 in decimal notation.
As far as your compiler is concerned, it doesn't matter which number base you choose when you write an integer value—ultimately, it will be stored in your computer as a binary number. The different ways available to you for writing an integer are there just for your convenience. You could write the integer value fifteen as 15, as 0xF, or as 017. These will all result in the same internal binary representation of the value, so you will choose one or other of the possible representations to suit the context in which you are using it.

Integer Arithmetic

The basic arithmetic operations that you can carry out on integers are shown in Table 2-1.

Table 2-1. Basic Arithmetic Operations

    Operator    Operation
    +           Addition
    -           Subtraction
    *           Multiplication
    /           Division
    %           Modulus (the remainder after division)

The operators in Table 2-1 work largely in the way you would expect, and notice that they are all binary operators. However, the division operation is slightly idiosyncratic, so let's examine that in a little more detail.

Because integer operations always produce an integer result, an expression such as 11/4 doesn't result in a value of 2.75. Instead, it produces 2. Integer division returns the number of times that the denominator divides into the numerator. Any remainder is simply discarded. So far as the C++ standard is concerned, the result of division by zero is undefined, but specific implementations will usually have the behavior defined and, in some cases, will provide a programmatic means of responding to the situation, so check your product documentation. Figure 2-1 illustrates the different effects of the division and modulus operators.

Figure 2-1. Contrasting the division and modulus operators

The modulus operator, %, which is sometimes referred to as the remainder operator, complements the division operator in that it provides a means for you to obtain the remainder after integer division if you need it.
The expression 11%4 results in the value 3, which is the remainder after dividing 11 by 4. When either or both operands of the modulus operator are negative, the sign of the remainder is up to the particular C++ implementation you're using, so beware of variations between different systems. Because applying the modulus operator inevitably involves a division, the result is undefined when the right operand is zero. Let's see the arithmetic operators in action in an example.

Try It Out: Integer Arithmetic in Action

The following is a program to output the results of a miscellaneous collection of expressions involving integers to illustrate how the arithmetic operators work:

    // Program 2.1 – Calculating with integer constants
    #include <iostream>

    using std::cout;
    using std::endl;

    int main() {
      cout << 10 + 20 << endl;             // Output is 30
      cout << 10 - 5 << endl;              // Output is 5
      cout << 10 - 20 << endl;             // Output is -10
      cout << 10 * 20 << endl;             // Output is 200
      cout << 10/3 << endl;                // Output is 3
      cout << 10 % 3 << endl;              // Output is 1
      cout << 10 % -3 << endl;             // Output is 1
      cout << -10 % 3 << endl;             // Output is -1
      cout << -10 % -3 << endl;            // Output is -1
      cout << 10 + 20/10 - 5 << endl;      // Output is 7
      cout << (10 + 20)/(10 - 5) << endl;  // Output is 6
      cout << 10 + 20/(10 - 5) << endl;    // Output is 14
      cout << (10 + 20)/10 - 5 << endl;    // Output is -2
      cout << 4*5/3%4 + 7/3 << endl;       // Output is 4
      return 0;                            // End the program
    }

The output from this example on my system is as follows:

    30
    5
    -10
    200
    3
    1
    1
    -1
    -1
    7
    6
    14
    -2
    4

It doesn't look particularly elegant with that "ragged right" arrangement, does it? This is a consequence of the way that integers are output by default. Very shortly, you'll come back to find out how you can make it look prettier. First, though, let's look at the interesting parts of this example.
HOW IT WORKS

Each statement evaluates an arithmetic expression and outputs the result to the screen, followed by a newline character that moves the cursor to the beginning of the next line. All the arithmetic expressions here are constant expressions, because their values can be completely determined by the compiler before the program executes. The first five statements are straightforward, and the reasons why they produce the results they do should be obvious to you:

    cout << 10 + 20 << endl;    // Output is 30
    cout << 10 - 5 << endl;     // Output is 5
    cout << 10 - 20 << endl;    // Output is -10
    cout << 10 * 20 << endl;    // Output is 200
    cout << 10/3 << endl;       // Output is 3

Because integer operations always produce integer results, the expression 10/3 in the last line results in 3, as 3 divides into 10 a maximum of three times. The remainder, 1, that is left after dividing by 3 is discarded. The next four lines show the modulus operator in action:

    cout << 10 % 3 << endl;     // Output is 1
    cout << 10 % -3 << endl;    // Output is 1
    cout << -10 % 3 << endl;    // Output is -1
    cout << -10 % -3 << endl;   // Output is -1

Here you're producing the remainder after division for all possible combinations for the signs of the operands. The output corresponding to the first line where both operands are positive is the only one guaranteed to be the same when you run it on your system. The results of the other three lines may have a different sign. The next four statements show the effects of using parentheses:

    cout << 10 + 20/10 - 5 << endl;      // Output is 7
    cout << (10 + 20)/(10 - 5) << endl;  // Output is 6
    cout << 10 + 20/(10 - 5) << endl;    // Output is 14
    cout << (10 + 20)/10 - 5 << endl;    // Output is -2

The parentheses override the "natural order" of execution of the operators in the expressions. The expressions within parentheses are always evaluated first, starting with the innermost pair if they're nested and working through to the outermost.
In an expression involving several different operators, the order in which the operators are executed is determined by giving some operators priority over others. The priority assigned to an operator is called its precedence. With the operators for integer arithmetic that you've seen, the operators *, /, and % form a group that takes priority over the operators + and -, which form another group. You would say that each of the operators *, /, and % has a higher precedence than + and -. Operators within a given group—+ and -, for example—have equal precedence.

The last output statement in the example illustrates how precedence determines the order in which the operators are executed:

    cout << 4*5/3%4 + 7/3 << endl;    // Output is 4

The + operator is of lower precedence than any of the others, so the addition will be performed last. This means that values for the two subexpressions, 4*5/3%4 and 7/3, will be calculated first. The operators in the subexpression 4*5/3%4 are all of equal precedence, so the sequence in which these will be executed is determined by their associativity. The associativity of a group of operators can be either left or right. An operator that is left associative binds first to the operand on the left of the operator, so a sequence of such operators in an expression will be executed from left to right.

Let's illustrate this using the example. In the expression 4*5/3%4, each of the operators is left associative, which means that the left operand of each operator is whatever is to its left. Thus, the left operand for the multiplication operation is 4, the left operand for the division operation is 4*5, and the left operand for the modulus operation is 4*5/3. The expression is therefore evaluated as ((4*5)/3)%4, which, as I said, is left to right. Although the associativity of the operators in an expression is involved in determining the sequence of execution of operators from the same group, it doesn't say anything about the operands.
For example, in the expression 4*5/3%4+7/3, it isn't defined whether the subexpression 4*5/3%4 is evaluated before 7/3 or vice versa. It could be either, depending on what your compiler decides. Your reaction to this might be "Who cares?" because it makes no difference to the result. Here, that's true, but there are circumstances in which it can make a difference, and you'll see some of them as you progress through this chapter.

Operator Precedence and Associativity

Nearly all operator groups are left associative in C++, so most expressions involving operators of equal precedence are evaluated from left to right. The only right associative operators are the unary operators, which I've already touched upon, and assignment operators, which you'll meet later on. You can put the precedence and associativity of the integer arithmetic operators into a little table that indicates the order of execution in an arithmetic expression, as shown in Table 2-2.

Table 2-2. The Precedence and Associativity of the Arithmetic Operators

    Operators    Associativity
    unary + -    Right
    * / %        Left
    + -          Left

Each line in Table 2-2 is a group of operators of equal precedence. The groups are in sequence, with the highest precedence operators in the top line and the lowest precedence at the bottom. As it only contains three lines, this table is rather simplistic, but you'll accumulate many more operators and add further lines to this table as you learn more about C++.

NOTE The C++ standard doesn't define the precedence of the operators directly, but it can be determined from the syntax rules that are defined within the standard. In most instances it's easier to work out how a given expression will execute from the operator precedence than from the syntax rules, so I'll consider the precedence of each operator as I introduce it.
If you want to see the precedence table for all the operators in C++, you can find it in Appendix D.

Try It Out: Fixing the Appearance of the Output

Although it may not appear so, the output from the previous example is right justified. The "ragged right" appearance is due to the fact that the output for each integer is in a field width that's exactly the correct number of characters to accommodate the value. You can make the output look tidier by setting the field width for each data item to a value of your choice, as follows:

    // Program 2.1A – Producing neat output
    #include <iostream>
    #include <iomanip>

    using std::cout;
    using std::endl;
    using std::setw;

    int main() {
      cout << setw(10) << 10 + 20 << endl;             // Output is 30
      cout << setw(10) << 10 - 5 << endl;              // Output is 5
      cout << setw(10) << 10 - 20 << endl;             // Output is -10
      cout << setw(10) << 10 * 20 << endl;             // Output is 200
      cout << setw(10) << 10/3 << endl;                // Output is 3
      cout << setw(10) << 10 % 3 << endl;              // Output is 1
      cout << setw(10) << 10 % -3 << endl;             // Output is 1
      cout << setw(10) << -10 % 3 << endl;             // Output is -1
      cout << setw(10) << -10 % -3 << endl;            // Output is -1
      cout << setw(10) << 10 + 20/10 - 5 << endl;      // Output is 7
      cout << setw(10) << (10 + 20)/(10 - 5) << endl;  // Output is 6
      cout << setw(10) << 10 + 20/(10 - 5) << endl;    // Output is 14
      cout << setw(10) << (10 + 20)/10 - 5 << endl;    // Output is -2
      cout << setw(10) << 4*5/3%4 + 7/3 << endl;       // Output is 4
      return 0;                                        // End the program
    }

Now the output looks like this:

            30
             5
           -10
           200
             3
             1
             1
            -1
            -1
             7
             6
            14
            -2
             4

HOW IT WORKS

That's much nicer, isn't it?
The tidy formatting is accomplished by the changes to the output statements. Each value to be displayed is preceded in the output by setw(10), as in the first statement:

    cout << setw(10) << 10 + 20 << endl;    // Output is 30

setw() is called a manipulator because it enables you to manipulate, or control, the appearance of the output. A manipulator doesn't output anything; it just modifies the output process. Its effect is to set the field width for the next value to be output to the number of characters that you specify between the parentheses, which is 10 in this case. The field width that you set by using setw() only applies to the next value that is written to cout. Subsequent values will be presented in the default manner. The additional #include statement for the standard header <iomanip> is needed because that header declares the setw() manipulator.

Variables

Calculating with integer constants is all very well, but you were undoubtedly expecting a bit more sophistication in your C++ programs than that. To do more, you need to be able to store data items in a program, and this facility is provided by variables. A variable is a named area of memory that you define, in which you can store a data item of a particular type; you refer to the stored value by using the variable's name.

Variable Names

As you saw in Chapter 1, the name that you give to a variable can consist of any combination of upper- or lowercase letters, underscores, and the digits 0 to 9, but it must begin with a letter or an underscore. As I said in Chapter 1, the ANSI standard says that a variable name can also include UCS characters, and although you could use this in defining your variable names, it's there to allow compilers to accommodate the use of national language characters that aren't in the basic set of upper- and lowercase letters (A to Z). Don't forget, you must not express any character from the basic source character set as a UCS character.
All characters from the basic source character set must appear as their explicit character representation. You saw some examples of valid variable names in Chapter 1, but here are a few more:

    value    monthlySalary    eight_ball    FIXED_VALUE    JimBob

Just to remind you of what I said in Chapter 1, a variable name can't begin with a digit, so names such as 8ball and 7Up aren't valid. Also, because C++ is a case-sensitive language, republican and Republican are different names. You shouldn't use variable names that begin with an underscore followed by a capital letter or that contain two successive underscores, as names of these forms are reserved for use within the standard libraries.

Generally, the names that you invent for your variables should be indicative of the kind of data that they hold. For instance, a name such as shoe_size is going to mean a whole lot more than ss—always assuming you're dealing with shoe sizes, of course. You'll find that you often want to use names that combine two or more words to make your program more understandable. One common approach for doing this uses the underscore character to link words in a single name, for example:

    line_count    pay_rise    current_debt

A convention that's frequently adopted in C++ is to reserve names that begin with a capital letter for naming classes, which are user-defined types. You'll learn how to define your own data types in Chapter 11. With this approach to names, Point, Person, and Program are all immediately recognizable as user-defined types and not variables. Of course, you're free to assign any names that you want (as long as they aren't keywords), but if you choose names that are meaningful and name your variables in a consistent manner, it will make your programs more readable and less error-prone. Appendix B contains a list of all the C++ keywords.

Integer Variables

Suppose you want to use a variable to record how many apples you have.
You can create a variable with the name apples by means of a declaration statement for the variable, as shown in Figure 2-2.

Figure 2-2. A variable declaration

The statement in Figure 2-2 is described as a declaration because it declares the name apples. Any statement that introduces a name into your program is a declaration for that name. The statement in the illustration is also called a definition, because it causes memory to be allocated for the variable apples. Later, you'll meet statements that are declarations but are not definitions. A variable is created by its definition, so you can only refer to it after the definition statement. If you attempt to refer to a variable prior to its definition, you'll get an error message from the compiler.

When you define a variable, you can also specify an initial value. For example,

    int apples = 10;    // Definition for the variable apples

defines the variable called apples and sets its initial value as 10. The definition in the diagram had no initial value specified, so the memory assigned to the variable would contain whatever junk value was left over from previous use of the memory. Having junk values floating around in your program is a bad idea, and this leads to our first golden rule.

GOLDEN RULE Always initialize your variables when you define them. If you don't know what value a variable should have when you define it, initialize it to zero.

You can use variables as operands of the arithmetic operators you've seen in exactly the same way as you've used literals. The value of the variable will be the operand value.
If you apply the unary minus operator to a variable, the result is a value that has the opposite sign of the value of the variable, but the same magnitude. This doesn't change the value stored in the variable, though. You'll see how to do that very soon. Let's try out some integer variables in a little program.

Try It Out: Using Integer Variables

Here's a program that figures out how your apples can be divided equally among a group of children:

// Program 2.2 - Working with integer variables
#include <iostream>                  // For output to the screen

using std::cout;
using std::endl;

int main() {
  int apples = 10;                   // Definition for the variable apples
  int children = 3;                  // Definition for the variable children

  // Calculate fruit per child
  cout << endl                       // Start on a new line
       << "Each child gets "         // Output some text
       << apples/children            // Output number of apples per child
       << " fruit.";                 // Output some more text

  // Calculate number left over
  cout << endl                       // Start on a new line
       << "We have "                 // Output some text
       << apples % children          // Output apples left over
       << " left over.";             // Output some more text

  cout << endl;
  return 0;                          // End the program
}

I've been very liberal with the comments here, just to make it clear what's going on in each statement. You wouldn't normally put such self-evident information in the comments. This program produces the following output:

Each child gets 3 fruit.
We have 1 left over.

HOW IT WORKS

This example is unlikely to overtax your brain cells. The first two statements in main() define the variables apples and children:

int apples = 10;      // Definition for the variable apples
int children = 3;     // Definition for the variable children

The variable apples is initialized to 10, and children is initialized to 3. Had you wanted, you could have defined both variables in a single statement, for example:

int apples = 10, children = 3;

This statement declares both apples and children to be of type int and initializes them as before.
A comma is used to separate the variables that you're declaring, and the whole thing ends with a semicolon. Of course, it isn't so easy to add explanatory comments here as there's less space, but you could split the statement over two lines:

int apples = 10,      // Definition for the variable apples
    children = 3;     // Definition for the variable children

A comma still separates the two variables, and now you have space for the comments at the end of each line. You can declare as many variables as you want in a single statement, and you can spread the statement over as many lines as you see fit. However, it's considered good style to stick to one declaration per statement.

The next statement calculates how many apples each child gets when the apples are divided up and outputs the result:

cout << endl                    // Start on a new line
     << "Each child gets "      // Output some text
     << apples/children         // Output number of apples per child
     << " fruit.";              // Output some more text

Notice that the four lines here make up a single statement, and that you put comments on each line that are therefore effectively in the middle of the statement. The arithmetic expression uses the division operator to obtain the number of apples that each child gets. This expression just involves the two variables that you've defined, but in general you can mix variables and literals in an expression in any way that you want.

The next statement calculates and outputs the number of apples that are left over:

cout << endl                    // Start on a new line
     << "We have "              // Output some text
     << apples % children       // Output apples left over
     << " left over.";          // Output some more text

Here, you use the modulus operator to calculate the remainder, and the result is output between the text strings in a single output statement. If you wanted, you could have generated all of the output with a single statement. Alternatively, you could equally well have output each string and data value in a separate statement.
In this example, you used the int type for your variables, but there are other kinds of integer variables.

Integer Variable Types

The type of an integer variable will determine how much memory is allocated for it and, consequently, the range of values that you can store in it. Table 2-3 describes the four basic types of integer variables.

Table 2-3. Basic Integer Variable Types

Type Name    Typical Memory per Variable
char         1 byte
short int    2 bytes
int          4 bytes
long int     8 bytes

Apart from type char, which is always 1 byte, there are no standard amounts of memory for storing integer variables of the other three types in Table 2-3. The only thing required by the C++ standard is that each type in the sequence must occupy at least as much memory as its predecessor. I've shown the memory for the types on my system, and this is a common arrangement. The type short int is usually written in its abbreviated form, short, and the type long int is usually written simply as long. These abbreviations correspond to the original C type names, so they're universally accepted by C++ compilers. At first sight, char might seem an odd name for an integer type, but because its primary use is to store an integer code that represents a character, it does make sense.

You've already seen how to declare a variable of type int, and you declare variables of type short int and type long int in exactly the same way. For example, you could define and initialize a variable called bean_count, of type short int, with the following statement:

short int bean_count = 5;

As I said, you could also write this as follows:

short bean_count = 5;

Similarly, you can declare a variable of type long int with this statement:

long int earth_diameter = 12756000L;    // Diameter in meters

Notice that I appended an L to the initializing value, which indicates that it's an integer literal of type long int. If you don't put the L here, it won't cause a problem.
The compiler will automatically arrange for the value to be converted from type int to type long int. However, it's good programming practice to make the types of your initializing values consistent with the types of your variables.

Signed and Unsigned Integer Types

Variables of type short int, type int, and type long int can store negative and positive values, so they're implicitly signed integer types. If you want to be explicit about it, you can also write these types as signed short int, signed int, and signed long int, respectively. However, they're most commonly written without using the signed keyword. You may see just the keyword signed written by itself as a type, which means signed int. However, you don't see this very often, probably because int is fewer characters to type! Occasionally you'll see the type unsigned int written simply as unsigned. Both of these abbreviations originate in C. My personal preference is to always specify the underlying type when using the keywords signed or unsigned, as then there's no question about what is meant.

An unsigned integer variable can only store positive values, and you won't be surprised to learn that the type names for three such types are unsigned short int, unsigned int, and unsigned long int. These types are useful when you know you're only going to be dealing with positive values, but they're more frequently used to store values that are viewed as bit patterns rather than numbers. You'll see more about this in Chapter 3, when you look at the bitwise operators that you use to manipulate individual bits in a variable.

You need a way of differentiating unsigned integer literals from signed integer literals, if only because 65535 can be stored in 16 bits as an unsigned value, but as a signed value you have to go to 32 bits. Unsigned integer literals are identified by the letter U or u following the digits of the value. This applies to decimal, hexadecimal, and octal integer literals.
If you want to specify a literal to be of type unsigned long int, you use both the U or u and the L.

Figure 2-3. Signed and unsigned integers

Figure 2-3 illustrates the difference between 16-bit signed and unsigned integers. As you've seen, with signed integers, the leftmost bit indicates the sign of the number. It will be 0 for a positive value and 1 for a negative value. For unsigned integers, all the bits can be treated as data bits. Because an unsigned number is always regarded as positive, there is no sign bit; the leftmost bit is just part of the number. If you think that the binary value for -32768 looks strange, remember that negative values are normally represented in 2's complement form. As you'll see if you look in Appendix E, to convert a positive binary value to a negative binary value (or vice versa) in 2's complement form, you just flip all the bits and then add 1. Of course, you can't represent +32768 as a 16-bit signed integer, as the available range only runs from -32768 to +32767.

Signed and Unsigned Char Types

Values stored as type char may actually be signed or unsigned, depending on how your compiler chooses to implement the type, so this may vary between different computers or even between different compilers on the same computer. If you want a single byte to store integer values rather than character codes, you should explicitly declare the variable as either type signed char or type unsigned char. Note that although type char will be equivalent to either signed char or unsigned char in any given compiler context, all three are considered to be different types. Of course, the words char, short, int, long, signed, and unsigned are all keywords.

Integer Ranges

The basic unit of memory in C++ is a byte. As far as C++ is concerned, a byte has sufficient bits to contain any character in the basic character set used by your C++ compiler, but it is otherwise undefined.
As long as a byte can accommodate at least 96 characters, then it's fine according to the C++ standard. This implies that a byte in C++ is at least 7 bits, but it could be more, and 8-bit bytes seem to be popular at the moment at least. The intention here is to remove hardware architecture dependencies from the standard. If at some future date there's a reason to produce machines with 16-bit bytes, for instance, then the C++ standard will accommodate that and will still apply. For the time being, though, you should be safe in assuming that a byte is 8 bits.

As I said earlier, the memory allocated for each type of integer variable isn't stipulated exactly within the ANSI C++ standard. What is said on the topic is the following:

- A variable of type char occupies sufficient memory to allow any character from the basic character set to be stored, which is 1 byte.
- A value of type int will occupy the number of bytes that's natural for the hardware environment in which the program is being compiled.
- The signed and unsigned versions of a type will occupy the same amount of memory.
- A value of type short int will occupy at least as many bytes as type char; a value of type int will occupy at least as many bytes as type short int; and a value of type long int will occupy at least as many bytes as type int.

In a sentence, type char is the smallest with 1 byte, type long is the largest, and type int is somewhere between the two but occupies the number of bytes best suited to your computer's integer arithmetic capability. The reason for this vagueness is that the number of bytes used for type int on any given computer should correspond to that which results in the most efficient integer arithmetic. This will depend on the architecture of the machine. In most machines, it's 4 bytes, but as the performance and architecture of computer hardware advances, there's increasing potential for it to be 8 bytes.
The actual number of bytes allocated to each integer type by your compiler will determine the range of values that can be stored. Table 2-4 shows the ranges for some typical sizes of integer variables.

Table 2-4. Ranges of Values for Integer Variables

Type            Bytes   Range of Values
char            1       -128 to 127
unsigned char   1       0U to 255U
short           2       -32768 to 32767
unsigned short  2       0U to 65535U
int             4       -2147483648 to 2147483647
unsigned int    4       0U to 4294967295U
long            8       -9223372036854775808L to 9223372036854775807L
unsigned long   8       0 to 18446744073709551615UL

The Type of an Integer Literal

I've introduced the idea of prefixes being applied to an integer value to affect the number base for the value. I've also informally introduced the notion of the suffixes U and L being used to identify integers as being of an unsigned type or of type long. Let's now pin these options down more precisely and understand how the compiler will determine the type of a given integer literal. First, Table 2-5 presents a summary of the options you have for the prefix and suffix to an integer value.

Table 2-5. Suffixes and Prefixes for Integer Values

Suffix/Prefix                   Description
No prefix                       The value is a decimal number.
Prefix of 0x or 0X              The value is a hexadecimal number.
Prefix of 0 (a zero)            The value is an octal number.
Suffix of u or U                The value is of an unsigned type.
Suffix of L or l (lowercase L)  The value is of type long.

The last two items in the table can be combined in any sequence or combination of upper- and lowercase U and L, so UL, LU, uL, Lu, and so on are all acceptable. Although you can use the suffix l, which is a lowercase L, you should avoid doing so because of the obvious potential for confusion with the digit 1. Now let's look at how the various combinations of prefixes and suffixes that you can use with integer literals will be interpreted by the compiler:

- A decimal integer literal with no suffix will be interpreted as being of type int if it can be accommodated within the range of values provided by that type.
  Otherwise, it will be interpreted as being of type long.
- An octal or hexadecimal literal with no suffix will be interpreted as the first of the types int, unsigned int, long, and unsigned long in which the value can be accommodated.
- A literal with a suffix of u or U will be interpreted as being of type unsigned int if the value can be accommodated within that type. Otherwise, it will be interpreted as type unsigned long.
- A literal with a suffix of l or L will be interpreted as being of type long if the value can be accommodated within that type. Otherwise, it will be interpreted as type unsigned long.
- A literal with a suffix combining both U and L in upper- or lowercase will be interpreted as being of type unsigned long.

If the value for a literal is outside the range of the possible types, then the behavior is undefined but will usually result in an error message from the compiler. You'll undoubtedly have noticed that you have no way of specifying an integer literal to be of type short int or unsigned short int. When you supply an initial value in a declaration for a variable of either of these types, the compiler will automatically convert the value of the literal to the required type, for example:

unsigned short n = 1000;

Here, according to the preceding rules, the literal will be interpreted as being of type int. The compiler will convert the value to type unsigned short and use that as the initial value for the variable. If you used -1000 as the initial value, this couldn't be converted to type unsigned short because negative numbers are by definition outside the range of this type. This would undoubtedly result in an error message from the compiler.

Remember that the range of values that can be stored for each integer type is dependent on your compiler. Table 2-4 shows "typical" values, but your compiler may well allocate different amounts of memory for particular types, thus providing for different ranges of values.
You also need to be conscious of the possible variations in types when porting an application from one system to another. So far, I've largely ignored character literals and variables of type char. Because these have some unique characteristics, you'll deal with character literals and variables that store character codes later in this chapter and press on with integer calculations first. In particular, you need to know how to store a result.

The Assignment Operator

You can store the result of a calculation in a variable using the assignment operator, =. Let's look at an example. Suppose you declare three variables with the statements

int total_fruit = 0;
int apples = 10;
int oranges = 6;

You can calculate the total number of fruit with the statement

total_fruit = apples + oranges;

This statement will first calculate the value on the right side of the =, the sum of apples and oranges, and then store the result in the total_fruit variable that appears on the left side of the =. It goes almost without saying that the expression on the right side of an assignment can be as complicated as you need.
If you've defined variables called boys and girls that will contain the number of boys and the number of girls who are to share the fruit, you can calculate how many pieces of fruit each child will receive if you divide the total equally between them with the statements

int fruit_per_child = 0;
fruit_per_child = (apples + oranges) / (boys + girls);

Note that you could equally well have declared the variable fruit_per_child and initialized it with the result of the expression directly:

int fruit_per_child = (apples + oranges) / (boys + girls);

You can initialize a variable with any expression, as long as all the variables involved have already been defined in preceding statements.

Try It Out: Using the Assignment Operator

You can package some of the code fragments from the previous section into an executable program, just to see them in action:

// Program 2.3 - Using the assignment operator
#include <iostream>

using std::cout;
using std::endl;

int main() {
  int apples = 10;
  int oranges = 6;
  int boys = 3;
  int girls = 4;

  int fruit_per_child = (apples + oranges)/(boys + girls);

  cout << endl
       << "Each child gets " << fruit_per_child << " fruit.";
  cout << endl;
  return 0;
}

This produces the following output:

Each child gets 2 fruit.

This is exactly what you would expect from the preceding discussion.

Multiple Assignments

You can assign values to several variables in a single statement. For example, the following code sets the contents of apples and oranges to the same value:

apples = oranges = 10;

The assignment operator is right associative, so this statement executes by first storing the value 10 in oranges and then storing the value in oranges in apples, so it is effectively

apples = (oranges = 10);

This implies that the expression (oranges = 10) has a value; namely, the value stored in oranges, which is 10. This isn't merely a curiosity.
Occasions will arise in which it's convenient to assign a value to a variable within an expression and then to use that value for some other purpose. You can write statements such as this:

fruit = (oranges = 10) + (apples = 11);

which will store 10 in oranges, 11 in apples, then add the two together and store the result in fruit. It illustrates that an assignment expression has a value. However, although you can write statements like this, I don't recommend it. As a rule, you should limit the number of operations per statement. Always assume that one day another programmer will want to understand and modify your code. As such, it's your job to promote clarity and avoid ambiguity.

Modifying the Value of a Variable

Because the assignment operation first evaluates the right side and then stores the result in the variable on the left, you can write statements like this:

apples = apples * 2;

This statement calculates the value of the right side, apples * 2, using the current value of apples, and then stores the result back in the apples variable. The effect of the statement is therefore to double the value contained in apples. The need to operate on the existing value of a variable comes up frequently; so much so, in fact, that C++ has a special form of the assignment operator to provide a shorthand way of expressing this.

The op= Assignment Operators

The op= assignment operators are so called because they're composed of an operator and an equals sign (=). Using one such operator, the previous statement for doubling the value of apples could be written as follows:

apples *= 2;

This is exactly the same operation as the statement in the last section. The apples variable is multiplied by the value of the expression on the right side, and the result is stored back in apples. The right side can be any expression you like.
For instance, you could write apples *= oranges + 2; This is equivalent to apples = apples * (oranges + 2); Here, the value stored in apples is multiplied by the number of oranges plus 2, and the result is stored back in apples . (Though why you would want to multiply apples and oranges together is beyond me!) The op= form of assignment also works with the addition operator, so to increase the number of oranges by 2, you could write oranges += 2; This has the same effect as the same as the statement oranges = oranges + 2; You should be able to see a pattern emerging by now. You could write the general form of an assignment statement using the op= operator as lhs op= rhs; Here, lhs is a variable and rhs is an expression. This is equivalent to the statement lhs = lhs op (rhs) ; The parentheses around rhs mean that the expression rhs is evaluated first and th e result becomes the right operand for the operation op. NOTE lhs is an lvalue , which is an entity to which you can assign a value. Lvalues are so called because they can appear on the left side of an assignment. The result of every expression in C++ will be either an lvalue or an rvalue .An rvalue is a result that isn’t an lvalue—that is, it can’t appear on the left of an assignment operation. You can use a whole range of operators in the op= form of assignment. Table 2-6 shows the complete set, including some operators you’ll meet in the next chapter. Table 2-6. op= Assignment Operators Addition + Bitwise AND & Subtraction – Bitwise OR | Multiplication • Bitwise exclusive OR ^ Division / Shift left << Modulus % Shift right >> Note that there can be no spaces between the operator and the = . If you include a space, it will be flagged as an error.{mospagebreak title=Incrementing and Decrementing Integers} increment and decrement operators, ++ and — respectively.+= operator. I’m sure you’ve also deduced that you can decrement a variable with -= . 
However, there are two other rather unusual arithmetic operators that can perform the same tasks. They're called the increment and decrement operators, ++ and -- respectively. These operators are more than just other options, and you'll find them to be quite an asset once you get further into applying C++ in earnest. The increment and decrement operators are unary operators that can be applied to an integer variable. For example, assuming the variables are of type int, the following three statements all have exactly the same effect:

count = count + 1;
count += 1;
++count;

The preceding statements each increment the variable count by 1. The last form, using the increment operator, is clearly the most concise. The action of this operator is different from the other operators that you've seen, in that it directly modifies the value of its operand. The effect in an expression is to increment the value of the variable and then to use that incremented value in the expression. For example, suppose count has the value 5, and you execute this statement:

total = ++count + 6;

The increment and decrement operators are of higher precedence than all the binary arithmetic operators. Thus, count will be first incremented to the value 6, and then this value will be used in the evaluation of the expression on the right side of the assignment operation. The variable total will therefore be assigned the value 12.

You use the decrement operator in the same way as the increment operator:

total = --count + 6;

Assuming count is 6 before executing this statement, the decrement operator will reduce it to 5, and this value will be used to calculate the value to be stored in total, which will be 11.

Postfix Increment and Decrement Operations

So far, you've written the operators in front of the variables to which they apply. This is called the prefix form. The operators can also be written after the variables to which they apply; this is the postfix form, and the effect is slightly different.
When you use the postfix form of ++, the variable to which it applies is incremented after its value is used in context. For example, you can rewrite the earlier example as follows:

total = count++ + 6;

With the same initial value of 5 for count, total is assigned the value 11, because the initial value of count is used to evaluate the expression before the increment by 1 is applied. The variable count will then be incremented to 6. The preceding statement is equivalent to the following two statements:

total = count + 6;
++count;

In an expression such as a++ + b, or even a+++b, it's less than obvious what is meant, or indeed what the compiler will do. These two expressions are actually the same, but in the second case you might really have meant a + ++b, which has a different meaning: it evaluates to one more than the other two expressions. It would be clearer to write the preceding statement as follows:

total = 6 + count++;

Alternatively, you can use parentheses:

total = (count++) + 6;

The rules that I've discussed in relation to the increment operator also apply to the decrement operator. For example, suppose count has the initial value 5, and you write the statement

total = --count + 6;

This results in total having the value 10 assigned, whereas if you write the statement

total = 6 + count--;

the value of total is set to 11. You should avoid using the prefix form of these operators to operate on a variable more than once in an expression. Suppose the variable count has the value 5, and you write

total = ++count * 3 + ++count * 5;

First, it looks rather untidy, but that's the least of the problems with this. Second, and crucially, the statement modifies the value of a variable more than once and the result is undefined in C++. You could and should get an error message from the compiler with this statement, but in some instances you won't. This isn't a desirable feature in a program to say the least, so don't modify a variable more than once in a statement.
Note also that the effects of statements such as the following are undefined:

k = ++k + 1;

Here you're incrementing the value of the variable that appears on the right of the assignment operator, so you're attempting to modify the value of the variable k twice within one expression. Each variable can be modified only once as a result of evaluating a single expression, and the prior value of the variable may only be accessed to determine the value to be stored. Although such expressions are undefined according to the C++ standard, this doesn't mean that your compiler won't compile them. It just means that there is no guarantee of consistency in the results.

The increment and decrement operators are usually applied to integers, particularly in the context of loops, as you'll see in Chapter 5, and you'll see later in this chapter that they can be applied to floating-point values too. In later chapters, you'll explore how they can also be applied to certain other data types in C++, in some cases with rather specialized (but very useful) effects.

The const Keyword

Explicit numeric literals in a program are sometimes referred to as magic numbers, particularly when their purpose and origin is less than obvious. Using a variable such as feet_per_yard that you've initialized to the value 3 makes it absolutely clear what you're doing.

Another good reason for using a variable instead of a magic number is that you reduce the number of maintenance points in your code. Imagine that your magic number represents something that changes from time to time (an interest rate, for instance) and that it crops up on several occasions in your code.
When the rate changes, you could be faced with a sizable task to correct your program. If you've defined a variable for the purpose, you only need to change the value once, at the point of initialization. Of course, if you use a variable to hold a constant of this kind, you really want to nail the value down and protect it from accidental modifications. You can use the keyword const to do this, for example:

const int feet_per_yard = 3;    // Conversion factor yards to feet

You can declare any kind of "variable" as const. The compiler will check that you don't attempt to alter the value of such a variable. For example, if you put something const on the left of an assignment operator, it will be flagged as an error. The obvious consequence of this is that you must always supply an initial value for a variable that you declare as const. Be aware that declaring a variable as const alters its type. A variable of type const int is quite different from a variable of type int.

Try It Out: Using const

You could implement a little program to convert a length entered as yards, feet, and inches into inches:

// Program 2.4 - Using const
#include <iostream>

using std::cin;
using std::cout;
using std::endl;

int main() {
  const int inches_per_foot = 12;
  const int feet_per_yard = 3;
  int yards = 0;
  int feet = 0;
  int inches = 0;

  // Read the length from the keyboard
  cout << "Enter a length as yards, feet, and inches: ";
  cin >> yards >> feet >> inches;

  // Output the length in inches
  cout << endl
       << "Length in inches is "
       << inches + inches_per_foot * (feet + feet_per_yard * yards)
       << endl;
  return 0;
}

A typical result from this program is the following:

Enter a length as yards, feet, and inches: 2 2 11
Length in inches is 107

HOW IT WORKS

There's an extra using statement compared to previous examples:

using std::cin;

This introduces the name cin from the std namespace into the program file; cin refers to the standard input stream, the keyboard.
You have two conversion constants defined by the statements

const int inches_per_foot = 12;
const int feet_per_yard = 3;

Declaring them with the keyword const will prevent direct modification of these variables. You could test this by adding a statement such as

inches_per_foot = 15;

With a statement like this after the declaration of the constant, the program would no longer compile. You prompt for input and read the values for yards, feet, and inches with these statements:

cout << "Enter a length as yards, feet, and inches: ";
cin >> yards >> feet >> inches;

Notice how the second line specifies several successive input operations from the stream, cin. You do this by using the extraction operator, >>, that I mentioned briefly in the last chapter. It's analogous to using cout, the stream output operation, for multiple values. The appearance of the insertion and extraction operators provides you with a visual cue as to the direction in which data flows. The first value read from the keyboard will be stored in yards, the second in feet, and the third in inches. The input handling here is very flexible: you can enter all three values on one line, separated by spaces (in fact, by any whitespace characters), or you can enter them on several lines. You perform the conversion to inches within the output statement itself:

cout << endl
     << "Length in inches is "
     << inches + inches_per_foot * (feet + feet_per_yard * yards)
     << endl;

As you can see, the fact that your conversion factors were declared as const in no way affects their use in expressions, just as long as you don't try to modify them.

Numerical Functions for Integers

I explain functions in detail in Chapter 8, but that won't stop us from making use of a few from the standard library before that. Let's do a quick reprise of what's going on when you use functions and cover some of the terminology they introduce.
A function is a named, self-contained block of code that carries out a specific task. Often, this will involve it performing some operation on data that you supply and then returning the result of that operation to your program. In those circumstances in which a function returns a value that is numeric, the function can participate in an arithmetic expression just like an ordinary variable. In general, a call to a function looks like this:

FunctionName(argument1, argument2, ... )

Depending on the function in question, you can supply zero, one, or more values for it to work with by placing them in parentheses after its name when you call it from your program. The values you pass to the function in this way are called arguments. Like all values in C++, the arguments you pass to a function, and the value it returns to your program, have types that you must take care to conform with in order to use the function. You can access some numerical functions that you can apply to integers by adding an #include directive for the <cstdlib> header to your program.

The abs() function returns the absolute value of the argument, which can be of type int or type long. The absolute value of a number is just its magnitude, so taking the absolute value of a negative number returns the number with a positive sign, whereas a positive number will be returned unchanged. The value returned by the abs() function will be of the same type as the argument, for example:

int value = -20;
int result = std::abs(value);    // Result is 20

The <cstdlib> header also declares the div() function. The div() function takes two arguments, both of type int. It returns the result of dividing the first argument by the second as well as the remainder from the operation in the form of a structure of type div_t.
I go into structures in detail later on, so for the moment you'll just see how to access the quotient and the remainder from what is returned by the div() function through an example:

int value = 93;
int divisor = 17;
div_t results = std::div(value, divisor);        // Call the function
std::cout << "\nQuotient is " << results.quot;   // Quotient is 5
std::cout << "\nRemainder is " << results.rem;   // Remainder is 8

The first two statements define the variables value and divisor and give them the initial values 93 and 17, respectively. The next statement calls the div() function to divide value by divisor. You store the resulting structure of type div_t that is returned by the function in the variable results, which is also of type div_t. In the first output statement, you access the quotient from results by appending the name quot to results, separated by a period. The period is called the member access operator, and here you're using it to access the quot member of the results structure. Similarly, in the last statement you use the member access operator to output the value of the remainder, which is available from the rem member of the results structure. Any structure of type div_t will have members with the names quot and rem, and you always access them by using the member access operator. Note that you could have used literals directly as arguments to the div() function. In this case, the statement calling the function would be

div_t results = std::div(93, 17);

The ldiv() function performs the same operation as the div() function, but on arguments of type long. The result is returned as a structure of type ldiv_t, which has members quot and rem that are of type long.

Being able to generate random numbers in a program is very useful. You need to be able to build randomness into game programs, for instance; otherwise, they become very boring very quickly. The <cstdlib> header also declares functions for generating pseudo-random numbers. The rand() function returns a random integer as type int.
The function doesn't require any arguments, so you can just use it like this:

int random_value = std::rand();    // A random integer

You store the integer that's returned by the rand() function here in the variable random_value, but you could equally well use it in an arithmetic expression, for example:

int even = 2*std::rand();

The value returned by the rand() function will be from 0 to RAND_MAX, where RAND_MAX is a symbol that is defined in the <cstdlib> header. Because RAND_MAX is defined by a preprocessing macro (you'll learn what a preprocessing macro is in Chapter 10), it isn't within the std namespace, so you don't need to qualify the name when you use it. Any symbol that's defined by a macro won't be in the std namespace because it isn't a name that refers to something. By the time the compiler gets to compile the code, such a symbol will no longer be present because it will have already been replaced by something else during the preprocessing phase.

Making the Sequence Start at Random

Using rand() as you have so far, the sequence of numbers will always be the same. This is because the function uses a default seed value in the algorithm that generates the random numbers. This is fine for testing, but once you have a working game program, you'll really want different sequences each time the program runs. You can change the seed value that will be used to generate the numbers by passing a new seed value as an integer argument to the srand() function that is also declared in <cstdlib>, for example:

std::srand(13);    // Set seed for rand to 13

The argument to the srand() function must be a value of type unsigned int. Although the preceding statement will result in rand() generating a different sequence from the default, you really need a random seed to get a different sequence from rand() each time you execute a program. Fortunately, the clock on your computer provides a ready-made source of random seed values.
The <ctime> header declares the time() function, which returns the current time on your system clock as an integer, so you can use it to set the seed like this:

std::srand((unsigned int)std::time(0));

There are a few things that you'll have to take on trust for the moment. The argument to the time() function here is 0. There's another possibility for the argument, but you don't need it here so you'll ignore it. The subexpression (unsigned int) serves to convert the value returned by the time() function to type unsigned int, which is the type required for the argument to the srand() function. Without this, the statement wouldn't compile. Type conversion is something else that you'll look into later. Let's put a working example together that makes use of random number generation.

Try It Out: Generating Random Integers

Here's the code:

// Program 2.5 Using Random Integers
#include <iostream>
#include <cstdlib>
#include <ctime>
using std::cout;
using std::endl;
using std::rand;
using std::srand;
using std::time;

int main() {
  const int limit1 = 500;    // Upper limit for one set of random values
  const int limit2 = 31;     // Upper limit for another set of values

  cout << "First we will use the default sequence from rand().\n";
  cout << "Three random integers from 0 to " << RAND_MAX << ": "
       << rand() << " " << rand() << " " << rand() << endl;

  cout << endl << "Now we will use a new seed for rand().\n";
  srand((unsigned int)time(0));    // Set a new seed
  cout << "Three random integers from 0 to " << RAND_MAX << ": "
       << rand() << " " << rand() << " " << rand() << endl;
  return 0;
}

On my system I get the following output:

First we will use the default sequence from rand().
Three random integers from 0 to 32767: 6334 18467 41
Now we will use a new seed for rand().
Three random integers from 0 to 32767: 4610 32532 28452

HOW IT WORKS

This is a straightforward use of the rand() function, first with the default seed to start the sequence:

cout << "Three random integers from 0 to " << RAND_MAX << ": "
     << rand() << " " << rand() << " " << rand() << endl;

Each call to rand() returns a value that will be from 0 to RAND_MAX, and you call the function three times to get a sequence of three random integers.
Next, you set the seed value as the current value of the system clock with this statement:

srand((unsigned int)time(0));    // Set a new seed

This statement will generally result in a different seed being set each time you execute the program. You then repeat the statement that you executed previously with the default seed set. Thus, each time you run this program, the first set will always produce the same output, whereas with the second set, the output should be different.

Floating-Point Operations

Numerical values that aren't integers are stored as floating-point numbers. A floating-point number is represented by a signed mantissa together with an exponent. The value of a floating-point number is the signed value of the mantissa, multiplied by 10 to the power of the exponent, as shown in Table 2-7.

Table 2-7. Floating-Point Number Value

Sign (+/-)   Mantissa   Exponent   Value
-            1.2345     3          -1.2345 × 10^3 (which is -1234.5)

You can write a floating-point literal in three basic forms:

- As a decimal value including a decimal point (for example, 110.0).
- With an exponent (for example, 11E1) in which the decimal part is multiplied by the power of 10 specified after the E (for exponent). You have the option of using either an upper- or a lowercase letter E to precede the exponent.
- Using both a decimal point and an exponent (for example, 1.1E2).

All three examples correspond to the same value, 110.0. Note that spaces aren't allowed within floating-point literals, so you must not write 1.1 E2, for example. The latter would be interpreted by the compiler as two separate things: the floating-point literal 1.1 and the name E2.

NOTE A floating-point literal must contain a decimal point, or an exponent, or both. If you write a numeric literal with neither, then you have an integer.

Floating-Point Data Types

There are three floating-point data types that you can use, as described in Table 2-8. The term "precision" here refers to the number of significant digits in the mantissa.
The data types are in order of increasing precision, with float providing the lowest number of digits in the mantissa and long double the highest. Note that the precision only determines the number of digits in the mantissa. The range of numbers that can be represented by a particular type is determined by the range of possible exponents. The precision and range of values aren’t prescribed by the ANSI standard for C++, so what you get with each of these types depends on your compiler. This will usually make the best of the floating-point hardware facilities provided by your computer. Generally, type long double will provide a precision that’s greater than or equal to that of type double , which in turn will provide a precision that is greater than or equal to that of type float . Typically, you’ll find that type float will provide 7 digits precision, type double will provide 15 digits precision, and type long double will provide 19 digits precision, although double and long double turn out to be the same with some compilers. As well as increased precision, you’ll usually get an increased range of values with types double and long double . Typical ranges of values that you can represent with the floating-point types on an Intel processor are shown in Table 2-9. The numbers of decimal digits of precision in Table 2-9 are approximate. Zero can be represented exactly for each of these types, but values between zero and the lower limit in the positive or negative range can’t be represented, so these lower limits for the ranges are the smallest nonzero values that you can have. Simple floating-point literals with just a decimal point are of type double , so let’s look at how to define variables of that type first. You can specify a floating-point variable using the keyword double , as in this statement: double inches_to_mm = 25.4; This declares the variable inches_to_mm to be of type double and initializes it with the value 25.4. 
You can also use const when declaring floating-point variables, and this is a case in which you could sensibly do so. If you want to fix the value of the variable, the declaration statement might be

const double inches_to_mm = 25.4;    // A constant conversion factor

If you don't need the precision and range of values that variables of type double provide, you can opt to use the keyword float to declare your floating-point variable, for example:

float pi = 3.14159f;

This statement defines a variable pi with the initial value 3.14159. The f at the end of the literal specifies it to be a float type. Without the f, the literal would have been of type double, which wouldn't cause a problem in this case, although you may get a warning message from your compiler. You can also use an uppercase letter F to indicate that a floating-point literal is of type float. To specify a literal of type long double, you append an upper- or lowercase L to the number. You could therefore declare and initialize a variable of this type with the statement

long double root2 = 1.4142135623730950488L;    // Square root of 2

Floating-Point Operations

The modulus operator, %, can't be used with floating-point operands, but all the other binary arithmetic operators that you have seen, +, -, *, and /, can be. You can also apply the prefix and postfix increment and decrement operators, ++ and --, to a floating-point variable with the same effect as for an integer—the variable will be incremented or decremented by 1. As with integer operands, the result of division by zero is undefined so far as the standard is concerned, but specific C++ implementations generally have their own way of dealing with this, so consult your product documentation. With most computers today, the hardware floating-point operations are implemented according to the IEEE 754 standard (also known as IEC 559).
Although IEEE 754 isn't required by the C++ standard, it does provide for identification of some aspects of floating-point operations on machines on which IEEE 754 applies. The floating-point standard defines special values having a binary mantissa of all zeros and an exponent of all ones to represent +infinity or -infinity, depending on the sign. When you divide a positive nonzero value by zero, the result will be +infinity, and dividing a negative value by zero will result in -infinity. Another special floating-point value defined by IEEE 754 is called Not a Number, usually abbreviated to NaN. This is used to represent a result that isn't mathematically defined, such as arises when you divide zero by zero or you divide infinity by infinity. Any subsequent operation in which either or both operands are a value of NaN results in NaN. Once an operation in your program results in a value of ±infinity, this will pollute all subsequent operations in which it participates. Combining a normal value with ±infinity results in ±infinity. Dividing ±infinity by ±infinity or multiplying ±infinity by zero results in NaN. Table 2-10 summarizes all these possibilities.

Using floating-point variables is really quite straightforward, but there's no substitute for experience, so let's try an example.

Try It Out: Floating-Point Arithmetic

Suppose that you want to construct a circular pond in which you want to keep a number of fish. Having looked into the matter, you know that you must allow 2 square feet of surface area on the pond for every 6 inches of fish length. You need to figure out the diameter of the pond that will keep the fish happy.
Here's how you can do it:

// Program 2.6 Sizing a pond for happy fish
#include <iostream>
#include <cmath>
using std::cout;
using std::cin;
using std::endl;
using std::sqrt;

int main() {
  const double fish_factor = 2.0/0.5;    // Area per unit length of fish
  const double inches_per_foot = 12.0;
  const double pi = 3.14159265;
  double fish_count = 0.0;               // Number of fish
  double fish_length = 0.0;              // Average length of fish

  // Read the number of fish and their average length from the keyboard
  cout << "Enter the number of fish you want to keep: ";
  cin >> fish_count;
  cout << "Enter the average fish length in inches: ";
  cin >> fish_length;

  fish_length = fish_length/inches_per_foot;    // Convert to feet

  // Calculate the required surface area
  double pond_area = fish_count * fish_length * fish_factor;

  // Calculate the pond diameter from the area
  double pond_diameter = 2.0 * sqrt(pond_area/pi);

  cout << "\nPond diameter required for " << fish_count
       << " fish is " << pond_diameter << " feet.\n";
  return 0;
}

With input values of 20 fish with an average length of 9 inches, this example produces the following output:

Enter the number of fish you want to keep: 20
Enter the average fish length in inches: 9

Pond diameter required for 20 fish is 8.74039 feet.

HOW IT WORKS

You first declare three const variables that you'll use in the calculation:

const double fish_factor = 2.0/0.5;    // Area per unit length of fish
const double inches_per_foot = 12.0;
const double pi = 3.14159265;

Notice the use of a constant expression to specify the value for fish_factor. You can use any expression that produces a result of the appropriate type to define an initializing value for a variable. You have declared fish_factor, inches_per_foot, and pi as const because you don't want to allow them to be altered. Next, you declare variables in which you'll store the user input:

double fish_count = 0.0;     // Number of fish
double fish_length = 0.0;    // Average length of fish

You don't have to initialize these, but it's good practice to do so.
Because the input for the fish length is in inches, you need to convert it to feet before you use it in the calculation for the pond:

fish_length = fish_length/inches_per_foot;    // Convert to feet

This stores the converted value back in the original variable. You get the required area for the pond with the following statement:

double pond_area = fish_count * fish_length * fish_factor;

The product of fish_count and fish_length gives the total length of all the fish, and multiplying this by fish_factor gives the required area. The area of any circle is given by the formula πr², where r is the radius. You can therefore calculate the radius of the pond as the square root of the area divided by π. The diameter is then twice the radius, and the whole calculation is carried out by this statement:

double pond_diameter = 2.0 * sqrt(pond_area / pi);

You obtain the square root using the sqrt() function that's declared in the standard header <cmath>. The last step before exiting main() is to output the result:

cout << "\nPond diameter required for " << fish_count
     << " fish is " << pond_diameter << " feet.\n";

This outputs the pond diameter required for the number of fish specified.

Working with Floating-Point Values

For most computations using floating-point values, you'll find that type double is more than adequate. However, you need to be aware of the limitations and pitfalls of working with floating-point variables. If you're not careful, your results may be inaccurate, or even incorrect. The following are common sources of errors when using floating-point values:

- Many decimal values don't convert exactly to binary floating-point values. The small errors that occur can easily be amplified in your calculations to produce large errors.
- Taking the difference between two nearly identical values can lose precision. If you take the difference between two values of type float that differ in the sixth significant digit, you'll produce a result that may have only one or two digits of accuracy.
The other significant digits that are stored may represent errors.

- Dealing with a wide range of possible values can lead to errors. You can create an elementary illustration of this by adding two values stored as type float with 7 digits of precision but in which one value is 10^8 times larger than the other. You can add the smaller value to the larger as many times as you like, and the larger value will be unchanged.

The <cfloat> header defines constants for the floating-point types that are the smallest values that you can add to 1.0 and get a different result. The constants are FLT_EPSILON, DBL_EPSILON, and LDBL_EPSILON. Let's see how these errors can manifest themselves in practice, albeit in a somewhat artificial situation.

Try It Out: Errors in Floating-Point Calculations

Here's an example contrived to illustrate how the first two points can combine to produce errors:

// Program 2.7 Floating point errors
#include <iostream>
using std::cout;
using std::endl;

int main() {
  float value1 = 0.1f;
  float value2 = 2.1f;
  value1 -= 0.09f;                  // Should be 0.01
  value2 -= 2.09f;                  // Should be 0.01

  cout << value1 - value2 << endl;  // Should output zero
  return 0;
}

The value displayed should be zero, but on my computer this program produces the following:

7.45058e-009

HOW IT WORKS

The reason for the error is that none of the numerical values is stored exactly. If you add code to output the values of value1 and value2 after they've been modified, you should see a discrepancy between them. Of course, the final difference between the values of value1 and value2 is a very small number, but you could be using this totally spurious value in other calculations in which the error could be amplified. If you multiply this result by 10^10, say, you'll get an answer around 7.45, when the result should really be zero. Similarly, if you compare these two values, expecting them to be equal, you don't get the result you expect.

CAUTION Never rely on an exact floating-point representation of a decimal value in your program code.

Tweaking the Output

The previous program outputs the floating-point value in a very sensible fashion. It gave you 5 decimal places, and it used scientific notation (that is, a mantissa and an exponent).
However, you could have chosen to have the output displayed using "normal" decimal notation by employing some more output manipulators.

Try It Out: Yet More Output Manipulators

Here's the same code as in the previous "Try It Out" exercise, except that it uses additional manipulators to improve the appearance of the output:

// Program 2.8 Experimenting with floating point output
#include <iostream>
#include <iomanip>
using std::cout;
using std::endl;
using std::setprecision;
using std::fixed;
using std::scientific;

int main() {
  float value1 = 0.1f;
  float value2 = 2.1f;
  value1 -= 0.09f;                          // Should be 0.01
  value2 -= 2.09f;                          // Should be 0.01

  cout << setprecision(14) << fixed;        // Change to fixed notation
  cout << value1 - value2 << endl;          // Should output zero

  cout << setprecision(5) << scientific;    // Set scientific notation
  cout << value1 - value2 << endl;          // Should output zero
  return 0;
}

When I run the modified program, this is the output I get:

0.00000000745058
7.45058e-009

HOW IT WORKS

This code uses three new manipulators. The setprecision() manipulator specifies how many decimal places should appear after the decimal point when you're outputting a floating-point number. The fixed and scientific manipulators complement one another and choose the format in which a floating-point number should be displayed when they're written to the stream. By default, your C++ compiler will select either scientific or fixed, depending on the particular value you're outputting, and you saw in the first version of this program that it performed that task admirably. The default number of decimal places isn't defined in the standard, but five is common. Let's look at the changes made. Apart from the #include directive for the <iomanip> header and the additional using declarations, the changes are in the statements that produce the output. The first line is easy: you use the manipulators like you used setw(), by sending them to the output stream with the insertion operator. Their effects can then clearly be seen in the first line of output: you get a floating-point value with 14 decimal places and no exponent.
Note that these manipulators differ from setw() in that they're modal. In other words, they remain in effect for the stream until the end of the program, unless you set a different option. That's the reason for the third line in the preceding code—you have to set scientific mode and a precision of 5 explicitly in order to return to "default" behavior. You can see that you've succeeded, though, because the second line of output is the same as the one produced by the original program.

The <cmath> header declares a range of numerical functions that you can use with floating-point values, as shown in Table 2-11.

Table 2-11. Floating-Point Functions in <cmath>

Function           Description
abs(arg)           Returns the absolute value of arg as the same type as arg, where arg can be of any floating-point type. (There are versions of the abs() function for integer types declared in the <cstdlib> header.)
fabs(arg)          Returns the absolute value of arg as the same type as the argument. The argument can be int, long, float, double, or long double.
ceil(arg)          Returns a floating-point value of the same type as arg that is the smallest integer greater than or equal to arg, so ceil(2.5) produces the value 3.0. arg can be of any floating-point type.
floor(arg)         Returns a floating-point value of the same type as arg that is the largest integer less than or equal to arg, so the value returned by floor(2.5) will be 2.0. arg can be of any floating-point type.
exp(arg)           Returns the value of e^arg as the same type as arg. arg can be of any floating-point type.
log(arg)           Returns the natural logarithm (to base e) of arg as the same type as arg. arg can be any floating-point type.
log10(arg)         Returns the logarithm to base 10 of arg as the same type as arg. arg can be any floating-point type.
pow(arg1, arg2)    Returns the value of arg1 raised to the power arg2, which is arg1^arg2. Thus the result of pow(2, 3) will be 8, and the result of pow(1.5, 3) will be 3.375. The arguments can both be of type int or any floating-point type. The second argument, arg2, may also be of type int with arg1 of type int, or long, or any floating-point type.
The value returned will be of the same type as arg1.

Table 2-12 shows the trigonometric functions that you have available in the <cmath> header. The arguments to these functions can be of any floating-point type and the result will be returned as the same type as the argument(s). Let's look at some examples of how these are used. Here's how you can calculate the sine of an angle in radians:

double angle = 1.5;                        // In radians
double sine_value = std::sin(angle);

If the angle is in degrees, you can calculate the tangent by using a value for π to convert to radians:

float angle_deg = 60.0f;                   // Angle in degrees
const float pi = 3.14159f;
const float pi_degrees = 180.0f;
float tangent = std::tan(pi*angle_deg/pi_degrees);

If you know the height of the church steeple is 100 feet and you're standing 50 feet from the base of the steeple, you can calculate the angle in radians of the top of the steeple from where you stand like this:

double height = 100.0;                     // Steeple height in feet
double distance = 50.0;                    // Distance from the base in feet
double angle = std::atan(height/distance); // Angle in radians

You can use this value in angle and the value of distance to calculate the distance from your toe to the top of the steeple:

double toe_to_tip = distance/std::cos(angle);

Of course, fans of Pythagoras of Samos could obtain the result much more easily, like this:

double toe_to_tip = std::sqrt(std::pow(distance,2) + std::pow(height,2));

Working with Characters

You declare variables of type char in the same way as variables of the other types that you've seen, for example:

char letter;
char yes, no;

The first statement declares a single variable of type char with the name letter. The second statement declares two variables of type char having the names yes and no. Each of these variables can store the code for a single character. Because you haven't provided initial values for these variables, they'll contain junk values.

Character Literals

When you declare a variable of type char, you can initialize it with a character literal. You write a character literal as the character that you require between single quotes. For example, 'z', '3', and '?' are all character literals.
Some characters are problematical to enter as literals. Obviously, a single quote presents a bit of a difficulty because it's a delimiter for a character literal. In fact, it isn't legal in C++ to put either a single quote or a backslash character between single quotes. Control characters such as newline and tab are also a problem because they result in an effect when you press the key for the appropriate character rather than entering the character as data. You can specify all of these problem characters by using escape sequences that begin with a backslash, as shown in Table 2-13. To specify a character literal corresponding to any of these characters, you just type in the corresponding escape sequence between single quotes. For instance, newline is '\n' and backslash is '\\'. There are also escape sequences that you can use to specify a character by its code expressed as either an octal or a hexadecimal value. The escape sequence for an octal character code is one to three octal digits preceded by a backslash. The escape sequence for a hexadecimal character code is one or more hexadecimal digits preceded by \x. You write both forms between single quotes when you want to define a character literal. For example, the letter 'A' could be written as hexadecimal '\x41' or octal '\101' in US-ASCII code. Obviously, you could write codes that won't fit within a single byte, in which case the result is implementation defined. If you write a character literal with more than one character between the single quotes and the characters don't represent an escape sequence—'abc' is an example—then the literal is described as a multicharacter literal and will be of type int. The numerical value of such a literal is implementation defined but will usually be the result of placing the 1-byte codes for the characters in successive bytes of the int value.
If you specify a multicharacter literal with more than four characters, this will usually result in an error message from the compiler. You now know enough about character literals to initialize your variables of type char properly.

Initializing char Variables

You can define and initialize a variable of type char with the statement

char letter = 'A';    // Stores a single letter 'A'

This statement defines the variable with the name letter to be of type char with an initial value 'A'. If your compiler represents characters using US-ASCII codes, this will have the decimal value 65. You can declare and initialize multiple variables in a single statement:

char yes = 'y', no = 'n', tab = '\t';

Because you can treat variables of type char as integers, you could equally well declare and initialize the variable letter with this statement:

char letter = 65;    // Stores the ASCII code for 'A'

Remember that type char may be signed or unsigned by default, depending on the compiler, so this will affect what numerical values can be accommodated. If char is unsigned, values can be from 0 to 255. If it's signed, values can be from -128 to +127. Of course, the range of bit patterns that can be stored is the same in both cases. They're just interpreted differently. Of course, you can use the variable letter as an operand in integer operations, so you can write

letter += 2;

This will result in the value stored in letter being incremented to 67, which is 'C' in US-ASCII. You can find all the US-ASCII codes in Appendix A of this book.

CAUTION Although I've assumed US-ASCII coding in the examples, as I noted earlier, although this is usually the case, it doesn't have to be so. On older mainframe computers, for instance, characters may be represented using Extended Binary Coded Decimal Interchange Code (EBCDIC), in which the codes for some characters are different from US-ASCII.
You can explicitly declare a variable as type signed char or unsigned char, which will affect the range of integers that can be represented. For example, you can declare a variable as follows:

unsigned char ch = 0U;

In this case, the numerical values can range from 0 to 255. When you read from a stream into a variable of type char, the first nonwhitespace character will be stored. This means that you can't read whitespace characters in this way—they're simply ignored. Further, you can't read a numerical value into a variable of type char—if you try, you'll find that the character code for the first digit will be stored. When you output a variable of type char to the screen, it will be as a character, not a numerical value. You can see this demonstrated in the next example.

Try It Out: Handling Character Values

This example reads a character from the keyboard, outputs the character and its numerical code, increments the value of the character, and outputs the result as a character and as an integer:

// Program 2.9 - Handling character values
#include <iostream>
using std::cout;
using std::cin;
using std::endl;

int main() {
  char ch = 0;
  int ch_value = 0;

  // Read a character from the keyboard
  cout << "Enter a character: ";
  cin >> ch;

  ch_value = ch;      // Get integer value of character
  cout << endl << ch << " is " << ch_value;

  ch_value = ++ch;    // Increment ch and store as integer
  cout << endl << ch << " is " << ch_value << endl;
  return 0;
}

Typical output from this example is as follows:

Enter a character: w

w is 119
x is 120

HOW IT WORKS

After prompting for input, the program reads a character from the keyboard with the statement

cin >> ch;

Only nonwhitespace characters are accepted, so you can press Enter or enter spaces and tabs and they'll all be ignored. Stream output will always output the variable ch as a character. To get the numerical code, you need a way to convert it to an integer type.
The next statement does this:

ch_value = ch; // Get integer value of character

The compiler will arrange to convert the value stored in ch from type char to type int so that it can be stored in the variable ch_value. You'll see more about automatic conversions in the next chapter, when I discuss expressions involving values of different types. Now you can output the character as well as its integer code with the following statement:

cout << endl << ch << " is " << ch_value;

The next statement demonstrates that you can operate with variables of type char as integers:

ch_value = ++ch; // Increment ch and store as integer

This statement increments the contents of ch and stores the result in the variable ch_value, so you have both the next character and its numerical representation. This is output to the display with exactly the same statement as was used previously. Although you just incremented ch here, variables of type char can be used with all of the arithmetic operators, just like any of the integer types.

Working with Extended Character Sets

Single-byte character codes such as ASCII or EBCDIC are generally adequate for national language character sets that use Latin characters. There are also 8-bit character encodings that will accommodate other languages such as Greek or Russian. However, if you want to work with these and Latin characters simultaneously, or if you want to handle character sets for Asian languages that require much larger numbers of character codes than the ASCII set, 256 character codes doesn't go far enough.

The type wchar_t is a character type that can store all members of the largest extended character set that's supported by an implementation. The type name derives from wide characters, because the character is "wider" than the usual single-byte character. By contrast, type char is referred to as "narrow" because of the limited range of character codes that are available.
The size of variables of type wchar_t isn't stipulated by the C++ standard, except that it will have the same characteristics as one of the other integer types. It is often 2 bytes on PCs, and typically the underlying type is unsigned short, but it can also be 4 bytes with some compilers, especially those implemented on Unix workstations.

Wide-Character Literals

You define wide-character literals in the same way as narrow character literals that you use with type char, but you prefix them with the letter L. For example,

wchar_t wide_letter = L'Z';

defines the variable wide_letter to be of type wchar_t and initializes it to the wide-character representation for Z. Your keyboard may not have keys for representing other national language characters, but you can still create them using hexadecimal notation, for example:

wchar_t wide_letter = L'\x0438'; // Cyrillic

The value between the single quotes is an escape sequence that allows you to specify a character by a hexadecimal representation of the character code. The backslash indicates the start of the escape sequence, and the x after the backslash signifies that the code is hexadecimal. The absence of x or X would indicate that the characters that follow are to be interpreted as octal digits. Of course, you could also use the notation for UCS character literals:

wchar_t wide_letter = L'\u0438'; // Cyrillic

If your compiler supports 4-byte UCS characters, you could also initialize a variable of type wchar_t with a UCS character specified as \Udddddddd, where d is a hexadecimal digit.

Wide-Character Streams

The streams cin and cout that you've been using are narrow-character streams. They only handle characters that consist of a single byte, so you can't extract from cin into a variable of type wchar_t.
The streams wcin and wcout are the wide-character equivalents of cin and cout. For example, you can read a wide character like this:

wchar_t wide_letter = 0;
std::wcin >> wide_letter; // Read a wide character

Although you'll always be able to write wide characters to wcout, this doesn't mean that such characters will display correctly or at all. It depends on whether your operating system recognizes the character codes.

Functional Notation for Initial Values

An alternative notation for specifying the initial value for a variable when you declare it is called functional notation. The term stems from the fact that you put the initial value between parentheses after the variable name, so it looks like a function call, as you'll discover later on.

Let's look at an example. Instead of writing a declaration as

int unlucky = 13;

you have the option to write the statement as

int unlucky(13);

Both statements achieve exactly the same result: they declare the variable unlucky as type int and give it an initial value of 13. You can initialize other types of variables using functional notation. For instance, you could declare and initialize a variable to store a character with this statement:

char letter('A');

However, functional notation for initializing variables is primarily used for the initialization of variables of a data type that you've defined. In this case, it really does involve calling a function. The initialization of variables of the fundamental types in C++ normally uses the approach you have taken up to now. You'll have to wait until Chapter 11 to find out about creating your own types and how those kinds of variables get initialized!

Summary

In this chapter, I covered the basics of computation in C++. You learned about most of the fundamental types of data that are provided for in the language.
The essentials of what I've discussed up to now are as follows:

- Numeric and character constants are called literals.
- You can define integer literals as decimal, hexadecimal, or octal values.
- A floating-point literal must contain a decimal point, or an exponent, or both.
- Named objects in C++, such as variables, can have names that consist of a sequence of letters and digits, the first of which is a letter, and where an underscore is considered to be a letter. Upper- and lowercase letters are distinguished.
- Names that begin with an underscore followed by a capital letter, and names that contain two successive underscores, are reserved for use within the standard library, so you shouldn't use them for names of your own variables.
- All literals and variables in C++ are of a given type.
- The basic types that can store integers are short, int, and long. These store signed integers by default, but you can also use the type modifier unsigned preceding any of these type names to produce a type that occupies the same number of bytes but only stores unsigned integers.
- A variable of type char can store a single character and occupies 1 byte. The type char may be signed or unsigned by default, depending on your compiler. You can also use variables of the types signed char and unsigned char to store integers.
- The type wchar_t can store a wide character and occupies either 2 or 4 bytes, depending on your compiler.
- The floating-point data types are float, double, and long double.
- The name and type of a variable appear in a declaration statement ending with a semicolon. A declaration for a variable that results in memory being allocated is also a definition of the variable.
- Variables may be given initial values when they're declared, and it's good programming practice to do so.
- You can protect the value of a "variable" of a basic type by using the modifier const.
The compiler will check for any attempts within the program source file to modify a variable declared as const.

- An lvalue is an object or expression that can appear on the left side of an assignment. Non-const variables are examples of lvalues.

Although I discussed quite a few basic types in this chapter, don't be misled into thinking that's all there are. There are some other basic types, as well as more complex types based on the basic set, as you'll see, and eventually you'll be creating original types of your own.

Exercises

The following exercises let you apply what you've learned in this chapter. If you get stuck, you can check the solutions on the Apress website, but that really should be a last resort.

Exercise 2-1. Write a program that will compute the area of a circle. The program should prompt for the radius of the circle to be entered from the keyboard, calculate the area using the formula area = pi * radius * radius, and then display the result.

Exercise 2-2. Using your solution for Exercise 2-1, improve the code so that the user can control the precision of the output by entering the number of digits required. (Hint: Use the setprecision() manipulator.)

Exercise 2-3. Create a program that converts inches to feet-and-inches—for example, an input of 77 inches should produce an output of 6 feet and 5 inches. Prompt the user to enter an integer value corresponding to the number of inches, and then make the conversion and output the result. (Hint: Use a const to store the inches-to-feet conversion rate; the modulus operator will be very helpful.)

Exercise 2-4. For your birthday you've been given a long tape measure and an instrument that allows you to determine angles—the angle between the horizontal and a line to the top of a tree, for instance.
If you know the distance, d, you are from a tree, and the height, h, of your eye when peering into your angle-measuring device, you can calculate the height of the tree with this formula:

h + d * tan(angle)

Create a program to read h in inches, d in feet and inches, and angle in degrees from the keyboard, and output the height of the tree in feet.

NOTE: There is no need to chop down any trees to verify the accuracy of your program. Just check the solutions on the Apress website.

Exercise 2-5. Here's an exercise for puzzle fans. Write a program that will prompt the user to enter two different positive integers. Identify in the output the value of the larger integer and the value of the smaller integer. (This can be done with what you've learned in this chapter!)
http://www.devshed.com/c/a/practices/basic-data-types-and-calculations/2/
Haskell

Copy the code below into a file called HelloWorld.hs and load (or compile) it. It should show a window with title "Hello World!", a menu bar with File and About, and a status bar at the bottom, that says "Welcome to wxHaskell". If it doesn't work, you might try to copy the contents of the $wxHaskellDir/lib directory to the ghc install directory.

Hello World

The frame function has the type

frame :: [Prop (Frame ())] -> IO (Frame ())

It takes a list of "frame properties" and returns the corresponding frame. We'll look deeper into properties later, but a property is typically a combination of an attribute and a value. What we're interested in now is the title. This is in the text attribute and has type (Textual w) => Attr w String. The most important thing here, is that it's a String attribute. Here's how we code it:

gui :: IO ()
gui = do
  frame [text := "Hello World!"]

The operator (:=) takes an attribute and a value and combines both into a property. Note that frame returns an IO (Frame ()). You can change the type of gui to IO (Frame ()), but it might be better just to add return (). Now we have our own GUI consisting of a frame with title "Hello World!". Its source:

module Main where

import Graphics.UI.WX

main :: IO ()
main = start gui

gui :: IO ()
gui = do
  frame [text := "Hello World!"]
  return ()

The result should look like the screenshot. (It might look slightly different on Linux or MacOS X, on which wxHaskell also runs)

Controls

A text label

A text label is created with the staticText function.

A button

We won't give the button any functionality until we get to events, but already something visible will happen when you click on it. A button is a control, just like staticText. Look it up in Graphics.UI.WX.Controls. Again, we need a window and a list of properties. We'll use the frame again. text is also an attribute of a button:

gui :: IO ()
gui = do
  f <- frame [text := "Hello World!"]
  staticText f [text := "Hello StaticText!"]
  button f [text := "Hello Button!"]
  return ()

Load it into GHCi (or compile it with GHC) and... hey!? What's that? The button's been covered up by the label! We're going to fix that next.
Layout

The reason that the label and the button overlap, is that we haven't set a layout for our frame yet. Layouts are created using the functions found in the documentation of Graphics.UI.WXCore.Layout. Note that you don't have to import Graphics.UI.WXCore to use layouts.

The documentation says we can turn a member of the widget class into a layout by using the widget function. Also, windows are a member of the widget class. But, wait a minute... we only have one window, and that's the frame! Nope... we have more, look at Graphics.UI.WX.Controls and click on any occurrence of the word Control. You'll be taken to Graphics.UI.WXCore.WxcClassTypes, and it is there we see that a Control is also a type synonym of a special type of window. We'll need to change the code a bit, but here it is.

gui :: IO ()
gui = do
  f <- frame [text := "Hello World!"]
  st <- staticText f [text := "Hello StaticText!"]
  b <- button f [text := "Hello Button!"]
  return ()

Now we can use widget st and widget b to create a layout of the staticText and the button. layout is an attribute of the frame, so we'll set it there. To combine the two layouts into one, we use layout combinators; row and column look nice. They take an integer and a list of layouts. We can easily make a list of layouts of the button and the staticText. The integer is the spacing between the elements of the list. Let's try something:

gui :: IO ()
gui = do
  f <- frame [text := "Hello World!"]
  st <- staticText f [text := "Hello StaticText!"]
  b <- button f [text := "Hello Button!"]
  set f [layout := row 0 [widget st, widget b]]
  return ()

Play around with the integer and see what happens. Also, change row into column. Try to change the order of the elements in the list to get a feeling of how it works. For fun, try to add widget b several more times in the list. What happens? Here are a few exercises to spark your imagination. Remember to use the documentation!
After having completed the exercises, the end result should look like this:

You could have used different spacing for row and column or have the options of the radiobox displayed horizontally.

Attributes

After all this, you might be wondering: "Where did that set function suddenly come from?" and "How would I know if text is an attribute of something?". Both answers lie in the attribute system of wxHaskell.

Setting and modifying attributes

Look at the type signature of get: it's w -> Attr w a -> IO a. text is a String attribute, so we have an IO String which we can bind to ftext. The last line edits the text of the frame. Yep, destructive updates are possible in wxHaskell. We can overwrite the properties using (:=) anytime with set. This inspires us to write a modify function:

modify :: w -> Attr w a -> (a -> a) -> IO ()
modify w attr f = do
  val <- get w attr
  set w [ attr := f val ]

First it gets the value, then it sets it again after applying the function. Surely we're not the first one to think of that... Look at this operator: (:~). You can use it in set because it takes an attribute and a function. The result is a property in which the original value is modified by the function. That means we can write:

gui :: IO ()
gui = do
  f <- frame [ text := "Hello World!" ]
  st <- staticText f []
  ftext <- get f text
  set st [ text := ftext ]
  set f [ text :~ \txt -> txt ++ " And hello again!" ]

This is a great place to use anonymous functions with the lambda-notation.

There are two more operators we can use to set or modify properties: (::=) and (::~). They do the same as (:=) and (:~), except that the supplied value or function also takes the widget itself as an argument.

Events

There are a few classes that deserve special attention. They are the Reactive class and the Commanding class. As you can see in the documentation of these classes, they don't add attributes (of the form Attr w a), but events. The Commanding class adds the command event. We'll use a button to demonstrate event handling.
Here's a simple GUI with a button and a staticText: gui :: IO () gui = do f <- frame [ text := "Event Handling" ] st <- staticText f [ text := "You haven\'t clicked the button yet." ] b <- button f [ text := "Click me!" ] set f [ layout := column 25 [ widget st, widget b ] ] We want to change the staticText when you press the button. We'll need the on function: b <- button f [ text := "Click me!" , on command := --stuff ] The type of on: Event w a -> Attr w a. command is of type Event w (IO ()), so we need an IO-function. This function is called the Event handler. Here's what we get: gui :: IO () gui = do f <- frame [ text := "Event Handling" ] st <- staticText f [ text := "You haven\'t clicked the button yet." ] b <- button f [ text := "Click me!" , on command := set st [ text := "You have clicked the button!" ] ] set f [ layout := column 25 [ widget st, widget b ] ] Insert text about event filters here
http://en.m.wikibooks.org/wiki/Haskell/GUI
SpecialTacticsRanged(object) Special tactics for ranged fighters. int SpecialTacticsRanged( object oTarget ); Parameters oTarget The "enemy" object we wish to use the tactics against. Description Special tactics for ranged fighters. The caller will attempt to stay in ranged distance and will make use of active ranged combat feats (Rapid Shot and Called Shot). If the target is too close and is not currently attacking the caller, the caller will instead try to find a ranged enemy to attack. If that fails, the caller will try to run away from the target to a ranged distance. This will fall through and return FALSE after three consecutive attempts to get away from an opponent within melee distance, at which point the caller will use normal tactics until they are again at a ranged distance from their target. Returns TRUE on success, FALSE on failure. Requirements #include "x0_i0_combat" Version ??? See Also author: Baragg, editor: Mistress
http://palmergames.com/Lexicon/Lexicon_1_69/function.SpecialTacticsRanged.html
Last Updated on August 7, 2019

Preparing data for photo captioning has become practical thanks to deep learning and freely available datasets of photos and their descriptions. In this tutorial, you will discover how to prepare photos and textual descriptions ready for developing a deep learning automatic photo caption generation model. After completing this tutorial, you will know how to prepare both the photo data and the text descriptions for modeling.

Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.

Let's get started.

- Update Nov/2017: Fixed small typos in the code in the "Whole Description Sequence Model" section. Thanks Moustapha Cheikh and Matthew.
- Update Feb/2019: Provided direct links for the Flickr8k_Dataset dataset, as the official site was taken down.

How to Prepare a Photo Caption Dataset for Training a Deep Learning Model
Photo by beverlyislike, some rights reserved.

Tutorial Overview

This tutorial is divided into 9 parts; they are:

- Download the Flickr8K Dataset
- How to Load Photographs
- Pre-Calculate Photo Features
- How to Load Descriptions
- Prepare Description Text
- Whole Description Sequence Model
- Word-By-Word Model
- Progressive Loading
- Pre-Calculate Photo Features

Python Environment

This tutorial assumes you have a Python 3 SciPy environment installed. You can use Python 2, but you may need to change some of the examples. You must have Keras (2.0 or higher) installed with either the TensorFlow or Theano backend. The tutorial also assumes you have scikit-learn, Pandas, NumPy and Matplotlib installed. If you need help with your environment, see this post:

Download the Flickr8K Dataset

A good dataset to use when getting started with image captioning is the Flickr8K dataset.
The reason is that it is realistic and relatively small so that you can download it and build models on your workstation using a CPU.

The definitive description of the dataset is in the paper "Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics" from 2013. The authors describe the dataset as follows:

— Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics, 2013.

The dataset is available for free. You must complete a request form and the links to the dataset will be emailed to you. I would love to link to them for you, but the email address expressly requests: "Please do not redistribute the dataset". You can use the link below to request the dataset:

Within a short time, you will receive an email that contains links to two files:

- Flickr8k_Dataset.zip (1 Gigabyte) An archive of all photographs.
- Flickr8k_text.zip (2.2 Megabytes) An archive of all text descriptions for photographs.

UPDATE (Feb/2019): The official site seems to have been taken down (although the form still works). Here are some direct download links from my datasets GitHub repository:

Download the datasets and unzip them into your current working directory. You will have two directories:

- Flicker8k_Dataset: Contains 8092 photographs in jpeg format.
- Flickr8k_text: Contains a number of files containing different sources of descriptions for the photographs.

Next, let's look at how to load the images.

How to Load Photographs

In this section, we will develop some code to load the photos for use with the Keras deep learning library in Python. The image file names are unique image identifiers. For example, here is a sample of image file names:

Keras provides the load_img() function that can be used to load the image files directly as an array of pixels. The pixel data needs to be converted to a NumPy array for use in Keras. We can use the img_to_array() keras function to convert the loaded data.
We may want to use a pre-defined feature extraction model, such as a state-of-the-art deep image classification network trained on ImageNet. The Oxford Visual Geometry Group (VGG) model is popular for this purpose and is available in Keras.

If we decide to use this pre-trained model as a feature extractor in our model, we can preprocess the pixel data for the model by using the preprocess_input() function in Keras, for example:

We may also want to force the loading of the photo to have the same pixel dimensions as the VGG model, which are 224 x 224 pixels. We can do that in the call to load_img(), for example:

We may want to extract the unique image identifier from the image filename. We can do that by splitting the filename string by the '.' (period) character and retrieving the first element of the resulting array:

We can tie all of this together and develop a function that, given the name of the directory containing the photos, will load and pre-process all of the photos for the VGG model and return them in a dictionary keyed on their unique image identifiers.

Running this example prints the number of loaded images. It takes a few minutes to run. If you do not have the RAM to hold all images (about 5GB by my estimation), then you can add an if-statement to break the loop early after 100 images have been loaded, for example:

Pre-Calculate Photo Features

It is possible to use a pre-trained model to extract the features from photos in the dataset and store the features to file. This is an efficiency that means that the language part of the model that turns features extracted from the photo into textual descriptions can be trained standalone from the feature extraction model. The benefit is that the very large pre-trained models do not need to be loaded, held in memory, and used to process each photo while training the language model.
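The identifier extraction and the early-exit loading loop described above can be sketched in plain Python. The helper names here are hypothetical, and the actual pixel loading via load_img() is omitted:

```python
def image_id(filename):
    """Return the unique image identifier: everything before the first '.'
    in the filename, e.g. '990890291_afc72be141.jpg' -> '990890291_afc72be141'."""
    return filename.split('.')[0]

def load_some_photos(filenames, limit=100):
    """Build a dictionary keyed on image identifiers, breaking out of the
    loop early after `limit` photos to bound memory use."""
    images = {}
    for name in filenames:
        if len(images) >= limit:
            break  # early exit so RAM is not exhausted
        images[image_id(name)] = None  # placeholder for the pixel array
    return images

photos = load_some_photos(['a.jpg', 'b.jpg', 'c.jpg'], limit=2)
```

In the real pipeline, the placeholder value would be the preprocessed pixel array returned by load_img() and img_to_array().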
Later, the feature extraction model and language model can be put back together for making predictions on new photos.

In this section, we will extend the photo loading behavior developed in the previous section to load all photos, extract their features using a pre-trained VGG model, and store the extracted features to a new file that can be loaded and used to train the language model.

The first step is to load the VGG model. This model is provided directly in Keras and can be loaded as follows. Note that this will download the 500-megabyte model weights to your computer, which may take a few minutes.

This will load the VGG 16-layer model. The two Dense output layers as well as the classification output layer are removed from the model by setting include_top=False. The output from the final pooling layer is taken as the features extracted from the image.

Next, we can walk over all images in the directory of images as in the previous section and call the predict() function on the model for each prepared image to get the extracted features. The features can then be stored in a dictionary keyed on the image id.

The complete example is listed below. The example may take some time to complete, perhaps one hour.

After all features are extracted, the dictionary is stored in the file 'features.pkl' in the current working directory. These features can then be loaded later and used as input for training a language model. You could experiment with other types of pre-trained models in Keras.

How to Load Descriptions

It is important to take a moment to talk about the descriptions; there are a number available. The file Flickr8k.token.txt contains a list of image identifiers (used in the image filenames) and tokenized descriptions. Each image has multiple descriptions. Below is a sample of the descriptions from the file showing 5 different descriptions for a single image.
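The storage step described above (a features dictionary saved to 'features.pkl') can be sketched with the standard pickle module alone; plain lists stand in for the VGG pooling-layer output arrays here:

```python
import os
import pickle
import tempfile

# Hypothetical extracted features: image id -> feature vector.
features = {
    '990890291_afc72be141': [0.1, 0.0, 2.3],
    '99171998_7cc800ceef': [1.2, 0.4, 0.0],
}

def save_features(features, path):
    """Store the features dictionary to file, as done for 'features.pkl'."""
    with open(path, 'wb') as handle:
        pickle.dump(features, handle)

def load_features(path):
    """Reload the stored features for training the language model."""
    with open(path, 'rb') as handle:
        return pickle.load(handle)

# Round-trip demonstration using a temporary file.
path = os.path.join(tempfile.gettempdir(), 'features_demo.pkl')
save_features(features, path)
restored = load_features(path)
```

The benefit is exactly as stated above: the language model can later call load_features() without ever loading the VGG network.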
The file ExpertAnnotations.txt indicates which of the descriptions for each image were written by "experts" and which were written by crowdsource workers asked to describe the image.

Finally, the file CrowdFlowerAnnotations.txt provides the frequency of crowd workers that indicate whether captions suit each image. These frequencies can be interpreted probabilistically.

The authors of the paper describe the annotations as follows:

… annotators were asked to write sentences that describe the depicted scenes, situations, events and entities (people, animals, other objects). We collected multiple captions for each image because there is a considerable degree of variance in the way many images can be described.

— Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics, 2013.

There are also lists of the photo identifiers to use in a train/test split so that you can compare results reported in the paper.

The first step is to decide which captions to use. The simplest approach is to use the first description for each photograph.

First, we need a function to load the entire annotations file ('Flickr8k.token.txt') into memory. Below is a function to do this called load_doc() that, given a filename, will return the document as a string.

We can see from the sample of the file above that we need only split each line by white space and take the first element as the image identifier and the rest as the image description. For example:

We can then clean up the image identifier by removing the filename extension and the description number. We can also put the description tokens back together into a string for later processing.

We can put all of this together into a function. Below defines the load_descriptions() function that will take the loaded file, process it line-by-line, and return a dictionary of image identifiers to their first description.

Running the example prints the number of loaded image descriptions.
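The parsing just described can be sketched as pure Python, working on an inline illustrative sample instead of the real file (load_doc() would supply the file contents as one string):

```python
# Illustrative sample in the format of Flickr8k.token.txt:
# '<image>.jpg#<n>' followed by a tab and the tokenized description.
sample = (
    "1305564994_00513f9a5b.jpg#0\tA man in street racer armor examines a motor bike .\n"
    "1305564994_00513f9a5b.jpg#1\tTwo racers drive a white bike down a road .\n"
    "3148811252_2fa9490b5d.jpg#0\tA dog runs on the grass .\n"
)

def load_descriptions(doc):
    """Map each image identifier to its first description only."""
    mapping = {}
    for line in doc.strip().split('\n'):
        tokens = line.split()
        image_id, image_desc = tokens[0], tokens[1:]
        # drop the '.jpg#N' filename extension and description number
        image_id = image_id.split('.')[0]
        if image_id not in mapping:  # keep only the first description
            mapping[image_id] = ' '.join(image_desc)
    return mapping

descriptions = load_descriptions(sample)
```

Running this on the full file instead of the sample is what produces the 8,092 loaded descriptions mentioned above.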
There are other ways to load descriptions that may turn out to be more accurate for the data. Use the above example as a starting point and let me know what you come up with. Post your approach in the comments below.

Prepare Description Text

The descriptions are tokenized; this means that each token is comprised of words separated by white space. It also means that punctuation marks are separated as their own tokens, such as periods ('.') and apostrophes for word plurals ('s).

It is a good idea to clean up the description text before using it in a model. Some ideas of data cleaning we can form include:

- Normalizing the case of all tokens to lowercase.
- Removing all punctuation from tokens.
- Removing all tokens that contain one or fewer characters (after punctuation is removed), e.g. 'a' and hanging 's' characters.

We can implement these simple cleaning operations in a function that cleans each description in the loaded dictionary from the previous section. Below defines the clean_descriptions() function that will clean each loaded description.

We can then save the clean text to file for later use by our model. Each line will contain the image identifier followed by the clean description. Below defines the save_doc() function for saving the cleaned descriptions to file.

Putting this all together with the loading of descriptions from the previous section, the complete example is listed below.

Running the example first loads 8,092 descriptions, cleans them, summarizes the vocabulary of 4,484 unique words, then saves them to a new file called 'descriptions.txt'.

Open the new file 'descriptions.txt' in a text editor and review the contents. You should see somewhat readable descriptions of photos ready for modeling. The vocabulary is still relatively large. To make modeling easier, especially the first time around, I would recommend further reducing the vocabulary by removing words that only appear once or twice across all descriptions.
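The three cleaning steps listed above can be sketched for a single description (a simplified stand-in for the clean_descriptions() function that operates on the whole dictionary):

```python
import string

def clean_description(desc):
    """Lowercase every token, strip punctuation, and drop tokens of one
    character or fewer."""
    table = str.maketrans('', '', string.punctuation)
    tokens = desc.split()
    tokens = [word.lower() for word in tokens]            # normalize case
    tokens = [word.translate(table) for word in tokens]   # remove punctuation
    tokens = [word for word in tokens if len(word) > 1]   # drop 'a', stray 's'
    return ' '.join(tokens)
```

For example, the tokenized caption "A dog 's tongue hangs out ." becomes "dog tongue hangs out": the period token empties out, and the single characters 'a' and 's' are dropped.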
Whole Description Sequence Model

There are many ways to model the caption generation problem. One naive way is to create a model that outputs the entire textual description in a one-shot manner. This is a naive model because it puts a heavy burden on the model to both interpret the meaning of the photograph and generate words, then arrange those words into the correct order.

This is not unlike the language translation problem used in an Encoder-Decoder recurrent neural network where the entire translated sentence is output one word at a time given an encoding of the input sequence. Here we would use an encoding of the image to generate the output sentence instead.

The image may be encoded using a pre-trained model used for image classification, such as the VGG model trained on ImageNet mentioned above.

The output of the model would be a probability distribution over each word in the vocabulary. The sequence would be as long as the longest photo description.

The descriptions would, therefore, need to be first integer encoded where each word in the vocabulary is assigned a unique integer and sequences of words would be replaced with sequences of integers. The integer sequences would then need to be one hot encoded to represent the idealized probability distribution over the vocabulary for each word in the sequence.

We can use tools in Keras to prepare the descriptions for this type of model.

The first step is to load the mapping of image identifiers to clean descriptions stored in 'descriptions.txt'.

Running this piece loads the 8,092 photo descriptions into a dictionary keyed on image identifiers. These identifiers can then be used to load each photo file for the corresponding inputs to the model.

Next, we need to extract all of the description text so we can encode it.

We can use the Keras Tokenizer class to consistently map each word in the vocabulary to an integer. First, the object is created, then it is fit on the description text.
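What the Keras Tokenizer does can be mimicked in plain Python: assign each distinct word a unique positive integer (most frequent first), then encode texts as integer sequences. This is a simplified stand-in, not the real class:

```python
def fit_tokenizer(texts):
    """Map each distinct word to a unique 1-based integer, most frequent
    words first (ties broken alphabetically), like keras Tokenizer."""
    counts = {}
    for text in texts:
        for word in text.split():
            counts[word] = counts.get(word, 0) + 1
    ordered = sorted(counts, key=lambda w: (-counts[w], w))
    return {word: i + 1 for i, word in enumerate(ordered)}

def texts_to_sequences(texts, word_index):
    """Replace each word with its assigned integer."""
    return [[word_index[w] for w in t.split()] for t in texts]

word_index = fit_tokenizer(['dog runs fast', 'dog sits'])
seqs = texts_to_sequences(['dog runs fast', 'dog sits'], word_index)
```

Index 0 is deliberately left unused, as in Keras, so it can serve as the padding value later.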
The fit tokenizer can later be saved to file for consistent decoding of the predictions back to vocabulary words.

Next, we can use the fit tokenizer to encode the photo descriptions into sequences of integers.

The model will require all output sequences to have the same length for training. We can achieve this by padding all encoded sequences to have the same length as the longest encoded sequence. We can pad the sequences with 0 values after the list of words. Keras provides the pad_sequences() function to pad the sequences.

Finally, we can one hot encode the padded sequences to have one sparse vector for each word in the sequence. Keras provides the to_categorical() function to perform this operation.

Once encoded, we can ensure that the sequence output data has the right shape for the model.

Putting all of this together, the complete example is listed below.

Running the example first prints the number of loaded image descriptions (8,092 photos), the dataset vocabulary size (4,485 words), the length of the longest description (28 words), then finally the shape of the data for fitting a prediction model in the form [samples, sequence length, features].

As mentioned, outputting the entire sequence may be challenging for the model. We will look at a simpler model in the next section.

Word-By-Word Model

A simpler model for generating a caption for photographs is to generate one word given both the image as input and the last word generated. This model would then have to be called recursively to generate each word in the description with previous predictions as input.

Using the word as input gives the model a forced context for predicting the next word in the sequence. This is the model used in prior research, such as:

A word embedding layer can be used to represent the input words. Like the feature extraction model for the photos, this too can be pre-trained either on a large corpus or on the dataset of all descriptions.
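The padding and one hot encoding steps described above can be mimicked with plain Python stand-ins for pad_sequences() and to_categorical():

```python
def pad_sequence(seq, max_length, padding='post'):
    """Pad an integer-encoded sequence with zeros to max_length;
    'post' appends the zeros, 'pre' prepends them."""
    pad = [0] * (max_length - len(seq))
    return seq + pad if padding == 'post' else pad + seq

def one_hot(index, vocab_size):
    """Idealized probability distribution for one word: all zeros
    except a 1 at the word's integer index."""
    vector = [0] * vocab_size
    vector[index] = 1
    return vector
```

Padding each of the 8,092 descriptions to the longest length (28) and one hot encoding every position over the vocabulary is what yields the [samples, sequence length, features] shape reported above.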
The model would take a full sequence of words as input; the length of the sequence would be the maximum length of descriptions in the dataset.

The model must be started with something. One approach is to surround each photo description with special tags to signal the start and end of the description, such as 'STARTDESC' and 'ENDDESC'. For example, the description:

Would become:

And would be fed to the model with the same image input to result in the following input-output word sequence pairs:

The data preparation would begin much the same as was described in the previous section. Each description must be integer encoded. After encoding, the sequences are split into multiple input and output pairs and only the output word (y) is one hot encoded. This is because the model is only required to predict the probability distribution of one word at a time.

The code is the same up to the point where we calculate the maximum length of sequences. Next, we split each integer encoded sequence into input and output pairs. Let's step through a single sequence called seq at the i'th word in the sequence, where i >= 1. First, we take the first i-1 words as the input sequence and the i'th word as the output word. Next, the input sequence is padded to the maximum length of the input sequences. Pre-padding is used (the default) so that new words appear at the end of the sequence, instead of the beginning of the input. The output word is one hot encoded, much like in the previous section.

We can put all of this together into a complete example to prepare description data for the word-by-word model. Running the example prints the same statistics, but also prints the size of the resulting encoded input and output sequences. Note that the input of images must follow the exact same ordering, where the same photo is shown for each example drawn from a single description.
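The splitting step described above can be sketched as follows. This is a toy stand-in for the article's code, with made-up integer ids and a made-up max_length; it shows how one encoded description becomes several (input sequence, output word) pairs, with pre-padding pushing the real words to the end of each input:

```python
def split_sequence(seq, max_length):
    # For each position i >= 1: the words before position i form the input,
    # and the word at position i is the output to predict.
    pairs = []
    for i in range(1, len(seq)):
        in_seq = seq[:i]
        # pre-pad with zeros so the real words sit at the END of the input
        in_seq = [0] * (max_length - len(in_seq)) + in_seq
        pairs.append((in_seq, seq[i]))
    return pairs

# e.g. an encoded "startdesc dog runs enddesc" -> [1, 4, 5, 2]
pairs = split_sequence([1, 4, 5, 2], max_length=3)
for x, y in pairs:
    print(x, "->", y)
```

A 4-word description yields 3 training pairs; in the real preparation, each input would be paired with the same photo and each output word one hot encoded.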
One way to do this would be to load the photo and store it for each example prepared from a single description.

Progressive Loading

The Flickr8K dataset of photos and descriptions can fit into RAM, if you have a lot of RAM (e.g. 8 Gigabytes or more), and most modern systems do. This is fine if you want to fit a deep learning model using the CPU. Alternately, if you want to fit a model using a GPU, then you will not be able to fit the data into the memory of an average GPU video card.

One solution is to progressively load the photos and descriptions as needed by the model. Keras supports progressively loaded datasets by using the fit_generator() function on the model. A generator is the term used to describe a function used to return batches of samples for the model to train on. This can be as simple as a standalone function, the name of which is passed to the fit_generator() function when fitting the model.

As a reminder, a model is fit for multiple epochs, where one epoch is one pass through the entire training dataset, such as all photos. One epoch is comprised of multiple batches of examples, where the model weights are updated at the end of each batch. A generator must create and yield one batch of examples. For example, the average sentence length in the dataset is 11 words; that means that each photo will result in 11 examples for fitting the model, and two photos will result in about 22 examples on average. A good default batch size for modern hardware may be 32 examples, so that is about 2-3 photos' worth of examples.

We can write a custom generator to load a few photos and return the samples as a single batch. Let's assume we are working with the word-by-word model described in the previous section that expects a sequence of words and a prepared image as input and predicts a single word.
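The batch arithmetic above can be made concrete (the numbers are the article's stated averages, not measured values):

```python
avg_words_per_description = 11   # average sentence length in the dataset
examples_per_photo = avg_words_per_description  # roughly one pair per word
batch_size = 32                  # a common default for modern hardware

photos_per_batch = batch_size / examples_per_photo
print(round(photos_per_batch, 1))   # about 2.9 photos' worth of examples per batch
```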
Let's design a data generator that, given a loaded dictionary of image identifiers to clean descriptions, a trained tokenizer, and a maximum sequence length, will load one image's worth of examples for each batch.

A generator must loop forever and yield each batch of samples. If generators and yield are new concepts for you, consider reading this article:

We can loop forever with a while loop, and within this, loop over each image in the image directory. For each image filename, we can load the image and create all of the input-output sequence pairs from the image's description.

Below is the data generator function. You could extend it to take the name of the dataset directory as a parameter. The generator returns an array containing the inputs (X) and output (y) for the model. The input is comprised of an array with two items: the input images and the encoded word sequences. The outputs are one hot encoded words.

You can see that it calls a function called load_photo() to load a single photo and return the pixels and image identifier. This is a simplified version of the photo loading function developed at the beginning of this tutorial. Another function named create_sequences() is called to create sequences of images, input sequences of words, and output words that we then yield to the caller. This function includes everything discussed in the previous section, and also creates copies of the image pixels, one for each input-output pair created from the photo's description.

Prior to preparing the model that uses the data generator, we must load the clean descriptions, prepare the tokenizer, and calculate the maximum sequence length. All 3 of these must be passed to the data_generator() as parameters. We use the same load_clean_descriptions() function developed previously and a new create_tokenizer() function that simplifies the creation of the tokenizer.

Tying all of this together, the complete data generator is listed below, ready for use to train a model.
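The generator structure described above can be sketched as follows. Here load_photo() and create_sequences() are simplified stand-ins (strings instead of real pixels, no tokenizer, no padding or one hot encoding), but the forever-loop and the yield shape mirror the description:

```python
def load_photo(image_id):
    # Stand-in for the article's photo loader: returns fake "pixels".
    return "pixels-of-" + image_id

def create_sequences(desc, photo, max_length):
    # Stand-in for the article's create_sequences(): one copy of the image
    # pixels per input-output pair drawn from the description.
    words = desc.split()
    in_imgs, in_seqs, out_words = [], [], []
    for i in range(1, len(words)):
        in_imgs.append(photo)
        in_seqs.append(words[:i])    # real code would pad to max_length here
        out_words.append(words[i])   # real code would one hot encode this
    return in_imgs, in_seqs, out_words

def data_generator(descriptions, max_length):
    # A generator must loop forever and yield one batch of samples at a time.
    while True:
        for image_id, desc in descriptions.items():
            photo = load_photo(image_id)
            in_img, in_seq, out_word = create_sequences(desc, photo, max_length)
            yield [[in_img, in_seq], out_word]

descriptions = {"img1": "startdesc dog runs enddesc"}
gen = data_generator(descriptions, max_length=4)
(in_img, in_seq), out_word = next(gen)
print(len(out_word))   # 3 input-output pairs from one 4-word description
```

Because the while loop never terminates, calling next() again after the last image simply starts another pass over the dataset, which is exactly what fitting over multiple epochs requires.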
A data generator can be tested by calling the next() function. We can test the generator as follows. Running the example prints the shape of the input and output example for a single batch (e.g. 13 input-output pairs):

The generator can be used to fit a model by calling the fit_generator() function on the model (instead of fit()) and passing in the generator. We must also specify the number of steps or batches per epoch. We could estimate this as (10 x training dataset size), perhaps 70,000 if 7,000 images are used for training.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Flickr8K Dataset
- Framing image description as a ranking task: data, models and evaluation metrics (Homepage)
- Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics, (PDF) 2013.
- Dataset Request Form
- Old Flickr8K Homepage

API

Summary

In this tutorial, you discovered how to prepare photos and textual descriptions ready for developing an automatic photo caption generation model. Specifically, you learned:

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.

Is this topic included in your new book?

Yes, I have a suite of chapters on developing a caption generation model.

Hi Jason, awesome content, but I am not able to understand why you used while(1): in line number 78 in the full code. Wouldn't it work the same way without using while(1)?? thanks…!!
def data_generator(descriptions, tokenizer, max_length):
    # loop for ever over images
    directory = 'Flicker8k_Dataset'
    while 1:   # <<<<<< line of doubt
        for name in listdir(directory):
            # load an image from file
            filename = directory + '/' + name
            image, image_id = load_photo(filename)
            # create word sequences
            desc = descriptions[image_id]
            in_img, in_seq, out_word = create_sequences(tokenizer, max_length, desc, image)
            yield [[in_img, in_seq], out_word]

It is a Python generator, you can learn more about generators here:

This is brilliant!!! Thanks for putting this together – thoroughly appreciated! 💯

You're welcome, I'm glad it helped!

Hi Jason, I find your work very helpful. Have you also implemented the bottom-up approach (dense captioning) of generating image captions?

Does this help:

Hi Jason, this is the top-down approach; the bottom-up approach is different, also called dense captioning, where we identify objects in an image then combine them to form a description. Thanks.

Hi Jason, isn't the data generator function supposed to call load_photo() instead of load_image()?

In the full example, the data generator does call load_photo() on line 82.

Hi Jason, I am a newbie in Python and CNN. Can I have testing source code in which I input an image and it gives output with the caption?

Yes, I will have some on the blog soon and in my new book on deep learning for NLP to be released soon.

Enjoyed reading your articles, you really explain everything in detail. Couldn't be written much better! The article is very interesting and effective.

Note: There is a typing error the first time you mentioned load_clean_descriptions: mapping[image_id] = ' '.join(image_desc) should be descriptions[image_id] = ' '.join(image_desc)

Thanks for sharing such an interesting blog.

Fixed, thanks!

Hi, enjoy following your blog.
I'm seeing an error here:

def save_doc(descriptions, filename):
    lines = list()
    for key, desc in mapping.items():

this threw me for a bit until I saw mapping is returned from def load_descriptions(doc). Fix below:

def save_doc(descriptions, filename):
    lines = list()
    for key, desc in descriptions.items():

replaces for key, desc in mapping.items():

Ouch, I've fixed that example too, cheers.

Hi Jason, when I run the descriptions, I am getting the following error: FileNotFoundError: [Errno 2] No such file or directory: 'Flickr8k_text/Flickr8k.token.txt'. Can you please help me with this? I am very new to deep learning.

You must download the dataset and place it in the same directory as the code. Try running from the command line; sometimes IDEs and Notebooks can mask or introduce errors.

Hi Jason, thanks for this awesome post, I really enjoyed reading it. By the way, I think there is a typo when you talk about the number of steps per epoch. I think it should read "perhaps 70,000 if 7,000 images are used for training.".

Thanks, fixed.

Hi Jason, when I run dump(features, open('features.pkl', 'wb')), I get the following error: "feature.pkl is not UTF-8 encoded". Also, I tried to dump the output of the predict function using only the first image. It was like this:

{'667626_18933d713e': array([[[[ 0., 0., 0., ..., 0., 10.62594891, 0. ], ... ]]]], dtype=float32)}
(the rest of the large 4-D array of feature values is omitted here)

And I was confused about whether this result is correct or not. Could you help me with this? I really don't know why my feature.pkl can save successfully. Thank you so much.

Sorry, I have not seen that error before. Perhaps post the full error message to Stack Overflow?

Thank you for the prompt reply. I'll try.

Hello Jason, I'm not able to download the dataset, the link is unavailable. Could you please help me here. Thanks, Karthik

*not able to

Jason, I could download it now. Please ignore my comments above. Thanks, Karthik

No problem.

You must fill out this form:

Jason, I got model-ep005-loss3.517-val_loss4.012. Another good article. Thanks, Karthik

Very Nice!

Karthik, can u send me a link to download the model file?

Karthik, can you provide a link to download the model file?

Hi, Jason, I am using a GPU to fit the model, but it takes too loooooooooooooong! More or less than 9300 seconds for each epoch.
My hardware: NVIDIA GTX 850M (compute capability 5.0), GPU memory 4GiB, and my computer memory is 8GiB. OS: Ubuntu 16.04.

If I use the CPU mode, I get a Memory Error:

================= Error ===============
Traceback (most recent call last):
  File "ICmodel.py", line 217, in
    X1train, X2train, ytrain = create_sequences(tokenizer, max_length, train_descriptions, train_features)
  File "ICmodel.py", line 162, in create_sequences
    return array(X1), array(X2), array(y)
MemoryError
=============== End of Error ==========

So I have to use my GPU to run the training program. Here is my code after modifying yours above; is there any incorrect modification?

==================== Code ====================
def data_generator(mapping, tokenizer, max_length, features):
    # loop for ever over images
    directory = 'Flickr8k_Dataset'
    while 1:
        for name in listdir(directory):
            # load an image from file
            filename = directory + '/' + name
            image_id = name.split('.')[0]
            # create word sequences
            if image_id not in mapping:
                continue
            desc_list = mapping[image_id]
            img_feature = features[image_id][0]
            in_img, in_seq, out_word = create_sequences4list(tokenizer, max_length, desc_list, img_feature)
            yield [[in_img, in_seq], out_word]

# create sequences of feature, input sequences and output words for an image
def create_sequences4list(tokenizer, max_length, desc_list, photo):
    Xfe, XSeq, y = list(), list(), list()
    vocab_size = len(tokenizer.word_index) + 1
    # integer encode the description
    for desc in desc_list:
        seq = tokenizer.texts_to_sequences([desc])[0]
        # split one sequence into multiple X,y pairs
        for i in range(1, len(seq)):
            in_seq, out_seq = seq[:i], seq[i]
            # pad input sequence
            in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
            # encode output sequence
            out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
            # store
            Xfe.append(photo)
            XSeq.append(in_seq)
            y.append(out_seq)
    Xfe, XSeq, y = array(Xfe), array(XSeq), array(y)
    return [Xfe, XSeq, y]
====================== End of Code =================
Such time-consuming runs are a disaster; could you give me some advice? thx.

You might need more RAM. Perhaps change the code to use progressive loading?

Thank you for your reply. Progressive loading is to use the Python generator? What I have posted above are exactly the generator function and create_sequences function adapted for the generator. Sorry for the disappeared indents…

What I am confused about is whether I need to yield per line of descriptions or yield all five descriptions for one photo at one time?

Good question, I think you could yield every few descriptions. Even experiment a little to see what your hardware can handle.

Hey Jason Brownlee, I used this progressive loading with this tutorial and I'm getting this error. Can you please tell me how to define the model for this particular generator?

ValueError: Error when checking input: expected input_1 to have 2 dimensions, but got array with shape (13, 224, 224, 3)

I'm new to machine learning. Thanks for your wonderful tutorial!

I've managed to fix that one by adding inputs1 = Input(shape=(224, 224, 3)) and now have a different error. Please help:

ValueError: Error when checking target: expected dense_3 to have 4 dimensions, but got array with shape (13, 4485)

Please help on the model part. I am unable to run this. And I don't yet have the understanding required to calculate the numbers myself.

were you able to solve the issue? I am stuck with the same error

ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

Error in downloading the VGG16 model. Can you please help me fix it?

Sorry to hear that, sounds like an internet connection issue. Perhaps try again?

Can anyone show me how to compile the VGG16 model for the progressive loading in this example? Thanx in advance.

Please help me to define the model; I have used the data generator, which is working fine, but I'm having trouble defining the model.

Perhaps you can summarize your problem in a few lines?
I need the code for the "define model" step which is used before model fitting in the code:

# define model
# …
# fit model
model.fit_generator(data_generator(descriptions, tokenizer, max_length), steps_per_epoch=70000, …)

Same here Jason, I have been going over your ebooks to find some solution but getting nowhere… Could you please give the code to define the model used in the progressive loading example, such that we can use it with this:

model.fit_generator(data_generator(descriptions, tokenizer, max_length), steps_per_epoch=70000, …)

same problem here please help

have you figured it out? If yes, please can you explain! Thanx in advance.

After progressive loading, how to evaluate the model and how to generate captions for new images?

See this post:

Hello Jason, I have one question regarding your discussion. As you said that steps_per_epoch will be 10 x training data size, i.e. 70,000, what will happen if I take steps_per_epoch equal to 70 instead of 70,000? Does increasing the number of steps_per_epoch result in a better model?

Slower training. Perhaps worse model skill given the large increase in weight update frequency.

Would you say that the Whole Description Sequence Model and Word-By-Word Model are RNN based?

Sure.

Hello Jason, I am trying to run the code for extracting features from the photos in the Flickr dataset, provided by you, but it is showing the following error: AttributeError: 'InputLayer' object has no attribute 'outbound_nodes'

I have some suggestions here:

Have you written a tutorial on VQA? Can you suggest any Python source where we can learn this?

What is VQA?

Anyone know how to solve this error? ValueError: Error when checking input: expected input_1 to have 2 dimensions, but got array with shape (61317, 7, 7, 512)

I have some suggestions here:

Can this code be used to prepare the image data for Keras whenever I am using transfer learning?

Yes, somewhat.
Respected Sir, I am facing the following error:

python3.7/site-packages/keras/engine/training_utils.py, line 102, in standardize_input_data
    str(len(data)) + ' arrays: ' + str(data)[:200] + '…')
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays: [array([[[[ 66.061, 106.221, 112.32 ], [ 63.060997, 97.221, 111.32 ], [ 57.060997, 96.221, 105.32 ], …, [ 43.060997, 92.221, …

Kindly guide me…

I have some suggestions here:

Hi Jason, I have collected the Flickr8k dataset and have done some translation to local languages, but now I want to expand my dataset. Is there any similarity between the Flickr8k and Flickr30k datasets, like 8k being a subset of 30k? As can be seen, the file naming in 8k and 30k is different. Do you have any idea regarding that?

Sorry, I don't know.

Hi Jason, I want to do image captioning for clothes and want a dataset for it. If you have a dataset for this, please give it to me, or I would be glad if you could help me create a dataset with captions for it. Thanks.

This may help:

Should I insert the Flickr8k dataset into a Jupyter notebook??

I recommend not using a notebook:

Hi Jason.
I'm trying to fit the model using the data generator, but getting this error:

ValueError: in user code:
  /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:830 train_function *
    return step_function(self, iterator)
  /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:813 run_step *
    outputs = model.train_step(data)
  /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:770 train_step *
    y_pred = self(x, training=True)
  /usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:989 __call__ *
    input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
  /usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py:197 assert_input_compatibility *
    raise ValueError('Layer ' + layer_name + ' expects ' +
ValueError: Layer model_15 expects 2 input(s), but it received 3 input tensors. Inputs received: [, , ]

Can you please help me in this regard?

Sorry to hear that, these tips may help:
Qt5 Tutorial QThreads - Creating Threads - 2017

functional type APIs and we can even cancel, pause, or resume the results from a thread run. However, we still need to know this level of APIs as well. In the next section of my Qt5 tutorial (Creating QThreads using QtConcurrent), we'll transform the code in this tutorial using the QtConcurrent namespace.

Starting from a Qt Console Application, we need to create a MyThread class. We make the MyThread class inherit from QThread. Let's look at main.cpp:

#include <QCoreApplication>
#include "mythread.h"

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    MyThread thread1("A"), thread2("B"), thread3("C");
    thread1.start();
    thread2.start();
    thread3.start();

    return a.exec();
}

In the code, we create three instances of the MyThread class with QString names ("A", "B", and "C"). The void QThread::start(Priority priority = InheritPriority) slot begins execution of the thread by calling run(), which we override in the MyThread class.

OK. Let's look at the other files: mythread.h and mythread.cpp:

// mythread.h
#ifndef MYTHREAD_H
#define MYTHREAD_H

#include <QThread>
#include <QString>

class MyThread : public QThread
{
public:
    // constructor
    // set name using initializer
    explicit MyThread(QString s);
    // overriding the QThread's run() method
    void run();

private:
    QString name;
};

#endif // MYTHREAD_H

// mythread.cpp
#include "mythread.h"
#include <QDebug>

MyThread::MyThread(QString s) : name(s)
{
}

// We override the QThread's run() method here
// run() will be called when a thread starts
// the code will be shared by all threads
void MyThread::run()
{
    for(int i = 0; i <= 100; i++) {
        qDebug() << this->name << " " << i;
    }
}

We can see the three threads running simultaneously, accessing the same code segment (i.e. run()):

...
"A" 87
"A" 88
"A" 89
"C" 91
"C" 92
"C" 93
"A" 90
"A" 91
"A" 92
"A" 93
"A" 94
"A" 95
"A" 96
"A" 97
"C" 94
"C" 95
"C" 96
"C" 97
"C" 98
"C" 99
"C" 100
"A" 98
"A" 99
"A" 100
Surely there are a lot of best practices, patterns and advice like 'name your variables properly', 'keep your methods short', 'don't repeat yourself' and so on. These practices are more or less called 'clean code'. But even with all this advice there is something you can not get rid of: the noise coming from your programming language. If you use a general-purpose language like C# then you have to deal with its limited vocabulary. So, without some effort you will always have to read the noise in order to understand what the code does. And even more, you will have to reengineer parts of the code to uncover its intention. Just have a look at a small example and take a minute to try to get the intention of this piece of code:

var now = DateTime.Now;
if (now.Month == 12 && (now.Day >= 1 && now.Day <= 23))
{
    if (MessageBox.Show("Give a discount?", Application.ProductName,
        MessageBoxButtons.YesNo, MessageBoxIcon.Question) == DialogResult.Yes)
    {
        invoice.Amount = Math.Round(invoice.Amount - (invoice.Amount*15/100), 2,
            MidpointRounding.AwayFromZero);
        MessageBox.Show("The discount was given.", Application.ProductName,
            MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
}

Got it? So, this is a piece of code with proper naming (discount, invoice, amount) - I guess without it, it would have taken you a minute longer. But who can read it? It's you, me and other people with some programming background. Normally you would describe the intention of this code like that:

If today is between the 1. and 23. December and the user confirms to give a (you can call it 'x-mass-') discount, then the amount of the invoice is reduced by 15 percent and the user is informed about this process.

Well, this sounds easy, but compared to the code... the code does not look that easy at all. What the heck is going on with all these MessageBoxes with their buttons and icons, a DialogResult?
Math is used, but you have to figure out what Math.Round(invoice.Amount - (invoice.Amount*15/100), 2, MidpointRounding.AwayFromZero) exactly does, and what date is crucial by understanding now.Month == 12 && (now.Day >= 1 && now.Day <= 23). So the code does not really reflect our description from a few sentences above.

Now, to get an idea what this article is about, have a look at the following piece of code and read on if you are interested in how it was achieved:

if (DateTime.Now.IsBetween(1.December(), 23.December()))
{
    if (User.Confirms("Give a discount?"))
    {
        invoice.Amount = invoice.Amount - 15.Percent();
        Inform.User.About("The discount was given.");
    }
}

Who can read it now? I think your grandma could ;)

This article will show you how to achieve the readable code above. To some people it will look overengineered. But remember, this is just an example of what could be achieved. This article will introduce you to the techniques which led to my second example above. As ever, there is no "all or nothing". Choose your tools wisely and cut it down to a practical solution. But with these techniques in mind I am sure you will think about your approach in another way from time to time, and using one of these techniques might lead you to a cleaner solution. Besides that, all of this was done with standard C# syntax - no magic, no hacks.

The downloadable project contains all code that is shown in this article. But it is not complete in any way, nor tested, and it brings you nothing that you can link to your projects ready to use - sorry for that... All examples are written in C# but I think a lot of the shown techniques are portable to other languages.

First of all I want to repeat the importance of naming your classes, methods and variables meaningfully.
It is a good starting point to use names like 'invoice' or 'amount'. It does not cost you anything and it is the easiest way to let your code speak for itself. Did I mention the importance of it? Name your types meaningful, name your types meaningful,...

Well, from a domain driven approach it is a pattern to create types for all your domain specific members - missing them or not. One of its intentions is (what a surprise) to let your code reflect the domain for better understanding. So I think you figured out that my code snippet uses a domain type as well: the 'invoice' - and I am sure that you use this approach, too, for typical scenarios. But we could use this approach much more often. Obviously I introduced two more types: a User and a class named Inform. But under the hood there are some more types which make the code look so smooth. But you didn't recognize them, did you? There is a Day, an Amount and a Percentage - and that's not the full list of new types...

OK, now I can read your mind: "WTF - that's soooo overengineered". Let me explain: you feel comfortable with the types the .net framework offers (like DateTime and Decimal) and you use what is there out of the box - even if it does not really fit your solution!

I think you got it. And have a look at one of these types - on its own it is far away from being overengineered:

public struct Day
{
    public int Month { get; private set; }
    public int DayOfMonth { get; private set; }

    public Day(int month, int dayOfMonth) : this()
    {
        Month = month;
        DayOfMonth = dayOfMonth;
    }
}

These types are called Value Types, and introducing them into your code opens the door to creating a fluent API by using Extension Methods and Operator Overloading. (for Value Types you can have a look at one of my other articles) ;)

Keep in mind that it is not always necessary to use these types directly in your code. They could be used as a connector to other types or results, as shown further on.
Just make sure that these types reflect your intention - not more!

Extension Methods have been around for a long time now, so I think that you have heard of them. In short - you can extend any member with your own methods and add functionality. It is a great way for creating domain specific languages which read much more fluently. So, you could also have heard about Fluent Interfaces.

In my code you will find the IntExtension which returns a Day by calling 1.December() for example:

public static Day December(this int dayOfMonth)
{
    return new Day(12, dayOfMonth);
}

In the code above these Days are delegated to a DateTimeExtension which returns true if the given DateTime is between the given days (well, you see the code speaks for itself...):

public static bool IsBetween(this DateTime value, Day from, Day to)
{
    // Compare a dummy leap year instead.
    var comparableValue = new DateTime(2000, value.Month, value.Day);
    var comparableFrom = new DateTime(2000, from.Month, from.DayOfMonth);
    var comparableTo = new DateTime(2000, to.Month, to.DayOfMonth);

    return comparableFrom <= comparableValue && comparableTo >= comparableValue;
}

All in all this lets you write:

if (DateTime.Now.IsBetween(1.December(), 23.December())) {...}

Nice, isn't it?

By the way, I feel much more comfortable having my operations near the instance of the relevant type. So instead of using the .net framework method String.IsNullOrEmpty(customerName) I got used to writing my own String extension used as customerName.IsNullOrEmpty(). With extension methods you stay close to your instance - not using the general type String any more. I think this is much more readable.

If your method returns a bool, name it with 'Is...', 'Has...', 'Can...' and so on.
Introducing a new extension method on Collections would let you read if(listOfArticles.HasEntries()) {...} instead of if(listOfArticles.Count > 0) {...} - again, your intention is doubtless.

But let's come back to my leading code example... Another IntExtension is named Percent, which lets you write 15.Percent():

public static Percentage Percent(this int value)
{
    return new Percentage(value);
}

This returns an instance of the new type Percentage which looks like that:

public struct Percentage
{
    public decimal Value { get; private set; }

    public Percentage(int value) : this()
    {
        Value = value;
    }

    public Percentage(decimal value) : this()
    {
        Value = value;
    }
}

With this new type we are ready for the next trick...

Did you wonder why we can write invoice.Amount = invoice.Amount - 15.Percent() and everything works fine? Where has our formula gone?

Remember: 15.Percent() returns an instance of Percentage. By introducing this new type we are now able to tell this type how to handle all the common operators like minus (-). That's great (to my mind, obviously)! This is called Operator Overloading and looks like that:

public static decimal operator -(decimal value, Percentage percentage)
{
    return value - (value*percentage.Value/100);
}

So here is our formula - never again confusing you in the rest of your code - and the intention of the written code (invoice.Amount - 15.Percent()) is absolutely clear.

I am not going to hold back the third new type I mentioned before - the Amount.
For the amount of the invoice I first created the following piece of code:

public struct Amount
{
    public decimal Value { get; private set; }

    public Amount(decimal value) : this()
    {
        Value = value.Rounded(decimals: 2);
    }
}

Again, you can see the advantage of a new type: the rounding is now done in the constructor, so you never have to think about it again. It is hidden from your regular code - removing noise and complexity.

If you have made it thoughtfully through this article so far, you might have recognized that the operator overload from our Percentage simply returns a Decimal. But this Decimal value is assigned to the Amount property of the invoice - which is of type Amount, not Decimal. How is this done?

Easy - there are some more operator overloads, called Implicit Conversions (or again another article of mine - last promotion for today ;)):

public static implicit operator Amount(decimal value)
{
    return new Amount(value);
}

public static implicit operator decimal(Amount value)
{
    return value.Value;
}

They are simply made responsible for the conversion from Decimal to Amount and vice versa. So the compiler knows how to handle your assignment properly, and it integrates noiselessly into your code.

You have now made it through the most tricky parts of this article. All that is left to get rid of from our noisy code from the starting point are those noisy MessageBoxes.
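The two tricks above - operator overloading and implicit conversion onto a tiny value type - translate to other languages too. Here is my own Python sketch of the Percentage idea (for comparison only; it is not part of the article's C# code, and the names are mine):

```python
from decimal import Decimal

class Percentage:
    """Tiny value type so that `amount - percent(15)` reads naturally."""

    def __init__(self, value):
        self.value = Decimal(value)

    # Overload the right-hand side of `-`, mirroring the C#
    # `operator -(decimal value, Percentage percentage)` overload.
    def __rsub__(self, amount):
        amount = Decimal(amount)
        return amount - amount * self.value / Decimal(100)

def percent(value):
    # Stand-in for the C# extension method 15.Percent()
    return Percentage(value)

invoice_amount = Decimal("200.00")
invoice_amount = invoice_amount - percent(15)
print(invoice_amount)  # 170.00
```

Python has no implicit user-defined conversions, so the result stays a Decimal - the operator overload alone is enough to keep the formula in one place.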
This was simply done with some new static classes which let you call:

User.Confirms("Give a discount?")
Inform.User.About("The discount was given.")

Here is the code:

public static class User
{
    public static bool Confirms(string text)
    {
        return MessageBox.Show(text,
            Application.ProductName,
            MessageBoxButtons.YesNo,
            MessageBoxIcon.Question) == DialogResult.Yes;
    }
}

public static class Inform
{
    public static class User
    {
        public static void About(string text)
        {
            MessageBox.Show(text,
                Application.ProductName,
                MessageBoxButtons.OK,
                MessageBoxIcon.Information);
        }
    }
}

Easy but effective in reducing noise. You might worry about getting into conflict with other common class names like 'User'. For that you could layer a class named 'The' on top, so you could write The.User.Confirms("Give a discount?"). As a rule of thumb, you could name your class as a verb (like Inform) if there is no return value on its methods. Verbs are rarely candidates for naming conflicts.

That's it! As you have seen, there are a lot of possibilities for getting noise out of your code. My example might be somewhat extreme, but it was my intention to show what is possible with some effort - just with tools coming out of the box of C#. My favorite concept is to introduce new types when existing types do not represent your needs. This approach was borrowed from domain-driven design (DDD). DDD suggests introducing types for all your domain-specific terms. Using that, I asked myself: why not use this approach for general-purpose types, e.g. Day, Percentage or Amount? By introducing them, I recognized that it leads to much more readable code by hiding calculations and using operator overloads and implicit conversions. As a side effect, your code will live in a single dedicated place (the new type) - tested once, leading to fewer errors. Using extension methods builds a nice fluent syntax - introduce them if suitable.
Using static helper classes is just freestyle on top - less necessary, but not less readable. Consider them if your code has a lot of noise. Again, choose wisely by knowing your tools.

Thank you for reading.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/Articles/813964/Let-your-code-speak-for-itself
Hi

I'm new to this forum and I'm learning C++ and OpenGL. I'm using Windows XP, a text editor, and the latest GCC and GNU Make versions. While I've got basic knowledge of C++ (or even less), OpenGL is completely new to me. I'm following this tutorial (), which is, by the way, very good. But I'm puzzled about two things:

1. In the tutorial the main function is written as void:

void main(int argc, char **argv) {

But my gcc compiler gives errors here: "'main' must return 'int'" and "return type for 'main' changed to 'int'". I then changed the function into int:

int main(int argc, char **argv) {

Why can't I have void there?

2. If I compile my code with that changed line I get zero errors. But when I try to run the compiled .exe file I get a Windows error:

The application failed to initialize properly - 0xc0000005 |OK|

When I comment out the lines

gluPerspective(45,ratio,1,1000);

and

gluLookAt(0.0,0.0,5.0, 0.0,0.0,-1.0, 0.0f,1.0f,0.0f);

the error doesn't occur anymore, but the window has bugs while resizing. Is this error a problem with the glu library?

I'm using the following makefile:

# A simple Makefile
# C++ using GNU Make and GCC
# opengl32.lib glut32.lib glu32.lib odbc32.lib odbccp32.lib
myprogram.exe : myfile.cpp
	gcc myfile.cpp -Wall -lopengl -lglu -lglut32

This is my code:

/* hallo.c */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <GLUT/glut.h>

using namespace std;

void renderScene(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
        glVertex3f(-0.5,-0.5,0.0);
        glVertex3f(0.5,0.0,0.0);
        glVertex3f(0.0,0.5,0.0);
    glEnd();
    glFlush();
}

void changeSize(int w, int h) {
    // Prevent a divide by zero, when window is too short
    // (you cant make a window of zero width).
    if(h == 0)
        h = 1;
    float ratio = 1.0* w / h;

    // Reset the coordinate system before modifying
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // Set the viewport to be the entire window
    glViewport(0, 0, w, h);

    // Set the correct perspective.
    gluPerspective(45,ratio,1,1000);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0,0.0,5.0, 0.0,0.0,-1.0, 0.0f,1.0f,0.0f);
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_SINGLE | GLUT_RGBA);
    glutInitWindowPosition(100,100);
    glutInitWindowSize(320,320);
    glutCreateWindow("3D Tech- GLUT Tutorial");
    glutDisplayFunc(renderScene);
    glutReshapeFunc(changeSize);
    glutMainLoop();
}

I'd appreciate any help from you.

Kenji
https://community.khronos.org/t/problem-with-glu-confusion-with-void-int-main/59160
A collection of cheater methods for the Django TestCase.

Project description

django-basetestcase

BaseTestCase is a collection of cheater methods for the Django TestCase. They are extensions of the Django TestCase. These came about as a result of learning and using TDD.

There are four different classes:

- ModelTestCase
- FormTestCase
- ViewTestCase
- FunctionalTestCase - for use with Selenium; origins and some methods from "Obey The Testing Goat".

Quickstart

To install:

pip install django-basetestcase

To use in a test:

from basetestcase import ModelTestCase

Please check out the source code for details. Documentation will be coming bit by bit. Any suggestions or issues, please let me know.

Compatibility

This was built using Python 3.7 and Django 2.1.7. Anything prior to that has no guarantees.

What's new?

FormTestCase now has a formset_error_test.

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
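For readers unfamiliar with the pattern, a "cheater method" is just a helper assertion layered onto a TestCase subclass. A rough sketch with plain unittest (my own illustration - the actual django-basetestcase API may differ, and the method names here are hypothetical):

```python
import unittest

class BaseTestCase(unittest.TestCase):
    """Cheater methods: small helpers that wrap repetitive assertions."""

    def field_value_test(self, obj, field, expected):
        # One call instead of getattr + assertEqual at every call site.
        self.assertEqual(getattr(obj, field), expected)

    def count_test(self, items, expected):
        self.assertEqual(len(list(items)), expected)

class Article:
    # Stand-in for a Django model.
    def __init__(self, title):
        self.title = title

class ArticleTest(BaseTestCase):
    def test_title(self):
        self.field_value_test(Article("Hello"), "title", "Hello")

    def test_count(self):
        self.count_test([Article("a"), Article("b")], 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ArticleTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

The real library layers similar helpers onto Django-specific pieces (models, forms, views, and Selenium-driven functional tests).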
https://pypi.org/project/django-basetestcase/1.0.2/
Nextion return code 0x71 will return a numerical get in little endian order. Including the 32-bit value and the 3-byte data terminator, this is eight bytes. Over serial 8N1, this is 10 bits per byte, for a total of eighty bits. At 9600 baud, 80 bits / 9600 bits per second ~ 8.334 ms, +/- a little for processing - or approximately 1/120th of a second to reply with a number.

Requesting that number with get n0.valÿÿÿ uses 13 bytes, or 130 bits. At 9600 baud, 130/9600 ~ 13.542 ms, +/- a little for processing.

If command pass/fail is set with bkcmd, this uses 4 bytes, or 40 bits, to issue. At 9600 baud, 40/9600 ~ 4.167 ms, +/- a little for processing.

The Nextion Basic Series 4.3" runs at 48 MHz, the Enhanced 4.3" at 108 MHz - but the user code that runs on the Nextion is largely interpreted. So at 9600 baud, the bottleneck will mainly be communication over serial.

To learn about timings, one usually creates experiments to time - perhaps setting up a timed loop to process 100 or 1000 iterations. If you were to create an HMI project with an n0 Number component, you could then create MCU code to time the task. Using Arduino as an example:

#include <Arduino.h>
#include "Nextion.h"

void setup (void) {
  uint32_t timedA, timedB, total, number;
  NexNumber n0 = NexNumber(0,1,"n0");
  nexInit();
  timedA = millis();
  for (int k = 0; k < 100; k++) {
    n0.getValue(&number);
  }
  timedB = millis();
  total = timedB - timedA;
  dbSerialPrint("100 get n0.val iterations in: ");
  dbSerialPrintln(total);
}

void loop(void) {
}

And in this manner of experiments and observations, you learn about timings.

Ali: I want to learn about the response time after the get n0.val command. I'm using a Nextion 4.3 inch and baud rate 9600. Thank you for helping.
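The byte-and-bit arithmetic above can be checked with a few lines of Python (this is a sketch of the post's calculation, not Nextion code):

```python
# Rough serial-timing estimate for a Nextion "get" exchange.
# 8N1 framing puts 10 bits on the wire per byte (1 start + 8 data + 1 stop).
BAUD = 9600
BITS_PER_BYTE = 10

def transfer_ms(num_bytes, baud=BAUD):
    """Milliseconds needed to move num_bytes over the serial link."""
    return num_bytes * BITS_PER_BYTE / baud * 1000

request_ms = transfer_ms(13)  # "get n0.val" plus 3-byte terminator
reply_ms = transfer_ms(8)     # 0x71, 4-byte value, 3-byte terminator
ack_ms = transfer_ms(4)       # optional bkcmd pass/fail response

print(f"request: {request_ms:.3f} ms")  # 13.542 ms
print(f"reply:   {reply_ms:.3f} ms")    # 8.333 ms
print(f"ack:     {ack_ms:.3f} ms")      # 4.167 ms
print(f"round trip (no ack): {request_ms + reply_ms:.3f} ms")
```

So even before any display-side processing, one get n0.val round trip at 9600 baud costs on the order of 22 ms on the wire; raising the baud rate shrinks these numbers proportionally.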
http://support.iteadstudio.com/support/discussions/topics/11000010907
Visual Basic 2005: A Developer's Notebook / The Visual Basic Language

From WikiContent

When Visual Basic .NET first appeared, loyal VB developers were shocked to find dramatic changes in their favorite language. Suddenly, common tasks such as instantiating an object and declaring a structure required new syntax, and even basic data types like the array had been transformed into something new. Fortunately, Visual Basic 2005 doesn't have the same shocks in store. The language changes in the latest version of VB are refinements that simplify life without making any existing code obsolete. Many of these changes are language features imported from C# (e.g., operator overloading), while others are completely new ingredients that have been built into the latest version of the common language runtime (e.g., generics). In this chapter, you'll learn about all the most useful changes to the VB language.

Use the My Objects to Program Common Tasks

The new My objects provide easy access to various features that developers often need but don't necessarily know where to find in the sprawling .NET class library. Essentially, the My objects offer one-stop shopping, with access to everything from the Windows registry to the current network connection. Best of all, the My object hierarchy is organized according to use and is easy to navigate using Visual Studio IntelliSense.

How do I do that?

Note: Tired of hunting through the extensive .NET class library in search of what you need? With the new My objects, you can quickly find some of the most useful features .NET has to offer.

There are seven first-level My objects. Out of these, three core objects centralize functionality from the .NET Framework and provide computer information. These include:

- My.Computer - This object provides information about the current computer, including its network connection, the mouse and keyboard state, the printer and screen, and the clock.
You can also use this object as a jumping-off point to play a sound, find a file, access the registry, or use the Windows clipboard.

- My.Application - This object provides information about the current application and its context, including the assembly and its version, the folder where the application is running, the culture, and the command-line arguments that were used to start the application. You can also use this object to log an application event.

- My.User - This object provides information about the current user. You can use this object to check the user's Windows account and test what groups the user is a member of.

Along with these three objects, there are another two objects that provide default instances. Default instances are objects that .NET creates automatically for certain types of classes defined in your application. They include:

- My.Forms - This object provides a default instance of each Windows form in your application. You can use this object to communicate between forms without needing to track form references in another class.

- My.WebServices - This object provides a default proxy-class instance for every web service. For example, if your project uses two web references, you can access a ready-made proxy class for each one through this object.

Finally, there are two other My objects that provide easy access to the configuration settings and resources:

- My.Settings - This object allows you to retrieve custom settings from your application's XML configuration file.

- My.Resources - This object allows you to retrieve resources - blocks of binary or text data that are compiled into your application assembly. Resources are typically used to store localized strings, images, and audio files.

Warning: Note that the My objects are influenced by the project type. For example, when creating a web or console application, you won't be able to use My.Forms.
Some of the My classes are defined in the Microsoft.VisualBasic.MyServices namespace, while others, such as the classes used for the My.Settings and My.Resources objects, are created dynamically by Visual Studio 2005 when you modify application settings and add resources to the current project.

To try out the My object, you can use Visual Studio IntelliSense. Just type My, followed by a period, and take a look at the available objects, as shown in Figure 2-1. You can choose one and press the period again to step down another level.

To try a simple example that displays some basic information using the My object, create a new console project. Then, add this code to the Main( ) routine:

Console.WriteLine(My.Computer.Name)
Console.WriteLine(My.Computer.Clock.LocalTime)
Console.WriteLine(My.Application.CurrentDirectory)
Console.WriteLine(My.User.Identity.Name)

When you run this code, you'll see some output in the console window, which shows the computer name, current time, application directory, and user:

SALESSERVER
2005-10-1 8:08:52 PM
C:\Code\VBNotebook\1.07\MyTest\bin
MATTHEW

Warning: The My object also has a "dark side." Use of the My object makes it more difficult to share your solution with non-VB developers, because other languages, such as C#, don't have the same feature.

Where can I learn more?

You can learn more about the My object and see examples by looking up the "My Object" index entry in the MSDN Help. You can also learn more by examining some of this book's other labs that use the My object. Some examples include:

- Using My.Application to retrieve details of your program, such as the current version and the command-line parameters used to start it (see the "Get Application Information" lab in this chapter).
- Using My.Resources to load images and other resources from the application assembly (see the "Use Strongly Typed Resources" lab in this chapter).
- Using My.Settings to retrieve application and user settings (see the "Use Strongly Typed Configuration Settings" lab in this chapter).
- Using My.Forms to interact between application windows (see the "Communicate Between Forms" lab in Chapter 3).
- Using My.Computer to perform file manipulation and network tasks in Chapters 5 and 6.
- Using My.User to authenticate the current user (see the "Test Group Membership of the Current User" lab in Chapter 6).

Get Application Information

The My.Application object provides a wealth of information right at your fingertips. Getting this information is as easy as retrieving a property.

Note: Using the My.Application object, you can get information about the current version of your application, where it's located, and what parameters were used to start it.

How do I do that?

The information in the My.Application object comes in handy in a variety of situations. Here are two examples:

- You want to get the exact version number. This could be useful if you want to build a dynamic About box, or check with a web service to make sure you have the latest version of an assembly.
- You want to record some diagnostic details. This becomes important if a problem is occurring at a client site and you need to log some general information about the application that's running.

To create a straightforward example, you can use the code in Example 2-1 in a console application. It retrieves all of these details and displays a complete report in a console window.

Example 2-1. Retrieving information from My.Application

' Find out what parameters were used to start the application.
Console.Write("Command line parameters: ")
For Each Arg As String In My.Application.CommandLineArgs
    Console.Write(Arg & " ")
Next
Console.WriteLine( )
Console.WriteLine( )

' Find out some information about the assembly where this code is located.
' This information comes from metadata (attributes in your code).
Console.WriteLine("Company: " & My.Application.Info.CompanyName)
Console.WriteLine("Description: " & My.Application.Info.Description)
Console.WriteLine("Located in: " & My.Application.Info.DirectoryPath)
Console.WriteLine("Copyright: " & My.Application.Info.Copyright)
Console.WriteLine("Trademark: " & My.Application.Info.Trademark)
Console.WriteLine("Name: " & My.Application.Info.AssemblyName)
Console.WriteLine("Product: " & My.Application.Info.ProductName)
Console.WriteLine("Title: " & My.Application.Info.Title)
Console.WriteLine("Version: " & My.Application.Info.Version.ToString( ))
Console.WriteLine( )

Tip: Visual Studio 2005 includes a Quick Console window that acts as a lightweight version of the normal command-line window. In some cases, this window is a little buggy. If you have trouble running a sample console application and seeing its output, just disable this feature. To do so, select Tools → Options, make sure the "Show all settings" checkbox is checked, and select the Debugging → General tab. Then turn off "Redirect all console output to the Quick Console window."

Before you test this code, it makes sense to set up your environment to ensure that you will see meaningful data. For example, you might want to tell Visual Studio to supply some command-line parameters when it launches the application. To do this, double-click the My Project icon in the Solution Explorer. Then, choose the Debug tab and look for the "Command line parameters" text box. For example, you could add three parameters by specifying the command line /a /b /c.

If you want to set information such as the assembly author, product, version, and so on, you need to add special attributes to the AssemblyInfo.vb file, which isn't shown in the Solution Explorer. To access it, you need to select Solution → Show All Files. You'll find the AssemblyInfo.vb file under the My Projects node.
Here's a typical set of tags that you might enter:

<Assembly: AssemblyVersion("1.0.0.0")>
<Assembly: AssemblyCompany("Prosetech")>
<Assembly: AssemblyDescription("Utility that tests My.Application")>
<Assembly: AssemblyCopyright("(C) Matthew MacDonald")>
<Assembly: AssemblyTrademark("(R) Prosetech")>
<Assembly: AssemblyTitle("Test App")>
<Assembly: AssemblyProduct("Test App")>

All of this information is embedded in your compiled assembly as metadata.

Note: New in VB 2005 is the ability to add application information in a special dialog box. To use this feature, double-click the My Project item in the Solution Explorer, select the Assembly tab, and click the Assembly Information button.

Now you can run the test application. Here's an example of the output you'll see:

Command line parameters: /a /b /c

Company: Prosetech
Description: Utility that tests My.Application
Located in: C:\Code\VBNotebook\1.08\ApplicationInfo\bin
Copyright: (C) Matthew MacDonald
Trademark: (R) Prosetech
Name: ApplicationInfo.exe
Product: Test App
Title: Test App
Version: 1.0.0.0

What about...

...getting more detailed diagnostic information? The My.Application.Info object also provides a dash of diagnostic details with two useful properties. LoadedAssemblies provides a collection with all the assemblies that are currently loaded (and available to your application). You can also examine their version and publisher information. StackTrace provides a snapshot of the current stack, which reflects where you are in your code. For example, if your Main( ) method calls a method named A( ) that then calls method B( ), you'll see three of your methods on the stack - B( ), A( ), and Main( ) - in reverse order.
Here's the code you can add to start looking at this information:

Console.WriteLine("Currently loaded assemblies")
For Each Assm As System.Reflection.Assembly In _
  My.Application.Info.LoadedAssemblies
    Console.WriteLine(Assm.GetName( ).Name)
Next
Console.WriteLine( )

Console.WriteLine("Current stack trace: " & My.Application.Info.StackTrace)
Console.WriteLine( )

Use Strongly Typed Resources

In addition to code, .NET assemblies can also contain resources - embedded binary data such as images and hardcoded strings. Even though .NET has supported a system of resources since Version 1.0, Visual Studio hasn't included integrated design-time support. As a result, developers who need to store image data usually add it to a control that supports it at design time, such as a PictureBox or ImageList. These controls insert the picture data into the application resource file automatically.

Note: Strongly typed resources let you embed static data such as images into your compiled assemblies, and access it easily in your code.

In Visual Studio 2005, it's dramatically easier to add information to the resources file and update it afterward. Even better, you can access this information in a strongly typed fashion from anywhere in your code.

How do I do that?

In order to try using a strongly typed resource of an image in this lab, you need to create a new Windows application before continuing. To add a resource, start by double-clicking the My Project node in the Solution Explorer. This opens up the application designer, where you can configure a host of application-related settings. Next, click the Resources tab. In the Categories drop-down listbox, select the type of resources you want to see (strings, images, audio, and so on). The string view shows a grid of settings. The image view is a little different - by default, it shows a thumbnail of each picture. To add a new picture, select the Images category from the drop-down list and then select Add → Existing File from the toolbar.
Browse to an image file, select it, and click OK. If you don't have an image file handy, try using one from the Windows directory, such as winnt256.bmp (which is included with most versions of Windows). By default, the resource has the same name as the file, but you can rename it after adding it. In this example, rename the image to EmbeddedGraphic (as shown in Figure 2-2).

Using a resource is easy. All resources are compiled dynamically into a strongly typed resource class, which you can access through My.Resources. To try out this resource, add a PictureBox control to your Windows form (and keep the default name PictureBox1). Then, add the following code to show the image when the form loads:

Private Sub Form1_Load(ByVal sender As System.Object, _
  ByVal e As System.EventArgs) Handles MyBase.Load
    PictureBox1.Image = My.Resources.EmbeddedGraphic
End Sub

Note: The resources class is added in the My Project directory and is given the name Resources.Designer.vb. To see it, you need to choose Project → Show All Files. Of course, you should never change this file by hand.

If you run the code, you'll see the image appear on the form. To make sure the image is being extracted from the assembly, try compiling the application and then deleting the image file (the code will still work seamlessly).

When you add a resource in this way, Visual Studio copies the resource to the Resources subdirectory of your application. You can see this directory, along with all the resources it contains, in the Solution Explorer. When you compile your application, all the resources are embedded in the assembly. However, there's a distinct advantage to maintaining them in a separate directory. This way, you can easily update a resource by replacing the file and recompiling the application. You don't need to modify any code. This is a tremendous benefit if you need to update a number of images or other resources at once.
Note: Another advantage of resources is that you can use the same images in multiple controls on multiple different forms, without needing to add more than one copy of the same file.

You can also attach a resource to various controls using the Properties window. For example, when you click the ellipsis (...) in the Properties window next to the Image property for the PictureBox control, a designer appears that lists all the pictures that are available in the application's resources.

What about...

...the ImageList? If you're a Windows developer, you're probably familiar with the ImageList control, which groups together multiple images (usually small bitmaps) for use in other controls, such as menus, toolbars, trees, and lists. The ImageList doesn't use typed resources. Instead, it uses a custom serialization scheme. You'll find that although the ImageList provides design-time support and programmatic access to the images it contains, this access isn't strongly typed.

Use Strongly Typed Configuration Settings

Applications commonly need configuration settings to nail down details like file locations, database connection strings, and user preferences. Rather than hardcoding these settings (or inventing your own mechanism to store them), .NET lets you add them to an application-specific configuration file. This allows you to adjust values on a whim by editing a text file without recompiling your application.

Note: Use error-proof configuration settings created by the application designer.

In Visual Studio 2005, configuration settings are even easier to use. That's because they're automatically compiled into a custom class that provides strongly typed access to them. That means you can retrieve settings using properties, with the help of IntelliSense, instead of relying on string-based lookups. Even better, .NET enhances this model with the ability to use updatable, user-specific settings to track preferences and other information.
You'll see both of these techniques at work in this lab.

How do I do that?

Every custom configuration setting is defined with a unique string name. In previous versions of .NET, you could retrieve the value of a configuration setting by looking up the value by its string name in a collection. However, if you use the wrong name, you wouldn't realize your error until you run the code and it fails with a runtime exception.

In Visual Studio 2005, the story is much improved. To add a new configuration setting, double-click the My Project node in the Solution Explorer. This opens up the application designer where you can configure a host of application-related settings. Next, click the Settings tab, which shows a list of custom configuration settings where you can define new settings and their values. To add a custom configuration setting to your application, enter a new setting name at the bottom of the list. Then specify the data type, scope, and the actual content of the setting. For example, to add a setting with a file path, you might use the name UserDataFilePath, the type String, the scope Application (you'll learn more about this shortly), and the value c:\MyFiles. Figure 2-3 shows this setting.

Note: In a web application, configuration settings are placed in the web.config file. In other applications, application settings are recorded to a configuration file that takes the name of the application, plus the extension .config, as in MyApp.exe.config.

When you add the setting, Visual Studio .NET inserts the following information into the application configuration file:

<configuration>
  <!-- Other settings are defined here. -->
  <applicationSettings>
    <WindowsApplication1.MySettings>
      <setting name="UserDataFilePath" serializeAs="String">
        <value>c:\MyFiles</value>
      </setting>
    </WindowsApplication1.MySettings>
  </applicationSettings>
</configuration>

At the same time behind the scenes, Visual Studio compiles a class that includes information about your custom configuration setting. Then, you can access the setting by name anywhere in your code through the My.Settings object. For example, here's code that retrieves the setting named UserDataFilePath:

Dim path As String
path = My.Settings.UserDataFilePath

In .NET 2.0, configuration settings don't need to be strings. You can also use other serializable data types, including integers, decimals, dates, and times (just choose the appropriate data type from the Types drop-down list). These data types are serialized to text in the configuration file, but you can retrieve them through My.Settings as their native data type, with no parsing required!

Note: The application settings class is added in the My Project directory and is named Settings.Designer.vb. To see it, select Project → Show All Files.

What about...

...updating settings? The UserDataFilePath example uses an application-scoped setting, which can be read at runtime but can't be modified. If you need to change an application-scoped setting, you have to modify the configuration file by hand (or use the settings list in Visual Studio). Your other choice is to create user-scoped settings. To do this, just choose User from the Scope drop-down list in the settings list. With a user-scoped setting, the value you set in Visual Studio is stored as the default in the configuration file in the application directory. However, when you change these settings, a new user.config file is created for the current user and saved in a user-specific directory (with a name in the form c:\Documents and Settings\[UserName]\Local Settings\Application Data\[ApplicationName]\[UniqueDirectory]).
The only trick pertaining to user-specific settings is that you must call My.Settings.Save( ) to store your changes. Otherwise, changes will only persist until the application is closed. Typically, you'll call My.Settings.Save( ) when your application ends.

To try out a user-scoped setting, change the scope of the UserDataFilePath setting from Application to User. Then, create a form that has a text box (named txtFilePath) and two buttons, one for retrieving the user data (cmdRefresh) and one for changing it (cmdUpdate). Here are the event handlers you'll use:

Private Sub cmdRefresh_Click(ByVal sender As System.Object, _
  ByVal e As System.EventArgs) Handles cmdRefresh.Click
    txtFilePath.Text = My.Settings.UserDataFilePath
End Sub

Private Sub cmdUpdate_Click(ByVal sender As System.Object, _
  ByVal e As System.EventArgs) Handles cmdUpdate.Click
    My.Settings.UserDataFilePath = txtFilePath.Text
End Sub

Finally, to make sure your changes are there the next time you run the application, tell .NET to create or update the user.config file when the form closes with this code:

Private Sub Form1_FormClosed(ByVal sender As Object, _
  ByVal e As System.Windows.Forms.FormClosedEventArgs) _
  Handles Me.FormClosed
    My.Settings.Save( )
End Sub

This rounds out a simple test form. You can run this application and try alternately retrieving the current setting and storing a new one. If you're interested, you can then track down the user.config file that has the changed settings for the current user.

Build Typesafe Generic Classes

Programmers often face a difficult choice. On one hand, it's keenly important to build solutions that are as generic as possible, so that they can be reused in different scenarios. For example, why build a CustomerCollection class that accepts only objects of type Customer when you can build a generic Collection class that can be configured to accept objects of any type?
On the other hand, performance and type safety considerations can make a generic solution less desirable. If you use a generic .NET Collection class to store Customer objects, for example, how can you be sure that someone won't accidentally insert another type of object into the collection, causing an insidious problem later on?

Note
Need to create a class that's flexible enough to work with any type of object, but able to restrict the objects it accepts in any given instance? With generics, VB has the perfect solution.

Visual Basic 2005 and .NET 2.0 provide a solution called generics. Generics are classes that are parameterized by type. In other words, generics allow you to create a class template that supports any type. When you instantiate that class, you specify the type you want to use, and from that point on, your object is "locked in" to the type you chose.

How do I do that?

An example of where the use of generics makes great sense is the System.Collections.ArrayList class. ArrayList is an all-purpose, dynamically self-sizing collection. It can hold ordinary .NET objects or your own custom objects. In order to support this, ArrayList treats everything as the base Object type. The problem is that there's no way to impose any restrictions on how ArrayList works. For example, if you want to use ArrayList to store a collection of Customer objects, you have no way to be sure that a faulty piece of code won't accidentally insert strings, integers, or some other type of object, causing future headaches. For this reason, developers often create their own strongly typed collection classes—in fact, the .NET class library is filled with dozens of them.

Generics can solve this problem. For example, using generics you can declare a class that works with any type using the Of keyword:

Public Class GenericList(Of ItemType)
    ' (Code goes here)
End Class

In this case, you are creating a new class named GenericList that can work with any type of object.
However, the client needs to specify what type should be used. In your class code, you refer to that type as ItemType. Of course, ItemType isn't really a type—it's just a placeholder for the type that you'll choose when you instantiate a GenericList object. Example 2-2 shows the complete code for a simple typesafe ArrayList.

Example 2-2. A typesafe collection using generics

Public Class GenericList(Of ItemType)
    Inherits CollectionBase

    Public Function Add(ByVal value As ItemType) As Integer
        Return List.Add(value)
    End Function

    Public Sub Remove(ByVal value As ItemType)
        List.Remove(value)
    End Sub

    Public ReadOnly Property Item(ByVal index As Integer) As ItemType
        Get
            ' The appropriate item is retrieved from the List object and
            ' explicitly cast to the appropriate type, and then returned.
            Return CType(List.Item(index), ItemType)
        End Get
    End Property
End Class

The GenericList class wraps an ordinary ArrayList, which is provided through the List property of the CollectionBase class it inherits from. However, the GenericList class works differently than an ArrayList by providing strongly typed Add( ) and Remove( ) methods, which use the ItemType placeholder. Here's an example of how you might use the GenericList class to create an ArrayList collection that only supports strings:

' Create the GenericList instance, and choose a type (in this case, string).
Dim List As New GenericList(Of String)

' Add two strings.
List.Add("blue")
List.Add("green")

' The next statement will fail because it has the wrong type.
' There is no automatic way to convert a GUID to a string.
' In fact, this line won't ever run, because the compiler
' notices the problem and refuses to build the application.
List.Add(Guid.NewGuid( ))

There's no limit to how many ways you can parameterize a class. In the GenericList example, there's only one type parameter. However, you could easily create a class that works with two or three types of objects, and allows you to make all of these types generic.
To use this approach, just separate each parameter type with a comma (between the parentheses at the beginning of the class). For example, consider the following GenericHashTable class, which allows you to define the type of the items the collection will store (ItemType), as well as the type of the keys you will use to index those items (KeyType):

Public Class GenericHashTable(Of ItemType, KeyType)
    Inherits DictionaryBase
    ' (Code goes here.)
End Class

Another important feature in generics is the ability to apply constraints to parameters. Constraints restrict the types allowed for a given generic class. For example, suppose you want to create a class that supports only types that implement a particular interface. To do so, first declare the type or types the class accepts and then use the As keyword to specify the base class that the type must derive from, or the interface that the type must implement. Here's an example that restricts the items stored in a GenericList to serializable items. This feature would be useful if, for example, you wanted to add a method to the GenericList that required serialization, such as a method that writes all the items in the list to a stream:

Public Class SerializableList(Of ItemType As ISerializable)
    Inherits CollectionBase
    ' (Code goes here.)
End Class

Similarly, here's a collection that can contain any type of object, provided it's derived from the System.Windows.Forms.Control class. The end result is a collection that's limited to controls, like the one exposed by the Forms.Controls property on a window:

Public Class ControlCollection(Of ItemType As Control)
    Inherits CollectionBase
    ' (Code goes here.)
End Class

Sometimes, your generic class might need the ability to create the parameter class. For example, the GenericList example might need the ability to create an instance of the item you want to store in the collection. In this case, you need to use the New constraint.
The New constraint allows only parameter types that have a public zero-argument constructor, and aren't marked MustInherit. This ensures that your code can create instances of the parameter type. Here's a collection that imposes the New constraint:

Public Class GenericList(Of ItemType As New)
    Inherits CollectionBase
    ' (Code goes here.)
End Class

It's also worth noting that you can define as many constraints as you want, as long as you group the list of constraints in curly braces, as shown here:

Public Class GenericList(Of ItemType As {ISerializable, New})
    Inherits CollectionBase
    ' (Code goes here.)
End Class

Constraints are enforced by the compiler, so if you violate a constraint rule when using a generic class, you won't be able to compile your application.

Note
Generics are built into the Common Language Runtime. That means they are supported in all first-class .NET languages, including C#.

What about...

...using generics with other code structures? Generics don't just work with classes. They can also be used in structures, interfaces, delegates, and even methods. For more information, look for the index entry "generics" in the MSDN Help. For more in-depth examples of advanced generic techniques, you can refer to Microsoft's whitepapers on generics.

Incidentally, the .NET Framework designers are well aware of the usefulness of generic collections, and they've already created several for you to use out of the box. You'll find them in the new System.Collections.Generic namespace. They include:

- List (a basic collection like the GenericList example)
- Dictionary (a name-value collection that indexes each item with a key)
- LinkedList (a linked list, where each item points to the next item in the chain)
- Queue (a first-in-first-out collection)
- Stack (a last-in-first-out collection)
- SortedList (a name-value collection that's kept in perpetually sorted order)

Most of these types duplicate one of the types in the System.Collections namespace.
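As a quick illustration of these built-in generic collections, here's a short sketch using the generic Dictionary class (the variable names and sample data are invented for illustration):

' The compiler enforces String keys and Integer values.
Dim Ages As New Dictionary(Of String, Integer)
Ages.Add("Alice", 30)
Ages.Add("Bob", 25)

' No casting is needed when you read a value back out.
Dim BobsAge As Integer = Ages("Bob")

' This line wouldn't compile, because the value isn't an Integer:
' Ages.Add("Carol", "twenty-five")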
The old collections remain for backward compatibility.

Make Simple Data Types Nullable

With the new support for generics that's found in the .NET Framework, a number of new features become possible. One of these features—generic strongly typed collections—was demonstrated in the previous lab, "Build Typesafe Generic Classes." Now you'll see another way that generics can solve common problems, this time by using the new nullable data types.

Note
Do you need to represent data that may or may not be present? VB .NET's new nullable types fill the gap.

How do I do that?

A null value (identified in Visual Basic by the keyword Nothing) is a special flag that indicates no data is present. Most developers are familiar with null object references, which indicate that the object has been defined but not created. For example, in the following code, the FileStream contains a null reference because it hasn't been instantiated with the New keyword:

Dim fs As FileStream
If fs Is Nothing Then
    ' This is always true because the FileStream hasn't
    ' been created yet.
    Console.WriteLine("Object contains a null reference.")
End If

Core data types like integers and strings can't contain null values. Numeric variables are automatically initialized to 0. Boolean variables are False. String variables are set to an empty string ("") automatically. In fact, even if you explicitly set a simple data type variable to Nothing in your code, it will automatically revert to the empty value (0, False, or ""), as the following code demonstrates:

Dim j As Integer = Nothing
If j = 0 Then
    ' This is always true because there is an
    ' implicit conversion between Nothing and 0 for integers.
    Console.WriteLine("Non-nullable integer j = " & j)
End If

This design sometimes causes problems, because there's no way to distinguish between an empty value and a value that was never supplied in the first place.
For example, imagine you create code that needs to retrieve the number of times the user has placed an order from a text file. Later on, you examine this value. The problem occurs if this value is 0. Quite simply, you have no way to know whether this is valid data (the user placed no orders), or it represents missing information (the setting couldn't be retrieved or the current user isn't a registered customer).

Thanks to generics, .NET 2.0 has a solution—a System.Nullable class that can wrap any other data type. When you create an instance of Nullable you specify the data type. If you don't set a value, this instance contains a null reference. You can test whether this is true by checking the Nullable.HasValue property, and you can retrieve the underlying object through the Nullable.Value property. Here's the sample code you need to create a nullable integer:

Dim i As Nullable(Of Integer)

If Not i.HasValue Then
    ' This is true, because no value has been assigned.
    Console.WriteLine("i is a null value")
End If

' Assign a value. Note that you must assign directly to i, not i.Value.
' The i.Value property is read-only, and it always reflects the
' currently assigned object, if it is not Nothing.
i = 100

If i.HasValue Then
    ' This is true, because a value (100) is now present.
    Console.WriteLine("Nullable integer i = " & i.Value)
End If

What about...

...using Nullable with full-fledged reference objects? Although you don't need this ability (because reference types can contain a null reference), it still gives you some advantages. Namely, you can use the slightly more readable HasValue property instead of testing for Nothing. Best of all, you can make this change seamlessly, because the Nullable class has the remarkable ability to allow implicit conversions between Nullable and the type it wraps.

Where can I learn more?

To learn more about Nullable and how it's implemented, look up the "Nullable class" index entry in the MSDN Help.
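Applied to the order-count scenario described earlier, a nullable integer keeps "no data" and "zero orders" distinct. Here's a sketch (the surrounding file-reading code is assumed, not shown):

' OrderCount stays Nothing until real data is found.
Dim OrderCount As Nullable(Of Integer)

' ... code that attempts to read the value from the text file goes here,
' assigning OrderCount only on success ...

If Not OrderCount.HasValue Then
    Console.WriteLine("No order information was found.")
ElseIf OrderCount.Value = 0 Then
    Console.WriteLine("The customer placed no orders.")
Else
    Console.WriteLine("Orders placed: " & OrderCount.Value)
End If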
Use Operators with Custom Objects

Every VB programmer is familiar with the arithmetic operators for addition (+), subtraction (-), division (/), and multiplication (*). Ordinarily, these operators are reserved for .NET numeric types, and have no meaning when used with other objects. However, in VB 2005 you can build objects that support all of these operators, as well as the operators used for logical operations and implicit conversion. This technique won't make sense for business objects, but it is extremely handy if you need to model mathematical structures such as vectors, matrixes, complex numbers, or—as demonstrated in the following example—fractions.

Note
Tired of using clumsy syntax like ObjA.Subtract(ObjB) to perform simple operations on your custom objects? With VB's support for operator overloading, you can manipulate your objects as easily as ordinary numbers.

How do I do that?

To overload an operator in Visual Basic 2005, you need to create a special operator method in your class (or structure). This method must be declared with the keywords Public Shared Operator, followed by the symbol for the operator (e.g., +).

Tip
To overload an operator simply means to define what an operator does when used with a specific type of object. In other words, when you overload the + operator for a Fraction class, you tell .NET what to do when your code adds two Fraction objects together.

For example, here's an operator method that adds support for the addition (+) operator in a hypothetical class named SomeClass:

Public Shared Operator +(ByVal objA As SomeClass, _
  ByVal objB As SomeClass) As SomeClass
    ' (Code goes here.)
End Operator

Every operator method accepts two parameters, which represent the values on either side of the operator. Depending on the class and the operator, order may be important (as it is for division). Once you've defined an operator, the VB compiler will call your code when it executes a statement that uses the operator with your class.
For example, the compiler changes code like this:

ObjC = ObjA + ObjB

into this:

ObjC = SomeClass.Operator+(ObjA, ObjB)

Example 2-3 shows how you can overload the Visual Basic arithmetic operators used to handle Fraction objects. A Fraction consists of two portions: a numerator and a denominator (known colloquially as "the top part and the bottom part"). The Fraction code overloads the +, -, *, and / operators, allowing you to perform fractional calculations without converting your numbers to decimals and losing precision.

Example 2-3. Overloading arithmetic operators in the Fraction class

Public Structure Fraction

    ' The two parts of a fraction.
    Public Denominator As Integer
    Public Numerator As Integer

    Public Sub New(ByVal numerator As Integer, ByVal denominator As Integer)
        Me.Numerator = numerator
        Me.Denominator = denominator
    End Sub

    Public Shared Operator +(ByVal x As Fraction, ByVal y As Fraction) _
      As Fraction
        Return Normalize(x.Numerator * y.Denominator + _
          y.Numerator * x.Denominator, x.Denominator * y.Denominator)
    End Operator

    Public Shared Operator -(ByVal x As Fraction, ByVal y As Fraction) _
      As Fraction
        Return Normalize(x.Numerator * y.Denominator - _
          y.Numerator * x.Denominator, x.Denominator * y.Denominator)
    End Operator

    Public Shared Operator *(ByVal x As Fraction, ByVal y As Fraction) _
      As Fraction
        Return Normalize(x.Numerator * y.Numerator, _
          x.Denominator * y.Denominator)
    End Operator

    Public Shared Operator /(ByVal x As Fraction, ByVal y As Fraction) _
      As Fraction
        Return Normalize(x.Numerator * y.Denominator, _
          x.Denominator * y.Numerator)
    End Operator

    ' Reduce a fraction.
    Private Shared Function Normalize(ByVal numerator As Integer, _
      ByVal denominator As Integer) As Fraction
        If (numerator <> 0) And (denominator <> 0) Then
            ' Fix signs.
            If denominator < 0 Then
                denominator *= -1
                numerator *= -1
            End If
            Dim divisor As Integer = GCD(numerator, denominator)
            numerator \= divisor
            denominator \= divisor
        End If
        Return New Fraction(numerator, denominator)
    End Function

    ' Return the greatest common divisor using Euclid's algorithm.
    Private Shared Function GCD(ByVal x As Integer, ByVal y As Integer) _
      As Integer
        Dim temp As Integer
        x = Math.Abs(x)
        y = Math.Abs(y)
        Do While (y <> 0)
            temp = x Mod y
            x = y
            y = temp
        Loop
        Return x
    End Function

    ' Convert the fraction to decimal form.
    Public Function GetDouble( ) As Double
        Return CType(Me.Numerator, Double) / _
          CType(Me.Denominator, Double)
    End Function

    ' Get a string representation of the fraction.
    Public Overrides Function ToString( ) As String
        Return Me.Numerator.ToString & "/" & Me.Denominator.ToString
    End Function
End Structure

The console code shown in Example 2-4 puts the fraction class through a quick-and-dirty test. Thanks to operator overloading, the number remains in fractional form, and precision is never lost.

Example 2-4. Testing the Fraction class

Module FractionTest
    Sub Main( )
        Dim f1 As New Fraction(2, 3)
        Dim f2 As New Fraction(1, 4)
        Console.WriteLine("f1 = " & f1.ToString( ))
        Console.WriteLine("f2 = " & f2.ToString( ))

        Dim f3 As Fraction
        f3 = f1 + f2    ' f3 is now 11/12
        Console.WriteLine("f1 + f2 = " & f3.ToString( ))

        f3 = f1 / f2    ' f3 is now 8/3
        Console.WriteLine("f1 / f2 = " & f3.ToString( ))

        f3 = f1 - f2    ' f3 is now 5/12
        Console.WriteLine("f1 - f2 = " & f3.ToString( ))

        f3 = f1 * f2    ' f3 is now 1/6
        Console.WriteLine("f1 * f2 = " & f3.ToString( ))
    End Sub
End Module

When you run this application, here's the output you'll see:

f1 = 2/3
f2 = 1/4
f1 + f2 = 11/12
f1 / f2 = 8/3
f1 - f2 = 5/12
f1 * f2 = 1/6

Usually, the parameters and the return value of an operator method use the same type. However, there's no reason you can't create more than one version of an operator method so your object can be used in expressions with different types.

What about...

...using operator overloading with other types? There are a number of classes that are natural candidates for operator overloading. Here are some good examples:

- Mathematical classes that model vectors, matrixes, complex numbers, or tensors.
- Money classes that round calculations to the nearest penny, and support different currency types.
- Measurement classes that have irregular units, like inches and feet.

Where can I learn more?

For more of the language details behind operator overloading and all the operators that you can overload, refer to the "Operator procedures" index entry in the MSDN Help.
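To make the "expressions with different types" point concrete, here's a hypothetical extra overload you could add to the Fraction structure from Example 2-3, so that a plain Integer can appear on the right side of the + operator:

' A sketch only: this overload treats the integer as a fraction
' with a denominator of 1, and reuses the Fraction + Fraction operator.
Public Shared Operator +(ByVal x As Fraction, ByVal y As Integer) _
  As Fraction
    Return x + New Fraction(y, 1)
End Operator

With this overload in place, an expression like New Fraction(1, 2) + 3 compiles and evaluates to 7/2.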
Split a Class into Multiple Files If you've cracked open a .NET 2.0 Windows Forms class, you'll have noticed that all the automatically generated code is missing! To understand where it's gone, you need to learn about a new feature called partial classes, which allow you to split classes into several pieces. Note Have your classes grown too large to manage in one file? With the new Partial keyword, you can split a class into separate files. How do I do that? Using the new Partial keyword, you can split a single class into as many pieces as you want. You simply define the same class in more than one place. Here's an example that defines a class named SampleClass in two pieces: Partial Public Class SampleClass Public Sub MethodA( ) Console.WriteLine("Method A called.") End Sub End Class Partial Public Class SampleClass Public Sub MethodB( ) Console.WriteLine("Method B called.") End Sub End Class In this example, the two declarations are in the same file, one after the other. However, there's no reason that you can't put the two SampleClass pieces in different source code files in the same project. (The only restrictions are that you can't define the two pieces in separate assemblies or in separate namespaces.) When you build the application containing the previous code, Visual Studio will track down each piece of SampleClass and assemble it into a complete, compiled class with two methods, MethodA( ) and MethodB( ). You can use both methods, as shown here: Dim Obj As New SampleClass( ) Obj.MethodA( ) Obj.MethodB( ) Partial classes don't offer you much help in solving programming problems, but they can be useful in breaking up extremely large, unwieldy classes. Of course, the existence of large classes in your application could be a sign that you haven't properly factored your problem, in which case you should really break your class down into separate, not partial, classes. 
One of the key roles of partial classes in .NET is to hide the designer code that is automatically generated by Visual Studio, whose visibility in previous versions has been a source of annoyance to some VB programmers. For example, when you build a .NET Windows form in Visual Basic 2005, your event handling code is placed in the source code file for the form, but the designer code that creates and configures each control and connects its event handlers is nowhere to be seen. In order to see this code, you need to select Project → Show All Files from the Visual Studio menu. When you do, the file that contains the missing half of the class appears in the Solution Explorer as a separate file. Given a form named Form1, you'll actually wind up with a Form1.vb file that contains your code and a Form1.Designer.vb file that contains the automatically generated part. What about... ...using the Partial keyword with structures? That works, but you can't create partial interfaces, enumerations, or any other .NET programming construct. Where can I learn more? To get more details on partial classes, refer to the index entry "Partial keyword" in the MSDN Help. Extend the My Namespace The My objects aren't defined in a single place. Some come from classes defined in the Microsoft.VisualBasic.MyServices namespace, while others are generated dynamically as you add forms, web services, configuration settings, and embedded resources to your project. However, as a developer you can participate in the My namespace and extend it with your own ingredients (e.g., useful calculations and tasks that are specific to your application). Note Do you use the My objects so much you'd like to customize them yourself? VB 2005 lets you plug in your own classes. How do I do that? To plug a new class into the My object hierarchy, simply use a Namespace block with the name My. 
For example, you could add this code to create a new BusinessFunctions class that contains a company-specific function for generating custom identifiers (by joining the customer name to a new GUID): Namespace My Public Class BusinessFunctions Public Shared Function GenerateNewCustomerID( _ ByVal name As String) As String Return name & "_" & Guid.NewGuid.ToString( ) End Function End Class End Namespace Once you've created the BusinessFunctions object in the right place, you can make use of it in your application just like any other My object. For example, to display a new customer ID: Console.WriteLine(My.BusinessFunctions.GenerateNewCustomerID("matthew")) Note that the My classes you add need to use shared methods and properties. That's because the My object won't be instantiated automatically. As a result, if you use ordinary instance members, you'll need to create the My object on your own, and you won't be able to manipulate it with the same syntax. Another solution is to create a module in the My namespace, because all the methods and properties in a module are always shared. You can also extend some of the existing My objects thanks to partial classes. For example, using this feature you could add new information to the My.Computer object or new routines to the My.Application object. In this case, the approach is slightly different. My.Computer exposes an instance of the MyComputer object. My.Application exposes an instance of the MyApplication object. Thus, to add to either of these classes, you need to create a partial class with the appropriate name, and add the instance members you need. You should also declare this class with the accessibility keyword Friend in order to match the existing class. Note Shared members are members that are always available through the class name, even if you haven't created an object. If you use shared variables, there will be one copy of that variable, which is global to your whole application. 
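To see the module alternative in action, here's a sketch (the Diagnostics module name and LogMessage method are invented for illustration). Because every member of a module is implicitly shared, you sidestep the instantiation issue entirely:

Namespace My
    Public Module Diagnostics
        ' Module members are always shared, so this is callable as
        ' My.Diagnostics.LogMessage(...) without creating an object.
        Public Sub LogMessage(ByVal message As String)
            Console.WriteLine(Date.Now.ToString( ) & ": " & message)
        End Sub
    End Module
End Namespace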
Here's an example you can use to extend My.Application with a method that checks for update versions:

Namespace My
    Partial Friend Class MyApplication

        Public Function IsNewVersionAvailable( ) As Boolean
            ' Usually, you would read the latest available version number
            ' from a web service or some other resource.
            ' Here, it's hardcoded.
            Dim LatestVersion As New Version(1, 2, 1, 1)

            ' CompareTo( ) returns a negative number if the current
            ' version is older than LatestVersion.
            Return Me.Info.Version.CompareTo(LatestVersion) < 0
        End Function

    End Class
End Namespace

And now you can use this method:

If My.Application.IsNewVersionAvailable( ) Then
    Console.WriteLine("A newer version is available.")
Else
    Console.WriteLine("This is the latest version.")
End If

What about...

...using your My extensions in multiple applications? There's no reason you can't treat My classes in the same way that you treat any other useful class that you want to reuse in multiple applications. In other words, you can create a class library project, add some My extensions, and compile it to a DLL. You can then reference that DLL in other applications. Of course, despite what Microsoft enthusiasts may tell you, extending the My namespace in that way has two potentially dangerous drawbacks:

- It becomes more awkward to share your component with other languages. For example, C# does not provide a My feature. Although you could still use a custom My object in a C# application, it wouldn't plug in as neatly.
- When you use the My namespace, you circumvent one of the great benefits of namespaces—avoiding naming conflicts. For example, consider two companies who create components for logging. If you use the recommended .NET namespace standard (CompanyName.ApplicationName.ClassName), there's little chance these two components will have the same fully qualified names. One might be Acme.SuperLogger.Logger while the other is ComponentTech.LogMagic.Logger. However, if they both extend a My object, it's quite possible that they would both use the same name (like My.Application.Logger).
As a result, you wouldn't be able to use both of them in the same application.

Skip to the Next Iteration of a Loop

The Visual Basic language provides a handful of common flow control statements, which let you direct the execution of your code. For example, you can use Return to step out of a function, or Exit to back out of a loop. However, before VB 2005, there wasn't any way to skip to the next iteration of a loop.

Note
VB's new Continue keyword gives you a quick way to step out of a tangled block of code in a loop and head straight into the next iteration.

How do I do that?

The Continue statement is one of those language details that seems like a minor frill at first, but quickly proves itself to be a major convenience. The Continue statement exists in three versions: Continue For, Continue Do, and Continue While, each of which is used with a different type of loop (For ... Next, Do ... Loop, or While ... End While). To see how the Continue statement works, consider the following code:

For i = 1 To 1000
    If i Mod 5 = 0 Then
        ' (Task A code.)
        Continue For
    End If
    ' (Task B code.)
Next

This code loops 1,000 times, incrementing a counter i. Whenever i is divisible by five, the task A code executes. Then, the Continue For statement is executed, the counter is incremented, and execution resumes at the beginning of the loop, skipping the code in task B. In this example, the Continue statement isn't really required, because you could rewrite the code easily enough as follows:

For i = 1 To 1000
    If i Mod 5 = 0 Then
        ' (Task A code.)
    Else
        ' (Task B code.)
    End If
Next

However, this approach isn't nearly as clean if you need to perform several different tests. To see the real benefit of the Continue statement, you need to consider a more complex (and realistic) example. Example 2-5 demonstrates a loop that scans through an array of words. Each word is analyzed, and the program decides whether the word is made up of letters, numeric characters, or the space character.
If the program matches one test (for example, the letter test), it needs to continue to the next word without performing the next test. To accomplish this without using the Continue statement, you need to use nested conditional blocks, an approach that creates awkward code.

Example 2-5. Analyzing a string without using the Continue statement

' Define a sentence.
Dim Sentence As String = "The final number is 433."

' Split the sentence into an array of words.
Dim Delimiters( ) As Char = {" "c, "."c, ","c}
Dim Words( ) As String = Sentence.Split(Delimiters)

' Examine each word.
For Each Word As String In Words

    ' Check if the word is blank.
    If Word <> "" Then
        Console.Write("'" & Word & "'" & vbTab & "= ")

        ' Check if the word is made up of letters.
        Dim AllLetters As Boolean = True
        For Each Character As Char In Word
            If Not Char.IsLetter(Character) Then
                AllLetters = False
            End If
        Next

        If AllLetters Then
            Console.WriteLine("word")
        Else
            ' Check if the word is made up of numbers.
            Dim AllNumbers As Boolean = True
            For Each Character As Char In Word
                If Not Char.IsNumber(Character) Then
                    AllNumbers = False
                End If
            Next

            If AllNumbers Then
                Console.WriteLine("number")
            Else
                ' If the word isn't made up of letters or numbers,
                ' assume it's something else.
                Console.WriteLine("mixed")
            End If
        End If
    End If
Next

Now, consider the rewritten version shown in Example 2-6 that uses the Continue statement to clarify what's going on.

Example 2-6. Analyzing a string using the Continue statement

' Examine each word.
For Each Word As String In Words

    ' Check if the word is blank.
    If Word = "" Then Continue For

    Console.Write("'" & Word & "'" & vbTab & "= ")

    ' Check if the word is made up of letters.
    Dim AllLetters As Boolean = True
    For Each Character As Char In Word
        If Not Char.IsLetter(Character) Then
            AllLetters = False
        End If
    Next

    If AllLetters Then
        Console.WriteLine("word")
        Continue For
    End If

    ' Check if the word is made up of numbers.
    Dim AllNumbers As Boolean = True
    For Each Character As Char In Word
        If Not Char.IsNumber(Character) Then
            AllNumbers = False
        End If
    Next

    If AllNumbers Then
        Console.WriteLine("number")
        Continue For
    End If

    ' If the word isn't made up of letters or numbers,
    ' assume it's something else.
    Console.WriteLine("mixed")
Next

What about...

...using Continue in a nested loop? It's possible.
If you nest a For loop inside a Do loop, you can use Continue For to skip to the next iteration of the inner loop, or Continue Do to skip to the next iteration of the outer loop. This technique also works in reverse (with a Do loop inside a For loop), but it doesn't work if you nest a loop inside another loop of the same type. In this case, there's no unambiguous way to refer to the outer loop, and so your Continue statement always refers to the inner loop. Where can I learn more? For the language lowdown on Continue, refer to the index entry "continue statement" in the MSDN Help. Dispose of Objects Automatically In .NET, it's keenly important to make sure objects that use unmanaged resources (e.g., file handles, database connections, and graphics contexts) release these resources as soon as possible. Toward this end, such objects should always implement the IDisposable interface, and provide a Dispose( ) method that you can call to release their resources immediately. Note Worried that you'll have objects floating around in memory, tying up resources until the garbage collector tracks them down? With the Using statement, you can make sure disposable objects meet with a timely demise. The only problem with this technique is that you must always remember to call the Dispose( ) method (or another method that calls Dispose( ), such as a Close( ) method). VB 2005 provides a new safeguard you can apply to make sure Dispose( ) is always called: the Using statement. How do I do that? You use the Using statement in a block structure. In the first line, when you declare the Using block, you specify the disposable object you are using. Often, you'll also create the object at the same time using the New keyword. Then, you write the code that uses the disposable object inside the Using block. 
Here's an example with a snippet of code that creates a new file and writes some data to the file:

Using NewFile As New System.IO.StreamWriter("c:\MyFile.txt")
    NewFile.WriteLine("This is line 1")
    NewFile.WriteLine("This is line 2")
End Using
' The file is closed automatically.
' The NewFile object is no longer available here.

In this example, as soon as the execution leaves the Using block, the Dispose( ) method is called on the NewFile object, releasing the file handle.

What about...

...errors that occur inside a Using block? Thankfully, .NET makes sure it disposes of the resource no matter how you exit the Using block, even if an unhandled exception occurs.

The Using statement makes sense with all kinds of disposable objects, such as:

- Files (including FileStream, StreamReader, and StreamWriter)
- Database connections (including SqlConnection, OracleConnection, and OleDbConnection)
- Network connections (including TcpClient, UdpClient, NetworkStream, FtpWebResponse, HttpWebResponse)
- Graphics (including Image, Bitmap, Metafile, Graphics)

Where can I learn more?

For the language lowdown, refer to the index entry "Using block" in the MSDN Help.

Safeguard Properties with Split Accessibility

Most properties consist of a property get procedure (which allows you to retrieve the property value) and a property set procedure (which allows you to set a new value for the property). In previous versions of Visual Basic, the declared access level of both procedures needed to be the same. In VB 2005, you can protect a property by assigning to the set procedure a lower access level than you give to the get procedure.

Note
In the past, there was no way to create a property that everyone could read but only your application could update. VB 2005 finally loosens the rules and gives you more flexibility.

How do I do that?

VB recognizes three levels of accessibility.
Arranged from most to least permissive, these are:
- Public (available to all classes in all assemblies)
- Friend (available to all code in all the classes in the current assembly)
- Private (only available to code in the same class)

Imagine you are creating a DLL component that's going to be used by another application. You might decide to create a property called Status that the client application needs to read, and so you declare the property Public:

Public Class ComponentClass
    Private _Status As Integer

    Public Property Status( ) As Integer
        Get
            Return _Status
        End Get
        Set(ByVal value As Integer)
            _Status = value
        End Set
    End Property
End Class

The problem here is that the access level assigned to the Status property allows the client to change it, which doesn't make sense. You could make Status a read-only property (in other words, omit the property set procedure altogether), but that wouldn't allow other classes that are part of your application and located in your component assembly to change it. The solution is to give the property set procedure the Friend accessibility level. Here's what the code should look like, with the only change highlighted:

Public Property Status( ) As Integer
    Get
        Return _Status
    End Get
    Friend Set(ByVal value As Integer)
        _Status = value
    End Set
End Property

What about...

...read-only and write-only properties? Split accessibility doesn't help you if you need to make a read-only property (such as a calculated value) or a write-only value (such as a password that shouldn't remain accessible). To create a read-only property, add the ReadOnly keyword to the property declaration (right after the accessibility keyword), and remove the property set procedure. To create a write-only property, remove the property get procedure and add the WriteOnly keyword. These keywords are nothing new—they've been available since Visual Basic .NET 1.0.
Evaluate Conditions Separately with Short-Circuit Logic

In previous versions of VB, there were two logical operators: And and Or. Visual Basic 2005 introduces two new operators that supplement these: AndAlso and OrElse. These operators work in the same way as And and Or, except they have support for short-circuiting, which allows you to evaluate just one part of a long conditional statement.

Note With short-circuiting, you can combine multiple conditions to write more compact code.

How do I do that?

A common programming scenario is the need to evaluate several conditions in a row. Often, this involves checking that an object is not null, and then examining one of its properties. In order to handle this scenario, you need to use nested If blocks, as shown here:

If Not MyObject Is Nothing Then
    If MyObject.Value > 10 Then
        ' (Do something.)
    End If
End If

It would be nice to combine both of these conditions into a single line, as follows:

If Not MyObject Is Nothing And MyObject.Value > 10 Then
    ' (Do something.)
End If

Unfortunately, this won't work because VB always evaluates both conditions. In other words, even if MyObject is Nothing (making the first condition false), VB will still evaluate the second condition and attempt to retrieve the MyObject.Value property, which will cause a NullReferenceException. Visual Basic 2005 solves this problem with the AndAlso and OrElse keywords. With AndAlso, Visual Basic won't evaluate the second condition if the first condition is False; with OrElse, it won't evaluate the second condition if the first condition is True. Here's the corrected code:

If Not MyObject Is Nothing AndAlso MyObject.Value > 10 Then
    ' (Do something.)
End If

What about...

...other language refinements? In this chapter, you've had a tour of the most important VB language innovations. However, it's worth pointing out a few of the less significant ones that I haven't included in this chapter:
- The IsNot keyword allows you to simplify awkward syntax slightly. Using it, you can replace syntax like If Not x Is Nothing with the equivalent statement If x IsNot Nothing.
- The TryCast( ) function allows you to shave a few milliseconds off type casting code. It works like CType( ) or DirectCast( ), with one exception—if the object can't be converted to the requested type a null reference is returned instead. Thus, instead of checking an object's type and then casting it, you can use TryCast( ) right away and then check if you have an actual object instance. - Unsigned integers allow you to store numeric values that can't be negative. That restriction saves on memory storage, allowing you to accommodate larger numbers. Unsigned numbers have always been in the .NET Framework, but now VB 2005 includes keywords for them (UInteger, ULong, and UShort).
http://commons.oreilly.com/wiki/index.php?title=Visual_Basic_2005:_A_Developer's_Notebook/The_Visual_Basic_Language&oldid=8995
Say, you have a method that takes time to execute and you want its result to be cached. There are many solutions, including Apache Commons JCS, Ehcache, JSR 107, Guava Caching and many others. jcabi-aspects offers a very simple one, based on AOP aspects and Java 6 annotations:

import com.jcabi.aspects.Cacheable;

public class Page {
  @Cacheable(lifetime = 5, unit = TimeUnit.MINUTES)
  String load() {
    return new URL("").getContent().toString();
  }
}

The result of the load() method will be cached in memory for five minutes.

How It Works?

This post about AOP, AspectJ and method logging explains how "aspect weaving" works (I highly recommend that you read it first). Here I'll explain how caching works.

The approach is very straightforward. There is a static hash map with keys as "method coordinates" and values as their results. Method coordinates consist of the object, an owner of the method and a method name with parameter types. In the example above, right after the method load() finishes, the map gets a new entry (simplified example, of course):

key: [page, "load()"]
value: "<html>...</html>"

Every consecutive call to load() will be intercepted by the aspect from jcabi-aspects and resolved immediately with a value from the cache map. The method will not get any control until the end of its lifetime, which is five minutes in the example above.

What About Cache Flushing?

Sometimes it's necessary to have the ability to flush the cache before the end of its lifetime. Here is a practical example:

import com.jcabi.aspects.Cacheable;

public class Employees {
  @Cacheable(lifetime = 1, unit = TimeUnit.HOURS)
  int size() {
    // calculate their amount in MySQL
  }
  @Cacheable.FlushBefore
  void add(Employee employee) {
    // add a new one to MySQL
  }
}

It's obvious that the number of employees in the database will be different after the add() method executes, and the result of size() should be invalidated in the cache. This invalidation operation is called "flushing", and @Cacheable.FlushBefore triggers it.
Actually, every call to add() invalidates all cached methods in this class, not only size(). There is also @Cacheable.FlushAfter. The difference is that FlushBefore guarantees that cache is already invalidated when the method add() starts. FlushAfter invalidates cache after method add() finishes. This small difference makes a big one, sometimes. This article explains how to add jcabi-aspects to your project.
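To make the mechanism above concrete, here is a simplified, standalone sketch of the cache-map idea. This is my own illustration, not jcabi's actual implementation: the real aspect also handles concurrency, flushing, and richer method coordinates.

```java
import java.util.HashMap;
import java.util.Map;

public class CacheSketch {

    // Counts how often the "real" method body actually runs.
    static int calls = 0;

    // key = "method coordinates", value = {cached result, expiry timestamp}.
    private static final Map<String, Object[]> CACHE = new HashMap<>();

    // Stand-in for the expensive method that @Cacheable would intercept.
    static String load() {
        calls++;
        return "<html>...</html>";
    }

    // What the aspect does around each call: look up by key, honor the lifetime.
    static Object cached(String key, long lifetimeMillis) {
        long now = System.currentTimeMillis();
        Object[] entry = CACHE.get(key);
        if (entry != null && (Long) entry[1] > now) {
            return entry[0]; // still fresh: the real method is never invoked
        }
        Object result = load();
        CACHE.put(key, new Object[] {result, now + lifetimeMillis});
        return result;
    }

    public static void main(String[] args) {
        cached("Page#load()", 60000L);
        cached("Page#load()", 60000L);
        System.out.println(calls); // prints 1: load() ran only once
    }
}
```

In this sketch, a @Cacheable.FlushBefore call would simply be CACHE.clear() executed before the annotated method's body runs.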
http://www.yegor256.com/2014/08/03/cacheable-java-annotation.html
I was working on a unittest which when it failed would say "this string != that string" and because some of these strings were very long (output of a HTML lib I wrote which spits out snippets of HTML code) it became hard to spot how they were different. So I decided to override the usual self.assertEqual(str1, str2) in Python's unittest class instance with this little baby:

def assertEqualLongString(a, b):
    NOT, POINT = '-', '*'
    if a != b:
        print a
        o = ''
        for i, e in enumerate(a):
            try:
                if e != b[i]:
                    o += POINT
                else:
                    o += NOT
            except IndexError:
                o += '*'
        o += NOT * (len(a) - len(o))
        if len(b) > len(a):
            o += POINT * (len(b) - len(a))
        print o
        print b
        raise AssertionError, '(see string comparison above)'

It's far from perfect and doesn't really work when you've got Unicode characters that the terminal you use can't print properly. It might not look great on strings that are really really long but I'm sure that's something that can be solved too. After all, this is just a quick hack that helped me spot that the difference between one snippet and another was that one produced <br/> and the other produced <br />. Below are some examples of this utility function in action.
Beware, you can use this in many different ways that fit your needs so I'm not going to focus on how it's executed:

u = MyUnittest()
u.assertEqualLongString('Peter Bengtsson 123', 'Peter PengtsXon 124'); print ""
u.assertEqualLongString('Bengtsson','Bengtzzon'); print ""
u.assertEqualLongString('Bengtsson','BengtzzonLonger'); print ""
u.assertEqualLongString('BengtssonLonger','Bengtzzon'); print ""
u.assertEqualLongString('Bengtssonism','Bengtsson'); print ""

# Results:
Peter Bengtsson 123
------*-----*-----*
Peter PengtsXon 124

Bengtsson
-----**--
Bengtzzon

Bengtsson
-----**--******
BengtzzonLonger

BengtssonLonger
-----**--******
Bengtzzon

Bengtssonism
---------***
Bengtsson

I love writing python oneliners :)
"".join([x[0]==x[1] and "-" or "*" for x in map(None, a, b)])
in python2.5 you can use the trinary op to make it slightly more elegant

Nice. I've often used Python's difflib.ndiff to make test failures easier to understand, back when I wrote PyUnit-style unit tests. It was especially useful for multiline strings. Nowadays with doctests getting readable diffs is easy (#doctest: +REPORT_NDIFF). Unless you're comparing several-thousand-line-long HTML pages in your functional tests.

Why don't use the difflib builtin module?

Hi Peter, I did the same thing recently, but I used the built-in difflib module to show me where the differences are. Might be worth a look. :) HTH

Hi again, Just FYI, I only posted a repeat of what others had already mentioned because with cookies off, I do not see any comments. The warning about needing to turn on javascript in order to comment is nice, but with cookies off, your site looks like no one has commented. Anyway, just thought I'd mention that. Sorry for the duplicate info. Krys

It's not about cookies, it's just that the caching is set to 1 hour which might have tricked you. I haven't had the time to find a good solution to this yet.
lxml.html.usedoctest (and lxml.doctestcompare) implement a smarter comparison for HTML fragments. It's not perfect, but it's also somewhat less sensitive to unimportant differences in the HTML.
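Several commenters point at the standard library's difflib as an alternative. A minimal (Python 3) illustration of what ndiff gives you on the strings used in the post:

```python
import difflib

a = "Bengtsson"
b = "Bengtzzon"
diff = list(difflib.ndiff(a, b))

# Characters only in a are prefixed "- ", characters only in b "+ ".
removed = ''.join(d[-1] for d in diff if d.startswith('- '))
added = ''.join(d[-1] for d in diff if d.startswith('+ '))
print(removed, added)  # ss zz
```

For multiline strings, feeding ndiff whole lines instead of single characters gives the readable per-line diffs mentioned in the comments.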
https://www.peterbe.com/plog/string-comparison-function-in-python-alpha
2.2 Writing Efficient Message Passing Code

DGL optimizes memory consumption and computing speed for message passing. A common practice to leverage those optimizations is to construct one's own message passing functionality as a combination of update_all() calls with built-in functions as parameters.

Besides that, considering that the number of edges is much larger than the number of nodes for some graphs, avoiding unnecessary memory copy from nodes to edges is beneficial. For some cases like GATConv, where it is necessary to save messages on the edges, one needs to call apply_edges() with built-in functions. Sometimes the messages on the edges can be high dimensional, which is memory consuming. DGL recommends keeping the dimension of edge features as low as possible.

Here's an example of how to achieve this by splitting operations on the edges to nodes. The approach does the following: concatenate the src feature and dst feature, then apply a linear layer, i.e. \(W\times (u || v)\). The src and dst feature dimension is high, while the linear layer output dimension is low. A straightforward implementation would be like:

import torch
import torch.nn as nn

linear = nn.Parameter(torch.FloatTensor(size=(node_feat_dim * 2, out_dim)))

def concat_message_function(edges):
    return {'cat_feat': torch.cat([edges.src['feat'], edges.dst['feat']], dim=1)}

g.apply_edges(concat_message_function)
g.edata['out'] = g.edata['cat_feat'] @ linear

The suggested implementation splits the linear operation into two, one applied to the src feature, the other applied to the dst feature. It then adds the outputs of the linear operations on the edges at the final stage, i.e. performing \(W_l\times u + W_r \times v\).
This is because \(W \times (u||v) = W_l \times u + W_r \times v\), where \(W_l\) and \(W_r\) are the left and the right half of the matrix \(W\), respectively: import dgl.function as fn linear_src = nn.Parameter(torch.FloatTensor(size=(node_feat_dim, out_dim))) linear_dst = nn.Parameter(torch.FloatTensor(size=(node_feat_dim, out_dim))) out_src = g.ndata['feat'] @ linear_src out_dst = g.ndata['feat'] @ linear_dst g.srcdata.update({'out_src': out_src}) g.dstdata.update({'out_dst': out_dst}) g.apply_edges(fn.u_add_v('out_src', 'out_dst', 'out')) The above two implementations are mathematically equivalent. The latter one is more efficient because it does not need to save feat_src and feat_dst on edges, which is not memory-efficient. Plus, addition could be optimized with DGL’s built-in function u_add_v, which further speeds up computation and saves memory footprint.
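The identity \(W \times (u||v) = W_l \times u + W_r \times v\) is easy to check numerically. A standalone sketch using NumPy in place of DGL/PyTorch (dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
node_feat_dim, out_dim = 8, 2
u = rng.normal(size=node_feat_dim)   # src feature
v = rng.normal(size=node_feat_dim)   # dst feature
W = rng.normal(size=(2 * node_feat_dim, out_dim))
W_l, W_r = W[:node_feat_dim], W[node_feat_dim:]  # left/right halves of W

lhs = np.concatenate([u, v]) @ W     # linear layer on the concatenated feature
rhs = u @ W_l + v @ W_r              # the split formulation
print(np.allclose(lhs, rhs))         # True
```

The split version never materializes the concatenated per-edge feature, which is exactly the memory saving the guide describes.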
https://docs.dgl.ai/en/latest/guide/message-efficient.html
Created on 2008-10-16 22:08 by bob.ippolito, last changed 2009-05-02 12:37 by benjamin.peterson. This issue is now closed. simple? Can you write a patch against python trunk ? :-) Sure, but that doesn't port it to Python 3.0 :) > Sure, but that doesn't port it to Python 3.0 :) Still, as Victor suggests, the first step for porting it to 3.0 definitely is to produce a patch for the trunk. What the next steps will be can be discussed when this step has been completed. patch to r66961 of trunk is attached. About the patch: are those lines really needed? + PyScannerType.tp_getattro = PyObject_GenericGetAttr; + PyScannerType.tp_setattro = PyObject_GenericSetAttr; + PyScannerType.tp_alloc = PyType_GenericAlloc; + PyScannerType.tp_new = PyType_GenericNew; + PyScannerType.tp_free = _PyObject_Del; I've never used them. What happens if the slots are left empty, and let PyType_Ready() do the rest? You're probably right, I don't remember what code I was using as a template for that. Actually, if I remove those lines from the equivalent module in simplejson it no longer works properly with Python 2.5.2. File "/Users/bob/src/simplejson/simplejson/decoder.py", line 307, in __init__ self.scan_once = make_scanner(self) TypeError: cannot create 'make_scanner' instances > Actually,? I don't recall exactly why they aren't in the struct itself, it may not have worked with some compiler on some platform. It's not really a complete rewrite, the encoding path is largely the same and the tests haven't changed. Anyway, there is no further work planned for simplejson. It's done except for the potential for bug fixes. The only enhancements were performance related and this is about as fast as it's going to get. The majority of this work was ready before Python 2.6 was released but it was frozen so I couldn't get this in. Attached is a new diff, one byte fix to the float parser when parsing JSON documents that are just a float (also a test and a version bump). Bumping priority a bit. 
File Lib/json/decoder.py (right): Line 55: def py_scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH, _m=STRINGCHUNK.match): This function should get some comments what all the various cases are (preferably speaking with the terms of JSON spec, i.e. chars, char, ...) Line 71: _append(content) # 3 cases: end of string, control character, escape sequence Line 76: msg = "Invalid control character {0!r} at".format(esc) esc isn't assigned until a few lines later. Is this really correct? Line 104: raise ValueError No message? Line 107: raise ValueError No message? Line 111: m = unichr(uni) What's the purpose of m? Line 127: nextchar = s[end:end + 1] Why not s[end]? Add comment if this is necessary. Line 132: nextchar = s[end:end + 1] Likewise. There are more places where it does slicing, but also places where it does indexing, in this function. Line 290: following strings: -Infinity, Infinity, NaN. This sounds like an incompatible change. Line 317: def raw_decode(self, s, idx=0): That looks like an incompatible change File Modules/_json.c (right): Line 196: output_size *= 2; You might want to check for integer overflow here. Line 215: ascii_escape_str(PyObject *pystr) Please attach a comment to each function, telling what the function does. Line 733: "..." Some text should probably be added here. Line 1320: if ((idx + 3 < length) && str[idx + 1] == 'u' && str[idx + 2] == 'l' && str[idx + 3] == 'l') { Is this really faster than a strncmp? Line 1528: PyTypeObject PyScannerType = { I think scanner objects should participate in cyclic gc. Line 2025: "make_encoder", /* tp_name */ That is a confusing type name. How about "Encoder"? By . Bob, any news on this? 
patch to r69662 is attached as json_issue4136_r69662.diff -- note that simplejson 2.0.9 isn't released, as of r169 it's just simplejson 2.0.8 with some trivial changes to make this backport easier for me

A bunch of comments from a quick look:
- why do you use old-style relative imports ("from decoder import JSONDecoder")?
- in join_list_unicode, join_list_string you could use PyUnicode_Join and _PyString_Join, respectively
- in scanstring_unicode, the top comment says "encoding is the encoding of pystr (must be an ASCII superset)", but the function takes no "encoding" parameter
- there are some lines much longer than 80 chars (it's quite clear when reading the diff)
- there are places where you call PyObject_IsTrue(s->strict) without checking for an error return; perhaps you could do so in the constructor
- the Scanner type doesn't support cyclic garbage collection, but it contains some arbitrary Python objects (parse_constant and friends could be closures or methods)
- same issue with the Encoder type (default_fn could hold arbitrary objects alive)

Bob, here is a small example showing how easy it is to encounter the GC problem:

from json import JSONDecoder
import weakref
import gc

class MyObject(object):
    def __init__(self):
        self.decoder = JSONDecoder(parse_constant=self.parse_constant)

    def parse_constant(self, *args, **kargs):
        """ XXX """

wr = weakref.ref(MyObject())
gc.collect()
print wr()

Old. New patch implementing cyclic GC, new-style relative imports, no lines >80 characters in non-test Python code

Thanks. Honestly :) I think reformatting line length should not hold-up this patch. That is a nice-to-have, not a must-have.

Reviewers: , Description: Updated patch from Bob Ippolito, for updating the Python trunk json package to the latest simplejson.
Please review this at Affected files: Lib/json/__init__.py Lib/json/decoder.py Lib/json/encoder.py Lib/json/scanner.py Lib/json/tests/test_check_circular.py Lib/json/tests/test_decode.py Lib/json/tests/test_dump.py Lib/json/tests/test_encode_basestring_ascii.py Lib/json/tests/test_fail.py Lib/json/tests/test_float.py Lib/json/tests/test_unicode.py Lib/json/tool.py Modules/_json.c FWIW, following simplejson's SVN history[1] makes understanding the (bits of the) patch (that I had time to look at) much easier to me. I recall other JSON packages having lots of cornercase tests, not sure if they'd be relevant here. But sprinkling a few more tests around might help digest these changes :) [1] The. > simple. They are essentially the same except the relative imports are changed to use . syntax, simplejson._speedups is changed to _json, simplejson is changed to json, .format strings are used, and the test suite changes slightly. I can add fixing that struct function and removing the #if stuff from the C code to that list as well. The way I see it, the names have to change anyway, so other things might as well be modernized as long as it's trivial. I personally didn't make the call to switch from % to .format, someone else did that after I had originally committed simplejson to Python 2.6 trunk. Martin, is this patch good-to-go? What needs to happen next for this patch to go forward? Well, if Bob has addressed all of Martin's comments, I suppose it can get in. The second step will be to port it to py3k... All of the comments are addressed. I am not going to go through the trouble of creating a new patch to remove the remaining backwards compatibility cruft in the C code and struct function. That is easier to remove later. The patch in its current form is fine with me, please apply (OTOH, I don't see the need for urgency - 2.7 is still many months away, and likely, we will see another update to the same code before it gets released) Bob, please go ahead and commit. 
I don't see any advantage to letting the code continue sit in the tracker. Also, having it in will let me go forward with issue 5381 which has been held-up until this was complete. Thanks for all your work on JSON. r70443 in trunk Reopening so that we don't forget to merge it in py3k :) (I have the feeling it won't be trivial, although I hope to be proven wrong). This change should be ported to py3k sometime before the first beta. Here's a half-baked patch against py3k. It resolves all the conflicts but still has 15 failing tests. Perhaps someone would like to finish it up. For example, json.dumps(b"hi") works, but not json.dumps([b"hi", "hi"]) There is the problem in the current py3k version of json. b"hi" can be serialized, but not [b"hi"]. >>> json.dumps(b"hi") '"hi"' >>> json.dumps([b"hi"]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/antoine/py3k/__svn__/Lib/json/__init__.py", line 230, in dumps return _default_encoder.encode(obj) File "/home/antoine/py3k/__svn__/Lib/json/encoder.py", line 367, in encode chunks = list(self.iterencode(o)) File "/home/antoine/py3k/__svn__/Lib/json/encoder.py", line 306, in _iterencode for chunk in self._iterencode_list(o, markers): File "/home/antoine/py3k/__svn__/Lib/json/encoder.py", line 204, in _iterencode_list for chunk in self._iterencode(value, markers): File "/home/antoine/py3k/__svn__/Lib/json/encoder.py", line 317, in _iterencode for chunk in self._iterencode_default(o, markers): File "/home/antoine/py3k/__svn__/Lib/json/encoder.py", line 323, in _iterencode_default newobj = self.default(o) File "/home/antoine/py3k/__svn__/Lib/json/encoder.py", line 344, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: b'hi' is not JSON serializable Christian: 1) in py3k, loads and dumps always seem to operate on/produce str objects, but encode_basestring_ascii returns a bytes object. Why is that? 2) what is the use of the encoding argument in py3k? 
it looks completely ignored (bytes objects are not allowed as input and never produced as output) Updated patch: - fixes all failures - removes bytes input and output "support" (which didn't work but still involved a lot of code) To be done: - remove all traces of the encoding argument, and associated machinery Here is an updated patch, completely removing the `encoding` parameter and fixing docs. (by the way, all tests pass) It would be better to have a patch that diff's from the current 2.7 version than to start with the 3.0 version; otherwise, the two will never be fully synchronized and some of the choices made in 2.6-to-3.0 will live on forever. The 2.7 version reflects more patch review and real world usage (from simplejson) than the relatively unexercised 3.0 version. > It would be better to have a patch that diff's from the current 2.7 > version than to start with the 3.0 version; otherwise, the two will > never be fully synchronized and some of the choices made in 2.6-to-3.0 > will live on forever. How am I supposed to produce this patch? The idea is to ignore the current 3.0 version and just redo the 2-to-3 conversion from 2.7 and do it well this time. Compute the 3.1 patch as if the current 3.0 version was blown away (reverted). +1 for Raymond's suggestion The 3.0 version of json was more like a last minute patch work than thorough work. You might wanna svn rm the 3.0 code, svn cp the 2.7 code to the py3k branch and start all over. I'll take a stab at doing it Raymond's way this weekend. Since no other patches were proposed, I applied Antoine's patch in r72194.
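A present-day footnote (mine, not part of the original thread): the behaviour the py3k port settled on still holds today. bytes objects are rejected by json.dumps in Python 3, so callers decode them first:

```python
import json

data = [b"hi", "hi"]
try:
    json.dumps(data)
    raised = False
except TypeError:
    raised = True  # bytes objects are not JSON serializable

# The usual fix: decode bytes to str before serializing.
decoded = [x.decode("utf-8") if isinstance(x, bytes) else x for x in data]
print(raised, json.dumps(decoded))  # True ["hi", "hi"]
```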
http://bugs.python.org/issue4136
While Blockly defines a number of standard blocks, most applications need to define and implement at least a few domain relevant blocks. Blocks are composed of three components:
- Block definition object: Defines the look and behaviour of a block, including the text, colour, fields, and connections.
- Toolbox reference: A reference to the block type in the toolbox XML, so users can add it to the workspace.
- Generator function: Generates the code string for this block. It is always written in JavaScript, even if the target language is not JavaScript, and even for Blockly for Android.

Block Definition

Blockly for web loads Blocks via script files. The blocks/ directory includes several such examples for the standard blocks. Assuming your blocks don't fit in the existing categories, create a new JavaScript file. This new JavaScript file needs to be included in the list of <script ...> tags in the editor's HTML file.

A typical block definition looks like this:

JSON

Blockly.Blocks['string_length'] = {
  init: function() {
    this.jsonInit({
      "message0": 'length of %1',
      "args0": [
        {
          "type": "input_value",
          "name": "VALUE",
          "check": "String"
        }
      ],
      "output": "Number",
      "colour": 160,
      "tooltip": "Returns number of letters in the provided text.",
      "helpUrl": ""
    });
  }
};

JavaScript

Blockly.Blocks['string_length'] = {
  init: function() {
    this.appendValueInput('VALUE')
        .setCheck('String')
        .appendField('length of');
    this.setOutput(true, 'Number');
    this.setColour(160);
    this.setTooltip('Returns number of letters in the provided text.');
    this.setHelpUrl('');
  }
};

string_length: This is the type name of the block. Since all blocks share the same namespace, it is good to use a name made up of your category (in this case string) followed by your block's function (in this case length).

init: This function defines the look and feel of the block.

This defines the following block: (the original page shows an image of the rendered block here). The details of block definitions can be found in Define Blocks.
Add Toolbox Reference

Once defined, use the type name to reference the block in the toolbox:

<xml id="toolbox" style="display: none">
  <category name="Text">
    <block type="string_length"></block>
  </category>
  ...
</xml>

See the Toolbox guide for more details.

Add Generator Function

Finally, to transform the block into code, pair the block with a generator function. Generators are specific to the desired output language, but standard generators generally take the following format:

Blockly.JavaScript['string_length'] = function(block) {
  // String or array length.
  var argument0 = Blockly.JavaScript.valueToCode(block, 'VALUE',
      Blockly.JavaScript.ORDER_FUNCTION_CALL) || '\'\'';
  return [argument0 + '.length', Blockly.JavaScript.ORDER_MEMBER];
};

The generator function takes a reference to the block for processing. It renders the inputs (the VALUE input, above) into code strings, and then concatenates those into a larger expression. See Use Custom Generators for more details.
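To see what such a generator actually produces, here is a hypothetical standalone imitation of its string concatenation (no Blockly required; valueCode stands in for whatever valueToCode returns for the VALUE input):

```javascript
function stringLengthCode(valueCode) {
  // Mirrors the generator body: fall back to an empty string literal,
  // then append the .length member access.
  var argument0 = valueCode || "''";
  return argument0 + '.length';
}

console.log(stringLengthCode("'hello'")); // 'hello'.length
console.log(stringLengthCode(null));      // ''.length
```

The generator never evaluates anything itself; it only builds up source text, which is why the fallback is the two-character string "''" rather than an actual empty string.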
https://developers.google.com/blockly/guides/configure/web/custom-blocks
Stack in C++ Standard Template Library (STL)

In this tutorial, we are going to learn about some special functions of stack in STL in C++. A stack is a special type of container adaptor which is specifically designed to operate in a LIFO (last-in-first-out) context: we can insert and extract elements only from one end of the container.

Functions Associated with Stack in C++

The functions which are associated with stack are as follows:
- push( a )
- pop( )
- top( )
- empty( )
- size( )

We will discuss each function one by one and then we will implement it in our code.

Push(a)
This function adds the element (a) at the top of the stack. The time complexity of this operation is O(1).

Pop()
This function removes the top element from the stack. The time complexity of this operation is O(1).

Top()
This function returns the element present at the top of the stack. The time complexity of this operation is O(1).

Empty()
This function checks whether the stack is empty or not. The time complexity of this operation is O(1).

Size()
This function returns the size of the stack at a particular point of time. The time complexity of this operation is O(1).

Now we will try to implement each function in our code.
cout<<"\nElements present in the stack is: "; while(!s.empty()) { cout<<s.top()<<" "; s.pop(); } return 0; } Input/Output: (Stack program in C++) Stack size: 5 Top element present in stack is: 15 Elements present in the stack is: 14 13 12 11 You may also learn: Merge sort in C++ (A Divide and Conquer algorithm) Program on Quicksort on Linked List in C++ Do not forget to comment if you find anything wrong in the post or you want to share some information regarding the same.
https://www.codespeedy.com/stack-in-c-standard-template-library-stl/
CC-MAIN-2019-43
refinedweb
411
69.21
Hi, I just have done this exercise from "Accelerated C++". Several things I'm not sure about: (1) Deal with "times" or "time", i.e. if <= 1, then choose "time", and else choose "times". I tried conditional operator, (a>b)? c : d..., but somehow it gives errors all the time (2) Does this program meet the requirements of the exercise (see the verbatim description of the exercise at the top of the program) (3) I don't understand much about istream& thing that the textbook uses to read words... Why is it used instead of the usual cin>>sth? (4) How can I initialize a vector? I've read and tried all possiblities: vector<int> vec(10, 1) for 10 elements, each = 1. But it doesn't work here somehow. Error warnings immediately. In this program, I've tried wp.count(wp.v.size(), 1) to inititiaze every element of the vector count to 1. Errors always! (5) Lastly, if possible, please give me a hint/suggestions on how to improve this program. Thanks a lot Code:// Exercise 4.6 // Textbook: Accelerated C++, by Andrew Koenig and Barbara E.Moo /* Write a function that reads words from an input stream and stores them in a vector. 
Use that function BOTH to write programs that count the number of words in the input and to count how many times each word occurs */ #include <iostream> #include <string> #include <vector> using namespace std; struct wordplay { int num; vector<int> count; vector<string> v; }; // write a function that (1) reads words, (2) stores words in a vector, (3) computes // the number of words in the input, and (4) counts how many times each word occurs wordplay myf(string s) { wordplay wp; while (cin >> s) // reads in words wp.v.push_back(s); // stores in a vector // # of words wp.num = wp.v.size(); // # count how many times each word occurs for (int k = 0; k != wp.v.size(); ++k) wp.count.push_back(1); // initialize each element of the count vector 1 for (int i = 0; i != wp.v.size(); ++i) { for (int j = 0; j != wp.count.size(); ++j) { if (j != i && wp.v[j] == wp.v[i]) { wp.count[i] += 1; } } } return wp; } int main() { cout << "Enter some words: " << endl; string s; wordplay wp = myf(s); cout << "The number of words entered is: " << wp.num << endl << endl; for (int i = 0; i != wp.count.size(); ++i) { cout << "This word, " << "\"" << wp.v[i] << "\"" << " occurs " << wp.count[i] << " times" << endl; } cout << endl << endl; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/126456-exercise-4-1-accelerated-cplusplus-please-help-improve-program.html
The 16×2 LCD is a 32-character display module that can be driven by any CMOS/TTL device. "LCD" stands for liquid crystal display, and 16×2 describes the screen size: 2 rows of 16 columns. Each character cell is a 5×8 pixel grid, so any character from the ASCII set can be shown. Custom signs and designs are also supported, although they require specific methods and have some limitations. This display module is used in a great many commercial projects, and nearly every programming language has a library for it; these ready-made libraries make it easy to interface with other devices.

Pinout Diagram of the 16×2 LCD
The pins of the 16×2 LCD module fall into two groups, data pins and command pins, and each pin plays a role in controlling what appears on the display. All the input/output pins are shown in the pinout diagram.

16×2 LCD Pin Configuration and Working of Each Pin
The pins are grouped as: Power Pins, Control Pins, Data Pins, and LED (backlight) Pins.

16×2 Liquid Crystal Display Construction
The LCD uses registers to store data and commands. The command register stores instructions for the different functions that can be performed on the screen, while the data register stores the characters to display and passes them to the controller. The liquid crystal layer is sandwiched between two glass sheets, together with two polarizing sheets that block light.

Working Principle
The basic principle of an LCD is that light passes from one polarizing sheet to the other through the liquid crystal molecules. When driven, the molecules re-align at up to 90 degrees, which allows the polarized sheet to pass light through that pixel. The molecules are therefore responsible for what each pixel shows: a character is formed by the pixels that absorb light.
To show a value, the molecules change their position, which changes the angle of the light. This deflection leaves the eye seeing light only from the surrounding area, so the light-absorbing part forms the dark digits and symbols on the pixel grid — the visible data is exactly where the light gets absorbed. The data passed to the molecules stays displayed until it is changed.

How to Use a 16×2 LCD?
The LCD has data, command, and control registers, which together control the different functions of the display. The data and command registers take their input from the digital pins D0–D7, and the control pins select between the command and data registers. The display itself is made of liquid crystals, and two driver ICs on the module let external devices control it.

There are two ways to control the LCD. The first is to understand how the internal registers operate and drive them directly. The second is easier and simpler: use a library. Because the LCD is used in almost every field, practically all boards and microcontrollers have LCD libraries. The control method and the circuit differ between the two approaches.

16×2 LCD Direct Programming Method
To control the LCD without a library, all 8 data pins are needed, so first understand how the control pins operate. The first control pin is RS, which selects between the command and data registers. After data is placed on the digital inputs, it goes to either the data or the command register: a LOW level on RS sends it to the command register, and a HIGH level sends it to the data register. Different values on the data pins trigger different functions on the LCD.
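Not from the original article — the RS register selection described above can be modeled in plain C to make it concrete (all names here are hypothetical, not a real driver):

```c
#include <assert.h>  /* only for the quick self-checks */

/* Hypothetical simulation: RS = 0 routes a byte to the command register,
 * RS = 1 routes it to the data register. On real hardware each write
 * would also hold R/W LOW and end with a LOW-to-HIGH pulse on EN to
 * latch the byte into the module. */
enum { MAX_LOG = 16 };
static unsigned char cmd_log[MAX_LOG], data_log[MAX_LOG];
static int n_cmd = 0, n_data = 0;

void lcd_write(unsigned char value, int rs)
{
    if (rs == 0)
        cmd_log[n_cmd++] = value;   /* command: clear, cursor move, ... */
    else
        data_log[n_data++] = value; /* data: ASCII code of the character */
}
```

A command byte and a data byte take the same eight data lines; only the RS level decides which register receives them.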
The commands of the LCD and their functions are listed in the commands table.

A command is read by the module only when the RS (command) pin is LOW and, in addition, the R/W pin is LOW; a LOW on R/W indicates that the LCD is reading from the external pins. The third control pin is the enable pin: it needs a LOW-to-HIGH pulse to transfer the command from the pins into the LCD. Once a command has been sent, it stays in effect until a new, opposing command replaces it. All commands are sent through the data pins.

Data Display
Displayed data also arrives through the data pins. A data byte goes to the data register whenever RS is HIGH. Any character in the ASCII set — letters or other symbols — can be shown on the LCD. After the data has been placed on the pins, the enable pin again needs a LOW-to-HIGH pulse, which only has to last a few milliseconds. To display data correctly, the appropriate commands must be stored in the module first; otherwise the LCD displays the data according to whatever commands were sent previously. So always send the commands before showing any data.

Custom Character Display
Displaying a custom character is not hard, but it requires following a specific protocol: the pixel pattern for the custom character must first be stored in the LCD's CGRAM.

Commands to Send and Store Data
The commands in the table help send and store the custom pixel data. Once the character has been stored, a command must be sent to the LCD to show it; the character will not appear until that command is received.

Programming with the Library Method
The library method usually sends the data over just four data pins. As a reference, we will use an Arduino here.
With the library method, the data and control pins are assigned once and everything else is handled in software. Here's the circuit diagram, and the following code displays some text:

#include <LiquidCrystal.h>  // Arduino LCD library

// Replace these with the Arduino pins your LCD is wired to.
const int rs = 12, en = 11, d4 = 5, d5 = 4, d6 = 3, d7 = 2;
LiquidCrystal lcd(rs, en, d4, d5, d6, d7);

void setup() {
  lcd.begin(16, 2);           // initialize a 16-column, 2-row display
}

void loop() {
  lcd.clear();                // clear the display
  lcd.setCursor(0, 1);        // column 0, second row
  lcd.print("hello, world!");
  delay(500);
}

The display size is initialized with lcd.begin(); lcd.clear() clears the display; lcd.setCursor() sets the cursor's starting position; and lcd.print() sends the text to the LCD.

16×2 LCD Tutorials and Projects
These are the tutorials and projects to explore this module further.
- LCD interfacing with Arduino UNO R3
- 16×2 LCD Interfacing with PIC Microcontroller
- I2C LCD interfacing with ESP32
- LCD interfacing with MSP430 LaunchPad
- Scrolling text on LCD using MSP430 microcontroller
- Scrolling text on LCD using PIC microcontroller | Mikro C
- I2C LCD interfacing with ESP32 and ESP8266 in Arduino IDE
- Display GPS coordinates on LCD using PIC microcontroller

16×2 LCD Features
- Usable with any CMOS/TTL device.
- All alphabets and digits in the ASCII code can be drawn on the LCD.
- Operates at 4.7 to 5.3 V.
- Each custom symbol is 5×8 pixels.
- Usable with both 4-bit and 8-bit data input.

16×2 LCD Applications
- Used in most applications that only have small values to show.
- Most commercial meters use this module to present their output.
- Still widely used in toys and development projects.
- In black-and-white printers, it shows the printer settings and status.
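Not from the original article — to make the CGRAM custom-character idea from earlier concrete: a custom 5×8 character is just eight row bytes, one per pixel row, with only the low 5 bits (the columns) used. The pattern below, a hollow box, is an arbitrary example:

```c
#include <assert.h>  /* only for the quick self-checks */

/* Sketch only: eight row bytes describing a 5x8 glyph for CGRAM.
 * Only bits 0-4 of each byte are meaningful (5 columns per row). */
const unsigned char glyph[8] = {
    0x1F,  /* ***** */
    0x11,  /* *...* */
    0x11,  /* *...* */
    0x11,  /* *...* */
    0x11,  /* *...* */
    0x11,  /* *...* */
    0x1F,  /* ***** */
    0x00   /* cursor row, usually left blank */
};
```

With the Arduino LiquidCrystal library, such a pattern is stored with lcd.createChar(0, ...) in setup() and shown with lcd.write((uint8_t)0).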
2D Diagram

Alternative Displays:
- Monochrome 0.96″ OLED Display
- Nokia 5110 LCD Module
- 2.4″ TFT LCD Display Module overview
- TFT Display
- TM1637 – Grove 4-Digit Display Module
- 7-Segment Display

Other Electronic Components:
https://microcontrollerslab.com/16x2-lcd-pinout-working-examples-programming-applications/
Challenge

Have you ever tried making a sankey diagram with d3+react, I can't seem to make it work for some reason.:/

Emil

No Emil, I have not. Let's give it a shot! Thanks for finding us a dataset that fits :)

My Solution

What is a Sankey diagram?

Sankey diagrams are flow diagrams. They're often used to show flows of money and other resources between different parts of an organization. Or between different organizations. Sankey originally designed them to show energy flows in factories.

Vertical rectangles represent nodes in the flow, lines connecting the rectangles show how each node contributes to the inputs of the next node. Line thickness correlates to flow magnitude.

One of the most famous Sankey diagrams in history is this visualization of Napoleon's invasion into Russia. No I'm not quite sure how to read that either. But it's cool and it's old ✌️

How do you make a sankey with React and D3?

Turns out building a Sankey diagram with React and D3 isn't terribly difficult. A D3 extension library called d3-sankey provides a generator for them. Your job is to fill it with data, then render.

The dataset Emil found for us was specifically designed for Sankey diagrams so that was awesome. Thanks Emil. 🙏🏻

I don't know what our data represents, but you gotta wrangle yours into nodes and links.

- nodes are an array of representative keys, names in our case
- links are an array of objects mapping a source index to a target index with a numeric value

{
  "nodes": [
    { "name": "Universidad de Granada" },
    { "name": "De Comunidades Autónomas" }
    // ...
  ],
  "links": [
    { "source": 19, "target": 26, "value": 1150000 },
    { "source": 0, "target": 19, "value": 283175993 }
    // ...
  ]
}

Turn data into a Sankey layout

We can keep things simple with a functional component that calculates the Sankey layout on the fly with every render. We'll need some color stuff too. That was actually the hardest, lol.
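Not from the original post — if your raw data is a flat list of (source name, target name, value) rows rather than index-based links, a small helper (names are mine) can build that { nodes, links } shape:

```javascript
// Hypothetical helper: turn rows like ["A", "B", 10] into the
// { nodes, links } shape d3-sankey expects, assigning each distinct
// name an index in order of first appearance.
function toSankeyData(rows) {
  const names = [];
  const indexOf = (name) => {
    let i = names.indexOf(name);
    if (i === -1) {
      i = names.length;
      names.push(name);
    }
    return i;
  };
  const links = rows.map(([source, target, value]) => ({
    source: indexOf(source),
    target: indexOf(target),
    value,
  }));
  return { nodes: names.map((name) => ({ name })), links };
}
```

The result plugs straight into the sankey() generator below.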
import { sankey, sankeyLinkHorizontal } from "d3-sankey";

// ...

const MysteriousSankey = ({ data, width, height }) => {
  const { nodes, links } = sankey()
    .nodeWidth(15)
    .nodePadding(10)
    .extent([[1, 1], [width - 1, height - 5]])(data);

  const color = chroma.scale("Set3").classes(nodes.length);
  const colorScale = d3
    .scaleLinear()
    .domain([0, nodes.length])
    .range([0, 1]);

It's called MysteriousSankey because I don't know what our dataset represents. Takes a width, a height, and a data prop.

We get the sankey generator from d3-sankey, initialize a new generator with sankey(), define a width for our nodes and give them some vertical padding. Extent defines the size of our diagram with 2 coordinates: the top left and bottom right corner.

Colors are a little trickier. We use chroma to define a color scale based on the predefined Set3 brewer category. We split it up into nodes.length worth of colors - one for each node. But this expects inputs like 0.01, 0.1 etc. To make that easier we define a colorScale as well. It takes indexes of our nodes and translates them into those 0 to 1 numbers. Feed that into the color thingy and it returns a color for each node.

Render your Sankey

A good approach to render your Sankey diagram is using two components:

- <SankeyNode> for each node
- <SankeyLink> for each link between them

You use them in two loops in the main <MysteriousSankey> component.

return (
  <g style={{ mixBlendMode: 'multiply' }}>
    {nodes.map((node, i) => (
      <SankeyNode
        {...node}
        color={color(colorScale(i)).hex()}
        key={node.name}
      />
    ))}
    {links.map((link, i) => (
      <SankeyLink
        link={link}
        color={color(colorScale(link.source.index)).hex()}
      />
    ))}
  </g>
);

Here you can see a case of inconsistent API design. SankeyNode gets node data splatted into props, SankeyLink prefers a single prop for all the link info. There's a reason for that and you might want to keep to the same approach in both anyway.
Both also get a color prop with the messiness of translating a node index into a [0, 1] number passed into the chroma color scale, translated into a hex string. Mess.

<SankeyNode>

const SankeyNode = ({ name, x0, x1, y0, y1, color }) => (
  <rect x={x0} y={y0} width={x1 - x0} height={y1 - y0} fill={color}>
    <title>{name}</title>
  </rect>
);

SankeyNodes are rectangles with a title. We take top left and bottom right coordinates from the sankey generator and feed them into rect SVG elements. Color comes from the color prop.

<SankeyLink>

const SankeyLink = ({ link, color }) => (
  <path
    d={sankeyLinkHorizontal()(link)}
    style={{
      fill: 'none',
      strokeOpacity: '.3',
      stroke: color,
      strokeWidth: Math.max(1, link.width),
    }}
  />
);

SankeyLinks are paths. We initialize a sankeyLinkHorizontal path generator instance, feed it link info and that creates the path shape for us. This is why it was easier to get everything in a single link prop. No idea which arguments the generator actually uses.

Styling is tricky too. Sankey links are lines. They don't look like lines, but that's what they are. You want to make sure fill is set to nothing, and use strokeWidth to get that nice volume going. The rest is just colors and opacities to make it look prettier.

A sankey diagram comes out 👇

You can make it betterer with some interaction on the nodes or even links. They're components so the world is your oyster. Anything you can do with components, you can do with these.
https://reactfordataviz.com/cookbook/12/
Created on 2010-06-30 12:28 by holdenweb, last changed 2011-01-30 06:23 by r.david.murray. This issue is now closed.

The attached program completes in less than half a second under Python 2.5. Under Python 3 it takes almost three minutes on the same system. The issue appears to be heavy use of decoding, at least in a Windows system, during creation of the mailbox toc. The disparity may be less remarkable when not profiling. Further attachments will include a test data file (a Thunderbird mailbox taken from the same host system) and profiler outputs from the 2.5 and 3.1 runs of this program. Thread at refers to this issue. Posted files are already attached herewith.

I can confirm on Ubuntu and with other example mailboxes. Looping through the messages and printing the subjects takes around 200-300 times longer under Python 3 than under Python 2.

Can you confirm using the Py3.2 head? Am curious if Antoine's optimizations helped here.

3.2 sees a small improvement when running the Steve test:

Python 2.6.6:  0.0291s
Python 3.1.2:  31.1s
Python 3.2b2+: 28.8s

This is Ubuntu 10.04 on ext3, with all Pythons compiled from source, with no configure attributes except a prefix.

I wonder if the differences between different unix systems can have to do with what the default system encoding is? Mine is UTF-8.

The aforementioned python-dev thread (available at ) explains things quite well. The mailbox module needs to be modified to use binary I/O, both for functionality and for speed. Right now, I don't know how the mailbox module can be useful in py3k (you'd quickly run into unicode errors as soon as you try to read an email with another charset, I think).

RDM, you were suggested for this by Thomas Wouters (who wrote much of the existing code). Are you up for it? With the module being so slow as to be unusable, this can be considered a bugfix, so it is okay if the fix goes into 3.2.1.

I've been intending to take a look at this issue at some point, but am not sure when I'd get to it.

I took a quick look. It does seem to me that it is true that for data-validity purposes the message files need to be opened in binary and fed to the email package in binary. But this is so that the message will get decoded using the correct character sets, not to avoid the decoding. In Python3 it makes no sense to manipulate the subjects as binary strings, so the example of "looping through the messages and printing the subjects" is still going to require decoding. There may still be ways to make it more efficient for common use cases, but that will require more detailed analysis.

Re-New to Python - Re-Started with Py3K in 2011. 'Found myself in a dead-end after 10 days of work because a KOI8-R spam mail causes the file I/O decoding process to fail - and there is NO WAY TO HANDLE THIS with mailbox.py! (Went to python.org, searched for mailbox.py, was redirected to Google and found NOTHING related to this problem. FINALLY found the IssueTracker but was too stupid to re-search. Well. Put an issue 10995 which was wrong - unfortunate.) But now I will spend this entire day to back-port my script to Python 2.7 (and i did not work with Python for some six years)! I mean - the plan to rewrite the entire mailbox.py exists for about six months now, but mailbox.py is included in the basic library, documented in the library book - but not a single word, not a single comment states that it is in fact *UNUSABLE* in Py3K! Wouldn't it be sufficient *in the meanwhile* to apply the 10-minutes-work patch mentioned in my issue 10995? I know it's almost as wrong, but it would gracefully integrate in my fetchmail(1)/mutt(1) local en_GB.UTF-8 stuff 8-}. Python 3.2 is about to be released in two weeks - shall this unusable module be included the very same way once again? Thanks for reading this book.

I'm afraid so. The python3 uptake process was expected to take five years overall, and we are only up to about the second year at this point.
I took a quick look. It does seems to me that it is true that for data-validity purposes the message files need to be opened in binary and fed to the email package in binary. But this is so that the message will get decoded using the correct character sets, not to avoid the decoding. In Python3 it makes no sense to manipulate the subjects as binary strings, so the example of "looping through the messages and printing the subjects" is still going to require decoding. There may still be ways to make it more efficient for common use cases, but that will require more detailed analysis. Re-New to Python - Re-Started with Py3K in 2011. 'Found myself in a dead-end after 10 days of work because a KOI8-R spam mail causes the file I/O decoding process to fail - and there is NO WAY TO HANDLE THIS with mailbox.py! (Went to python.org, searched for mailbox.py, was redirected to Google and found NOTHING related to this problem. FINALLY found the IssueTracker but was too stupid to re-search. Well. Put an issue 10995 which was wrong - unfortunate.) But now I will spend this entire day to back-port my script to Python 2.7 (and i did not work with Python for some six years)! I mean - the plan to rewrite the entire mailbox.py exists for about six months now, but mailbox.py is included in the basic library, documented in the library book - but not a single word, not a single comment states that it is in fact *UNUSABLE* in Py3K! Wouldn't it be sufficient *in the meanwhile* to apply the 10-minutes-work patch mentioned in my issue 10995? I know it's almost as wrong, but it would gracefully integrate in my fetchmail(1)/mutt(1) local en_GB.UTF-8 stuff 8-}. Python 3.2 is about to be released in two weeks - shall this unusable module be included the very same way once again? Thanks for reading this book. I'm afraid so. The python3 uptake process was expected to take five years overall, and we are only up to about the second year at this point. 
So while you may have been away from Python for 6 years, you came back right in the middle of an unprecedented transition period. I agree that it is unfortunate that a shipping library is not functioning correctly with respect to the Python3 bytes/string separation, but no one had tried to use mailbox in python3 enough to have encountered this problem. You will note that that this bug report was a *performance* bug report initially, and as such had lower priority. The encoding issue was recognized much more recently. And before that could be fixed correctly, the email package had to be fixed to handle bytes input. That happened only just before the end of the beta phase for 3.2. Now it is too late to make further API changes for 3.3, and in any case it seems counter-productive to make an API change that we don't really want in the library long term. You could work up a patch to fix this, use it locally, and contribute it so that it makes it in to 3.3. Perhaps if you and/or someone else can come up with a patch before RC2 it could even go in to 3.2. I haven't looked at it (yet), but I'm hoping that the patch isn't actually that hard to fix the encoding issues (as opposed to the performance issue, which may take more work). Or perhaps you could monkey-patch in your encoding fix until 3.3 comes out. Of course, right now using 2.7 with an eye to staying compatible with python3 is also a perfectly sensible option. That should have been "too late to make API changes for 3.2". ISTM an "API change" is okay if it fixes a critical usability bug. Also, if this is going to ship as-is, the docs should get a big warning right at the top. Perhaps the source code should also emit a notice that the module is hosed so that people like Steffen don't waste tons of time on hopeless endeavors. 
mailbox.patch:
- open files in binary mode, not as text
- parse as bytes, not as Unicode
- replace email.generator.Generator() by email.generator.BytesGenerator()
- use .message_from_bytes() instead of .message_from_str()
- use .message_from_binary_file() instead of .message_from_file()
- use BytesIO() instead of StringIO()
- add more methods to _ProxyFile: readable, writable, seekable, flush, closed
- don't use universal newline (not supported by binary files): I don't remember if the email binary parser supports universal newlines directly

I don't know anything about the mailbox module. I just replaced str functions by bytes functions. Keep Unicode for some things: MH.get_sequence() reads the file using UTF-8 encoding, labels and sequences.

The patch has to be tested on Windows (Windows uses \r\n newlines). I only tested on Linux.

While working on this issue, I found and fixed two bugs in the email binary parser: r88196 and r88197.

Nice. Thanks Victor.

Thanks, Victor, you beat me to it :) I'll see if I can review this tomorrow, or if not I can probably do it Thursday.

I reverted r88197 because it was incorrect and caused an email test to fail. Once I come up with a test for it I'll fix it correctly. I should write a test for the other one, too, even though it is trivial.

Just a note: being as we're in RC, you should get a review even for seemingly trivial fixes.
All test_email and test_mailbox pass with mailbox.patch+BytesGenerator_handle_text.patch on Windows except one test: ====================================================================== ERROR: test_set_item (test.test_mailbox.TestBabyl) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\victor\py3k\lib\test\test_mailbox.py", line 286, in test_set_item self._check_sample(self._box[key1]) File "C:\victor\py3k\lib\mailbox.py", line 76, in __getitem__ return self.get_message(key) File "C:\victor\py3k\lib\mailbox.py", line 1190, in get_message body = self._file.read(stop - self._file.tell()) ValueError: read length must be positive or -1 There's a missing conversion in mailbox.patch. Running with -bb shows the issue. Here is an updated patch. Haypo: yeah, in an ideal world Generator would use get_payload() and not _payload, but I had to make some compromises with purity of model separation in order to achieve the practical goal of handling bytes usefully. I'd like to fix that as I work on email in 3.3. pitrou> There's a missing conversion in mailbox.patch. pitrou> Running with -bb shows the issue. pitrou> Here is an updated patch. Good catch: test_mailbox now pass on Windows. -- Some remarks on mailbox2.patch. get_string() returns a bytes object: I propose to rename it to get_bytes(): """Return a *byte* string representation or raise a KeyError.""" The following comment is outdated, target have to be a *binary* file: def _dump_message(self, message, target, mangle_from_=False): # This assumes the target file is open in *text* mode ... get_file(): should we specify that the file-like object is a binary file? MH.get_sequences() and MH.set_sequences() opens .mh_sequences file in text mode from the locale encoding. I don't know if the locale encoding is a good choice. Does this file contain non-ASCII characters? 
Should we use ASCII or UTF-8 encoding instead, or parse the file in binary, and only decode requested values from ASCII? Since all tests of test_mailbox now pass on Windows, it looks like the "universal newline" thing still work. But how can I be sure? - from_line = 'From MAILER-DAEMON %s' % time.asctime(time.gmtime()) + from_line = b'From MAILER-DAEMON ' + time.asctime(time.gmtime()).encode() Is UTF-8 the right encoding to encode a timestamp? Or should we use something like "=?UTF-8?q?...?=" ? MH.set_sequences() does... sometimes... decode the sequence name from UTF-8. I don't understand why I had to add the following if: - f.write('%s:' % name) + if isinstance(name, bytes): + name = name.decode() + f.write(name + ':') Is it correct to decode the timestamp from UTF-8? And is the following change correct? *********** - maybe_date = ' '.join(self.get_from().split()[-5:]) + maybe_date = b' '.join(self.get_from().split()[-5:]) try: + maybe_date = maybe_date.decode('utf-8') message.set_date(calendar.timegm(time.strptime(maybe_date, '%a %b %d %H:%M:%S %Y'))) - except (ValueError, OverflowError): + except (ValueError, OverflowError, UnicodeDecodeError): pass *********** The following change is just enough to fix mailbox. But it would maybe be better to inherit from RawIOBase instead and implement all methods. _PartialFile class might be moved into the io module. All of this can be done later. ****** + def readable(self): + return self._file.readable() + def writable(self): + return self._file.writable() + + def seekable(self): + return self._file.seekable() + + def flush(self): + return self._file.flush() + + @property + def closed(self): + return self._file.closed ****** I haven't looked at the items Haypo has pointed to yet, but I have looked at the API issues (get_string, add, etc). 
It seems to me that we have to make a decision here: do we break API backward compatibility and convert to consuming and emitting bytes pretty much everywhere we handle non-Message messages, or do we maintain backward compatibility by extending the APIs to handle bytes as well as strings? In the email module I took the second approach. I'm leaning toward that approach here as well, but it is a little messier than it was for email, where there ended up being distinct interfaces for bytes versus string. Here we'd have get_string and get_bytes, but it seems more sensible to have add and similar methods accept both bytes and strings rather than making duplicate methods for each of the cases (since they are polymorphic between string and Message already). And then there is get_file, which is *documented* as returning a binary file, but in fact has been returning a text file. It makes sense that it should return a binary file. So that's one backward incompatible bug fix already....unless we introduce get_binary_file and change the docs. I think we need opinions from more than just haypo and I, but given that we are in RC I'm leaning toward a polymorphic API for add and friends and new get_bytes and get_binary_file methods. That's more work than the patch haypo produced, though, since it requires some new code and tests in addition to what he's already done. Either approach introduces API changes in an RC, but unless we want to continue to ship a mailbox module that is as half-functional (or less) as what email was in 3.1, we have to do something. I should have some time to work on this tomorrow, and a bit more on Friday, but we're getting down to the wire here. Attached is a patch that builds on Victor's patch, but takes the approach I discussed of maintaining backward compatibility (for the most part; see below). The test suite in this version is substantially unchanged. The major changes are adding tests for the bytes input and the new method (get_bytes). 
The changes to existing test methods are to methods that test internal interfaces (because those now handle bytes, not string). I've included doc changes, which are mostly adding notes about where bytes are accepted (add, __setitem__), and for the new get_bytes method. get_string is now implemented by calling get_bytes, passing the result to email.message.Message, and then calling as_string. This defeats the efficiency purpose of using get_string, but in that use case code should really be using get_bytes. I kept the change to get_file: it returns a binary file, as currently documented. I think this is less likely to cause backward compatibility issues (assuming any 3.x code exists that uses mailbox!) than get_string returning bytes (or dissapearing) would. As with email.message.Message's get_unixfrom method, get_from returns a string, and set_from takes a string. Although there are no real standards for this "header", I believe that it is restricted to ASCII, and have written the code accordingly. This is my answer to Victor's question about maybe_date and the from line: I think we should use ascii as the encoding. That is certainly true for the date; asctime does not use the locale, and English date fields are definitely the de-facto standard for From lines. I haven't looked at the mh_sequences question yet. I don't think there are any formal restrictions on what characters can be used in a sequence name, but I haven't looked to see if there are any standards documents for mh. I'll test to see if my nmh installation accepts non-ascci chars for sequence names tomorrow. I'm also going to try to go over Victor's changes section by section, but everything I've looked at other than the mh_sequences issue he raised looks good to me so far. I note that we still don't have an RM call on whether or not this can go in if it passes review. Oh, also note that neither Victor's patch nor my patch have any tests for non-ASCII characters. 
Some should be added :) After cloning branches/py3k (i now have three different detached repo snakes in my arena (2.7,3.1,py3k), by the way - not bad for a greenhorn, huh?). I've applied RDMs patch from msg127245. Note: the test mails are *malformed*! Stuff in brackets are my error messages, rest is "str(ex)". 1. Single Latin-1 character in "From:" (00F6;LATIN SMALL LETTER O WITH DIAERESIS): [ERROR: failed to handle box "/Users/steffen/tmp/au.latin1":] expected string or buffer 2. Whatever-Encoding in "Subject:" (see example 2 below): [PANIC: Box source-changes.mdir: message-add failed, mails may be lost:] 'ascii' codec can't encode character '\ufffd' in position 8: ordinal not in range(128) Here are two stripped header fields pasted in an UTF-8 environment: From: "SAJATNAPTAR.COM" <info@sajatnaptar.com>$ Subject: Falinaptár ingyenes házhozszállítással. Már rendeltél? Olvass el! From: "Syria Trade Center :" <no-reply@syriatc.com>$ Subject: ÅÝÊÊÇÍ ÇáãßÊÈ ÇáÑÆíÓí ááãÑßÒ - æÊÞæíã 2011 Steffen: thanks for testing. Do those error messages have tracebacks? Can you post them? Can you post example messages and a short program that demonstrates the problem? I'm going to be creating some non-ascii test cases, but any additional info you can provide will give me a leg up on that. Indeed i tried to create tracebacks (even with "import traceback"), but these all end up in my code (and all the time). I have not yet figured out how to create tracebacks which leave my code and reach the source, which surely must be somewhere in email/ - even with the -d command line switch. Wait a bit for the rest - i would indeed post my halfway-thought-through-and-developed S-Postman if you would ask for it. It however simply uses the email package (mailbox,email,FeedParser) and is a 30KB thing with config-file parsing etc.. I don't see those error messages in the mailbox source. I'm guess your application isn trapping the errors in a try/except. 
In that case, just do a bare 'raise' in the except clause, and you should get the full traceback. I'm sure I'll discover problems just using your simple examples. Likely your full code would be more of a distraction than a help, unless we end up in a situation where I've fixed all the bugs I can find and you are still having problems. You're indeed right, i've overseen a try..catch! I'll even be able to give you some fast code hints now (and i'll be offline the next few hours - the mails are simply mails with illegal charsets, say): Traceback (most recent call last): File "/Users/steffen/tmp/y/s-postman.py", line 1098, in <module> sys.exit(main()) File "/Users/steffen/tmp/y/s-postman.py", line 1088, in main xclass = xclass() # Impl. class chosen upon commline args File "/Users/steffen/tmp/y/s-postman.py", line 951, in __init__ self._walk() def _walk(self): verb("--dispatch: starting iteration over input boxes") for b, t in _Dispatch_Boxes: box = open_mailbox(b, type=t, create=False) try: self._do_box(box) except Exception as e: raise File "/Users/steffen/tmp/y/s-postman.py", line 958, in _walk self._do_box(box) def _do_box(self, box): cnt = len(box) log("* Box contains ", cnt, " messages") for nr in range(cnt): log(" * Dispatching message ", nr+1) msg = box.get_message(nr) Ticket.process_msg(msg) log(" @ Dispatched ", cnt, " messages, finished box") File "/Users/steffen/tmp/y/s-postman.py", line 982, in _do_box Ticket.process_msg(msg) @staticmethod def process_msg(msg): ticket = Ticket(msg) (match, ruleset, to_box) = Ruleset.dispatch_ticket(ticket) if not match: to_box.add_ticket(ticket) return splitter = to_box.get_archive_splitter() if not splitter or config.get_keep_archives(): to_box.add_ticket(ticket) return log(" @ Treating ticket ", ticket._id, " as archive, splitting") for msg in splitter(msg): ticket = Ticket(msg) to_box.add_ticket(ticket) File "/Users/steffen/tmp/y/s-postman.py", line 898, in process_msg to_box.add_ticket(ticket) def add_ticket(self, 
ticket, ignore_errors=False): efun = panic if ignore_errors: efun = error log(" @ Saving ticket ", ticket.get_id(), " in \"", self._ident, "\"") mbox = self._mailbox if not mbox: mbox = os.path.join(config.get_folder(), self._path) mbox = open_mailbox(mbox, type=self._type, create=True) self._mailbox = mbox try: mbox.lock() except Exception as e: efun("Could not gain mailbox lock!") try: mbox.add(ticket.get_msg()) mbox.flush() except Exception as e: #efun("Box ", self._ident, # ": message-add failed, ", # "mails may be lost: ", str(e)) raise File "/Users/steffen/tmp/y/s-postman.py", line 680, in add_ticket mbox.add(ticket.get_msg()) File "/Users/steffen/usr/lib/python3.2/mailbox.py", line 259, in add self._dump_message(message, tmp_file) File "/Users/steffen/usr/lib/python3.2/mailbox.py", line 205,) What is the data type returned by your get_msg? I bet it is string, and email can't handle messages in string format that have non-ASCII characters (I'm adding an explicit error message for this). You either need to use a Message object, or, more likely in your case, change the return type of get_msg to be bytes. I'm updating the patch to contain a couple tests using non-ASCII. More are needed. Before this patch, one could process a file containing non-ASCII characters as text, and if your default encoding happened to be able to decode it, things would appear to more or less work. In real life doing this is most likely to produce mojibake. So the patch now rejects string input that contains non-ASCII characters with a helpful message about using bytes or Message input. Email doesn't handle messages in string format that contain non-ASCII characters, either (which, I think, was the source of the error Steffen encountered). This means that the string backward-compatibility is reduced to ascii-only messages. But if mailbox in py3 is being used successfully by anybody, it is most likely to be someone processing ascii only messages for some reason. 
> What is the data type returned by your get_msg? I bet it is string, > and email can't handle messages in string format that have non-ASCII > characters (Now i see that the local names 'box', 'mbox' and 'mailbox' have become somewhat messed up, which may have been misleading.) The answer is (somewhat) definitely no: class Ticket: @staticmethod def process_msg(msg): ticket = Ticket(msg) ... def __init__(self, msg): global _Ticket_Count _Ticket_Count += 1 self._id = _Ticket_Count self._msg = msg log(" @ Creating ticket number ", self._id, ":") ... instantiated by either: msg = mbox.get_message(nr) # It's a Mailbox Ticket.process_msg(msg) ... or: def openbsd_splitter(msg): if msg.is_multipart(): log(" @ Multipart message: not splitting") return [msg] i = msg["Subject"] if i is None or "digest," not in i: log(" @ \"digest,\" not in Subject: not splitting") return [msg] # Real splitter: nl, SPLITTER, nl, Date: header.. SPLITTER = "------------------------------" def __create_msg(charset, lines): try: fp = email.feedparser.FeedParser() headerok, lastnl = False, False while len(lines) > 0: l = lines.pop(0) if SPLITTER in l and lastnl: break lastnl = not len(l.strip()) if not headerok: if lastnl: headerok = True else: l = split_header_line_helper(.....) fp.feed(l + "\n") return fp.close() except Exception as e: log(" @ Error - not splitting: ", str(e)) return None result = list() lines = msg.get_payload().splitlines() while len(lines): l = lines.pop(0) if SPLITTER in l: break while len(lines): l = lines[0] if l.startswith("Date: "): nm = __create_msg(charset, lines) if not nm: return [msg] result.append(nm) else: lines.pop(0) return result ... which then ends up as the shown for msg in splitter(msg): ticket = Ticket(msg) to_box.add_ticket(ticket) # This is 'class Box' ... and it's the very Box.add_ticket() which has been shown in msg127313. 
That's all - note however that the email.message.Message headers may either be strings or 'Header' objects - this is work in transition (i somehow want to deal with these malformed mails and at least encapsulate all str() headers to 'Header' headers with the fallback 'quopri' encoding ISO-8859-1 - like this the mail will at least be clean on the disk ...) Well, that's a bunch of code, and I'm afraid I don't know what your answer to my question was. What error do you get now if you use the new version of mailbox3.patch? If you feed the new mailbox/email bytes, it will preserve the bytes as is, as long as you don't try to convert the invalid headers to strings. If you convert them to string (by accessing them through the Message object), it will encode them as 'unknown-8bit' using quopri or base64 as appropriate (ie: depending on how many non-ascii chars there are). If you want instead to guess that they are latin-1, you can call decode_header on the stringified version to get back the original bytes, and then substitute your preferred guessed charset for the 'unknown-8bit' charset and go from there to unicode. (For Python3.3 I plan to provide tools to make this kind of processing much simpler.) Added two more tests of non-ASCII. I think the tests now cover the necessary cases. I still want to do a full code review tomorrow, but I think the patch is in final form if anyone else is available to do a review as well. Georg, are you OK with this going in? I think it is an important part of the "email is fixed" story and thus worth bending the rules for. I missed your mailbox3.patch, but now i've merged it in. One error changed, it now happens when a re.search is applied to a header value and thus seems to match what you say. I'm not able to understand this error this evening, but i will review it once again tomorrow and will notify this issue if it seems to me it's something different than what you say. 
The other error almost remains the same from my point of view, again, i'm out today, but here i'll add the traceback again. ... as before ... File "s-postman.py", line 685, in add_ticket mbox.add(ticket.get_msg()) File "/Users/steffen/usr/lib/python3.2/mailbox.py", line 269, in add self._dump_message(message, tmp_file) File "/Users/steffen/usr/lib/python3.2/mailbox.py", line 215,) If you are using the most recent mailbox3 patch (I should have renamed it, sorry...I've no done so to make it clear) you should be getting an error message that tells you to use binary or Message. So I don't understand how you are getting this message. I still don't know what it is you are passing to the add method. RC2 will be cut starting Sunday morning CET, and after that we will be making no further non-critical changes. So if you've got bugs, best we find them before the end of the day tomorrow ;) I'd really like someone else to throw a pair of eyes at the code changes before it is committed. But yes, I will allow this into rc2, since a completely broken module isn't really what a minor release is about. RDM: it seems i was too tired to get your messages right last evening! Indeed it's now completely my fault, i should inspect the content further in respect to the str/bytes etc. stuff! Thus - i will now need three or four days to cleanup my hacky code before output of this broken thing is of any further use. (If afterwards something new shows up i will of course post a feedback.) It may be of interest for you, however, that speed broke down heavily once again, so that my dumb thing did not take 2 seconds as it did after applying the first patch, but almost 8 seconds. (It takes max. 1.1 seconds on Python 2.7.) Beside that there came to me a "hhuuuuh"! Python is a lot about testing! I would have been more of a help if i would have simply offered some test cases? !! Next time! RDM: thanks a lot for spending long hours of work on this issue! 
+ if isinstance(message, io.TextIOWrapper): + # Backward compatibility hack. + message = message.buffer Is it a good thing to parse a mailbox using a text file? If not, we should emit a warning and maybe remove this feature in Python 3.3. Victor: yes, I was thinking that when I added that comment but forgot to come back to it. Thanks for spotting that. Another thing I forgot about yesterday is that I activated the commented out statements that do linesep transformations on the binary file data. I'm guessing those were commented out when the module was converted to using text files, since the text file would do the transformation itself. Your patch left them commented out, and the tests passed on Windows. If the tests *still* pass on windows with them uncommented, then that will prove that, like the old email tests, the line ending variations aren't really being tested. But, the important thing here is that I haven't run the tests on Windows yet, and that certainly needs to be done. OK, I've added deprecation warnings for using StringIO or text mode files as input. I found one bug thereby, but it is a bug that pre-existed the patch (see issue 11062). I've completed my code review. To address Victor's question about the mh-sequences file: nmh rejects non-ascii sequence names, so the file should contain only ASCII. The man page specifies that sequences are composed only of alphanumeric characters. I think opening the file in text mode using the system default encoding is probably fine, since if any mh program does support non-ascii sequence names that is likely what it would do as well. Of course, in the future I would think utf-8 would be preferred, but I guess we can deal with that issue if we get a bug report. We're maintaining backward compatibility with 3.1 here, so it's not really an issue for this patch. As far as the 'if bytes' business goes, the tests pass for me without those lines, and it looks to me like they should not be needed. 
On IRC Victor said he thought he may have introduced those before the patch was finished. We have decided to omit them. I think I've addressed the remainder of Victor's issues already. The last step is running the tests on Windows. Attached is the updated patch.

> The last step is running the tests on Windows.
> Attached is the updated patch.

mailbox4.patch doesn't pass on Windows, Raymond is working on a patch.

(I hope you meant I was working on a patch :) Patch is done, but there is one remaining test failure that I'm not sure how to handle. The test is test_add_text_file_warns. The code checks to see if a file is a subclass of io.TextIOWrapper, and if so warns that this is deprecated, grabs the buffer attribute, and reads the file as binary. This works fine, except that in testing it I used a temporary file. On Linux that works great, but on Windows the temporary file is a tempfile._TemporaryFileWrapper, and that is *not* a subclass of io.TextIOWrapper. So the code falls through to the "this must be a binary file" code and fails with a TypeError. Any thoughts on how to handle this edge case? I've got stuff to do this afternoon but I'll check back later to see if anybody has any ideas and, if all else fails, will disable that test on Windows. It means the module doesn't handle temporary-text-file input on Windows correctly, but since we are deprecating text files anyway I think that is not a show stopper.

Benjamin suggested using hasattr(message, 'buffer'), and that works great. The test revealed a bug in the patch, which is now fixed. All tests pass on Windows. As far as I'm concerned the patch is ready to go. Other reviews would of course be welcome (and perhaps required by Georg).

Committed (with RM approval on IRC) in r88252. Note that this does not necessarily solve the performance problem. A new issue should be opened for that if it still exists.
https://bugs.python.org/issue9124
This could be the dumbest question ever asked, but I think it is a total confusion for a newbie.

- Can somebody clarify what is meant by immutable?
- Why is a String immutable?
- What are the advantages/disadvantages of immutable objects?
- Why should a mutable object such as StringBuilder be preferred over String, and vice versa?

A nice example (in Java) will be really appreciated.

An immutable object is an object where the internal fields (or at least, all the internal fields that affect its external behavior) cannot be changed. There are a lot of advantages to immutable strings:

Performance: Take the following operation:

String substring = fullstring.substring(x,y);

The underlying C for the substring() method is probably something like this:

// Assume string is stored like this:
struct String {
    char* characters;
    unsigned int length;
};

// Passing pointers (the Java version would pass object references)
struct String* substring(struct String* in, unsigned int begin, unsigned int end) {
    struct String* out = malloc(sizeof(struct String));
    out->characters = in->characters + begin;
    out->length = end - begin;
    return out;
}

Note that none of the characters have to be copied! If the String object were mutable (the characters could change later) then you would have to copy all the characters, otherwise changes to characters in the substring would be reflected in the other string later.

Concurrency: If the internal structure of an immutable object is valid, it will always be valid. There's no chance that different threads can create an invalid state within that object. Hence, immutable objects are thread safe.

Garbage collection: It's much easier for the garbage collector to make logical decisions about immutable objects.

However, there are also downsides to immutability:

Performance: Wait, I thought you said performance was an upside of immutability! Well, it is sometimes, but not always.
Take the following code:

foo = foo.substring(0,4) + "a" + foo.substring(5); // foo is a String
bar.replace(4,5,"a"); // bar is a StringBuilder

The two lines both replace the fourth character with the letter "a". Not only is the second piece of code more readable, it's faster. Look at how you would have to do the underlying code for foo. The substrings are easy, but now because there's already a character at space five and something else might be referencing foo, you can't just change it; you have to copy the whole string (of course some of this functionality is abstracted into functions in the real underlying C, but the point here is to show the code that gets executed all in one place).

struct String* concatenate(struct String* first, struct String* second) {
    struct String* new = malloc(sizeof(struct String));
    new->length = first->length + second->length;
    new->characters = malloc(new->length);
    int i;
    for(i = 0; i < first->length; i++)
        new->characters[i] = first->characters[i];
    for(; i - first->length < second->length; i++)
        new->characters[i] = second->characters[i - first->length];
    return new;
}

// The code that executes
struct String astring; // a stack-allocated one-character String
char a = 'a';
astring.characters = &a;
astring.length = 1;
foo = concatenate(concatenate(substring(foo,0,4), &astring), substring(foo,5,foo->length));

Note that concatenate gets called twice, meaning that the entire string has to be looped through! Compare this to the C code for the bar operation:

bar->characters[4] = 'a';

The mutable string operation is obviously much faster.

In Conclusion: In most cases, you want an immutable string. But if you need to do a lot of appending and inserting into a string, you need the mutability for speed.
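The conclusion above is easy to check for yourself. Here is an informal Java sketch (my own addition, not from the answer above; absolute timings vary by machine and JVM, but the repeated-copying cost of immutable concatenation shows up clearly at this size):

```java
public class ConcatBench {
    public static void main(String[] args) {
        int n = 20_000;

        long t0 = System.nanoTime();
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "a"; // copies the whole string on every iteration: O(n^2) overall
        }
        long t1 = System.nanoTime();

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("a"); // mutates the internal buffer in place: amortized O(n) overall
        }
        long t2 = System.nanoTime();

        System.out.println("String concat:  " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("StringBuilder:  " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```

Both loops build the same 20,000-character string; only the amount of copying differs.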
If you want the concurrency safety and garbage collection benefits with it, the key is to keep your mutable objects local to a method:

// This will have awful performance if you don't use mutable strings
String join(String[] strings, String separator) {
    StringBuilder mutable = new StringBuilder();
    boolean first = true;
    for(int i = 0; i < strings.length; i++) {
        if(first)
            first = false;
        else
            mutable.append(separator);
        mutable.append(strings[i]);
    }
    return mutable.toString();
}

Since the mutable object is a local reference, you don't have to worry about concurrency safety (only one thread ever touches it). And since it isn't referenced anywhere else, it becomes eligible for garbage collection as soon as the function call is finished (you don't have to worry about it lingering). And you get all the performance benefits of both mutability and immutability.

Actually, String is not immutable if you use the wikipedia definition suggested above. String's state does change post construction. Take a look at the hashCode() method. String caches the hash code value in a private field but does not calculate it until the first call of hashCode(). This lazy evaluation of the hash code places String in an interesting position as an immutable object whose state changes, but it cannot be observed to have changed without using reflection. So maybe the definition of immutable should be an object that cannot be observed to have changed. If the state changes in an immutable object after it has been created but no-one can see it (without reflection), is the object still immutable?

Immutable objects are objects that can't be changed programmatically. They're especially good for multi-threaded environments or other environments where more than one process is able to alter (mutate) the values in an object. Just to clarify, however, StringBuilder is actually a mutable object, not an immutable one.
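To make the String/StringBuilder contrast concrete, here is a minimal sketch (the class name is mine): String methods hand back new objects and leave the original untouched, while StringBuilder mutates itself in place.

```java
public class MutabilityDemo {
    public static void main(String[] args) {
        String s = "hello";
        String upper = s.toUpperCase(); // returns a NEW String; s itself is untouched
        System.out.println(s);          // hello
        System.out.println(upper);      // HELLO

        StringBuilder sb = new StringBuilder("hello");
        sb.append(" world");            // modifies sb itself; no new object is created for us
        System.out.println(sb);         // hello world
    }
}
```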
A regular java String is immutable (meaning that once it's been created you cannot change the underlying string without changing the object). For example, let's say that I have a class called ColoredString that has a String value and a String color:

public class ColoredString {

    private String color;
    private String string;

    public ColoredString(String color, String string) {
        this.color = color;
        this.string = string;
    }

    public String getColor() {
        return this.color;
    }

    public String getString() {
        return this.string;
    }

    public void setColor(String newColor) {
        this.color = newColor;
    }
}

In this example, the ColoredString is said to be mutable because you can change (mutate) one of its key properties without creating a new ColoredString instance. The reason why this may be bad is, for example, let's say you have a GUI application which has multiple threads and you are using ColoredStrings to print data to the window. If you have an instance of ColoredString which was created as

new ColoredString("Blue", "This is a blue string!");

then you would expect the string to always be "Blue". If another thread, however, got ahold of this instance and called

blueString.setColor("Red");

you would suddenly, and probably unexpectedly, have a "Red" string when you wanted a "Blue" one. Because of this, immutable objects are almost always preferred when passing instances of objects around. When you have a case where mutable objects are really necessary, then you would typically guard the object by only passing copies out from your specific field of control.

To recap, in Java, java.lang.String is an immutable object (it cannot be changed once it's created) and java.lang.StringBuilder is a mutable object because it can be changed without creating a new instance.

- In large applications it's common for string literals to occupy large bits of memory.
So to efficiently handle the memory, the JVM allocates an area called the "String constant pool". (Note that in memory even an unreferenced String carries around a char[], an int for its length, and another for its hashCode. For a number, by contrast, a maximum of eight immediate bytes is required.)
- When the compiler comes across a String literal, it checks the pool to see if there is an identical literal already present. If one is found, the reference to the new literal is directed to the existing String, and no new 'String literal object' is created (the existing String simply gets an additional reference).
- Hence: String immutability saves memory...
- But when any of the variables change value, it's actually only their reference that's changed, not the value in memory (hence it will not affect the other variables referencing it), as seen below....

String s1 = "Old string"; //s1 variable, refers to string in memory

 references   |    MEMORY    |
 [s1] --------------->| "Old String" |

String s2 = s1; //s2 refers to same string as s1

 [s1] --------------->| "Old String" |
 [s2] ------------------------^

s1 = "New String"; //s1 deletes reference to old string and points to the newly created one

 [s1] -----|--------->| "New String" |
           |
           |~~~~~~~~~X| "Old String" |
 [s2] ------------------------^

The original string 'in memory' didn't change, but the reference variable was changed so that it refers to the new string. And if we didn't have s2, "Old String" would still be in the memory, but we would not be able to access it...

"immutable" means you cannot change the value. If you have an instance of the String class, any method you call which seems to modify the value will actually create another String.

String foo = "Hello";
foo.substring(3); // <-- foo here still has the same value "Hello"

To preserve changes you should do something like this: foo = foo.substring(3);

Immutable vs mutable can be funny when you work with collections.
Think about what will happen if you use a mutable object as a key for a map and then change the value (tip: think about equals and hashCode).

I really like the explanation from SCJP Sun Certified Programmer for Java 5 Study Guide: objects which are immutable cannot have their state changed after they have been created. There are three main reasons to use immutable objects whenever you can, all of which will help to reduce the number of bugs you introduce in your code:

- It is much easier to reason about how your program works when you know that an object's state cannot be changed by another method.
- Immutable objects are automatically thread safe (assuming they are published safely), so they will never be the cause of those hard-to-pin-down multithreading bugs.
- Immutable objects will always have the same hash code, so they can be used as the keys in a HashMap (or similar). If the hash code of an element in a hash table were to change, the table entry would then effectively be lost, since attempts to find it in the table would end up looking in the wrong place. This is the main reason that String objects are immutable - they are frequently used as HashMap keys.

There are also some other optimisations you might be able to make in code when you know that the state of an object is immutable - caching the calculated hash, for example - but these are optimisations and therefore not nearly so interesting.

java.time

It might be a bit late, but in order to understand what an immutable object is, consider the following example from the new Java 8 Date and Time API (java.time). As you probably know, all date objects from Java 8 are immutable, so in the following example

LocalDate date = LocalDate.of(2014, 3, 18);
date.plusYears(2);
System.out.println(date);

Output: 2014-03-18

This prints the same year as the initial date, because plusYears(2) returns a new object, so the old date is still unchanged because it's an immutable object.
Once created you cannot further modify it, and the date variable still points to it. So, that code example should capture and use the new object instantiated and returned by that call to plusYears.

LocalDate date = LocalDate.of(2014, 3, 18);
LocalDate dateAfterTwoYears = date.plusYears(2);

date.toString()... 2014-03-18
dateAfterTwoYears.toString()... 2016-03-18

One meaning has to do with how the value is stored in the computer. For a .Net string, for example, it means that the string in memory cannot be changed. When you think you're changing it, you are in fact creating a new string in memory and pointing the existing variable (which is just a pointer to the actual collection of characters somewhere else) to the new string.

Once instantiated, it cannot be altered. Consider a class whose instances might be used as the key for a hashtable or similar. Check out Java best practices.

Immutable means that once the object is created, none of its members will change. String is immutable since you cannot change its content. For example:

String s1 = " abc ";
String s2 = s1.trim();

In the code above, the string s1 did not change; another object (s2) was created using s1.

String s1="Hi";
String s2=s1;
s1="Bye";
System.out.println(s2); //Hi (if String was mutable, the output would be: Bye)
System.out.println(s1); //Bye

s1="Hi" : an object s1 was created with "Hi" value in it.
s2=s1 : an object s2 is created with a reference to the s1 object.
In so doing, they give general rules for this kind of conversion and demonstrate some of the advantages of immutable objects. What. How does this String append is working ? New String object is created with the given value and the reference is updated to the new instance isolating the old object. Reference : An immutable object is the one you cannot modify after you create it. A typical example are string literals. A D programming language, which becomes increasingly popular, has a notion of “immutability” through “invariant” keyword. Check this Dr.Dobb’s article about it – . It explains the problem perfectly.
https://exceptionshub.com/what-is-meant-by-immutable.html
The question is: The following method was known to the ancient Greeks for computing square roots. Given a value x > 0 and a guess g for the square root, a better guess is (g + x/g) / 2. Write a recursive helper method public static double squareRootGuess(double x, double g). If g^2 is approximately equal to x, return g; otherwise, return squareRootGuess with the better guess. Then write a method public static double squareRoot(double x) that uses the helper method.

So, so far I've got:

import java.util.Scanner;

public class main1c {
    public static void main(String[] args){
        Scanner keys = new Scanner(System.in);
    }

    public static double test(double x, double g) {
        if (closeEnough(x/g, g))
            return g;
        else
            return test(g+x/g);
    }

    static boolean closeEnough(double a, double b) {
        return (Math.abs(a - b) < (b * 0.1)); // a is within 10% of b
    }
}

I'm not sure where to go from here. How do I make a recursion method with the equation (g + x/g) / 2?
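One way to complete it (a sketch, not the only solution; I've renamed the poster's test to the squareRootGuess name the assignment asks for, and tightened the tolerance so the result is reasonably accurate):

```java
public class SquareRoot {

    // If g*g is close enough to x, g is our answer;
    // otherwise recurse with the better guess (g + x/g) / 2.
    public static double squareRootGuess(double x, double g) {
        if (closeEnough(g * g, x)) {
            return g;
        }
        return squareRootGuess(x, (g + x / g) / 2);
    }

    // The assignment's entry point: any positive starting guess works.
    public static double squareRoot(double x) {
        return squareRootGuess(x, 1.0);
    }

    static boolean closeEnough(double a, double b) {
        return Math.abs(a - b) < b * 0.0001; // a is within 0.01% of b
    }

    public static void main(String[] args) {
        System.out.println(squareRoot(9.0)); // close to 3.0
        System.out.println(squareRoot(2.0)); // close to 1.41421
    }
}
```

The key point the original attempt was missing is that the recursive call must pass both x and the new guess: return squareRootGuess(x, (g + x/g) / 2);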
http://www.javaprogrammingforums.com/algorithms-recursion/33377-exponent-recursion.html
1. Introduction

The ability to control any appliance around your house using only your smartphone is very interesting. This project consists of using an Arduino, a cheap bluetooth module and a relay to control, for example, a lamp, by connecting it with your smartphone via bluetooth. This project is intended to be simple, using the least amount of resources and code, but still including important demonstrations and descriptions of the whole process.

IMPORTANT NOTE: This project involves using high voltage devices, hence extreme caution is advised and, if possible, do this project under the supervision of someone experienced.

2. Content Briefing

In the next sections, we'll discuss some topics in the sequence shown below and in more detail.

- 3. Demo
- 4. Schematic
- 5. Arduino code
- 6. Android code (sources & apk in )

3. Demo

4. Schematic

Let's start by setting up our bluetooth module (bottom left component in the figure). As you can see, we pull out both 5V and Ground from our Arduino all the way through to the Bluetooth Module (red and black wires). For the communication pins, we will need to attach the module's Transmitter pin (TX) to the Arduino's Receiver pin (RX) (green wire) and the Arduino's Transmitter pin (TX) to the module's Receiver pin (RX) (orange wire). If you check the back of your bluetooth module, it will have a label showing how much voltage its receiver pin can handle (in my case it's 3.3V), and since our Arduino supplies 5V, the bluetooth module would probably burn up after frequent use; that's why we need to reduce that voltage to 3.3V. Hence, 2 resistors that work as a "voltage divider" have been included on the orange wire. The first resistor, R1, is 560 ohms, while the second one, R2, is 1K ohms.
I used these resistors because it was what I had in my equipment, but if you have any other resistors, maybe you can find the right set for you using this formula, which brings the voltage down a bit:

Vout = (Vsource x R2) / (R1 + R2)

In my case I got: Vout = (5 x 1000) / 1560 ≈ 3.2V - close enough. (Search for voltage divider calculators on Google and you will get a good explanation and quick calculations.)

As for the relay, we also supply it with both power and ground (red and black wires) and a signal wire connected to Arduino's pin number 10 that will command the relay when to power ON or OFF our lamp, for example. The blue wires represent the wires of the plug/socket extension. Obviously you will not want to connect the relay directly to a lamp's built-in plug extension, which you would purposely ruin for this project, so what is recommended is to buy a standalone plug/socket extension. What I did was to buy a smaller wire and add to it both plug and socket, just as it's represented in the photo of the slideshow above. Now that you have your extension, pull out one of the extension's wires and cut it; you will end up with 2 wire ends (shown in the 2 photos above). But where can you connect these 2 ends to the relay?

The relay has 3 ports: NC (normally closed), C (common), NO (normally open). This means that, when the relay is in its normal state (no signal sent to the relay), there will be a connection between the NC and C ports. When a signal is sent, a connection will be set between C and NO instead. So that's why we connect our 2 wire ends to both C and NO: we want to send a signal to our relay in order to let the current flow through these two channels and provide energy to our appliance. Great explanation here: 5.
Arduino Code

#include <SoftwareSerial.h>

#define RELAY 10
#define LIGHT 13

SoftwareSerial btm(2,3); // rx tx

int index = 0;
char data[10];
char c;
boolean flag = false;

void setup() {
    pinMode(RELAY,OUTPUT);
    pinMode(LIGHT,OUTPUT);
    digitalWrite(RELAY,HIGH);
    digitalWrite(LIGHT,LOW);
    btm.begin(9600);
}

void loop() {
    if(btm.available() > 0){
        while(btm.available() > 0){
            c = btm.read();
            delay(10); //Delay required
            data[index] = c;
            index++;
        }
        data[index] = '\0';
        flag = true;
    }
    if(flag){
        processCommand();
        flag = false;
        index = 0;
        data[0] = '\0';
    }
}

void processCommand(){
    char command = data[0];
    char inst = data[1];
    switch(command){
        case 'R':
            if(inst == 'Y'){
                digitalWrite(RELAY,LOW);
                btm.println("Relay: ON");
            }
            else if(inst == 'N'){
                digitalWrite(RELAY,HIGH);
                btm.println("Relay: OFF");
            }
            break;
        case 'L':
            if(inst == 'Y'){
                digitalWrite(LIGHT,HIGH);
                btm.println("Light: ON");
            }
            else if(inst == 'N'){
                digitalWrite(LIGHT,LOW);
                btm.println("Light: OFF");
            }
            break;
    }
}

The Arduino's code shown above is generally structured in 4 phases:

- 5.1. Initializations
- 5.2. Setup
- 5.3. Loop
- 5.4. Process Command

It is important to note that all the code above is from the Arduino's point of view, meaning that all the 'reading' operations are operations where the Arduino is receiving data from some other source, and the 'write' operations are operations where the Arduino is sending messages to another source. The sequence of usage in our system will be like this:

- a. The user clicks a button in the smartphone, engaging a bluetooth command.
- b. The bluetooth module receives it and sends that command to the Arduino.
- c. The Arduino will then process that command and send a signal to the relay to turn it ON or OFF.
- d. The Arduino then sends a successful message to the bluetooth module, and the bluetooth module sends this message back to the smartphone.
5.1. Initializations

In the first few lines of code, we start by including the SoftwareSerial library, which allows us to communicate with the Bluetooth module. It also lets us use different receiver and transmitter pins than the ones predefined on the Arduino (pin 0 = RX and pin 1 = TX); instead we'll use pin 2 for the Arduino's RX and pin 3 for its TX. Then we create constants that identify the pins we wish to use for each of our components: in this case, the Arduino pin that will control the RELAY is number 10, and the pin that controls the built-in LIGHT on the Arduino is number 13 (this one is optional; if you don't want to use it, that's fine). Then the data structure called 'data', of type 'char', acts as a buffer for incoming messages from the Bluetooth module, along with some auxiliary variables for the buffer (flag, index, c) that will be explained in the Loop section.

5.2. Setup

The predefined method 'setup' is the first method executed, before our intended program starts running. It basically allows us to configure some of the Arduino's pins and other things before the main program executes. We start by declaring the RELAY pin as an OUTPUT pin, since we want to send a signal to turn the relay ON or OFF; the same goes for the LIGHT pin. Additionally, we can choose whether to start sending the signal right away using 'digitalWrite'. In the case of the relay we do want to start sending a signal, because this relay module is a bit counter-intuitive: when a signal is detected by the relay it switches itself OFF, otherwise it switches back ON.

5.3. Loop

The loop is the method that, as the name suggests, is called iteratively in order to repeatedly process whatever information we pass to it.
With that being said, we start by checking whether there are incoming messages from the Bluetooth module, and if there are, we enter a cycle to keep reading those messages byte by byte (reading a 'char' every iteration). About the line of code delay(10): to be honest, I'm not completely sure why the code only worked with that delay. When I tried without it, the messages weren't being received properly into the array of chars called 'data' (our buffer), and all I got was a bunch of junk in the buffer. My best guess, and the reason I used it, is the difference in processing speeds between the receiving and transmitting components, which in this case are the Arduino and the Bluetooth module. In Arduino code around the web it is customary to see such delay lines, and some of them are probably there for this very reason. After this delay, suppose we have read the first byte (char) of our message: we then add it to our buffer and increment the counter called 'index' to keep adding more bytes iteratively along the length of the array/buffer. After the message has been read, we exit the while loop and note that there is a message to be processed (by setting the flag to true), also adding a '\0' to indicate the buffer's end. Finally, at the end of the loop method, we simply check whether there are messages to process; if there are, we call the processCommand method, then clear/reset our buffer by setting the first element of the array ([0]) to null ('\0') and the counter ('index') to 0. This way our buffer is ready to receive more incoming messages.

5.4. Process Command

Finally, the processCommand method is the method that decides what to do with the Bluetooth message received previously in the loop code section. For this project, I decided on simple commands sent from the smartphone to the Arduino through Bluetooth.
To turn the relay ON, a simple message built in the Android application sends the String "RY" (Relay Yes) as bytes; to turn the relay OFF it sends "RN" (Relay No). As mentioned before, I also included an add-on where you can control the Arduino's built-in LED (pin number 13), so the commands there are "LY" and "LN", but you don't have to use it. Remember that to turn the relay ON we need to send a LOW signal from the Arduino, and vice versa. The Arduino also sends a status message such as "Relay: ON" back to the Bluetooth module, which in turn sends it to the user.

6. Android Code

In this section we'll discuss how the Android app was implemented to communicate with the Bluetooth module. When you create a new project, Android Studio sets up all the things you need to start your app, including the Android Manifest, which is where you declare the app's general properties, such as which permissions the app is going to need; a simple blank layout where you can add some components like buttons; and one Java source file that initializes the app and controls all the UI elements in the layout. If you haven't chosen the basic template and chose the empty template instead, you can always create a new activity, which sets up the layout and corresponding source file you'll need. Also, the source code and resources can be found on my GitHub:.

So here's what we'll cover next:

- Android Manifest
- Simple Layout
- Java Source Code (Ardcon.java + ConnectedThread.java)

6.1 Android Manifest

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <uses-permission android:name="android.permission.BLUETOOTH" />
    <uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />

    <application>
        <activity android:name=".Ardcon">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

The Android manifest describes the application's main structure and which OS functionalities the app will make use of. In our case we need to include permissions related to the Bluetooth interface.
As we can see from the code shown above, the following lines were inserted in order to use the OS's Bluetooth functionality:

<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />

6.2 Simple Layout

The layout is presented in the following figure: The TextView is used only to output the messages sent by the Arduino + Bluetooth module. For instance, after sending a message to the Arduino, it sends back a response to the smartphone if it was successful, and that message is displayed in the TextView. Then we have two buttons: the Relay button, and the Light button to turn the built-in LED on the Arduino on and off. I believe it is not necessary to include the layout code in this case, since all you need to do is drag two buttons and a text view to wherever you would like to place them.

6.3 Java Source Code (Ardcon.java)

The first source file we'll cover is Ardcon.java (Arduino Connection). What this file mainly does is, first, initialize Bluetooth and make a connection; then it listens for clicks on our two buttons and performs the corresponding operation. So let's start with the initializations:

public final static String MODULE_MAC = "98:D3:34:90:6F:A1";
public final static int REQUEST_ENABLE_BT = 1;
private static final UUID MY_UUID = UUID.fromString("00001101-0000-1000-8000-00805f9b34fb");

BluetoothAdapter bta;             // bluetooth stuff
BluetoothSocket mmSocket;         // bluetooth stuff
BluetoothDevice mmDevice;         // bluetooth stuff

Button switchLight, switchRelay;  // UI stuff
TextView response;                // UI stuff

boolean lightflag = false;        // flags to determine if ON/OFF
boolean relayFlag = true;         // flags to determine if ON/OFF

ConnectedThread btt = null;       // Our custom thread
public Handler mHandler;          // this receives messages from the thread

The Media Access Control, or MAC, address of the Bluetooth module is, in my case, "98:D3:34:90:6F:A1". In order to find the MAC address of your Bluetooth module, you first need to power the circuit ON, or just wire up some basic circuit that powers the Bluetooth module.
Then, use your smartphone to scan for Bluetooth devices; your Bluetooth module should come up along with its MAC address. If the MAC address is not shown, try using another device such as your PC. The UUID of the HC-06 Bluetooth module, 00001101-0000-1000-8000-00805f9b34fb, is always the same when you are using Android; this number may change if iOS is used. (Look up UUID for more information.) Then we initialize the buttons, the text view, the flags that track whether the relay and light are on or off, a ConnectedThread, which takes care of sending/receiving messages to/from the Arduino + Bluetooth module, and finally a Handler, which takes care of processing the messages received from the thread.

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_ardcon);
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(toolbar);

    Log.i("[BLUETOOTH]", "Creating listeners");
    response = (TextView) findViewById(R.id.response);
    switchRelay = (Button) findViewById(R.id.relay);
    switchLight = (Button) findViewById(R.id.switchlight);

    switchLight.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            Log.i("[BLUETOOTH]", "Attempting to send data");
            if (mmSocket.isConnected() && btt != null) {
                if (!lightflag) {
                    String sendtxt = "LY";
                    btt.write(sendtxt.getBytes());
                    lightflag = true;
                } else {
                    String sendtxt = "LN";
                    btt.write(sendtxt.getBytes());
                    lightflag = false;
                }
            } else {
                Toast.makeText(Ardcon.this, "Something went wrong", Toast.LENGTH_LONG).show();
            }
        }
    });

    switchRelay.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            Log.i("[BLUETOOTH]", "Attempting to send data");
            if (mmSocket.isConnected() && btt != null) {
                if (relayFlag) {
                    String sendtxt = "RY";
                    btt.write(sendtxt.getBytes());
                    relayFlag = false;
                } else {
                    String sendtxt = "RN";
                    btt.write(sendtxt.getBytes());
                    relayFlag = true;
                }
                // disable the button and wait for 4 seconds to enable it again
                switchRelay.setEnabled(false);
                new Thread(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            Thread.sleep(4000);
                        } catch (InterruptedException e) {
                            return;
                        }
                        runOnUiThread(new Runnable() {
                            @Override
                            public void run() {
                                switchRelay.setEnabled(true);
                            }
                        });
                    }
                }).start();
            } else {
                Toast.makeText(Ardcon.this, "Something went wrong", Toast.LENGTH_LONG).show();
            }
        }
    });

    bta = BluetoothAdapter.getDefaultAdapter();
    // if bluetooth is not enabled then create Intent for user to turn it on
    if (!bta.isEnabled()) {
        Intent enableBTIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
        startActivityForResult(enableBTIntent, REQUEST_ENABLE_BT);
    } else {
        initiateBluetoothProcess();
    }
}

I'll now give a short overview of the code above. This is the method that initializes the application and all the UI. We start by getting references to the buttons and text view from the layout, then add some logic to both buttons. For instance, in the case of switchLight, the click listener first determines whether there is a Bluetooth connection to the module, and if there is, a message is sent to turn the light either ON or OFF. The same applies to the switchRelay button, but in this case we've also included a small timer that disables the relay button for 4 seconds after an operation is made. This is a safety measure, since we don't want to stress the appliance by rapidly switching it ON and OFF with quick button presses. Then, with all the button logic completed, we try to make a connection to a Bluetooth module right away during initialization. First we check whether Bluetooth is ON on the smartphone; if it is not, we must ask the user to turn it on by creating an intent of type BluetoothAdapter.ACTION_REQUEST_ENABLE. This triggers a dialog window to confirm the Bluetooth activation.
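A side note on that UUID: 00001101-0000-1000-8000-00805f9b34fb is the standard Serial Port Profile (SPP) UUID, which is just the 16-bit assigned number 0x1101 slotted into the Bluetooth base UUID. That derivation is easy to check with a tiny Python sketch (illustrative only, not app code):

```python
def short_uuid_to_full(short_uuid):
    """Expand a 16-bit Bluetooth assigned number into its full 128-bit form
    by inserting it into the Bluetooth base UUID."""
    return f"{short_uuid:08x}-0000-1000-8000-00805f9b34fb"

print(short_uuid_to_full(0x1101))  # 00001101-0000-1000-8000-00805f9b34fb
```

So any module speaking SPP, not just the HC-06, will advertise this same service UUID.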
Then we set up onActivityResult to receive the user's confirmation, like this:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (resultCode == RESULT_OK && requestCode == REQUEST_ENABLE_BT) {
        initiateBluetoothProcess();
    }
}

Then the initiateBluetoothProcess() method makes the connection to the Bluetooth module and creates a Handler, which is the component that receives information from the ConnectedThread; again, this thread is in charge of receiving/sending Bluetooth data to/from the Bluetooth module on the Arduino. The Handler simply updates the TextView with the response text.

public void initiateBluetoothProcess() {
    if (bta.isEnabled()) {
        // attempt to connect to bluetooth module
        BluetoothSocket tmp = null;
        mmDevice = bta.getRemoteDevice(MODULE_MAC);
        // create socket
        try {
            tmp = mmDevice.createRfcommSocketToServiceRecord(MY_UUID);
            mmSocket = tmp;
            mmSocket.connect();
            Log.i("[BLUETOOTH]", "Connected to: " + mmDevice.getName());
        } catch (IOException e) {
            try { mmSocket.close(); } catch (IOException c) { return; }
        }

        Log.i("[BLUETOOTH]", "Creating handler");
        mHandler = new Handler(Looper.getMainLooper()) {
            @Override
            public void handleMessage(Message msg) {
                //super.handleMessage(msg);
                if (msg.what == ConnectedThread.RESPONSE_MESSAGE) {
                    String txt = (String) msg.obj;
                    response.append("\n" + txt);
                }
            }
        };

        Log.i("[BLUETOOTH]", "Creating and running Thread");
        btt = new ConnectedThread(mmSocket, mHandler);
        btt.start();
    }
}

(ConnectedThread.java)

As mentioned, the ConnectedThread.java file contains the code needed to send and receive messages via Bluetooth. It also sends information back to the Ardcon.java source file via the Handler.

public class ConnectedThread extends Thread {
    private final BluetoothSocket mmSocket;
    private final InputStream mmInStream;
    private final OutputStream mmOutStream;
    public static final int RESPONSE_MESSAGE = 10;
    Handler uih;
    ...
First we extend our class from Thread, then we initialize the objects that allow us to receive and send messages to the Bluetooth module. The Handler is the component that takes care of sending responses back to the UI thread (Ardcon.java) so that we can update the TextView.

public ConnectedThread(BluetoothSocket socket, Handler uih) {
    mmSocket = socket;
    InputStream tmpIn = null;
    OutputStream tmpOut = null;
    this.uih = uih;
    Log.i("[THREAD-CT]", "Creating thread");
    try {
        tmpIn = socket.getInputStream();
        tmpOut = socket.getOutputStream();
    } catch (IOException e) {
        Log.e("[THREAD-CT]", "Error:" + e.getMessage());
    }
    mmInStream = tmpIn;
    mmOutStream = tmpOut;
    try {
        mmOutStream.flush();
    } catch (IOException e) {
        return;
    }
    Log.i("[THREAD-CT]", "IO's obtained");
}

So our ConnectedThread receives a connection to the Bluetooth module, called a BluetoothSocket, and the Handler. This constructor initializes the input and output streams of the communication.

public void run() {
    BufferedReader br;
    br = new BufferedReader(new InputStreamReader(mmInStream));
    while (true) {
        try {
            String resp = br.readLine();
            Message msg = new Message();
            msg.what = RESPONSE_MESSAGE;
            msg.obj = resp;
            uih.sendMessage(msg);
        } catch (IOException e) {
            break;
        }
    }
    Log.i("[THREAD-CT]", "While loop ended");
}

Then we override the run method inherited from Thread. In this method there is a loop that is constantly looking for newly arrived messages, and when one arrives it is sent back to the UI thread via the Handler.

public void write(byte[] bytes) {
    try {
        Log.i("[THREAD-CT]", "Writing bytes");
        mmOutStream.write(bytes);
    } catch (IOException e) {}
}

public void cancel() {
    try {
        mmSocket.close();
    } catch (IOException e) {}
}

Finally, these two methods are in charge of writing (sending bytes) through the OutputStream to the Bluetooth module and of canceling the connection.

Schematics

Team Azoreanduino. Published on December 19, 2018.
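One detail worth noticing in the run() loop above: it works because the Arduino sketch sends every status with println(), which appends a line ending, and BufferedReader.readLine() on the Android side uses that ending to frame each message. That framing can be sketched like this (a standalone illustration, not app code):

```python
def frame_messages(raw: bytes):
    """Split a received byte stream into the line-terminated messages that
    the Arduino's println() produces (CR+LF endings)."""
    text = raw.decode("ascii")
    return [line for line in text.replace("\r\n", "\n").split("\n") if line]

print(frame_messages(b"Relay: ON\r\nLight: OFF\r\n"))  # ['Relay: ON', 'Light: OFF']
```

Without a delimiter like this, two quick replies could arrive glued together in one read and the app would have no way to tell them apart.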
https://create.arduino.cc/projecthub/azoreanduino/simple-bluetooth-lamp-controller-using-android-and-arduino-aa2253
This article describes the process that system manufacturers (OEMs) can use to provide instrumentation information, by including ACPI objects in the systems they build that will be recognized by Microsoft Windows Management Instrumentation (WMI). This information applies to Windows 2000, Windows XP, and Windows 98 Second Edition.

By including ACPI objects in the systems they build, OEMs can take advantage of a generic mapping driver that allows WMI to make the information available to instrumentation consumers. The ACPI subsystem contains a wealth of instrumentation information; OEMs are encouraged to use ACPI to add additional platform-specific instrumentation information. However, ACPI objects are not readily accessible to instrumentation data consumers such as WBEM. This article assumes the reader is familiar with driver mapping under Windows operating systems and with the Data Block GUID Mapping control method for WMI. For information about WMI technologies for system management and hardware instrumentation, see Hardware Management and Security. ACPI implementation information is available at .

The ACPI-to-WMI mapping functionality is achieved by means of two device drivers provided with the Windows 2000, Windows XP, and Windows 98 Second Edition operating systems:

- Wmiacpi.sys, the ACPI-to-WMI mapping driver
- Acpi.sys, the core ACPI driver

OEMs can differentiate their PC system capabilities by writing ACPI Source Language (ASL) code and a Managed Object Format (MOF) file. The MOF file can be in the BIOS or on disk. For more information about MOF, see the "MOF Data Types" section later in this article. ASL code is never executed directly by the Wmiacpi.sys driver; ASL code is always executed by the Acpi.sys driver (see the ASL information at ). Wmiacpi.sys invokes Acpi.sys to call control methods that access the management data exposed by the mapping driver. Microsoft does not ship a MOF file that is associated with the Wmiacpi.sys driver.
The only information surfaced through ACPI by default is the temperature zone information, which is exposed by, and associated with, the Acpi.sys device driver.

WMI organizes individual data items (properties) into data blocks (structures) that contain related information. Data blocks may have one or more data items. Each data item has a unique index within the data block, and each data block is named by a globally unique 128-bit number called a globally unique identifier (GUID). WMI can provide notifications to the data producer as to when to start and stop collecting the data items that compose a data block. WMI has no knowledge of the data format of individual data blocks.

WMI functionality allows for querying all instances of a data block or a single instance of a data block. It also allows for setting all data items in an instance of a data block, or a single data item within a single instance of a data block. In addition to queries and sets, WMI allows WMI method calls, which are functionally equivalent to an I/O control (IOCTL) call to a device. Each WMI method call is identified by a GUID and a method index for that GUID. All WMI method calls use one buffer for input and output parameters.

WMI allows notifications of significant events to be delivered to interested user-mode applications. Each type of event is uniquely named by a GUID. Events may also carry a data block with additional information about the event. WMI can provide notifications to the event generator about when to enable and disable an event type.

WMI is an open architecture that allows OEMs to define their own data blocks, methods, and events. Along with the data that composes the custom data block, the OEM must also provide a description of how a data block or WMI method is mapped to a 2-character ID. This 2-character ID is part of the names of the control methods that act upon the data block.
For example, when a call is made to query the data block represented by a WMI GUID, the mapper will evaluate the WQxx control method (where xx is the 2-character ID mapped to that GUID). These mappings are defined by the ACPI code and obtained by the mapper evaluating the _WDG control method. For more information, see "ACPI Control Method Naming Conventions and Functionality" later in this article.

The mapping process is similar for events. The _WDG control method provides a mapping between the WMI event GUID that represents the event and the notification code specified in the ASL Notify instruction. For example, when ACPI provides a callback to the mapper indicating that a control method executed a notify(mapper-device, 0x81) operation, the mapper will look up the WMI GUID mapped to 0x81 and use this WMI GUID in building the WMI event. Before launching the WMI event, the mapper will evaluate _WED to retrieve any additional data that belongs with the event.

Loading the Mapping Driver.

The Plug and Play ID PNP0c14 is assigned as the WMI-mapping pseudo device; the operating system device INFs (the Plug and Play ID-to-device-driver lookup table) point this Plug and Play ID to the ACPI-to-WMI mapping driver. To cause the ACPI-to-WMI mapping driver to load, an ACPI system needs to define one or more devices with that Plug and Play ID in the ACPI device tree. Each device declared in the ACPI device tree has its own operating system device object with its own set of mappings. In this way, different sets of data blocks can be organized in the appropriate place within the device tree. This organization allows the different devices and their corresponding data blocks to come and go from the ACPI device tree. Note that if there are multiple WMI-mapping pseudo devices in the ACPI device tree, each device must have a unique value for its _UID.

Mapping Driver Functionality.
Essentially the mapping driver will do the following:

The following list describes the goals for the ACPI-to-WMI mapper:

These goals are achieved by having supporting code in the ACPI-to-WMI mapper (Wmiacpi.sys) as well as in the core ACPI code itself (Acpi.sys). The following are not goals for the ACPI-to-WMI mapper:

How SMBIOS-provided information is handled.

Vendors who want to provide OEM- and system-specific instrumentation data may choose to use SMBIOS as the mechanism. To use the capabilities of the WMI infrastructure to surface this SMBIOS data, they must conform to an SMBIOS version between 2.0 and 2.3. This allows the Microsoft Win32 provider (which is shipped with Windows 2000, Windows XP, and future versions of Windows, and which is available as an update to Windows 98) to populate almost all of the SMBIOS-provided information into the CIMv2 namespace. In particular, almost all of the information will be put into Win32 classes; some of these Win32 classes are derived from the CIMv2.1 physical MOF.

The one exception, where SMBIOS information will not be automatically populated by the Win32 provider into the CIMv2 namespace, is SMBIOS vendor-specific data. Such SMBIOS vendor-defined data will be placed in a "VendorBucket" class in a "Root\VendorDefined" namespace, and will not be available in the CIMv2 namespace by default. Any system vendor who wants to provide such data must write a provider that will interpret this data.

The SMBIOS data is read only once: at boot time on Windows 2000/Windows XP, or post-boot on Windows 98. Dynamic updates made to the SMBIOS data after it has been read will not be reflected in the namespaces in this implementation. Microsoft is working with the industry to define standard ACPI methods for dynamic updates. The SMBIOS raw data will be available as a WMI data block on Windows 2000/Windows XP and as a flat file on Windows 98. This data will be interpreted and populated into the namespaces by the Win32 provider.
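The two-character ID convention described earlier makes the control-method names mechanical to derive: the mapper simply appends the object ID to a fixed prefix. The following Python sketch (illustrative only; it is not part of the mapper or any Windows component) shows the derivation, using the prefixes documented in this article:

```python
def control_method_names(object_id):
    """Names of the ACPI control methods the mapper evaluates for a data
    block mapped to the given 2-character object ID."""
    return {
        "query":   "WQ" + object_id,   # query an instance of the data block
        "set":     "WS" + object_id,   # set an instance of the data block
        "method":  "WM" + object_id,   # execute a WMI method in the block
        "collect": "WC" + object_id,   # enable/disable expensive collection
    }

print(control_method_names("BA")["query"])  # WQBA
```

This matches the names used in the ASL samples later in this article (WQBA, WSBA, WMBB, WCAA).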
The Data Block GUID Mapping control method named _WDG evaluates to a buffer that has the GUID mapping information for data blocks, events, and WMI methods. The result of the evaluation is a buffer containing an array of 20-byte structures of the following form (the field names here are illustrative; the layout is what the mapper parses):

typedef struct
{
    UCHAR Guid[16];                   // GUID that names the data block, event, or method set
    union
    {
        CHAR ObjectId[2];             // 2-character object ID (data blocks and methods)
        struct
        {
            UCHAR NotificationValue;  // Notify code used in the ASL Notify operation (events)
            UCHAR Reserved;
        } NotifyId;
    };
    UCHAR InstanceCount;              // Number of instances of the data block
    UCHAR Flags;                      // WMIACPI_REGFLAG_* values below
} WMIACPIGUIDREGINFO;

// Set this flag if the WCxx control method should be run whenever the first
// data consumer becomes interested in collecting the data block and whenever the last
// data consumer is no longer interested.
#define WMIACPI_REGFLAG_EXPENSIVE 0x1

// Set this flag if the GUID represents a set of WMI method calls and not a data block.
#define WMIACPI_REGFLAG_METHOD    0x2

// Set this flag if the data block is wholly composed of a string and should be
// translated from ASCIZ to UNICODE when returning queries and from UNICODE to ASCIZ
// when passing sets.
#define WMIACPI_REGFLAG_STRING    0x04

// Set this flag if the GUID maps to an event rather than a data block or method.
#define WMIACPI_REGFLAG_EVENT     0x08

Each element in the array describes the mapping of a WMI data block GUID to a 2-letter ACPI method identifier used to compose the names of the methods that operate on the data block, or to the notification value used in the ASL Notify operation. Each element of the array also contains the number of instances of the data block that exist and any flags that are set. This control method is required.

The following table summarizes the information for each control method described later in this section.

Control Method Summary

_WDG    Evaluates to the GUID mapping buffer described above (required)
WQxx    Queries an instance of data block xx
WSxx    Sets an instance of data block xx
WMxx    Executes a method within method block xx
WCxx    Enables or disables collection for an expensive data block xx
WExx    Enables or disables the event mapped to notification value 0xxx
_WED    Returns additional information about an event that has fired

Design Considerations.

Consider the following in designing data blocks for instrumentation under Windows 2000/Windows XP:

MOF Data Types.

The Managed Object Format (MOF) for the data blocks implemented can be supplied either as a resource attached to a file or as the buffer that results from the evaluation of a control method. To establish the former, either bind the resource to the Wmiacpi.sys image or establish a REG_EXPAND_SZ registry value named MofImagePath under the WMIACPI service key.
The contents of the value is a path to the image file that contains the resource. In either case, the resource must be named MofResourceName.

The buffer resulting from the evaluation of the WQxx control method assigned to the binary MOF GUID describes all data blocks, WMI methods, and events for the device in a compressed binary format. This binary data is created by building a text file using the MOF language and compiling it with the MOF compiler. MOF data types are very rich. MOF supports the basic data types of 8-, 16-, 32-, and 64-bit signed and unsigned integers, Boolean values, floating-point numbers, strings, and UTC datetimes. Embedded classes (that is, structures that can contain basic data types and other embedded classes) are also supported. In addition, fixed- and variable-length arrays of basic data types and embedded classes are supported. The MOF language defines the data types shown in the following table.

MOF Data Types

boolean             Boolean value
sint8 / uint8       Signed/unsigned 8-bit integer
sint16 / uint16     Signed/unsigned 16-bit integer
sint32 / uint32     Signed/unsigned 32-bit integer
sint64 / uint64     Signed/unsigned 64-bit integer
real32 / real64     Floating-point value
string              Character string
datetime            UTC date/time

Important: Because the MOF data types are much richer than those for ACPI control methods, the control method must be careful to pack the data blocks correctly within an ACPI buffer. The control method can also restrict itself to using only common data types. Each MOF class represents a data block and may contain one or more properties that represent data items within the data block. A MOF class holds all the information needed to parse a data block returned from the mapper. In addition, the MOF language allows rich metadata to be included as qualifiers on properties and classes. Some qualifiers are required, but most are optional. Class and data item qualifiers are defined in the following table.

Class and Data Item Qualifiers

The order in which the data items are laid out in the data block is controlled by the data item ID. Data item IDs must be allocated contiguously, starting with data item ID 1. The data item order specified in the MOF is not relevant.
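Tying this back to the _WDG buffer described earlier: each mapping entry is 20 bytes (a 16-byte GUID; two bytes holding either a 2-character object ID or a notification value plus a reserved byte; an instance count; and a flags byte). The following Python sketch (illustrative only, not part of the mapper) unpacks such a buffer, using Python's uuid module for the little-endian GUID layout:

```python
import uuid

WMIACPI_REGFLAG_EVENT = 0x08

def parse_wdg(buf):
    """Split a _WDG result buffer into (guid, object_id_or_notify, instances, flags) tuples."""
    entries = []
    for off in range(0, len(buf) - 19, 20):
        raw = buf[off:off + 20]
        guid = str(uuid.UUID(bytes_le=bytes(raw[:16])))
        ident = bytes(raw[16:18])        # 2-char object ID, or notify value + reserved byte
        instance_count, flags = raw[18], raw[19]
        entries.append((guid, ident, instance_count, flags))
    return entries

# One synthetic entry shaped like the ASL samples: object ID "BA", 3 instances,
# the expensive-collection flag set.
entry = bytes(range(16)) + b"BA" + bytes([3, 0x01])
print(parse_wdg(entry))
```

This mirrors what the mapper itself does when it evaluates _WDG: walk the buffer in fixed-size records and decide, from the flags, whether each GUID names a data block, a method block, or an event.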
MOF supports arrays of the basic types shown in the "MOF Data Types" table shown earlier. A variable-sized array must have a WmiSizeIs() qualifier that specifies the property holding the number of elements in the array.

Data Block Format.

The format of the data block buffer returned from the query control method, and passed into the set control method, must be consistent with the description of it specified by the MOF for that data block with respect to the order and size of the data items within the data block. The Boolean data type is 1 byte in length and has a value of 0 for FALSE and a nonzero value for TRUE. The string data type is a C-style ANSI null-terminated string.

Standard Data Blocks, Methods, and Events.

Additional data blocks, events, and methods will be defined in the future; they should be implemented by all OEMs in order to ensure a minimum of functionality on all PCs. In the future, an industry standard will be defined for the globally unique GUIDs to be assigned to the data blocks. The WMI component within Windows will contain the MOF definition for these standard data blocks, so it does not need to be part of the result from the binary MOF query.

Custom Data Blocks, Methods, and Events.

Custom or OEM-specific data blocks, events, and methods can be added by including them in the result of the _WDG method. The GUIDs that are assigned must be globally unique, so they can be generated by a tool such as Guidgen or Uuidgen, which are provided with the WMI SDK. The MOF definition for these custom data blocks must be included in the results of the WQxx method, where xx has been mapped to the MOF Data GUID (the GUID that is queried and returns MOF data), in order for applications to be able to access the data blocks. Or the MOF could be added as a resource to Wmiacpi.sys with a name of MofResourceName and a type MOFDATA.
It can also be a resource in another image file with the same name and type that is pointed to by the MofImagePath value in the registry key HKLM\CurrentControlSet\Services\WmiAcpi.

How does WMI find ACPI/ASL code?
In ASL, the developer creates a device with an _HID of PNP0c14. The operating system enumerates the device and loads the Wmiacpi.sys driver on top of it.

How does the MOF associated with an ACPI BIOS get registered?
It is either a resource attached to Wmiacpi.sys or another image file, such as a resource-only DLL.

How does a management application discover the classes and properties provided by ASL instrumentation?
By looking in the WMI namespace of the schema.

Is the following true? Because very few ACPI standards exist for instrumentation, most ACPI-instrumented features will appear differently on each vendor's product, and management applications will have to be "taught" to interpret the varying classes and methods.
Microsoft is looking at standardizing this. Any suggestions are appreciated.

Who provides the MOF files for standard ACPI features such as thermal monitoring?
Windows 2000/Windows XP has a MOF file for thermal zone temperature as part of the operating system and instruments it within Acpi.sys, outside of the mapper. Typically, MOF files are compiled into BMF files and attached to a driver as a resource. The BMF files can be in the ROM or on disk. WMI determines the location of the MOF information by looking in the registry for the MofImagePath value under the WMIACPI service. If this does not exist, then WMI looks at the ImagePath value. If Wmiacpi.sys does not have a MOF resource, then WMI will query the binary MOF GUID for the MOF information. A driver may have a static list of pre-built MOF files; if so, it can "dynamically" report one of them. The mechanism is to report the file using a predefined GUID that returns a binary MOF. To dynamically build a MOF file, a driver would have to build the MOF file and then launch the MOF compiler, which is difficult. Currently, to do this on the machine running Wmiacpi.sys, the mofcomp command can be used to load the MOF file directly into the CIMOM database.
To dynamically build a MOF file, a driver would have to build the MOF file and then launch the MOF compiler, which is difficult. Currently, to do this on the machine running Wmiacpi.sys, the mofcomp command can be used to load the MOF file directly into the CIMOM database.

The following list represents some of the ASL methods defined by the ACPI specification. These methods are of particular interest for systems management. None of these methods have been implemented within the WMI/ACPI mapper to date. A BIOS developer, for example, could use these methods to expose data using the mapper. These methods represent good opportunities for OEMs to differentiate their products with minimal effort:

The same applies for these methods for Control Method Battery devices:

The following ASL code implements an event and a method that can be called to initiate that event.

    Device(AMW0)
    {
        // pnp0c14 is Plug and Play ID assigned to WMI mapper
        Name(_HID, "*pnp0c14")
        Name(_UID, 0x0)

        //
        // Description of data and events supported
        Name(_WDG, Buffer() {
            0x6a, 0x0f, 0xBC, 0xAB, 0xa1, 0x8e, 0xd1, 0x11,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10, 0, 0,
            66, 65,     // Object ID (BA)
            3,          // Instance Count
            0x01,       // Flags (WMIACPI_REGFLAG_EXPENSIVE)

            0x6b, 0x0f, 0xBC, 0xAB, 0xa1, 0x8e, 0xd1, 0x11,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10, 0, 0,
            66, 66,     // Object ID (BB)
            3,          // Instance Count
            0x02,       // Flags (WMIACPI_REGFLAG_METHOD)

            0x6c, 0x0f, 0xBC, 0xAB, 0xa1, 0x8e, 0xd1, 0x11,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10, 0, 0,
            0xb0, 0,    // Notification ID
            1,          // Instance Count
            0x08        // Flags (WMIACPI_REGFLAG_EVENT)
        })

        //
        // Storage for the 3 instances of BA
        Name(STB0, Buffer(0x10) { 1,0,0,0, 2,0,0,0, 3,0,0,0, 4,0,0,0 })
        Name(STB1, Buffer(0x10) { 0,1,0,0, 0,2,0,0, 0,3,0,0, 0,4,0,0 })
        Name(STB2, Buffer(0x10) { 0,0,1,0, 0,0,2,0, 0,0,3,0, 0,0,4,0 })

        //
        // Query data block
        // Arg0 has the instance being queried
        Method(WQBA, 1)
        {
            if (LEqual(Arg0, 0)) {
                Return(STB0)
            }
            if (LEqual(Arg0, 1)) {
                Return(STB1)
            }
            if (LEqual(Arg0, 2)) {
                Return(STB2)
            }
        }

        //
        // Set Data Block
        // Arg0 has the instance being queried
        // Arg1 has the new value for the data block instance
        Method(WSBA, 2)
        {
            if (LEqual(Arg0, 0)) {
                Store(Arg1, STB0)
            }
            if (LEqual(Arg0, 1)) {
                Store(Arg1, STB1)
            }
            if (LEqual(Arg0, 2)) {
                Store(Arg1, STB2)
            }
        }

        //
        // Storage for data block BB
        Name(B0ED, Buffer(0x10) { 0,0,0,1, 0,0,0,2, 0,0,0,3, 0,0,0,4 })

        //
        // Method Execution
        // Arg0 is instance being queried
        // Arg1 is the method ID
        // Arg2 is the method data passed
        Method(WMBB, 3)
        {
            if (LEqual(Arg1, 1)) {
                Store(Arg2, B0ED)
                Notify(AMW0, 0xB0)
                Return(Arg2)
            } else {
                Return(Arg1)
            }
        }

        //
        // More info about an event
        // Arg0 is the event ID that was launched ("fired")
        Method(_WED, 1)
        {
            if (LEqual(Arg0, 0xB0)) {
                Return(B0ED)
            }
        }
    }

The following sample ASL code shows another example of implementing an event mechanism using ASL code. It also provides an example of embedding MOF data into ASL.

    Device(AMW0)
    {
        //
        // pnp0c14 is the ID assigned by Microsoft to the WMI to ACPI mapper
        Name(_HID, "*pnp0c14")
        Name(_UID, 0x0)

        //
        // _WDG evaluates to a data structure that specifies the data blocks
        // supported by the ACPI device.
        Name(_WDG, Buffer() {
            0x5a, 0x0f, 0xBC, 0xAB, 0xa1, 0x8e, 0xd1, 0x11,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10, 0, 0,
            65, 65,     // Object ID (AA)
            2,          // Instance Count
            0x01,       // Flags (WMIACPI_REGFLAG_EXPENSIVE)

            0x5b, 0x0f, 0xBC, 0xAB, 0xa1, 0x8e, 0xd1, 0x11,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10, 0, 0,
            65, 66,     // Object ID (AB)
            2,          // Instance Count
            0x02,       // Flags (WMIACPI_REGFLAG_METHOD)

            0x5c, 0x0f, 0xBC, 0xAB, 0xa1, 0x8e, 0xd1, 0x11,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10, 0, 0,
            0xa0, 0,    // Notification ID
            1,          // Instance Count
            0x08,       // Flags (WMIACPI_REGFLAG_EVENT)

            //
            // This GUID is for returning the MOF data
            0x21, 0x12, 0x90, 0x05, 0x66, 0xd5, 0xd1, 0x11,
            0xb2, 0xf0, 0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10,
            66, 65,     // Object ID (BA)
            1,          // Instance Count
            0x00        // Flags
        })

        //
        // Collection control method. If Arg0 is not zero then collection for
        // the data block is enabled. If Arg0 is zero then collection is
        // disabled. If this method does not exist then it is assumed that
        // collection control is not required. Collection control is only
        // useful when collection of the data block causes overhead.
        Method(WCAA, 1)
        {
            if (LEqual(Arg0, Zero)) {
                // Disable collection of data
            } else {
                // Enable collection of data
            }
        }

        //
        // Query method for data block AA. Arg0 has the data block instance
        // index.
        Method(WQAA, 1)
        {
            if (LEqual(Arg0, Zero)) {
                // Query is for first instance of data block
                return(0x10)
            } else {
                // Query is for second instance of data block
                return(0x20)
            }
        }

        //
        // Set method for data block AA. If the data block is read-only then
        // this method does not need to exist. Arg0 has the instance index of
        // the data block, Arg1 has the new value for the data block.
        Method(WSAA, 2)
        {
            if (LEqual(Arg0, Zero)) {
                // Set is for first instance of data block
            } else {
                // Set is for second instance of data block
            }
        }

        //
        // Event enable/disable method. If the event does not need to be
        // armed/disarmed then this method is not needed. Arg0 is Zero if the
        // event is being disarmed or non-zero if the event is being armed.
        Name(ACEN, 0)
        Method(WEA0, 1)
        {
            Store(Arg0, ACEN)
        }

        //
        // _WED is called in response to an event launching ("firing") to gain
        // additional information about the event. Arg0 has the NotifyId for
        // the event launched.
        Method(_WED, 1)
        {
            if (LEqual(Arg0, 0xA0)) {
                Return(0x100)
            }
        }

        //
        // Evaluation of this method causes the event 0xA0 to be fired. Since
        // it is defined by the _WDG method it is callable via WMI. Arg0 has
        // the instance index and Arg1 has any input parameters.
Method(WMAB, 3) { // // If event was armed then launch it if (LEqual(ACEN, 1)) { Notify(AMW0, 0xa0) } Return(Arg1) } Name(WQBA, Buffer(926) { 0x46, 0x4f, 0x4d, 0x42, 0x01, 0x00, 0x00, 0x00, 0x8e, 0x03, 0x00, 0x00, 0xf6, 0x0f, 0x00, 0x00, 0x44, 0x53, 0x00, 0x01, 0x1a, 0x7d, 0xda, 0x54, 0x98, 0xdd, 0x87, 0x00, 0x01, 0x06, 0x18, 0x42, 0x10, 0x0b, 0x10, 0x0a, 0x0b, 0x21, 0x02, 0xcb, 0x82, 0x50, 0x3c, 0x18, 0x14, 0xa0, 0x25, 0x41, 0xc8, 0x05, 0x14, 0x55, 0x02, 0x21, 0xc3, 0x02, 0x14, 0x0b, 0x70, 0x2e, 0x40, 0xba, 0x00, 0xe5, 0x28, 0x72, 0x0c, 0x22, 0x82, 0xfd, 0xfb, 0x07, 0xc1, 0x90, 0x02, 0x08, 0x29, 0x84, 0x90, 0x08, 0x58, 0x2a, 0x04, 0x8d, 0x10, 0xf4, 0x2b, 0x00, 0xa1, 0x43, 0x01, 0x32, 0x05, 0x18, 0x14, 0xe0, 0x14, 0x41, 0x04, 0x41, 0x62, 0x17, 0x2e, 0xc0, 0x34, 0x8c, 0x06, 0xd0, 0x36, 0x8a, 0x64, 0x0b, 0xb0, 0x0c, 0x2e, 0x98, 0xa3, 0x08, 0x92, 0xa0, 0xc6, 0x09, 0xa0, 0xc4, 0x4c, 0x00, 0xa5, 0x13, 0x5c, 0x36, 0x05, 0x58, 0xc4, 0x96, 0x50, 0x14, 0x0d, 0x22, 0x4a, 0x82, 0x13, 0xea, 0x1b, 0x41, 0x13, 0x2a, 0x57, 0x80, 0x64, 0x78, 0x69, 0x1e, 0x81, 0xac, 0xcf, 0x41, 0x93, 0xf2, 0x04, 0xb8, 0x9a, 0x05, 0x7a, 0x8c, 0x34, 0xff, 0x30, 0x41, 0x99, 0x14, 0x43, 0x0e, 0x20, 0x24, 0x71, 0x98, 0xa0, 0x9d, 0x59, 0xed, 0x18, 0xd2, 0x3d, 0x07, 0x32, 0x4d, 0x60, 0x21, 0x70, 0x9e, 0xb8, 0x19, 0xa0, 0xf0, 0x5b, 0x1d, 0x80, 0xe0, 0x2b, 0x1d, 0x15, 0xd2, 0xeb, 0x34, 0x64, 0x72, 0x46, 0x48, 0xf8, 0xff, 0x7f, 0x02, 0x26, 0xe3, 0xb7, 0x60, 0x02, 0xa5, 0xd9, 0xb2, 0x82, 0x4b, 0x80, 0xc1, 0x68, 0x00, 0x91, 0xa2, 0x69, 0xa3, 0xe6, 0xea, 0xf9, 0x36, 0x8f, 0xaf, 0x59, 0x7a, 0x9e, 0x47, 0x7a, 0x34, 0x56, 0x36, 0x05, 0xd4, 0xf8, 0x3d, 0x9d, 0x93, 0xf3, 0x4c, 0x02, 0x1e, 0x9c, 0x61, 0x4e, 0x87, 0x83, 0xf1, 0xb1, 0xb1, 0x51, 0x70, 0x74, 0x03, 0xb2, 0x31, 0x38, 0xc6, 0xb0, 0xd1, 0x73, 0x39, 0x81, 0x47, 0x82, 0x43, 0x89, 0x7e, 0x0e, 0x6f, 0x00, 0x47, 0x17, 0xe3, 0x04, 0xce, 0x27, 0xc1, 0x61, 0x06, 0x39, 0xe3, 0x33, 0xf4, 0x44, 0x2c, 0x68, 0xd6, 0x02, 0x0a, 0x62, 0xa4, 0x58, 0xa7, 0xf5, 0x7c, 0x10, 
0x8b, 0x41, 0x05, 0x8b, 0x11, 0xdb, 0x50, 0x87, 0x60, 0x18, 0x8b, 0x46, 0x11, 0xc8, 0x49, 0x3c, 0x49, 0x30, 0x94, 0x40, 0x51, 0x0c, 0x12, 0xda, 0xc3, 0x36, 0x92, 0x81, 0xcf, 0xdb, 0x20, 0xc7, 0x84, 0x51, 0x01, 0x21, 0xcf, 0xe3, 0xd0, 0x28, 0x4d, 0xd0, 0xfd, 0x29, 0x40, 0x37, 0x8b, 0x08, 0x67, 0x54, 0xd8, 0x44, 0x64, 0x6d, 0x02, 0xb2, 0x25, 0x40, 0x1c, 0xbe, 0x40, 0x1a, 0x43, 0x11, 0x44, 0x84, 0x98, 0x51, 0x8c, 0x19, 0x30, 0x82, 0x51, 0x0e, 0xa6, 0x39, 0x10, 0x69, 0x13, 0x30, 0xf6, 0x20, 0xd1, 0x62, 0x31, 0x04, 0xdb, 0x9f, 0x83, 0x30, 0x0e, 0x05, 0xa3, 0x03, 0x42, 0xe7, 0x84, 0xc3, 0x3b, 0x30, 0x9f, 0x1e, 0x4c, 0x70, 0xda, 0xcf, 0x07, 0xaf, 0x0b, 0x21, 0x8b, 0x17, 0x20, 0x0d, 0x43, 0xf8, 0x09, 0x6a, 0x7d, 0x51, 0xe8, 0x5a, 0xe0, 0x34, 0xe0, 0xa8, 0xeb, 0x82, 0x6f, 0x01, 0xbe, 0x01, 0x9c, 0xe0, 0xe3, 0x85, 0xf1, 0x83, 0x1c, 0xc1, 0x01, 0x3c, 0x44, 0xbc, 0x1a, 0x78, 0x08, 0x9e, 0xc3, 0xfb, 0x05, 0x3b, 0x0f, 0x60, 0xff, 0xff, 0x04, 0x5d, 0xe3, 0xe9, 0x92, 0x70, 0x02, 0x96, 0x83, 0x86, 0x1a, 0xac, 0x2f, 0x00, 0x27, 0xe9, 0xc1, 0x1a, 0xae, 0xae, 0xd3, 0x06, 0x7a, 0xba, 0xa7, 0x72, 0x5a, 0xa5, 0x0a, 0x30, 0x7b, 0x94, 0x20, 0x04, 0xcf, 0x1e, 0x6c, 0xde, 0x67, 0x73, 0xe6, 0x09, 0x9e, 0x14, 0x3c, 0x05, 0x3e, 0x2d, 0xcf, 0xd2, 0x97, 0x0e, 0x5f, 0x09, 0x7c, 0x9f, 0x30, 0x41, 0xf4, 0x27, 0x17, 0x36, 0x1a, 0xb8, 0xc3, 0xc6, 0x8d, 0x06, 0xce, 0xe5, 0xe0, 0xb1, 0xc3, 0x33, 0xf7, 0x5c, 0x4d, 0x50, 0xf3, 0xe5, 0x42, 0x4e, 0x66, 0x83, 0xd2, 0x03, 0xa2, 0x01, 0x3f, 0x34, 0x60, 0xd0, 0x1f, 0x19, 0xb8, 0xc8, 0x8b, 0x02, 0x95, 0x86, 0xac, 0xbf, 0x86, 0x45, 0x8d, 0x9b, 0x12, 0x58, 0xca, 0xa1, 0x82, 0xdc, 0x33, 0x7c, 0x9e, 0x38, 0x8c, 0x57, 0x00, 0xcf, 0xe6, 0xa0, 0x7c, 0x73, 0x71, 0xba, 0x7b, 0x05, 0x68, 0x66, 0x83, 0xbb, 0x51, 0x80, 0x05, 0xc3, 0xd7, 0x03, 0xdf, 0x30, 0xd8, 0xf1, 0xc3, 0xd7, 0x0c, 0x36, 0x24, 0x83, 0x45, 0x89, 0x14, 0x9b, 0x4d, 0xca, 0x03, 0xc0, 0xe0, 0xbd, 0xd7, 0xf8, 0x70, 0x61, 0x48, 0x9f, 0x31, 0xe0, 0x1e, 0x05, 0xe0, 0xfd, 0xff, 0xcf, 0x09, 0xe0, 0xb8, 0x6d, 0xf8, 
0x2a, 0x62, 0x67, 0xf7, 0x0b, 0x5d, 0x6f, 0xb0, 0xf7, 0x1d, 0x78, 0xf8, 0x87, 0x85, 0xbb, 0x0b, 0x30, 0xb0, 0x13, 0xc5, 0x1c, 0x78, 0x80, 0xc7, 0x64, 0x1e, 0x78, 0xc0, 0x75, 0x96, 0x82, 0x3d, 0x04, 0xae, 0xfa, 0xc0, 0x83, 0xca, 0xf1, 0x6a, 0xa0, 0x67, 0x1e, 0xc0, 0xec, 0xff, 0xff, 0xcc, 0x03, 0x8c, 0xe0, 0x9f, 0x79, 0x80, 0x6b, 0xf4, 0x6b, 0x81, 0xde, 0x57, 0x3e, 0xf3, 0x00, 0x7c, 0x50, 0x79, 0x33, 0x01, 0xcd, 0xff, 0xff, 0x66, 0x02, 0xe3, 0xe0, 0xe0, 0x83, 0x88, 0xaf, 0x32, 0x3e, 0x11, 0x02, 0x93, 0xab, 0x09, 0x70, 0x09, 0x79, 0x27, 0xa2, 0x01, 0x07, 0x41, 0xaf, 0x01, 0x5c, 0x0b, 0x88, 0x66, 0xc8, 0xa6, 0x89, 0x25, 0x98, 0xe5, 0x22, 0x40, 0xef, 0x8a, 0x3e, 0x2a, 0xf1, 0x31, 0xfa, 0xa8, 0xc4, 0x70, 0xdf, 0x85, 0x8c, 0x7b, 0x7a, 0x67, 0xf7, 0xac, 0x84, 0xb9, 0x04, 0xbc, 0x8f, 0x80, 0x65, 0xf2, 0xf8, 0xd3, 0x07, 0x47, 0xf4, 0x85, 0xc1, 0x77, 0x23, 0x78, 0x04, 0xd5, 0x5f, 0x65, 0xa8, 0xfe, 0xbd, 0x48, 0x2f, 0x0c, 0xea, 0x2a, 0x03, 0x5c, 0xff, 0xff, 0x57, 0x19, 0x36, 0xc8, 0x63, 0x05, 0xcb, 0xf9, 0x11, 0x33, 0xc7, 0xd3, 0x8c, 0xe2, 0xa9, 0x78, 0xb8, 0xec, 0x62, 0x65, 0xef, 0x53, 0x25, 0xc7, 0x17, 0x5f, 0xab, 0xf0, 0x20, 0x8f, 0x31, 0xbe, 0xc3, 0x80, 0x71, 0x04, 0xef, 0x30, 0xc0, 0x35, 0xf0, 0xcb, 0x41, 0xd7, 0x40, 0xc0, 0xf6, 0xff, 0xff, 0x0e, 0x03, 0x96, 0xe0, 0x10, 0xba, 0x06, 0xe2, 0x64, 0x1c, 0x5b, 0xc8, 0x4d, 0xca, 0x53, 0x36, 0xc1, 0xa0, 0x13, 0xa6, 0x47, 0x40, 0xf0, 0xdc, 0x2b, 0x7c, 0x98, 0x00, 0xc7, 0x48, 0x30, 0xe7, 0x08, 0x9f, 0x1f, 0x7c, 0x7d, 0x78, 0x93, 0x60, 0x37, 0x0e, 0xc3, 0xf8, 0xca, 0x07, 0x0f, 0xf2, 0x15, 0x8b, 0x5d, 0x26, 0xf8, 0x49, 0x0f, 0x6c, 0x17, 0x65, 0x70, 0xdc, 0x7f, 0xe0, 0x5c, 0x94, 0x81, 0x11, 0xee, 0xe3, 0x0f, 0xf8, 0x0f, 0xcb, 0x70, 0xfe, 0xff }) } This sample MOF file complements the previous ASL code. 
    [abstract]
    class AcpiSampleBase
    {
    };

    [abstract]
    class AcpiSampleEvent : WMIEvent
    {
    };

    [Dynamic, Provider("WMIProv"),
     WMI,
     Description("Counter for number of times the case has been hit"),
     GUID("{ABBC0f5a-8ea1-11d1-A000-c90629100000}"),
     locale("MS\\0x409")]
    class MachineHitSensor : AcpiSampleBase
    {
        [key, read]
        string InstanceName;
        [read]
        Boolean Active;

        [WmiDataId(1),
         Description("Number of times the case sensor determined that the machine has been hit"),
         read]
        uint32 NumberTimesHit;
    };

    [Dynamic, Provider("WMIProv"),
     WMI,
     Description("Counter for number of times the case has been hit"),
     GUID("{ABBC0f5b-8ea1-11d1-A000-c90629100000}"),
     locale("MS\\0x409")]
    class MachineHitSimulate : AcpiSampleBase
    {
        [key, read]
        string InstanceName;
        [read]
        Boolean Active;

        [WmiMethodId(1),
         Description("Simulate hitting the machine")]
        void HitMachine();
    };

    [Dynamic, Provider("WMIProv"),
     WMI,
     Description("Event generated when machine is hit"),
     GUID("{ABBC0f5c-8ea1-11d1-A000-c90629100000}"),
     locale("MS\\0x409")]
    class MachineHitEvent : AcpiSampleEvent
    {
        [key, read]
        string InstanceName;
        [read]
        Boolean Active;

        [WmiDataId(1),
         Description("Force with which the machine was hit")]
        uint32 Force;
    };

Appendix C - ASL Sample Code

    Device(WMI1)
    {
        Name(_HID, EISAID("PNP0Cxx"))  // Plug and Play ID for mapping driver (TBD)
        Name(_UID, 1)

        //
        // Data block and WMI method to Object ID mappings
        Name(_WDG, Buffer() {
            // Object AA {ABBC0F5A-8EA1-11d1-A53F-00A0C9062910}
            0xABBC0F5A, 0x8ea1, 0x11d1, 0xa5, 0x3f,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10,
            'A', 'A',   // Object ID
            1,          // Instance Count
            0x04,       // Flags (WMIACPI_REGFLAG_STRING)

            // Object AB {ABBC0F5B-8EA1-11d1-A53F-00A0C9062910}
            0xABBC0F5B, 0x8ea1, 0x11d1, 0xa5, 0x3f,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10,
            'A', 'B',   // Object ID
            2,          // Instance Count
            0x01,       // Flags (WMIACPI_REGFLAG_EXPENSIVE)

            // Object AC {ABBC0F5C-8EA1-11d1-A53F-00A0C9062910}
            0xABBC0F5C, 0x8ea1, 0x11d1, 0xa5, 0x3f,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10,
            'A', 'C',   // Object ID
            1,          // Instance Count
            0x06,       // Flags (WMIACPI_REGFLAG_METHOD | WMIACPI_REGFLAG_STRING)

            // Event 0x80 {ABBC0F5D-8EA1-11d1-A53F-00A0C9062910}
            0xABBC0F5D, 0x8ea1, 0x11d1, 0xa5, 0x3f,
            0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10,
            0x80,       // Notification value
            0,          // Reserved
            0,          // Instance Count (Not meaningful for events)
            0x0D        // Flags (WMIACPI_REGFLAG_EXPENSIVE | WMIACPI_REGFLAG_STRING | WMIACPI_REGFLAG_EVENT)
        })

        //
        // IO ports for configuration of Object AB
        OperationRegion(CAB0, SystemIo, 0xf8, 1)   // Instance 0
        OperationRegion(CAB1, SystemIo, 0xfc, 1)   // Instance 1
        OperationRegion(CABC, SystemIo, 0xf4, 1)   // Enable/Disable Collection

        Method(WQAB, 1)
        {
            //
            // Read value from IO space for instance
            if (LEqual(Arg0, Zero)) {
                Store(CAB0, Local0)
            } else {
                Store(CAB1, Local0)
            }

            //
            // If any of the lower 3 bits are set then return TRUE, else FALSE
            if (And(Local0, 7)) {
                Return(0x00000001)
            } else {
                Return(0x00000000)
            }
        }

        //
        // Set the values for object AB
        Method(WSAB, 2)
        {
            if (LEqual(Arg0, Zero)) {
                // Change contents of first instance of data block to
                // values in buffer in Arg1
            } else {
                // Change contents of second instance of data block to
                // values in buffer in Arg1
            }
        }

        //
        // Collection notification for object AB
        Method(WCAB, 1)
        {
            if (LEqual(Arg0, 1)) {
                Store(One, CABC)    // If enable, write all 1's to port
            } else {
                Store(Zero, CABC)   // If disable, write all 0's to port
            }
        }

        //
        // Storage for maintaining values for the AA method.
        Name(STAA, "XYZZY")
        Method(WQAA, 1)
        {
            //
            // Only one instance for AA so no need to check arg
            return(STAA)
        }

        // Data block mapped to Object AA does not support set so it does not
        // need a WSAA method

        //
        // Storage for maintaining state of flag that determines whether to
        // fire (launch) the event or not. By default firing is disabled.
        Name(FIRE, 0)

        //
        // This method will reset the values for AA and send a notification of
        // its occurrence
        Method(WMAC, 3)
        {
            Store(STAA, Local0)
            Store("XYZZY", STAA)
            if (LEqual(FIRE, 1)) {
                Notify(WMI1, 0x80)
            }
            Return(Local0)
        }

        //
        // Additional information about event
        Method(_WED, 1)
        {
            return("Fired")
        }

        //
        // Event 0x80 Enable/Disable control method
        Method(WE80, 1)
        {
            Store(Arg0, FIRE)
        }
    }
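Each _WDG entry in the samples above packs a 16-byte GUID, a 2-byte object ID (or notification ID plus a reserved byte), a 1-byte instance count and a 1-byte flags field into 20 bytes. As an illustration of that layout, here is a hypothetical parser sketch; the function and constant names are invented for this example and are not part of any Microsoft API:

```python
import struct

# Flag values as used in the samples above
WMIACPI_REGFLAG_EXPENSIVE = 0x01
WMIACPI_REGFLAG_METHOD    = 0x02
WMIACPI_REGFLAG_STRING    = 0x04
WMIACPI_REGFLAG_EVENT     = 0x08

def parse_wdg(buf):
    """Split a raw _WDG buffer into its 20-byte entries."""
    entries = []
    for off in range(0, len(buf), 20):
        guid, oid, count, flags = struct.unpack_from("<16s2sBB", buf, off)
        if flags & WMIACPI_REGFLAG_EVENT:
            ident = "event 0x%02x" % oid[0]      # first byte is the notify value
        else:
            ident = "object %s" % oid.decode("ascii")
        entries.append((guid, ident, count, flags))
    return entries

# First entry from the first sample: object BA, 3 instances, expensive
raw = bytes([0x6a, 0x0f, 0xBC, 0xAB, 0xa1, 0x8e, 0xd1, 0x11,
             0x00, 0xa0, 0xc9, 0x06, 0x29, 0x10, 0, 0,
             66, 65, 3, 0x01])
print(parse_wdg(raw)[0][1:])  # → ('object BA', 3, 1)
```

The object ID "BA" is exactly what maps a data block to its WQBA/WSBA handler methods in the ASL.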
We are given N jobs numbered 1 to N. For each job i, let Ti denote the number of days required to complete it. For each day of delay before starting work on job i, a loss of Li is incurred. We are required to find a sequence in which to complete the jobs so that the overall loss is minimized. We can only work on one job at a time. If multiple such solutions are possible, we are required to give the lexicographically least permutation (i.e. the earliest in dictionary order).

Examples:

    Input  : L = {3, 1, 2, 4} and T = {4, 1000, 2, 5}
    Output : 3, 4, 1, 2
    Explanation: We should first complete job 3, then jobs 4, 1, 2 respectively.

    Input  : L = {1, 2, 3, 5, 6}
             T = {2, 4, 1, 3, 2}
    Output : 3, 5, 4, 1, 2
    Explanation: We should complete jobs 3, 5, 4, 1 and then 2, in this order.

Let us consider two extreme cases, and we shall deduce the general-case solution from them.

1. All jobs take the same time to finish, i.e. Ti = k for all i. Since all jobs take the same time to finish, we should first select jobs which have a large loss (Li). We should select the jobs with the highest losses and finish them as early as possible. Thus this is a greedy algorithm: sort the jobs in descending order based on Li alone.

2. All jobs have the same penalty. Since all jobs have the same penalty, we will first do those jobs which take the least time to finish. This will minimize the total delay, and hence also the total loss incurred. This is also a greedy algorithm: sort the jobs in ascending order based on Ti, or equivalently in descending order of 1/Ti.

From the above cases, we can easily see that we should sort the jobs not on the basis of Li or Ti alone. Instead, we should sort the jobs according to the ratio Li/Ti, in descending order.
We can get the lexicographically smallest permutation of jobs if we perform a stable sort on the jobs. An example of a stable sort is merge sort. To get the most accurate result, avoid dividing Li by Ti; instead, compare the two ratios as fractions. To compare a/b and c/d, compare ad and bc.

    // CPP program to minimize loss using stable sort.
    #include <iostream>
    #include <algorithm>
    #include <vector>
    using namespace std;

    #define all(c) c.begin(), c.end()

    // Each job is represented as a pair of int and pair.
    // This is done to provide implementation simplicity
    // so that we can use functions provided by algorithm
    // header
    typedef pair<int, pair<int, int> > job;

    // compare function is given so that we can specify
    // how to compare a pair of jobs
    bool cmp_pair(job a, job b)
    {
        int a_Li, a_Ti, b_Li, b_Ti;
        a_Li = a.second.first;
        a_Ti = a.second.second;
        b_Li = b.second.first;
        b_Ti = b.second.second;

        // To compare a/b and c/d, compare ad and bc
        return (a_Li * b_Ti) > (b_Li * a_Ti);
    }

    void printOptimal(int L[], int T[], int N)
    {
        vector<job> list;  // (Job Index, (Li, Ti))
        for (int i = 0; i < N; i++) {
            int t = T[i];
            int l = L[i];

            // Each element is: (Job Index, (Li, Ti))
            list.push_back(make_pair(i + 1, make_pair(l, t)));
        }

        stable_sort(all(list), cmp_pair);

        // traverse the list and print job numbers
        cout << "Job numbers in optimal sequence are\n";
        for (int i = 0; i < N; i++)
            cout << list[i].first << " ";
    }

    // Driver code
    int main()
    {
        int L[] = { 1, 2, 3, 5, 6 };
        int T[] = { 2, 4, 1, 3, 2 };
        int N = sizeof(L) / sizeof(L[0]);
        printOptimal(L, T, N);
        return 0;
    }

Output:

    Job numbers in optimal sequence are
    3 5 4 1 2

Time Complexity: O(N log N)
Space Complexity: O(N)
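The same greedy rule can be sketched compactly in Python. This is not from the article; it uses exact `Fraction` keys instead of cross-multiplication, and relies on the fact that Python's sort is stable (and that `reverse=True` does not reorder equal keys), which yields the lexicographically smallest optimal sequence:

```python
from fractions import Fraction

def optimal_order(L, T):
    """Return 1-based job indices sorted by Li/Ti in descending order.

    Fraction avoids floating-point error; because the sort is stable and
    reverse=True keeps ties in their original order, equal ratios stay in
    index order, giving the lexicographically least optimal permutation.
    """
    return sorted(range(1, len(L) + 1),
                  key=lambda i: Fraction(L[i - 1], T[i - 1]),
                  reverse=True)

print(optimal_order([1, 2, 3, 5, 6], [2, 4, 1, 3, 2]))  # → [3, 5, 4, 1, 2]
```

Both examples from the article check out: jobs 1 and 2 in the second input share the ratio 1/2, and stability keeps job 1 ahead of job 2.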
Django is an awesome web framework: very mature, it aims for simplicity, and best of all, it's fun to use. To make it even more fun, lettuce has built-in support for Django.

Pick up any Django project, and add lettuce.django to its settings.py configuration file:

    INSTALLED_APPS = (
        'django.contrib.auth',
        'django.contrib.admin',
        # ... other apps here ...
        'my_app',
        'foobar',
        'another_app',
        'lettuce.django',  # this guy will do the job :)
    )

Considering the configuration above, let's say we want to write tests for the my_app Django application. Lettuce will look for a features folder inside every installed app:

    /home/user/projects/djangoproject
    | settings.py
    | manage.py
    | urls.py
    | my_app
      | features
        - index.feature
        - index.py
    | foobar
      | features
        - carrots.feature
        - foobar-steps.py
    | another_app
      | features
        - first.feature
        - second.feature
        - many_steps.py

@index.feature:

    Feature: Rocking with lettuce and django
      Scenario: Simple Hello World
        Given I access the url "/"
        Then I see the header "Hello World"
      Scenario: Hello + capitalized name
        Given I access the url "/some-name"
        Then I see the header "Hello Some Name"

@index-steps.py:

    from lettuce import *
    from lxml import html
    from django.test.client import Client
    from nose.tools import assert_equals

    @before.all
    def set_browser():
        world.browser = Client()

    @step(r'I access the url "(.*)"')
    def access_url(step, url):
        response = world.browser.get(url)
        world.dom = html.fromstring(response.content)

    @step(r'I see the header "(.*)"')
    def see_header(step, text):
        header = world.dom.cssselect('h1')[0]
        assert header.text == text

Once you install the lettuce.django app, the command harvest will be available:

    user@machine:~projects/djangoproject $ python manage.py harvest

The harvest command executes the django.test.utils.setup_test_environment function before it starts up the Django server. Typically, invoking this function would configure Django to use the locmem in-memory email backend.
However, Lettuce uses a custom Django email backend to support retrieving email from Lettuce test scripts. See Checking email for more details.

The harvest command accepts a path to feature files, in order to run only the features you want. Example:

    user@machine:~projects/djangoproject $ python manage.py harvest path/to/my-test.feature

If you want to write acceptance tests that run with web browsers, you can use tools like twill, selenium, webdriver and windmill.

Lettuce cleverly runs an instance of the built-in Django HTTP server in the background. It tries to bind the HTTP server at localhost:8000, but if the port is busy it keeps trying higher ports: 8001, 8002 and so on, until it reaches the maximum port number 65535.

Note: You can override the default starting port from "8000" to any other port you want. To do so, refer to "running the HTTP server in other port than 8000" below.

So you can use browser-based tools such as those listed above to access Django. As the Django HTTP server can be running on any port within the range 8000 - 65535, it could be hard to figure out the correct URL for your project, right? Wrong! Lettuce is here for you. Within your steps you can use the django_url utility function:

    from lettuce import step, world
    from lettuce.django import django_url

    @step(r'Given I navigate to "(.*)"')
    def navigate_to_url(step, url):
        full_url = django_url(url)
        world.browser.get(full_url)

It prepends a Django-internal URL with the HTTP server address. In other words, if lettuce binds the HTTP server to localhost:9090 and you call django_url with "/admin/login":

    from lettuce.django import django_url
    django_url("/admin/login")

It returns:

    "http://localhost:9090/admin/login"

At this point you probably know how terrain.py works, and it also works with Django projects. You can set up the environment and things like that within a terrain.py file located at the root of your Django project.
Taking the very first example of this documentation page, your Django project layout would look like this:

    /home/user/projects/djangoproject
    | settings.py
    | manage.py
    | urls.py
    | terrain.py
    | my_app
      | features
        - index.feature
        - index.py
    | foobar
      | features
        - carrots.feature
        - foobar-steps.py
    | another_app
      | features
        - first.feature
        - second.feature
        - many_steps.py

Notice the terrain.py file at the project root; there you can populate the world and organize your features and steps with it :)

When you run your Django server under lettuce, emails sent by your server do not get transmitted over the Internet. Instead, these emails are added to a multiprocessing.Queue object at lettuce.django.mail.queue. Example:

    from lettuce import step
    from lettuce.django import mail
    from nose.tools import assert_equals

    @step(u'an email is sent to "([^"]*?)" with subject "([^"]*)"')
    def email_sent(step, to, subject):
        message = mail.queue.get(True, timeout=5)
        assert_equals(message.subject, subject)
        assert_equals(message.recipients(), [to])
python manage.py harvest --debug-mode python manage.py harvest -d If you want to use a test database by default, instead of a live database, with your test server you can specify the -T flag or set the following configuration variable in settings.py. LETTUCE_USE_TEST_DATABASE = True You can also specify the index of the scenarios you want to run through the command line, to do so, run with --scenarios or -s options followed by the scenario numbers separated by commas. For example, let’s say you want to run the scenarios 4, 7, 8 and 10: python manage.py harvest --scenarios=4,7,8,10 python manage.py harvest -s 4,7,8,10 During your development workflow you may face two situations: Lettuce takes a comma-separated list of app names to run tests against. For example, the command below would run ONLY the tests within the apps myapp and foobar: python manage.py harvest --apps=myapp,foobar # or python manage.py harvest --a myapp,foobar You can also specify it at settings.py so that you won’t need to type the same command-line parameters all the time: LETTUCE_APPS = ( 'myapp', 'foobar', ) INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.admin', 'my_app', 'foobar', 'another_app', 'lettuce.django', ) Lettuce takes a comma-separated list of app names which tests must NOT be ran. For example, the command below would run ALL the tests BUT those within the apps another_app and foobar: python manage.py harvest --avoid-apps=another_app,foobar You can also specify it at settings.py so that you won’t need to type the same command-line parameters all the time: LETTUCE_AVOID_APPS = ( 'another_app', 'foobar', ) INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.admin', 'my_app', 'foobar', 'another_app', 'lettuce.django', )
http://lettuce.it/recipes/django-lxml.html
CC-MAIN-2018-26
refinedweb
1,205
50.84
My task is to port a NXP SPI driver delivered with Keil RL-ARM to my Kinetis K60 system. Finlay I want to talk to SD cards in SPI mode. I do this step by step. My next task is the Send routine. RL-ARM User's Guide: spi.SendBuf Library Routine This is the original NXP Send: static BOOL SendBuf (U8 *buf, U32 sz) { /* Send buffer to SPI interface. */ U32 i; for (i = 0; i < sz; i++) { SSPDR = buf[i]; /* Wait if Tx FIFO is full. */ while (!(SSPSR & TNF)); SSPDR; } /* Wait until Tx finished, drain Rx FIFO. */ while (SSPSR & (BSY | RNE)) { SSPDR; } return (__TRUE); } This my first trial for the Kinetis SendBuf: /*--------------------------- SendBuf ---------------------------------------*/ static BOOL SendBuf (U8 *buf, U32 sz) {/* Send buffer to SPI interface. */ U32 i; for (i = 0; i < sz; i++) { SPI2->PUSHR = (SPI_PUSHR_PCS(2) || buf[i]); while ((SPI2->SR & SPI_SR_TFFF_MASK) == 0) {}; while ((SPI2->SR & SPI_SR_RFDF_MASK) == 0) {}; } SPI2->SR = (SPI_SR_TFFF_MASK || (SPI2->SR & SPI_SR_RFDF_MASK)); return (__TRUE); } Do you think this is applicable for SD card access via SPI? I will discuss the other routines in other threads because of a better overview. I will post the final driver, because Keil provides only samples for SDHC mode for memory cards for the Kinetis system. Thank you
https://community.nxp.com/thread/309269
CC-MAIN-2018-22
refinedweb
207
71.85
On Wed, 7 Feb 2001, Berin Loritsch wrote: > This. you rock the party for taking this on. >. i'd almost concur, but on the xsp-dev list, matt requested a move to a cocoon-agnostic namespace for shared xsp logicsheets (he has an impl of esql and is discussing doing some others). on the one hand, he's got a point, maybe something like this would be better: on the other hand, i'd hate to make people change namespaces again. boy, it sure seems like the namespace working group could have antipicated this problem and written in some support for namespace migration... well, c'e la vie. anyway, i have no strong opinion one way or the other, so i'd let people vote. do we stick with this scheme or generalize to > After dependancies on prefixes are resolved, the expected > results should be that if the stylesheet creates elements, > they should belong to the associated namespace, using the > default prefix. i will love you forever when you remove the dependencies on namespace prefixes. - donald
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200102.mbox/%3CPine.LNX.4.30.0102072046450.1203-100000@localhost.localdomain%3E
CC-MAIN-2016-22
refinedweb
176
69.31
QuickcheckEdit Consider the following function: getList = find 5 where find 0 = return [] find n = do ch <- getChar if ch `elem` ['a'..'e'] then do tl <- find (n-1) return (ch : tl) else find n How would we effectively test this function in Haskell? The solution we turn to is refactoring and QuickCheck. Keeping things pureEdit = fmap take5 getContents -- The actual worker take5 :: [Char] -> [Char] take5 = take 5 . filter (`elem` ['a'..'e']) Testing with QuickCheckEdit (it's nice that we can use the QuickCheck testing framework directly from the Haskell REPL).! Testing take5Edit The first step to testing with QuickCheck is to work out some properties that are true of the function, for all inputs. That is, we need to find invariants. A simple invariant might be: no more than 5 characters long. So let's weaken the property a bit: That is, take5 returns a string of at most 5 characters long. Let's test this: *A> quickCheck (\s -> length (take5 s) <= 5) OK, passed 100 tests. Good! Another propertyEditEdit One issue with the default QuickCheck configuration, when testing [Char], is that the standard 100 tests isn't enough for our situation. In fact, QuickCheck never generates a String greater than 5 characters long, when using the supplied Arbitrary] More information on QuickCheckEdit HUnitEdit Sometimes it is easier to give an example for a test than.
http://en.m.wikibooks.org/wiki/Haskell/Testing
CC-MAIN-2014-10
refinedweb
226
71.34
I know the issue of circular imports in python has come up many times before and I have read these discussions. The comment that is made repeatedly in these discussions is that a circular import is a sign of a bad design and the code should be reorganised to avoid the circular import. Could someone tell me how to avoid a circular import in this situation?: I have two classes and I want each class to have a constructor (method) which takes an instance of the other class and returns an instance of the class. More specifically, one class is mutable and one is immutable. The immutable class is needed for hashing, comparing and so on. The mutable class is needed to do things too. This is similar to sets and frozensets or to lists and tuples. I could put both class definitions in the same module. Are there any other suggestions? A toy example would be class A which has an attribute which is a list and class B which has an attribute which is a tuple. Then class A has a method which takes an instance of class B and returns an instance of class A (by converting the tuple to a list) and similarly class B has a method which takes an instance of class A and returns an instance of class B (by converting the list to a tuple). Only import the module, don't import from the module: Consider a.py: import b class A: def bar(self): return b.B() and b.py: import a class B: def bar(self): return a.A() This works perfectly fine.
https://codedump.io/share/mFL6hKfdVgu4/1/how-to-avoid-circular-imports-in-python
CC-MAIN-2017-04
refinedweb
273
72.56
Outline org.netbeans.swing.outline.jar Wow- "the fact that the JTree-as-cell-renderer design works at all is just an accident of how BasicLookAndFeel's TreeUI implementation was designed" I wonder how many more such "accidents" are waiting to happen? -JohnR Posted by: johnreynolds on June 03, 2008 at 04:59 AM actually, it's 'all hail tim and david!' i just moved the source code from one repository to another... standa Posted by: saubrecht on June 03, 2008 at 05:11 AM Hi Tim, This sounds great. Is there a precompilled jar I could download or should I build form the source? Posted by: aberrant on June 03, 2008 at 09:47 AM aberrant: Downloading the sources would be best so you could debug into the code if needed, but you could also grab the precompiled binary JAR from NetBeans' Hudson build server. Posted by: tomwheeler on June 03, 2008 at 10:01 AM Aberrant: Today's daily NetBeans build should contain the relevant JAR file - easy to isolate and use and no dependencies on other NetBeans code. JohnR, wondering how many similar accidents are waiting to happen: Way too many. It's really easy to think you're depending on an API when actually you're depending on a detail of how that API was implemented. This is the stuff that makes careers for me and Jarda Tulach, teaching API design. I'm glad we're feeding our families, but it's a bit pathetic that the fairly simple degree of analysis needed to figure out if you're creating a big honking hole isn't a basic part of everyone's design process - I guess it's because most of the world hasn't figured out that they're designing APIs whether they like it or not... Posted by: timboudreau on June 03, 2008 at 10:49 AM Is there an easy way to use this with nodes and an ExplorerManager, or is org.openide.explorer.view.TreeTableView still the way to go? 
Posted by: tomwheeler on June 03, 2008 at 12:15 PM

Hi Tim, - great news - the link to the precompiled jar doesn't work - any chance of this component being given to the community for integration with a next Java release?
Posted by: f_beullens on June 04, 2008 at 05:40 AM

Your screenshot leaves a lot to be desired. You should update the article with a clean, large screenshot of what this new component looks like.
Posted by: cowwoc on June 04, 2008 at 06:16 AM

How does it compare to SwingLabs JXTreeTable - and for that matter why not use JXTreeTable?
Posted by: luano on June 04, 2008 at 07:55 AM

Have been using JXTreeTable for years, which also revolves around javax.swing.TreeModel and, one must assume, very similar UI delegates etc. to this org.netbeans.swing.Outline (not a very intuitive name) component. Is anyone aware of the differences between SwingLabs' JXTreeTable control and this Outline one? To a naïve consumer this seems to be somewhat duplicated effort, considering they are both sponsored by Sun.
Posted by: mrmorris on June 04, 2008 at 10:40 AM

mrmorris, luano: I think they both are very similar in one aspect: Both will never make it into the standard Java release - where such a widget would belong. In the extremely unlikely event that standard Java will get a tree widget, expect it to be a completely different, third design.
Posted by: ewin on June 04, 2008 at 10:56 AM

The reason is JXTreeTable is (or was at the time) based on the same fatally flawed design as the one Outline was written to replace. Using it would have simply been replacing one unfixable set of bugs with a new set we'd never be able to fix except by begging the maintainers of that to slap as many band-aids on their version as our old one had.
Posted by: timboudreau on June 04, 2008 at 11:04 AM

Wow, this is really great news, Tim. This has been a HUGE problem for me for years also. Any thoughts of contributing this to SwingX or distributing the Outline jar file separately?
It would be nice to have it in a more obvious/easier-to-get-to place than installing NetBeans and ripping a jar out. I guess you could also start a separate project as was done with your Wizard framework, but it really would be nice to have all this stuff in SwingX.
Posted by: reden on June 04, 2008 at 11:14 AM

how about an article on how to use it?
Posted by: dog on June 04, 2008 at 12:34 PM

>>how about an article on how to use it? see
Posted by: jancarel on June 05, 2008 at 03:39 AM

tomwheeler wrote: Downloading the sources would be best so you could debug into the code if needed
Is there any easy way to report (minor) bugs I have found?
Posted by: bitguru on June 09, 2008 at 10:55 AM

I guess I should clarify my previous comment. It's not that I want to report a bug in the "this doesn't work" sense, but minor code issues. For example, in this bit from Outline.java:

TableModel mdl = getModel();
if (mdl instanceof OutlineModel) {
    return (OutlineModel) getModel();

probably

return (OutlineModel) mdl;

was intended.
Posted by: bitguru on June 09, 2008 at 11:08 AM

There is a discussion on the SwingX project about this component. See details in here. Regards.
Posted by: hmichel on June 16, 2008 at 07:37 AM

Hey! I was at the NetBeans Deep Dive Session at Manila and I saw you demonstrate the iReport Plugin on NetBeans. I have tried it and successfully made reports. My question is, how do I tell a jButton on my project to display the report once it is pressed? I know that I have to put some code at the event handler of the jButton but I don't know what to code. I saw a screencast on how it is done, but it didn't show what code I needed to write to be able to make a database connection. Sorry, Java is new for me and I am really relying on the IDE to get things done. The demo that I saw of you making it was fast and looks easy enough. Kindly help please... Thanks!
Posted by: benhur99ph on June 30, 2008 at 11:19 PM

Hmm .. all references removed? Afraid of the competition ... ?
Grinning, of course. A preview into a new SwingX JXTreeTable - inspired by this good breakthrough - is discussed over at the SwingLabs forum. Feedback highly welcome! Jeanette
Posted by: kleopatra on July 09, 2008 at 06:25 AM

References removed? Don't know what you mean. I'm thrilled that some of this work is migrating into the SwingX tree table, where there is some hope that it will find its way one day into Swing itself! -Tim
Posted by: timboudreau on July 10, 2008 at 12:07 PM

The fact I was referring to is that all hyperlinks in all comments had been removed (probably due to the repeated spamming in one of them). This included the link under "SwingX" in hmichel's comment. Cheers, Jeanette
Posted by: kleopatra on July 11, 2008 at 12:08 AM

How can I edit a node by double-clicking on it?
Posted by: orenh on May 02, 2009 at 11:34 PM
http://weblogs.java.net/blog/timboudreau/archive/2008/06/egads_an_actual.html
crawl-002
refinedweb
1,254
69.31
Hello, We currently have a workflow transition that automates moving an issue from one project to another. The result is a newly created issue in the target project, and the old (original) issue is left in 'Moved' status with most of the fields cleared out. We currently are doing a manual bulk delete of all issues in this 'Moved' status each week, but would like to automate this if possible, either as a scheduled job or by using a listener. Is this possible with ScriptRunner?

Hello Blake. I would tackle your issue in a different way. I would do one last manual bulk delete, and then I would place a custom scripted function that deletes the issue when it is transitioned to the "Moved" status. So, in the workflow, in your transition to "Moved" place a post function that is a custom scripted function like so:

import com.atlassian.jira.bc.issue.IssueService
import com.atlassian.jira.component.ComponentAccessor

IssueService issueService = ComponentAccessor.getIssueService()
def userManager = ComponentAccessor.getUserManager()
String userkey = transientVars.userKey
def userDelete = userManager.getUserByKey(userkey)

log.debug("Delete issue $issue with user $userDelete")

if (issue) {
    def validationResult = issueService.validateDelete(userDelete, issue.id)
    if (validationResult.errorCollection.hasAnyErrors()) {
        log.warn("Can't delete issue")
    } else {
        issueService.delete(userDelete, validationResult)
    }
}

You could test if this works for you. If you would rather use that script manually every week, you can adapt the script you see above to take a list of issues from a project with a given state. If you can't come up with that list, I can provide some code for you. Cheers. Dyelamos

Hello Blake. Jira says that deleting an issue after a transition is an illegal operation, so you will be unable to use my script; however, you can adapt it to delete issues in bulk from a project. If you need help with that let me know, and I will give you a hand with that code. Cheers. Dyelamos

Thanks Dyelamos, I appreciate your response. What we're looking for though is a way to eliminate any manual intervention to delete all issues in the 'Moved' status. We can do it via bulk edit in the Jira UI without issue, but would just like to automate it so that the deletion happens automatically (either at the time an issue is transitioned to 'Moved' or on a scheduled nightly job).
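If running something outside Jira is acceptable, the weekly cleanup the asker describes could also be done as a scheduled (e.g. nightly cron) script against Jira's REST API rather than inside ScriptRunner. This is only a sketch: the base URL and auth header are placeholders, and it assumes Jira's standard /rest/api/2 search and issue resources are reachable:

```python
# Hypothetical scheduled cleanup: find every issue still in 'Moved'
# status and delete it via the Jira REST API. BASE_URL and AUTH_HEADER
# are placeholders to be filled in for a real instance.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://jira.example.com"          # placeholder
AUTH_HEADER = {"Authorization": "Basic ..."}   # placeholder credentials

def jql_for_moved(project_key):
    """Build the JQL that finds the leftover 'Moved' issues."""
    return 'project = "%s" AND status = "Moved"' % project_key

def find_moved_issue_keys(project_key):
    # Search returns issue keys only; paging is omitted for brevity.
    query = urllib.parse.urlencode(
        {"jql": jql_for_moved(project_key), "fields": "key", "maxResults": 1000})
    req = urllib.request.Request(
        BASE_URL + "/rest/api/2/search?" + query, headers=AUTH_HEADER)
    with urllib.request.urlopen(req) as resp:
        return [issue["key"] for issue in json.load(resp)["issues"]]

def delete_issue(key):
    req = urllib.request.Request(
        BASE_URL + "/rest/api/2/issue/" + key,
        headers=AUTH_HEADER, method="DELETE")
    urllib.request.urlopen(req).close()
```

A cron entry would then call find_moved_issue_keys and loop over delete_issue once a night, removing the manual bulk-edit step entirely.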
https://community.atlassian.com/t5/Marketplace-Apps-questions/Scheduled-job-or-listener-to-delete-all-issues-in-a-specific/qaq-p/599256
CC-MAIN-2018-17
refinedweb
414
55.84
Easy Error Tracking in Your Applications

Over and over again I see developers re-implementing error tracking, I've been there myself. One of the reasons for this, I think, is that many of the tracking tools out there add too much noise and are just cumbersome to use. In many cases the biggest problem is that you need error logging too late, meaning that you want the logging once the error has already occurred. It's of course clever to say that you should always think about potential errors from the start, because let's face it, we all write applications that may have unexpected exceptions. Another problem is that if we do decide to log errors in our applications, where do we store them and how do we collect the logs? Luckily there are tools out there that can help us on the way. One that I most recently came across is called Raygun. Raygun is a product from the company Mindscape, which has some very interesting products in their family.

Error handling just got awesome!

The punch line of Raygun is quoted above: a tool that makes error handling awesome. Let's clear something up right before we take a look at Raygun: there are multiple providers supplied for Raygun: JavaScript, .NET, Java, PHP and Cold Fusion. Didn't find the language you work with? Don't worry, there's a REST API for you RESTafarians!

So there are providers for Raygun, but what does it actually do? Imagine that you have your web application written in PHP, ASP.NET or just something that is using JavaScript. Now you want some centralized place where you can store errors in either of these applications, be it severe exceptions or just notices about something unexpected. If you've found yourself writing an error tracker where you just dump the stack trace and the exception message into a database, then this is certainly something for you. Imagine that your customer calls up and says that he recently got the yellow screen of death but doesn't know what he was doing or exactly what time it was.
Now imagine that you were to access your centralized error tracker and you'd have all of the information that you would need to find the error in the code base, including:

- Time of error
- How many times the current error has occurred
- Information about the system the user is using
- The exception message
- A stack trace

That is Raygun! A way to track your errors in a very easy way, and the presentation is just beautiful. The information you'll get out of each error report of course depends on the data that you supply Raygun with. Take a look at the REST API to get an idea of all the data that you possibly could supply Raygun with. Enough with what, let's look at the how!

Let's look at some code!

For this demo I'm going to set up two things: an ASP.NET MVC 4 Application and a Class Library that will simulate a data store where I can search for people. The web front will allow me to search for people inside my collection, and when I wrote this example Raygun actually helped me detect one of the errors I was getting; let's call this "TrackCeption". First of all let's look at the library. There's a very easy class that represents the person, it simply has a property called "Name" inside it.

public class Person
{
    public string Name { get; set; }
}

Secondly there's a class that handles the search requests, I call this RequestHandler.
To set this up we need to create a new list of people, in this case it's just going to be a static collection of people as you can see here:

private static IEnumerable<Person> _people;

public RequestHandler()
{
    _people = new Person[]
    {
        new Person { Name = "Filip" },
        new Person { Name = "Sofie" },
        new Person { Name = "Johan" },
        new Person { Name = "Anna" },
    };
}

Now we need a way to retrieve all these people, and I like creating asynchronous methods where the operations might be time consuming; in this case I know that it will take 2 seconds to retrieve the list of people:

public Task<IEnumerable<Person>> GetPeopleAsync()
{
    return Task<IEnumerable<Person>>.Factory.StartNew(() =>
    {
        Thread.Sleep(2000);
        return _people;
    });
}

This leaves us with implementing a method that lets us search for people in the collection. So far we don't care if the list has been empty or not, but when we search we want to report an error when there are no people in the list. Let's just assume that this is an exception in the application and the end user will always search for people that are in the list.

Let's install Raygun!

Installing Raygun is as easy as saying "I'll soon blast all my errors with this Raygun!"; simply bring up the NuGet package manager and write the following:

PM> Install-Package Mindscape.Raygun4Net

This will install Raygun into your class library! There are a couple more things that we need to do in order to get Raygun up and running: Creating a Raygun account is free for 30 days and you'll need to do it in order to start tracking your errors. Once you've set up an application on Raygun you can retrieve the API Key from the "Application Settings" menu like you can see in the following image:
We don't need to add the API Key just yet, we'll add that in the application configuration file of the project that will use our library later on (in this case the MVC 4 project). Now, bringing in Raygun into our application using NuGet will allow us to write the following:

new RaygunClient().Send(new Exception(string.Format("People with name `{0}` not found", name)));

That will create a Raygun client and send a new exception with the message you can see to the Raygun servers, passing it the API Key that we will provide later on. So let's take a look at how the method that will find people in the collection will look. This one also takes 2 seconds to execute, so we will have this one asynchronous as well; we don't need to do it, but I take every chance I get to play with asynchronous programming.

public Task<IEnumerable<Person>> FindPeopleAsync(string name)
{
    return Task<IEnumerable<Person>>.Factory.StartNew(() =>
    {
        Thread.Sleep(2000);
        var people = _people.Where(x => x.Name.Contains(name)).ToList();
        if (people == null || !people.Any())
        {
            new RaygunClient().Send(new Exception(string.Format("People with name `{0}` not found", name)));
        }
        return people;
    });
}

The method will look for people with the name of the value that we passed to the method, and if there are no people found it will send this notice to Raygun. You might think to yourself that this isn't really a good exception at all, but for the purpose of the demo, let's just look past that. Also, a bird whispered into my ears that Mindscape is working on adding other message types than exceptions to Raygun, but that's in the future. This leaves us with a structure looking like the following:

We are now ready to use our library! Create a new ASP.NET MVC 4 Application, I named mine RaygunDemo. The first thing that we are going to do is to add Raygun to this project as well: install it into the ASP.NET MVC 4 project using NuGet as we did before, and open up web.config once this is done. In order for us to get Raygun working we need to add our API Key.
To do this we first need to add an element inside <configSections>:

<section name="RaygunSettings" type="Mindscape.Raygun4Net.RaygunSettings, Mindscape.Raygun4Net"/>

This will allow us to add a configuration like this:

<RaygunSettings apikey="YOUR_API_KEY_HERE" />

It should look something like this in your web.config, with a lot of extra stuff as well of course:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="RaygunSettings" type="Mindscape.Raygun4Net.RaygunSettings, Mindscape.Raygun4Net"/>
  </configSections>
  <RaygunSettings apikey="YOUR_API_KEY_HERE" />
</configuration>

Remember I said that Raygun helped me find an exception in my application when setting up the demo application? This is because I told Raygun to submit all the application errors. In the ASP.NET MVC 4 project, open up Global.asax and add the following method, which will be run every time there's an error in the application:

protected void Application_Error()
{
    var exception = Server.GetLastError();
    new RaygunClient().Send(exception);
}

This means that every time we get an application error, Raygun will be notified of it and the entire stack trace, computer info and such will be passed into Raygun! All that's left to add now is the Home controller and the view. The Home controller consists of two asynchronous actions that will use the library we just created. One will return a view and the other will return a Json result:

public async Task<ActionResult> Index()
{
    var requestHandler = new RequestHandler();
    var people = await requestHandler.GetPeopleAsync();
    return View(people);
}

public async Task<JsonResult> Search(string search)
{
    var requestHandler = new RequestHandler();
    var people = await requestHandler.FindPeopleAsync(search);
    return Json(people);
}

The view is equally simple: it only has a text box that allows us to search for names, and then it has a list that shows all the people.
Once a key is pressed inside the text box, an event is fired that requests the people that have a name containing that part:

@model IEnumerable<RaygunDemoLibrary.Person>
<div>
    <span>Search: </span>
    <span><input id="search" type="search" /></span>
</div>
<h2>People</h2>
<div id="people">
    @foreach (var person in Model)
    {
        <div class="person">
            <span>@person.Name</span>
        </div>
    }
</div>
@section scripts{
    <script>
        $("#search").keyup(function () {
            searchValue = $("#search").val();
            $.post("/Home/Search", { search: searchValue }, function (data) {
                var peopleDiv = $("#people");
                peopleDiv.html("");
                data.forEach(function (person) {
                    name = person.Name.replace(searchValue, "<strong>" + searchValue + "</strong>");
                    peopleDiv.append("<div class='person'><span>" + name + "</span></div>");
                });
            });
        });
    </script>
}

If I start this and search for a name that exists and one that doesn't, it will look like the following:

Funny thing is that we didn't actually notice anything when we searched for something that didn't exist. So how do we know that this worked? Raygun comes with an amazing dashboard that will give you an overview of everything, including all the recent errors, how many errors/ignored errors you have and much more, like you see in this image (click to enlarge):
Finally, this is what it looks like when you go into details about an exception: you'll have a graph over how many times and when it occurred, and then you have very much detail that will help you Raygun the errors!

If you're unable to add code to your current website you can simply add an HTTP Module and a config value! Which means you could simply add this in your web.config, provided you have the dll as well of course:

<httpModules>
    <add name="RaygunErrorModule" type="Mindscape.Raygun4Net.RaygunHttpModule"/>
</httpModules>

I really recommend giving Raygun a try! Let me know what you think of it and if you have any alternatives that are equally awesome!

The post Easy error tracking in your applications appeared first on Filip Ekberg's blog.
https://dzone.com/articles/easy-error-tracking-your
CC-MAIN-2015-48
refinedweb
1,915
60.55
Pushe flutter

Pushe notification service official plugin for Flutter.

Installation

Add the plugin to pubspec.yaml:

dependencies:
  pushe_flutter: $latest

If you want to use the latest version, not necessarily released, you can use the GitHub source code:

pushe_flutter:
  git:
    url:

Then run flutter packages get to sync the libraries.

Set up credentials

Go to , create an application with the same package name and get the manifest tag. Add the manifest tag in the Application tag. It should be something like this:

<meta-data android:

Run the project afterwards and you should be able to see your device in the console after a short time.

Add the code snippets

In your main.dart:

import 'package:pushe_flutter/pushe.dart';

More Info

- For more details, visit the HomePage docs
- FAQ and issues are in the GitHub repo
- A sample project is in the library source code and in the Sample repo on GitHub
https://pub.dev/documentation/pushe_flutter/latest/
CC-MAIN-2020-34
refinedweb
145
67.15
I would like to create a template using a type, as well as passing a function which has a template argument which contains that type. Here is what I currently have:

#include <iostream>

void fooRef(int& ref) {
    std::cout << "In ref" << std::endl;
}

void fooPtr(int* ptr) {
    std::cout << "In ptr" << std::endl;
}

template<typename T, typename F>
void Print(T arg, F func) {
    //DoABunchOfStuff();
    func(arg);
    //DoSomeMoreStuff();
}

int main() {
    int x = 5;
    int& ref = x;
    Print<int*, void(*)(int*)>(&x, &fooPtr);
    Print<int&, void(*)(int&)>(ref, &fooRef);
}

Ideally I would like to be able to call it more simply, something like:

Print<int*, fooPtr>(ptr);
Print<int&, fooRef>(ref);

Is there a way to simplify the calls to the Print function?

Yes. What you do is not specify the template types at all. Function templates go through a process called template argument deduction. In this process the parameters passed to the function have their types deduced and the compiler tries to match them up to the template parameters. If it works, then the function is stamped out and the compiler continues on. So for your code, if we used

int main() {
    int x = 5;
    int& ref = x;
    Print(&x, &fooPtr);
    Print(std::ref(ref), &fooRef);
}

then we get

In ptr
In ref

In the second call I used std::ref so it would actually pass ref by reference. Otherwise it would make a copy of whatever ref refers to.
https://codedump.io/share/9TNcYBXFchqo/1/how-can-i-pass-a-function-to-a-templated-function-a-which-may-have-a-different-signature-based-on-another-parameter-to-a
CC-MAIN-2017-04
refinedweb
226
62.72
How Python Boolean Operators Work

Sometimes how logical operators work isn't quite as obvious as we might think, especially across different languages. This article has a look at the and and or operators in Python and some of their nuances.

To start, I'll give an update on my video series. I've recorded my first episode, but I've had a ton of trouble when trying to edit it. The application keeps crashing, which is okay, since it recovers most of what I did, but it does grow tedious. I've also decided to start the editing over due to a few factors. Lastly, I've started recording a series of videos with my best friend for his gaming YouTube channel. All of that together has led me to put off my video series for a while and get back to writing on the blog. I'll get back to the video series when I've finished recording with my friend. It could take a while.

Python Boolean Operator Confusion

A while back, I stumbled upon a post asking how the following lines could possibly be right:

'a' == 'b' or 'a'   # returns 'a'
'a' == 'a' and 'b'  # returns 'b'

He had a few other lines that did what you might expect, returning True and False. But why do these and and or operators not always return boolean values? To answer that, I'd like to dig into Python's history.

A History of Boolean in Python

In the beginning, Python didn't have a boolean type. This may shock you, but Python is actually getting pretty far up there in age, and back when it was created, boolean types weren't "standard". Often, 1 and 0 were substitutes for truth and fallacy, respectively. That, coupled with Python's ability for objects to express "truthiness", meant "false" objects were generally those representing "empty" values.

But then PEP 285 came out; True and False were added into the language, but Python still kept its "truthiness" concept. Really, True and False are pretty much (maybe actually) just constant names for 1 and 0. So, how does this translate to how the earlier lines work?

How the Operators Work

The first thing you need to realize (and probably already do) is that and and or are short-circuiting operators, which means they skip doing work that they don't have to do. To explain, let's look at the truth tables of the operators:

AND
a     | b     | a and b
true  | true  | true
true  | false | false
false | true  | false
false | false | false

OR
a     | b     | a or b
true  | true  | true
true  | false | true
false | true  | true
false | false | false

Now, look closely at those tables. In the OR table, any time that a is true, the final result will be true. This means that when a is true, you don't even need to find out what b is in order to know the final result. The only time you need to know what b is is when a is false. Interestingly, in both cases, the final result is equal to b. This can be shortened to "if a is true, the answer is equal to a; otherwise, it's equal to b". AND has a similar property, but a little different. It can be worded as "if a is false, the answer is equal to a; otherwise it's equal to b".

Applying Truthiness

Let's apply the same idea with truthiness and turn and and or into function definitions (and and or are keywords, so the illustrative names get a trailing underscore):

def and_(a, b):
    if not a:  # checks truthiness
        return a
    else:
        return b

def or_(a, b):
    if a:  # checks truthiness
        return a
    else:
        return b

Finally, not takes the result of bool(a) and returns the opposite one. So, when we see 'a' == 'b' or 'a', we know that it returns 'a' because the left side of or is false, meaning we return the right side. And we know that 'a' == 'a' and 'b' returns 'b' because the left side of and is true, meaning that we again return the right side.

What Good Is This?

So what does this do for us? Firstly, it gave Python backwards compatibility to the time before true boolean values. Secondly, it provides a way of quickly setting a default value:

def someFunc(param=None):
    param = param or []
    ...

In this instance, we want the default value for param to be an empty list, but using mutable types like that as the default in the parameter list is dangerous, so we default to None and use that as our falsey value to determine whether something was provided by the caller or not. This isn't perfect, though. If the caller provides something else that's falsey, it will be ignored in favor of the default value. This is more properly written with Python's conditional expression:

param = [] if param is None else param

But there can still be times where the boolean operations could be useful in this way.

Summary

Hopefully you understand how Python does booleans a little better now. JavaScript is quite similar in this respect, so don't think that Python is the odd duck (get it? Because of 'duck typing'). Next week, we'll delve into why 'a' in 'abc' == True returns False.

Published at DZone with permission of Jake Zimmerman, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/how-python-boolean-operatorswork
CC-MAIN-2018-43
refinedweb
944
71.04
Java Prime Number Hunter

The following code is an implementation of the Sieve of Eratosthenes written in Java. Its purpose is to sieve the natural numbers and separate the primes from the composites. On this site you will find a PHP implementation as well as a MySQL stored procedure that implements this algorithm. This code is much, much faster than both of those.

import java.util.Arrays;

public class Eratosthenes {

    int max;
    int primes[];

    public static void main(String args[]) {
        Eratosthenes erat = new Eratosthenes(10000000);
        erat.find_primes();
    }

    public Eratosthenes(int max) {
        this.max = max;
    }

    public void find_primes() {
        int i, j, k, divisor, offset;
        double sqrt = Math.sqrt(max) + 1;
        int tmp[];

        if (max > 100) {
            primes = new int[max / 2];
        } else {
            primes = new int[max];
        }
        primes[0] = 2;
        primes[1] = 3;

        // fill the rest of the array with the odd numbers from 5 upwards
        for (i = 2, j = 5; j < max; i++, j += 2) {
            primes[i] = j;
        }

        // The body of the sieve was truncated in the source; what follows is
        // a reconstruction of the usual approach: cross out odd multiples of
        // each prime up to sqrt(max), then compact the survivors.
        for (i = 1; primes[i] < sqrt; i++) {
            divisor = primes[i];
            if (divisor == 0) {
                continue; // already crossed out
            }
            for (k = divisor * divisor; k < max; k += 2 * divisor) {
                primes[(k - 1) / 2] = 0; // odd value k lives at index (k - 1) / 2
            }
        }

        // copy the remaining (prime) values into a compact array
        tmp = new int[primes.length];
        offset = 0;
        for (k = 0; k < primes.length; k++) {
            if (primes[k] != 0) {
                tmp[offset++] = primes[k];
            }
        }
        primes = Arrays.copyOf(tmp, offset);
    }
}

For completeness sake you can dump the numbers to a file. But this code is so fast that reading from disk would be slower for small numbers.

try {
    File f = new File("/dev/shm/primes.txt");
    FileWriter writer = new FileWriter(f);
    writer.write(Arrays.toString(primes));
} catch (IOException ex) {
    ex.printStackTrace();
}

Like the PHP version, this code also happens to be a memory hog. If you want to find primes larger than about 10,000,000 you will need to change the amount of memory allocated to the JVM. If that is not an option, you can change the first for loop (the one that populates the array) so that numbers divisible by 5, 7 and 11 are also left out. That would reduce the memory consumption by a little bit.
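The same odd-only storage trick (index i holds the odd number 2*i + 1, so no even numbers are stored at all) is easy to sanity-check in a few lines of Python. This is a reference sketch for verifying output on small limits, not a port of the Java class:

```python
# Odd-only Sieve of Eratosthenes: primes strictly below max_n.
# Index 0 holds 2; index i >= 1 holds the odd number 2*i + 1.

def find_primes(max_n):
    if max_n <= 2:
        return []
    nums = [2] + list(range(3, max_n, 2))
    for i in range(1, len(nums)):
        p = nums[i]
        if p == 0:
            continue  # already crossed out
        if p * p >= max_n:
            break     # every remaining nonzero entry is prime
        for multiple in range(p * p, max_n, 2 * p):
            nums[(multiple - 1) // 2] = 0  # odd m sits at index (m - 1) // 2
    return [n for n in nums if n != 0]
```

Because only odd multiples are crossed out (the step is 2*p), the inner loop touches half as many entries as a naive sieve, mirroring the memory-halving idea in the Java version.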
http://www.raditha.com/java/primes.php
CC-MAIN-2016-07
refinedweb
261
73.98
Unable to load the Selenium jar file in the common file

I am working on automating one of the web modules of a Windows-based application. The Windows client has automated scripts in the Sikuli IDE. There is a common file which has all the functions which are called quite often by various modules (web and win). I have been able to import the Selenium jar file and automate a few web modules. In each folder of the web module test case I have to add the selenium .jar file, which isn't a very efficient way of doing things. And also there is a lot of redundant code in the files, e.g. the following lines have to be added in all the web module test cases:

load("selenium-
import org.openqa.
import org.openqa.
import org.openqa.
import org.openqa.
import org.openqa.
import org.openqa.
import org.openqa.
import org.openqa.
import org.openqa.
import org.openqa.
import java.util.
import java.util.Arrays as Arrays
import java.lang
import java.lang.System as System
options = CO()
capability= System.
capability.
capability.
options.
options.
driver = CD(capability)
driver.

I wanted to move the above code into the common file. Unfortunately I can't do that without moving the Selenium jar file into the common folder as well. Not only that, I would have to move the Selenium jar file into all the modules' folders which are using the common file, regardless of them being web based or Windows based, and we have 100's of folders with Sikuli scripts. It wouldn't be wise to move the selenium .jar to all of those folders. I wonder why does it have to be this way? Is this a bug related to Sikuli?
Or was it by design :D

Question information
- Language: English
- Status: Solved
- For: Sikuli
- Assignee: No assignee
- Solved by: Maria Haris
- Solved: 2018-04-03
- Last query: 2018-04-03
- Last reply: 2018-03-15

So you mean maybe adding the following lines to the runsikulix.cmd should work for loading the jar file:

echo +++ trying to run seleniumserver
echo +++ using: %PARMS% -jar selenium-

I do not really understand what and how you are doing your stuff. You are talking about Eclipse (so I think it should be PyDev), but in comment #2 I see runsikulix.cmd. Why don't you run the stuff from inside Eclipse for development and testing? The Selenium jar must be on the Java class path, which is the parameter -cp for the java command. All this is basic knowledge and has nothing to do with SikuliX.

The Eclipse question was a separate task for my personal experimentation, whereas this question referred to how things are going at my work place. We are not using any IDE other than Sikuli for automation of the Windows application.

ok, then you have to modify the java command in the runsikulix.cmd

"%JAVA_
as
"%JAVA_

It is still not working. I removed the load selenium code from the test case and it fails to import the openqa package:

[error] script [ SDK_Service ] stopped with error in line 2
[error] ImportError ( No module named openqa )

ok, I have to admit: does not work this way - sorry for misleading. Finally did my own experiments now and this is the solution: (from: http://
Create a sites.txt file that contains the absolute path to your selenium...jar file and place it according to the above doc/approach 2. Then just run your scripts with the (unmodified) runsikulix.cmd. Since the selenium....jar now automatically gets onto sys.path, the imports will work without any other additional handling.

worked like a charm! thank you RaiMan

ok, understood. This is how modularized Python works.
Since every module has its own namespace, you have to make all external names known in this module. In your case the feature is import. But you are on an unnecessarily complex way: Since you are using the Jython interpreter, you can use jars on the Java classpath directly, no need to use load() to get the jar on the Python path. The load() feature is there for jars with mixed Java and Python content. This leaves you with the imports only. The next step would be to hide the detailed Selenium stuff needed in every module by concentrating it into one module (what you apparently already decided to do) that has the imports and defs to be used in the other modules for steps that are repeated more than once.

BTW: In this scenario I do not understand the decision for Python scripting. A clean Java solution would be much more straightforward.
https://answers.launchpad.net/sikuli/+question/665575
src/lib/libc/gen/getlogin.c, revision history (default branch: MAIN)

Revision 1.13.4.1, Fri Jan 16 01:04:29 2009 UTC; changes since 1.13: +3 -3 lines
  Pull up following revision(s) (requested by lukem in ticket #247):
    include/unistd.h: revision 1.119
    lib/libc/gen/getlogin.c: revision 1.14
    lib/libc/sys/getlogin.2: revision 1.21
  Change the second argument of getlogin_r() from int to size_t, per POSIX.

Revision 1.15, Sun Jan 11 02:46:27 2009 UTC; changes: +2 -2 lines
  merge christos-time_t

Revision 1.13.6.2, Sat Jan 10 22:59:51 2009 UTC, by christos; branch: christos-time_t; changes since 1.13.6.1: +146 -0 lines
  sync with head.

Revision 1.14, Tue Jan 6 11:16:46 2009 UTC, by lukem; branch: MAIN; tags: christos-time_t-nbase, christos-time_t-base; changes since 1.13: +3 -3 lines
  Change the second argument of getlogin_r() from int to size_t, per POSIX.

Revision 1.12.32.1, Thu Sep 18 04:39:21 2008 UTC, by wrstuden; branch: wrstuden-revivesa; changes since 1.12: +73 -6 lines
  Sync with wrstuden-revivesa-base-2.

Revision 1.13.6.1, Wed Jun 25 11:10:24 2008 UTC, by christos; branch: christos-time_t; changes since 1.13: +0 -146 lines; FILE REMOVED
  file getlogin.c was added on branch christos-time_t on 2009-01-10 22:59:51 +0000

Revision 1.13, Wed Jun 25 11:10:24 2008 UTC, by ad; branch: MAIN; tags: wrstuden-revivesa-base-3, wrstuden-revivesa-base-2, netbsd-5-base, matt-mips64-base2; branch point for: netbsd-5, christos-time_t; changes since 1.12: +73 -6 lines
  Add getlogin_r. Manual page changes mostly lifted from FreeBSD.

Revision 1.12, Thu Aug 7 16:42:50 … 18 11:23:53 2003 UTC, by thorpej; branch: MAIN; changes since 1.10: +14 -2 lines
  Merge the nathanw_sa branch.

Revision 1.10.6.1, Mon Feb 25 00:43:47 2002 UTC, by nathanw; branch: nathanw_sa; tags: nathanw_sa_end; changes since 1.10: +14 -2 lines
  Move setlogin() stub to C code, and namespace-protect it.

Revision 1.10, Sat Jan 22 22:19:10 …; tags: …_before_merge, nathanw_sa_base, minoura-xpg4dl-base, minoura-xpg4dl, fvdl_fs64_base; branch point for: nathanw_sa; changes since 1.9: +3 -3 lines
  Delint. Remove trailing ; from uses of __weak_alias(). The macro inserts this if needed.

Revision 1.9, Mon Jul 21 14:07:08 1997 UTC, by jtc; changes since 1.8: …

Revision 1.8, … 1997 UTC, by christos; branch: MAIN; changes since 1.7: +4 -2 lines
  Add extern.h to get missing __getlogin prototype
  Fix RCSID's

Revision 1.7, Mon Sep 23 02:43:11 1996 UTC, by thorpej; branch: MAIN; tags: nsswitch; changes since 1.6: +3 -3 lines
  Update for the new internal name for __getlogin().

Revision 1.6.4.1, Thu Sep 19 20:02:53 1996 UTC, by jtc; branch: ivory_soap2; changes since 1.6: +8 -3 lines
  snapshot namespace cleanup: gen

Revision 1.6, Mon Feb 27 04:12:47 1995 UTC; changes since 1.5: +9 -4 lines
  update from Lite, with local changes. fix Ids, etc.

Revision 1.1.1.2 (vendor branch), Sat Feb 25 09:11:52 1995 UTC, by cgd; branches: WFJ-920714, CSRG; tags: lite-2, lite-1; changes since 1.1.1.1: +3 -21 lines
  from lite, with minor name rearrangement to fit.

Revision 1.5, Sat Dec 18 01:16:18 1993 UTC; changes since 1.4: +4 -4 lines
  Fix bug #24 by renaming _logname_valid to __logname_valid.

Revision 1.4, Mon Oct 11 19:45:56 1993 UTC, by jtc; branch: MAIN; changes since 1.3: +1 -19 lines
  Moved cuserid() from getlogin.c to its own file, cuserid.c. getlogin() and cuserid() do very different things: getlogin() is POSIX, while cuserid() is not (it was removed in the 1990 revision).

Revision 1.3, Thu Aug 26 00:44:38 1993 UTC …
http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/gen/getlogin.c
Developer Blog

Gradient Boosting, Decision Trees and XGBoost with CUDA

In this post I look at the popular gradient boosting algorithm XGBoost and show how to apply CUDA and parallel algorithms to greatly decrease training times in decision tree algorithms. I originally described this approach in my MSc thesis and it has since evolved to become a core part of the XGBoost library.

Gradient Boosting

Gradient boosting can be characterised as a gradient descent algorithm over an objective function. It is a supervised learning algorithm: it takes a set of labelled training instances as input and builds a model that aims to correctly predict the label of each training example based on other, non-label information that we know about the example (known as features of the instance). The purpose of this is to build an accurate model that can automatically label future data with unknown labels.

Table 1 shows a toy dataset with four columns: "age", "has job", "owns house" and "income". In this example I will use income as the label (sometimes known as the target variable for prediction) and use the other features to try to predict income. To do this, first I need to come up with a model, for which I will use a simple decision tree. Many different types of models can be used for gradient boosting, but in practice decision trees are almost always used.

I'll skip over exactly how the tree is constructed. For now it is enough to know that it can be constructed in order to greedily minimise some loss function (for example, squared error). Figure 1 shows a simple decision tree model (I'll call it "Decision Tree 0") with two decision nodes and three leaves. A single training instance is inserted at the root node of the tree, following decision rules until a prediction is obtained at a leaf node. This first decision tree works well for some instances but not so well for other instances.
Subtracting the predicted label (ŷ) from the true label (y) shows whether the prediction is an underestimate or an overestimate. This is called the residual and is denoted r:

r = y − ŷ

Table 2 shows the residuals for the dataset after passing its training instances through tree 0.

To improve the model, I can build another decision tree, but this time try to predict the residuals instead of the original labels. This can be thought of as building another model to correct for the error in the current model. I add the new tree to the model, make new predictions and then calculate residuals again. To make predictions with multiple trees, I simply pass the given instance through every tree and sum up the predictions from each tree.

Let's take a look at the sum of squared errors (SSE) for the extended model. SSE can be calculated as:

SSE = Σᵢ (yᵢ − ŷᵢ)² = Σᵢ rᵢ²

For the baseline model I just predict 0 for all instances. You can see that the error decreases as new models are added.

To explain why fitting new models to the residuals of the current model increases the performance of the complete model, take the gradient of the squared error loss for a single training instance, L = ½ (y − ŷ)²:

∂L/∂ŷ = −(y − ŷ) = −r

So the residual is the negative gradient of the loss function for this training instance. Hence, by building models that adjust predictions in the direction of these residuals, this is actually a gradient descent algorithm on the squared error loss function for the given training instances. It minimises the loss function for the training instances until it eventually reaches a local minimum for the training data.

The XGBoost Algorithm

The above algorithm describes a basic gradient boosting solution, but a few modifications make it more flexible and robust for a variety of real-world problems. In particular, XGBoost uses second-order gradients of the loss function in addition to the first-order gradients, based on a Taylor expansion of the loss function.
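Before looking at the second-order extension, the plain first-order loop from the previous section can be sketched in a few lines of NumPy. The data and the one-split "stump" learner below are toy illustrations, not the XGBoost implementation:

```python
import numpy as np

# Each boosting round fits a one-split regression "stump" to the current
# residuals and adds it to the ensemble (squared error loss).

def fit_stump(x, r):
    """Return a predictor for the threshold split of x that minimises SSE on r."""
    best = None
    for t in np.unique(x)[1:]:                     # candidate thresholds
        left, right = r[x < t], r[x >= t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, w_left, w_right = best
    return lambda xs: np.where(xs < t, w_left, w_right)

x = np.array([12.0, 25.0, 32.0, 48.0, 67.0])       # a feature, e.g. "age"
y = np.array([0.0, 30.0, 90.0, 50.0, 40.0])        # toy labels, e.g. "income"

pred = np.zeros_like(y)
sse_history = [((y - pred) ** 2).sum()]
for _ in range(3):
    r = y - pred                                   # residuals = negative gradient
    pred += fit_stump(x, r)(x)                     # add the new tree's output
    sse_history.append(((y - pred) ** 2).sum())
print(sse_history)
```

Because each stump predicts the mean residual in each leaf, the training SSE can never increase from one round to the next.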
You can take the Taylor expansion of a variety of different loss functions (such as logistic loss for binary classification) and plug them into the same algorithm for greater generalisation. In addition to this, XGBoost transforms the loss function into a more sophisticated objective function containing regularisation terms. This extension of the loss function adds penalty terms for adding new decision tree leaves to the model, with penalty proportional to the size of the leaf weights. This inhibits the growth of the model in order to prevent overfitting. Without these regularisation terms, gradient boosted models can quickly become large and overfit to noise present in the training data. Overfitting means that the model may look very good on the training set but generalises poorly to new data that it has not seen before. You can find a more detailed mathematical explanation of the XGBoost algorithm in the documentation.

Quantiles

In order to explain how to formulate a GPU algorithm for gradient boosting, I will first compute quantiles for the input features ("age", "has job", "owns house"). This process involves finding cut points that divide a feature into equal-sized groups. The boolean features "has job" and "owns house" are easily transformed by using 0 to represent false and 1 to represent true. The numerical feature "age" transforms into four different groups. The following table shows the training data with quantised features.

It turns out that dealing with features as quantiles in a gradient boosting algorithm results in accuracy comparable to directly using the floating point values, while significantly simplifying the tree construction algorithm and allowing a more efficient implementation.

Finding Splits in Decision Trees

Here's a brief explanation of how to find appropriate splits for a decision tree, assuming SSE is the loss function. As an example, I'll try to find a decision split for the "age" feature at the start of the boosting process.
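As a reminder, the quantisation of "age" can be sketched with plain NumPy. The ages below are illustrative, and XGBoost itself uses a weighted quantile sketch rather than np.quantile:

```python
import numpy as np

ages = np.array([12.0, 18.0, 25.0, 30.0, 32.0, 48.0, 55.0, 67.0])

cuts = np.quantile(ages, [0.25, 0.5, 0.75])  # three cut points -> four groups
bins = np.digitize(ages, cuts)               # integer quantile index per value
print(cuts, bins)
```

Each training instance now carries a small integer per feature instead of a float, which is what makes the histogram-based split search below possible.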
After quantisation there are three different possible splits I could create for this feature: (age < 18), (age < 32) or (age < 67). I need a way to evaluate the quality of each of these splits with respect to the loss function in order to pick the best.

Given a node in the tree that currently contains a set I of training instances and makes a prediction w (this prediction value is also called the leaf weight), I can re-express the loss function at the current boosting iteration, with ŷᵢ as the prediction so far for instance i and w as the weight predicted for that instance in the current tree:

L = Σ_{i∈I} (yᵢ − (ŷᵢ + w))²

Rewritten in terms of the residuals rᵢ = yᵢ − ŷᵢ and expanded, this yields:

L = Σ_{i∈I} (rᵢ − w)² = Σ_{i∈I} rᵢ² − 2w Σ_{i∈I} rᵢ + |I| w²

I can simplify here by denoting the sum of residuals in the leaf as R = Σ_{i∈I} rᵢ:

L = Σ_{i∈I} rᵢ² − 2wR + |I| w²

The above equation gives the training loss of a set of instances in a leaf. The next question is, what value should I predict in this leaf to minimise the loss function? The optimal leaf weight is given by setting ∂L/∂w = 0. This gives:

−2R + 2|I| w = 0,  so  w* = R / |I|

I can plug this back into the loss function for the current boosting iteration to see the effect of predicting w* in this leaf:

L = Σ_{i∈I} rᵢ² − 2R²/|I| + R²/|I|

Simplifying, I get:

L = Σ_{i∈I} rᵢ² − R²/|I|

This equation tells me what the training loss will be for a given leaf, but how does it tell me if one split is better than another? When I create a split in the training instances I, I denote the set of instances going down the left branch as I_L and those going down the right branch as I_R, with residual sums R_L and R_R. I predict w_L* = R_L/|I_L| in the left leaf and w_R* = R_R/|I_R| in the right leaf, giving:

L_split = Σ_{i∈I} rᵢ² − R_L²/|I_L| − R_R²/|I_R|

The above equation gives the training loss for a given split in the tree, so I can simply apply this function to a number of possible splits under consideration and choose the one with the lowest training loss. I can recursively create new splits down the tree until I reach a specified depth or other stopping condition. Note that the term Σ_{i∈I} rᵢ² never actually changes within a boosting iteration and can be ignored for the purpose of determining if one split is better than another in the current tree.
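These closed-form results are easy to check numerically. The sketch below uses toy residuals (made up, but chosen so the total is 210 as in the running example, and so the first cut point wins, matching the (age < 18) outcome described later) and evaluates every split with a prefix sum, mirroring the scan-based GPU implementation:

```python
import numpy as np

# Toy residuals r_i, ordered by the quantised "age" feature.
r = np.array([30.0, 90.0, 50.0, 40.0])

# Closed-form optimal leaf weight w* = R/|I| and the resulting leaf loss.
R, n = r.sum(), len(r)
w_star = R / n
loss_at_w_star = ((r - w_star) ** 2).sum()
# identity from the derivation: sum(r^2) - R^2/|I| equals the loss at w*
assert np.isclose((r ** 2).sum() - R ** 2 / n, loss_at_w_star)

# Evaluate every left/right split with a prefix sum over the residuals.
R_left = np.cumsum(r)[:-1]          # inclusive scan, dropping the no-split case
n_left = np.arange(1, n)
R_right = R - R_left                # right sums by subtraction from the total
n_right = n - n_left
split_loss = -(R_left ** 2) / n_left - (R_right ** 2) / n_right
best = int(np.argmin(split_loss))   # lowest split loss = best split
print(w_star, best, split_loss)
```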
This means that, despite all of the equations, I only need the sum of the residuals in the left-hand branch (R_L), the sum of the residuals in the right-hand branch (R_R) and the number of examples in each (|I_L|, |I_R|) to evaluate the relative quality of a split. I call this reduced function the "split loss":

SplitLoss = −R_L²/|I_L| − R_R²/|I_R|

Implementation: Histograms and Prefix Sums

Bringing this back to my example of finding a split for the feature "age", I'll start by summing the residuals for each possible quantile value of age. Assume I'm at the start of the boosting process, so the residuals are equivalent to the original labels (rᵢ = yᵢ). The sums for each quantile can be calculated easily in CUDA using simple global memory atomic add operations, or using the more sophisticated shared memory histogram algorithm discussed in this post.

In order to apply the split loss function, I need to know the sum of all values to the left and all values to the right of each possible split point. To do this I can use the ever useful parallel prefix sum (or scan) operation. In this case I use the "inclusive" variant of scan, for which efficient implementations are available in the Thrust and CUB libraries. I also make the reasonable assumption that I know the sum of all residuals in the current set of instances (210 here). This allows me to calculate the sum of elements to the right by subtracting the elements to the left (the inclusive scan) from the total.

After applying the split loss function to the dataset, the split (age < 18) has the greatest reduction in the SSE loss function. I would also perform this test over all other features and then choose the best out of all features to create a decision node in the tree. A GPU can do this in parallel for all nodes and all features at a given level of the tree, providing powerful scalability compared to CPU-based implementations.

Memory Efficiency: Bit Compression and Sparsity

Gradient boosting in XGBoost contains some unique features specific to its CUDA implementation.
Memory efficiency is an important consideration in data science. Datasets may contain hundreds of millions of rows, thousands of features and a high level of sparsity. Given that device (GPU) memory capacity is typically smaller than host (CPU) memory, memory efficiency matters. I have implemented parallel primitives for processing sparse CSR (Compressed Sparse Row) format input matrices, following work in the Modern GPU library and CUDA implementations of sparse matrix-vector multiplication algorithms. These primitives allow me to process a sparse matrix in CSR format with one work unit (thread) per non-zero matrix element and to efficiently look up the associated row index of the non-zero element using a form of vectorised binary search. This significantly reduces storage requirements, provides stable performance and still allows very clean and readable code.

Another innovation is the use of symbol compression to store the quantised input matrix on the device. The maximum integer value contained in a quantised nonzero matrix element is proportional to the number of quantiles, commonly 256, and to the number of features, which are specified at runtime by the user. It seems wasteful to use a four-byte integer to store a value that very commonly has a maximum value less than 2^16. To solve this, the input matrix is bit compressed down to ⌈log₂(max_value + 1)⌉ bits per element on the host before copying it to the device. Note that this data is not modified once on the device and is read many times.
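Here is a hedged Python sketch of the packing idea. XGBoost's device code does the equivalent with shifts and masks in CUDA; the layout below is illustrative, not the library's actual format:

```python
import numpy as np

values = np.array([3, 0, 2, 1, 3, 2], dtype=np.uint32)   # quantised symbols
bits = int(np.ceil(np.log2(values.max() + 1)))           # bits per symbol

packed = 0
for i, v in enumerate(values):                           # pack LSB-first
    packed |= int(v) << (i * bits)

def unpack(i):
    """Extract the i-th symbol from the packed buffer."""
    return (packed >> (i * bits)) & ((1 << bits) - 1)

print(bits, [unpack(i) for i in range(len(values))])
```

With four quantile values, each symbol needs only 2 bits instead of 32, and random access stays O(1) because every symbol has the same width.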
I can then define an iterator that accesses these compressed elements in a seamless way, resulting in minimal changes to existing CUDA kernels and function calls:

CompressedIterator<int> itr(compressed_buffer, max_value);

template <typename iter_t>
__global__ void some_kernel(iter_t x) {
  int tid = threadIdx.x + blockIdx.x * blockDim.x;
  int decompressed_value = x[tid];
}

It's easy to implement this compressed iterator to be compatible with the Thrust library, allowing the use of parallel primitives such as scan:

thrust::device_vector<int> output(n);
thrust::exclusive_scan(itr, itr + n, output.begin());

Using this bit compression method in XGBoost reduces the memory cost of each matrix element to less than 16 bits in typical use cases. This is half the cost of the equivalent CPU implementation. Note that while it would be possible to use this iterator just as easily on the CPU, the instructions required to extract a symbol from the compressed stream can result in a noticeable performance penalty there. The GPU kernels are typically memory bound (as opposed to compute bound) and therefore do not incur the same performance penalty from extracting symbols.

Performance on GPUs

I evaluate performance of the entire boosting algorithm using the commonly benchmarked UCI Higgs dataset. This is a binary classification problem with 11M rows × 29 features and is a relatively time consuming problem in the single-machine setting. The following Python script runs the XGBoost algorithm. It outputs the decreasing test error during boosting and measures the time taken by the GPU and CPU algorithms.
import csv
import numpy as np
import os.path
import pandas
import time
import xgboost as xgb
import sys

if sys.version_info[0] >= 3:
    from urllib.request import urlretrieve
else:
    from urllib import urlretrieve

data_url = ""
dmatrix_train_filename = "higgs_train.dmatrix"
dmatrix_test_filename = "higgs_test.dmatrix"
csv_filename = "HIGGS.csv.gz"
train_rows = 10500000
test_rows = 500000
num_round = 1000
plot = True


# return xgboost dmatrix
def load_higgs():
    if os.path.isfile(dmatrix_train_filename) and os.path.isfile(dmatrix_test_filename):
        dtrain = xgb.DMatrix(dmatrix_train_filename)
        dtest = xgb.DMatrix(dmatrix_test_filename)
        if dtrain.num_row() == train_rows and dtest.num_row() == test_rows:
            print("Loading cached dmatrix...")
            return dtrain, dtest
    if not os.path.isfile(csv_filename):
        print("Downloading higgs file...")
        urlretrieve(data_url, csv_filename)
    df_higgs_train = pandas.read_csv(csv_filename, dtype=np.float32,
                                     nrows=train_rows, header=None)
    dtrain = xgb.DMatrix(df_higgs_train.ix[:, 1:29], df_higgs_train[0])
    dtrain.save_binary(dmatrix_train_filename)
    df_higgs_test = pandas.read_csv(csv_filename, dtype=np.float32,
                                    skiprows=train_rows, nrows=test_rows,
                                    header=None)
    dtest = xgb.DMatrix(df_higgs_test.ix[:, 1:29], df_higgs_test[0])
    dtest.save_binary(dmatrix_test_filename)
    return dtrain, dtest


dtrain, dtest = load_higgs()
param = {}
param['objective'] = 'binary:logitraw'
param['eval_metric'] = 'error'
param['tree_method'] = 'gpu_hist'
param['silent'] = 1

print("Training with GPU ...")
tmp = time.time()
gpu_res = {}
xgb.train(param, dtrain, num_round, evals=[(dtest, "test")],
          evals_result=gpu_res)
gpu_time = time.time() - tmp
print("GPU Training Time: %s seconds" % (str(gpu_time)))

print("Training with CPU ...")
param['tree_method'] = 'hist'
tmp = time.time()
cpu_res = {}
xgb.train(param, dtrain, num_round, evals=[(dtest, "test")],
          evals_result=cpu_res)
cpu_time = time.time() - tmp
print("CPU Training Time: %s seconds" % (str(cpu_time)))

if plot:
    import matplotlib.pyplot as plt
    min_error = min(min(gpu_res["test"][param['eval_metric']]),
                    min(cpu_res["test"][param['eval_metric']]))
    gpu_iteration_time = [x / (num_round * 1.0) * gpu_time for x in range(0, num_round)]
    cpu_iteration_time = [x / (num_round * 1.0) * cpu_time for x in range(0, num_round)]
    plt.plot(gpu_iteration_time, gpu_res['test'][param['eval_metric']],
             label='Tesla P100')
    plt.plot(cpu_iteration_time, cpu_res['test'][param['eval_metric']],
             label='2x Haswell E5-2698 v3 (32 cores)')
    plt.legend()
    plt.xlabel('Time (s)')
    plt.ylabel('Test error')
    plt.axhline(y=min_error, color='r', linestyle='dashed')
    plt.margins(x=0)
    plt.ylim((0.23, 0.35))
    plt.show()

Running this script on a system with an NVIDIA Tesla P100 accelerator and 2x Intel Xeon E5-2698 CPUs (32 cores total) shows a 4.15x speed improvement for the GPU algorithm with the same accuracy as the CPU algorithm. Figure 3 plots the decrease in test error over time for each algorithm. As you can see, the test error decreases much more rapidly with GPU acceleration.

Using the GPU-accelerated boosting algorithm results in a significantly faster turnaround for data science problems. This is particularly important because data scientists typically run the algorithm not just once, but many times in order to tune hyperparameters (such as learning rate or tree depth) and find the best accuracy.

Future Work

Future work on the XGBoost GPU project will focus on bringing high performance gradient boosting algorithms to multi-GPU and multi-node systems to increase the tractability of large-scale real-world problems. Experimental multi-GPU support is already available at the time of writing but is a work in progress. Stay tuned!

Conclusion

Whether you are interested in winning Kaggle competitions, predicting customer interactions or ranking relevant web pages, you can achieve significant improvements in training and inference speed by using CUDA-accelerated gradient boosting.
Get started here with an easy Python demo, including links to installation instructions.

References

- Forest image at top by Scott Wylie from UK, CC BY 2.0, via Wikimedia Commons.
- Mitchell, R., & Frank, E. (2017). Accelerating the XGBoost algorithm using GPU computing. PeerJ Computer Science 3:e127.
- Chen, T., & Guestrin, C. (2016, August). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785-794). ACM.
https://developer.nvidia.com/blog/gradient-boosting-decision-trees-xgboost-cuda/
Just ran into some issues while attempting to cx_Freeze an application that uses PyOpenGL. The build process would execute to completion, but upon trying to run the generated executable, I would get a message stating that the "win32" module could not be found. I'm posting the solution that worked for me, in case anyone ever runs into the same issue.

What I had to do was to add the following imports:

from OpenGL.GL import *
import OpenGL.platform.win32
import OpenGL.arrays.ctypesarrays
import OpenGL.arrays.numpymodule
import OpenGL.arrays.lists
import OpenGL.arrays.numbers
import OpenGL.arrays.formathandler
#import OpenGL.arrays.strings  # Only if you use GLSL

I got the original idea from this thread:. I just needed to add a couple of additional imports.

Well, I hope this helps someone.

Best,
Alejandro
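An alternative to sprinkling explicit imports through the application code is to tell the freezer to bundle the packages itself. A hedged sketch of a setup.py using cx_Freeze's build_exe "packages" option (project name and paths here are made up):

```
# setup.py (sketch): force inclusion of whole packages instead of
# listing every submodule import by hand
from cx_Freeze import setup, Executable

build_exe_options = {"packages": ["OpenGL", "OpenGL.platform", "OpenGL.arrays"]}

setup(
    name="myapp",
    version="0.1",
    options={"build_exe": build_exe_options},
    executables=[Executable("main.py")],
)
```

Whether this picks up the platform-specific modules correctly may depend on the cx_Freeze version, so the explicit-import approach above remains the safer fallback.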
https://sourceforge.net/p/pyopengl/mailman/pyopengl-users/?viewmonth=201109&viewday=19
The nexttoward() function is defined in the <cmath> header file. It is identical to nextafter() except that the second argument of nexttoward() is always of type long double.

nexttoward() prototype [as of the C++11 standard]

double nexttoward(double x, long double y);
float nexttoward(float x, long double y);
long double nexttoward(long double x, long double y);
double nexttoward(T x, long double y); // For integral types

The nexttoward() function takes two arguments and returns a value of type double, float or long double.

nexttoward() Parameters
- x: The base value.
- y: The value towards which the return value is approximated.

nexttoward() Return value
The nexttoward() function returns the next representable value after x in the direction of y.

Example 1: How nexttoward() function works in C++?

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    long double y = -1.0;
    double x = 0.0;
    double result = nexttoward(x, y);
    cout << "nexttoward(x, y) = " << result << endl;
    return 0;
}

When you run the program, the output will be:

nexttoward(x, y) = -4.94066e-324

Example 2: nexttoward() function for integral types

#include <iostream>
#include <cmath>
#include <climits>
using namespace std;

int main() {
    long double y = INFINITY;
    int x = INT_MAX;
    double result = nexttoward(x, y);
    cout << "nexttoward(x, y) = " << result << endl;
    return 0;
}

When you run the program, the output will be:

nexttoward(x, y) = 2.14748e+09
https://www.programiz.com/cpp-programming/library-function/cmath/nexttoward