```python
from popcornnotify import notify

notify('5555555555', 'New user sign up')
notify('[email protected]', 'Memory exceeded...', subject='Staging Error')
notify(['555...', '[email protected]', '[email protected]'],
       "I'm sorry, Dave. I'm afraid I can't do that.")
```

```javascript
let notify = require('popcornnotify')

notify('555-123-4567', 'New user sign up')
notify('[email protected]', 'Memory exceeded...', {subject: 'Staging Error'})
notify(['5554259000', '[email protected]'],
       "I'm sorry, Dave. I'm afraid I can't do that.")
```

```shell
curl \
  -u super_secret_api_key: \
  -d recipients="5555555555,[email protected]" \
  -d message="My first popcorn." \
  -d subject="Important Info"
```

```shell
# Hello World
notify 5555555555 "Hello World"
# or
echo "Hello World" | notify 5555555555

# Get a notification after a long-running script
./script.sh && echo "Script done at $(date)" | notify "555...,[email protected]"

# Let Dave know you can't do that
notify "555...,[email protected]" "I'm sorry, Dave. I'm afraid I can't do that."
```

Getting Started

PopcornNotify is designed to be an effortless way to send emails and text messages from your code. It is a web API with a single endpoint, /notify, so no installation is necessary (see the curl example above). We recommend the client libraries, though: they are simple to install, available on pip and npm, they make sure the request is non-blocking, and they read your API key from the environment.

API Keys

The client libraries will look for an environment variable called POPCORNNOTIFY_API_KEY:

```shell
export POPCORNNOTIFY_API_KEY="abc123456"
```

Or, you can pass your API key to notify as an argument:

```python
# python
notify('5555555555', 'Bork bork', api_key='*******')
```

```javascript
// javascript
notify('5555555555', 'Bork bork', {apiKey: '*******'})
```

API keys cost $45 for 12 months and 10,000 messages. Buy an API key here.
Pricing: API keys cost $10 for each 1,000 messages and last one year. Buy an API key here.

PopcornNotify only sends SMS in the USA. Find out when PopcornNotify supports international SMS.

API

PopcornNotify has one endpoint, which accepts POST requests.

Domains and From Numbers

PopcornNotify uses several phone numbers for sending text messages. Manage your API key and account settings here.

Settings

PopcornNotify libraries use the following optional environment variables. The client's notify function has this signature:

```python
def notify(recipients, message, subject='', api_key=''):
```

Hi, I'm Jason! I spent a few years working at big and small tech companies. Lately, I've been working on building tools to scratch my developer itches. I really like tools that make software development simpler. I live in Boston, and travel a lot. My favorite trip last year was the Tour du Mont Blanc, a 10-day hike through the French, Swiss, & Italian Alps (blog coming soon).

I provide full support for all of my projects. You can reach me at [email protected] If you're using PopcornNotify for a project, I'd be really excited to hear about it!
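Stepping back to the API section: since PopcornNotify exposes a single POST endpoint, the documented curl call can be mirrored with Python's standard library alone. The sketch below is hedged: the full endpoint URL is an assumption on my part (the docs name only the /notify path), the field names come from the curl example above, and the API key is a placeholder.

```python
import base64
import urllib.parse
import urllib.request

API_KEY = "super_secret_api_key"  # placeholder, as in the curl example

def build_notify_request(recipients, message, subject=""):
    """Build (but do not send) a POST request mirroring the documented curl call."""
    data = urllib.parse.urlencode({
        "recipients": ",".join(recipients),  # curl takes a comma-separated list
        "message": message,
        "subject": subject,
    }).encode()
    # curl's `-u super_secret_api_key:` is HTTP Basic auth with an empty password
    token = base64.b64encode(f"{API_KEY}:".encode()).decode()
    return urllib.request.Request(
        "https://popcornnotify.com/notify",  # assumed URL: the site domain plus /notify
        data=data,
        headers={"Authorization": f"Basic {token}"},
        method="POST",
    )

req = build_notify_request(["5555555555"], "My first popcorn.", "Important Info")
print(req.get_method())
```

Calling urllib.request.urlopen(req) would actually send the notification; the sketch stops short of that so it can be run without an account.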
https://popcornnotify.com/
CC-MAIN-2019-43
refinedweb
499
60.41
How to: Create Task List Comments

The Task List displays comments in your code that begin with the comment marker for your development language. Next to the comments, the Task List also displays a default task token, such as TODO, HACK, or UNDONE, or a custom comment token.

The number of comments that appear in the Task List may change, depending on the type of project you are working on. With Visual Basic and Visual C#, the Task List displays all the comments in the solution. With Visual C++ projects, the Task List displays only the comments that are found in the file that is currently active in the editor.

Task List comments can be used to indicate a variety of work to be done at the location marked, including: features to be added; problems to be corrected; classes to implement; place markers for error-handling code; and reminders to check in the file. As with other Task List entries, you can double-click any comment entry to display the file indicated in the Code Editor and jump to the line of code marked.

To add a comment to the Task List

1. Open a source code file in the Code Editor.
2. Begin a comment on a line of code you want to mark with <marker><token>, where <marker> is the comment marker for your development language, and <token> is the name of a recognized Task List comment token, such as TODO, HACK, or UNDONE, or a custom comment token.

   Note: To add task tokens programmatically, set the DefaultCommentToken of the TaskList.

3. Complete the comment with text describing the task. For example:

   // TODO Fix this function.

   - or -

   ' HACK Update this procedure.

4. On the View menu, click Task List. The Task List is displayed.
5. In the Categories list, click Comments. The Comments list displays the comment text.

You can click any Task List comment to activate the file in the Code Editor and jump to the line of code that the comment marks.

To change a comment or remove it from the Task List

1. Open your code file for editing in the Code Editor.
2. Modify or delete the comment in your code.

To change the default priority of a comment

1. On the Tools menu, click Options.
2. Expand the Environment folder and then click Task List.
3. In the Token list, select the comment token whose default priority you want to change.

   Note: You cannot change the priority of the TODO comment.

4. In the Priority drop-down list, select a different priority type.
5. Click OK.

To create a custom comment token

1. On the Tools menu, click Options.
2. Expand the Environment folder and then click Task List. The Task List, Environment, Options Dialog Box is displayed.
3. In the Comment tokens box, type a Name for your custom token.
4. From the Priority list, select Normal, Low, or High.
5. Click Add, and then click OK.

For more information about adding custom tokens to the Token List, see How to: Create Custom Comment Tokens.

Example

```csharp
// The following C# code file contains several TODO reminders.
// Note that each task reminder begins, like this comment,
// with the C# comment indicator, '//'.

// TODO: Add standard code header comment here.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Windows.Forms;

// TODO: Add references to specific resources here.
namespace TodoExample
{
    partial class TodoExample : Form
    {
        public TodoExample()
        {
            InitializeComponent();
        }
    }
}

// TODO: It is even possible to add comments at the end.
```

The example shows that you can place TODO comments anywhere in a code file. Task List comments are best used to indicate work that must be done on specific lines or sections of your code. They are less appropriate for more lengthy descriptions of general development tasks.

See Also

Tasks: How to: Control the Task List
Reference: Task List (Visual Studio), Supplying XML Code Comments
Other Resources: Setting Bookmarks in Code
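The mechanism behind the Task List, scanning each line for the language's comment marker followed by a recognized token, is easy to sketch outside Visual Studio. The following Python sketch is my own illustration of that idea; it is not part of Visual Studio or its SDK.

```python
import re

# Recognized task tokens, mirroring the Visual Studio defaults.
TOKENS = ("TODO", "HACK", "UNDONE")

def find_task_comments(source, marker="//"):
    """Return (line_number, token, text) for each task comment in `source`."""
    pattern = re.compile(
        re.escape(marker) + r"\s*(" + "|".join(TOKENS) + r")\b\s*:?\s*(.*)"
    )
    results = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = pattern.search(line)
        if match:
            results.append((lineno, match.group(1), match.group(2).strip()))
    return results

sample = """int x = 1; // TODO Fix this function.
// an ordinary comment, not listed
int y;     // HACK: temporary workaround
"""
for entry in find_task_comments(sample):
    print(entry)
```

A real implementation would also take the comment marker per language (for example "'" for Visual Basic), exactly as the article describes.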
https://docs.microsoft.com/en-us/previous-versions/zce12xx2(v=vs.110)
D. Dice Game

time limit per test: 1.0 s
memory limit per test: 256 MB
input: standard input
output: standard output

A dice is a small cube, with each side having a different number of spots on it, ranging from 1 to 6. Each side of the dice has 4 adjacent sides that can be reached by rotating the dice 90 degrees from the current side. The following picture can help you conclude the adjacent sides for each side of the dice.

In this problem, you are given a dice with the side containing 1 spot facing upwards, and a sum n. Your task is to find the minimum number of moves required to reach the given sum. On each move, you can rotate the dice 90 degrees to bring one of the sides adjacent to the current top side face upwards, and add the value of the new side to your current sum. According to the previous picture, if the side currently facing upwards contains 1 spot, then in one move you can move to one of the sides that contain 2, 3, 4, or 5 spots.

Initially, your current sum is 0. Even though the side containing 1 spot is facing upwards at the beginning, its value is not added to your sum, which means that you must make at least one move to start adding values to your current sum.

Input

The first line contains an integer T (1 ≤ T ≤ 200), where T is the number of test cases. Then T lines follow, each containing an integer n (1 ≤ n ≤ 10^4), where n is the required sum you need to reach.

Output

For each test case, print a single line containing the minimum number of moves required to reach the given sum. If there is no answer, print -1.

Example

input
2
5
10

output
1
2

Note

In the first test case, you can rotate the dice 90 degrees once to make the side that contains 5 spots face upwards, which makes the current sum equal to 5. So you need one move to reach a sum of 5.
In the second test case, you can rotate the dice 90 degrees once to make the side that contains 4 spots face upwards, which makes the current sum equal to 4. Then rotate the dice another 90 degrees to make the side that contains 6 spots face upwards, which makes the current sum equal to 10. So you need two moves to reach a sum of 10.

Summary: you are given a die whose opposite faces are known, with the face showing 1 spot initially facing up. On each roll you can only turn to one of the four faces adjacent to the current top face. Given a number n, find the minimum number of rolls needed to make the sum reach exactly n; output -1 if it is impossible.

Approach: let dp[i][j] be the minimum number of rolls such that the sum is i and the last face rolled is j. Store the four faces adjacent to each face; the transition is then dp[i+dir[j][k]][dir[j][k]] = min(dp[i][j]+1, dp[i+dir[j][k]][dir[j][k]]).

The code is as follows:

```cpp
#include <bits/stdc++.h>
using namespace std;

const int inf = 0x3f3f3f3f;
// dir[j] lists the four faces adjacent to face j (opposite pairs: 1-6, 2-5, 3-4)
int dir[7][4] = {{0,0,0,0}, {2,3,4,5}, {1,3,4,6}, {1,2,5,6},
                 {1,2,5,6}, {1,3,4,6}, {2,3,4,5}};
int dp[10011][7]; // i + dir[j][k] can reach n + 6 = 10006, so the array must be larger than that

int main() {
    int t, n;
    cin >> t;
    while (t--) {
        cin >> n;
        memset(dp, 125, sizeof(dp)); // fill with a large value
        for (int i = 0; i < 4; i++)
            dp[dir[1][i]][dir[1][i]] = 1; // the first roll starts from face 1
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= 6; j++)
                for (int k = 0; k < 4; k++)
                    dp[i + dir[j][k]][dir[j][k]] =
                        min(dp[i][j] + 1, dp[i + dir[j][k]][dir[j][k]]);
        int ans = inf;
        for (int i = 1; i <= 6; i++)
            ans = min(ans, dp[n][i]);
        if (ans >= inf) printf("-1\n");
        else printf("%d\n", ans);
    }
    return 0;
}
```
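The DP can be cross-checked with a short Python re-implementation (my own sketch, not the author's code); it follows the same recurrence and the same adjacency table.

```python
# Faces adjacent to each face of a die (opposite pairs are 1-6, 2-5, 3-4).
ADJ = {1: (2, 3, 4, 5), 2: (1, 3, 4, 6), 3: (1, 2, 5, 6),
       4: (1, 2, 5, 6), 5: (1, 3, 4, 6), 6: (2, 3, 4, 5)}
INF = float("inf")

def min_moves(n):
    """Minimum rolls to make the running sum exactly n, or -1 if impossible."""
    # dp[i][j]: fewest rolls reaching sum i with face j rolled last
    dp = [[INF] * 7 for _ in range(n + 7)]
    for f in ADJ[1]:                  # the first roll starts from face 1
        dp[f][f] = 1
    for i in range(1, n + 1):
        for j in range(1, 7):
            if dp[i][j] == INF:
                continue
            for f in ADJ[j]:
                dp[i + f][f] = min(dp[i + f][f], dp[i][j] + 1)
    best = min(dp[n][1:])
    return -1 if best == INF else best

print(min_moves(5), min_moves(10))  # 1 2
```

Note that sums 1 and 2+... below the smallest first roll are unreachable; for example min_moves(1) returns -1, matching the problem's -1 case.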
https://www.codetd.com/article/2831440
Substring c++

How to find the lexicographic concatenation of all substrings of a string

Let's look at the problem statement: given a string, find the concatenation of all its substrings in lexicographic order. For the input string "abc", the output will be aababcbbcc, because the concatenation of the substrings in lexicographic order is "a"+"ab"+"abc"+"b"+"bc"+"c" = "aababcbbcc".

Now let's look at the solution.

First, find all the substrings of the string and store them in a string array. The size of the array will be n*(n+1)/2, where n is the length of the input string.

Second, sort the string array to put the substrings in lexicographic order.

Third, concatenate the strings of the array into another, initially empty, string.

Now let's look at the implementation. First, we find all the substrings of the string and store them in an array arr. We use a double loop to do this: i goes from 0 to n-1 and len goes from 1 to n-i. We find each substring using s.substr(i, len), which returns the substring starting at index i with length len, and add it to the array. Next, sort the string array into lexicographic order using sort(). Then we concatenate the strings of the array: create an empty string result, add all the substrings of the array to it, and finally return result. The double loop alone makes this solution at least O(n²); copying the substrings and sorting them add further cost.

std::string::substr

Syntax: string substr (size_t pos = 0, size_t len = npos) const;

Parameters:
pos: position of the first character to be copied.
len: length of the substring.
size_t: an unsigned integral type.

Return value: a string object.

```cpp
#include <iostream>
#include <string>
using namespace std;

int main()
{
    // Take any string
    string s1 = "Geeks";

    // Copy three characters of s1 (starting from position 1)
    string r = s1.substr(1, 3);

    // prints the result
    cout << "String is: " << r;
    return 0;
}
```

Output: String is: eek

Returns the substring [pos, pos+count).
If the requested substring extends past the end of the string, or if count == npos, the returned substring ends at the end of the string.

How to use the string.substr() function? If I am correct, the second parameter of substr() should be the length of the substring. How about b = a.substr(i, 2)?

The substr() function returns a substring of the current string, starting at index, and length characters long. If length is omitted, it defaults to string::npos.

std::stoi. Parameters: str, a string object with the representation of an integral number; idx, a pointer to an object of type size_t.

To test whether one string contains another, use std::string::find as follows:

```cpp
if (s1.find(s2) != std::string::npos) {
    std::cout << "found!" << '\n';
}
```

Note: "found!" will be printed if s2 is a substring of s1; both s1 and s2 are of type std::string.

You might see some C++ programs that use the size() function to get the length of a string. This is just an alias of length(). To get the length of a string, use the length() function:

```cpp
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string txt = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    cout << "The length of the txt string is: " << txt.length();
    return 0;
}
```
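The three steps of the article's algorithm can be cross-checked quickly; here is the same approach in a short Python sketch (my own, not the article's code), handy for verifying the expected output.

```python
def lex_concat(s):
    """Concatenate all substrings of s in lexicographic order."""
    n = len(s)
    # Step 1: collect all n*(n+1)/2 substrings
    subs = [s[i:i + length] for i in range(n) for length in range(1, n - i + 1)]
    # Step 2: sort them lexicographically
    subs.sort()
    # Step 3: join them into one result string
    return "".join(subs)

print(lex_concat("abc"))  # aababcbbcc
```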
https://epratap.com/category/cpp/
Hello Gurus, I'm experimenting with (learning) how to write text data to a file, but I could not write the text on new lines. I thought of appending "\n" to each line, but that does not look good because there is every possibility of forgetting to append it. Hence I'm looking for a method similar to the println method of the System class. Following is the code snippet I have written, which should explain my pain. Looking forward to constructive comments.

```java
package LearningExamples;

import java.io.File;
import java.io.FileWriter;
import java.io.BufferedWriter;
import java.io.IOException;

public class Write2File {

    public Write2File() {
        System.out.println("Write2File Constructor was called");
    }

    public static void main(String[] args) {
        try {
            File file = new File("/home/ganesh/MyFile.txt");
            FileWriter fw = new FileWriter(file);
            BufferedWriter bw = new BufferedWriter(fw);
            bw.write("Testing file writing!");
            bw.write("next line");
            bw.close();
        } catch (IOException ioe) {
            System.out.println("IOException was caught!");
            ioe.printStackTrace();
        }
    }
}
```
http://www.javaprogrammingforums.com/whats-wrong-my-code/27440-write-new-lines-file.html
This post goes into the details of how you can add a "save game" feature to your games. Python's built-in shelve module makes this very easy to do, but there are some pitfalls and tips that you might want to learn from this post before trying to code it up yourself. To give an example of adding a "save game" feature to a game program, I'll be taking the Flippy program (an Othello clone) from Chapter 10 of "Making Games with Python & Pygame" (and Reversi from Chapter 15 of "Invent Your Own Computer Games with Python"). If you want to skip ahead and see the Flippy version with the "save game" feature added, you can download the source code and image files used by the game. You need Pygame installed to run Flippy (but not Reversi).

The Naïve Ways to Implement "Save Game"

A save game feature works by taking all of the values in the program's variables (which taken together are called the game state) and writing them out to a file on the hard drive. The game program can be shut down, and when it is next started again the values can be read from the file back into the program's variables. If you are familiar with Python's file I/O and the open(), write(), readline(), and close() functions, you might think that you can just open a file in write mode and then write out all the data that you want to load the next time the player plays the game. This is doable, but it turns out to be a bad way to implement a "save game" feature.

Quick Start: The shelve Built-In Module

The shelve module has a function called shelve.open() that returns a "shelf file object" that can be used to create, read, and write data to shelf files on the hard drive. These shelf files can store any Python value (even complicated values like lists of lists, or objects of classes you make). Say you had a variable with a list of lists of strings, like the mainBoard variable in the Flippy program.
Here's how you can save the state of all 64 spaces on the board (which are 64 string values) and the other variables (playerTile, computerTile, showHints, and turn):

```python
import shelve

shelfFile = shelve.open('saved_game_filename')
shelfFile['mainBoardVariable'] = mainBoard
shelfFile['playerTileVariable'] = playerTile
shelfFile['computerTileVariable'] = computerTile
shelfFile['showHintsVariable'] = showHints
shelfFile.close()
```

The shelve.open() function returns a "shelf file object" that you can store values in using the same syntax as a Python dictionary. You don't have to put the word "Variable" at the end of the key; I just did that to point out that the key doesn't have to be the same as the name of the variable whose value is being stored. In fact, just like any dictionary key, it doesn't even need to be a string. The data stored in the shelf object is written out to the hard drive when shelfFile.close() is called.

Note that the shelf file name is 'saved_game_filename', which doesn't have an extension. An extension isn't needed, but you can add one if you want. This will be explained in more detail below.

Here's the code to load the game state from a shelf file:

```python
import shelve

shelfFile = shelve.open('saved_game_filename')
mainBoard = shelfFile['mainBoardVariable']
playerTile = shelfFile['playerTileVariable']
computerTile = shelfFile['computerTileVariable']
showHints = shelfFile['showHintsVariable']
shelfFile.close()
```

Depending on your platform, the shelve module may actually create several files. If you named your shelf file 'some_file.txt', then the files will be some_file.txt.bak, some_file.txt.dat, and some_file.txt.dir.

Security Warning

Just like with any file, your players can modify the values in the shelf file. You can try obfuscating the data in it, but this never works in the long run. What this means in most cases is that people can make saved-game hack programs to let players cheat. That's not really a problem. What can be a problem is if your game executes code depending on the content of the shelf file, because then this can have bad security implications.
Say that as part of the save game file, you include a string that tells your game what program to run. Something like this:

```python
shelfFile['programToRun'] = 'notepad.exe'
```

A malicious hacker could change the shelf file so that instead of the string 'notepad.exe' it is 'virus.exe' or some other value that could cause your game program to act badly because of a saved game file. In most cases, your games won't store data like this. But it's something that I just wanted to point out.

Examples: flippy_withsavegame.py and reversi_withsavegame.py

The good news is that the shelve module makes it as simple as possible to convert the values in variables to files on the hard drive, and vice versa. Just call shelve.open(), assign the values to the shelf file object, and then call the close() method. But it also helps to see this used in actual code. I've modified a couple of Othello games from "Invent Your Own Computer Games with Python" and "Making Games with Python & Pygame". Both are written for Python 3. Reversi is an Othello clone that uses ASCII text for graphics. Flippy is an Othello clone that has real graphics. You will need to download and install Pygame to run it.

4 thoughts on "Implement a 'Save Game' Feature in Python with the shelve Module"

Great post. I've been doing this the old way with I/O functions. I'll try this for sure. Thx :)

I like using YAML via PyYAML for this sort of thing because it's a standard format that can be read by other languages and I can easily edit the files manually if necessary. Unlike XML or other similar options, you can take a Python dictionary and call yaml.dump(dict) and it will give you a nice human-readable YAML representation of the data without having to mess around with any further details.

Hi. I know this was posted quite a while ago, but if you don't mind, could you explain what to do if you need to save methods?
I'm working on a game and each character has a dictionary of the commands they can use (with the command to type in as the key, and the function for that command as the value), and I want this saved, as some characters might have different skills and so on. When I try to shelve the characters I get an error because instance methods can't be pickled. Is there a way around this, or will I have to completely rewrite the way commands work? Thanks.

Hi. I know this was posted a while ago, but if you don't mind, could you explain what to do if you want to shelve methods? I'm working on a game where each character has a dictionary of commands (with the string to be typed in as the key and the function for the command as the value). I want to save this, as different characters might have different skills and so on. When I try to shelve it I get an error because instance methods can't be pickled. Is there a way around this, or do I have to rewrite the way commands work? Thanks.
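One common workaround for the commenters' pickling problem (a sketch of my own, not from the post): shelve only the command *names*, which are plain strings, and rebuild the name-to-function mapping from a registry when the game loads.

```python
import os
import shelve
import tempfile

# The command functions and the Character class here are made up for the demo.
def attack():
    return "attack!"

def heal():
    return "healed"

COMMAND_REGISTRY = {"attack": attack, "heal": heal}

class Character:
    def __init__(self, command_names):
        self.command_names = list(command_names)       # plain strings: picklable
        self.commands = {name: COMMAND_REGISTRY[name]  # functions: rebuilt, never saved
                         for name in self.command_names}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "save")
    hero = Character(["attack"])

    # Save only the names, not the functions.
    shelfFile = shelve.open(path)
    shelfFile["heroCommands"] = hero.command_names
    shelfFile.close()

    # Load the names and rebuild the command dictionary from the registry.
    shelfFile = shelve.open(path)
    restored = Character(shelfFile["heroCommands"])
    shelfFile.close()

print(restored.commands["attack"]())  # attack!
```

The design choice is that functions live in code and only data lives in the save file, which also sidesteps the security concern from the post about executing whatever a save file contains.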
http://inventwithpython.com/blog/2012/05/03/implement-a-save-game-feature-in-python-with-the-shelve-module/?wpmp_switcher=mobile
Hey, I am a beginner in Java. I am not able to work out the differences between an interface and an abstract class. Do abstract classes have defined methods? Do those methods have bodies? If so, given that abstract classes cannot be instantiated as objects, how are those methods invoked?

An interface is a static context template that classes can implement. The methods within the interface have no body and must be of public access. In addition, interfaces can also contain "constants" in which data is declared and defined. An example:

```java
public interface MyInterface {
    public final int VALUE = 100; // constant
    public void doSomething();    // method declaration within interface
}
```

When a class implements an interface, the class also implements the methods the interface contains. However, the implementing class MUST define the methods implemented. An example:

```java
public class MyTestClass implements MyInterface {
    public void doSomething() {
        System.out.println(VALUE);
    }
}
```

Notice that the class MyTestClass doesn't declare VALUE; however, it is defined in the interface, and therefore MyTestClass also gets the publicly accessible VALUE.

An abstract class is much like both a class AND an interface, however more so a class. An abstract class has the potential to have default methods as well as interface-like methods that MUST be defined by concrete subclasses that extend from the abstract class. Furthermore, the concept of abstract is "not fully defined," so in that respect, abstract classes cannot be instantiated.
An example:

```java
public abstract class MyAbstractClass {
    protected abstract void subCommand();

    public final void templateMethod() {
        System.out.println("Performing a defined command...");
        subCommand();
        System.out.println("SubCommand finished!");
    }
}
```

Do not be distracted by the protected and final modifiers. The key focus is the abstract void subCommand method. Notice that an abstract class is like an interface in that it can house methods without definitions (so long as they are declared abstract), and additionally you can't instantiate an abstract class, much like you can't instantiate an interface. However, when you are using an abstract class in a subclass, you must override the abstract methods you are implementing from the abstract class. An example:

```java
public class MyOtherClass extends MyAbstractClass {
    protected void subCommand() {
        System.out.println("Whoo! This is my method! O_O");
    }

    public static void main(String... args) {
        MyAbstractClass mac = new MyOtherClass();
        mac.templateMethod();
    }
}
```

Notice that I'm storing MyOtherClass into a reference variable of type MyAbstractClass and then calling templateMethod. Because MyOtherClass has a specialized implementation of subCommand, the call to templateMethod polymorphically calls the overridden subCommand within the template algorithm.

Hopefully with the above example you can see why abstract classes and interfaces are extremely useful. The major difference between an abstract class and an interface is the way Java handles each: you can extend only one class, but you can implement an unlimited number of interfaces. That being said, use interfaces whenever possible if the implementing class needs more implementations.

Suppose that for the abstract class InputStream we can call the implemented method read(byte[]) through a reference. How is this method referred to by its abstract class, given that the abstract class is never instantiated?
Suppose that for the abstract class InputStream we can call the implemented method read(byte[]) through a reference. How is this method referred to by its abstract class, given that the abstract class is never instantiated?

I'm not sure if I'm understanding the question. Do you mean: how are you able to call read from an object of type InputStream if you cannot directly instantiate one? If so, then please re-read my post; this is mentioned in there. If you mean extending the abstract class and then using read, you will have to override the read method with your own implementation of read. Your concrete class that extends from the abstract class should not be marked abstract, so that you will be able to instantiate your concrete class and still have the functionality of the extended class. Read my above post thoroughly; this is also mentioned. Also, this might help: CLICK!

You know what? You should use an interface when you want to take all the common features from different things. Say you have one interface that contains method a() and another interface that contains method b(). Now, in the situation where you want your program to have both a() and b() present, with the restriction that a() and b() shouldn't be declared in one file, you should break them into two interfaces, because your program can implement two interfaces simultaneously. So here you can apply an interface. So when I need to provide multiple-inheritance-like functionality, I should use an interface. You should use an abstract class when you want multilevel inheritance, because a class can extend only one parent class. Am I clear on this point? If not, let me know and I'll explain with real-life scenarios and examples to make clear where to use an interface and where to use an abstract class. ...
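For a cross-language comparison, the same template-method arrangement can be sketched with Python's abc module (my own sketch, not from the thread); the abstract method and the no-instantiation rule behave much like the Java version above.

```python
from abc import ABC, abstractmethod

class MyAbstractClass(ABC):
    @abstractmethod
    def sub_command(self):
        """Subclasses must supply this, like the Java abstract method."""

    def template_method(self):
        # The fixed algorithm calls the subclass hook polymorphically.
        steps = ["Performing a defined command..."]
        steps.append(self.sub_command())
        steps.append("SubCommand finished!")
        return steps

class MyOtherClass(MyAbstractClass):
    def sub_command(self):
        return "Whoo! This is my method! O_O"

mac = MyOtherClass()   # the reference can be treated as the abstract type
print(mac.template_method())
# MyAbstractClass() itself raises TypeError, mirroring Java's
# "cannot instantiate an abstract class" rule.
```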
https://www.daniweb.com/programming/software-development/threads/142503/how-is-abstact-class-different-from-interface
Edward Moemeka

My new book, Real World Windows 8 App Development with JavaScript (Apress), will be out July 10th. Pre-order now. You might also like my other titles: Professional C# (Wrox) and C# for Java Developers (Sams) (all available on Amazon).

Inside Windows Platform | Going from Windows Phone Silverlight to Windows RT XAML

Awesome video! Can you shed some light on when WinPRT will have ServiceModel and the associated namespaces so we can *really* start porting? All these samples are great from a UI perspective but don't include the tremendous amount of backend redesigning that needs to happen for a port (because of the lack of a WCF stack). Most of us built our WP Silverlight apps using WCF web services. For some bizarre reason everyone keeps giving "LongListSelector" and "namespace change" examples; these are just about the least of anyone's concerns given the huge backend effort needed to commit to WinPRT. For my part, I spent a week converting one of my apps without even suspecting that such a thing would be omitted and not clearly stated. Then I right-clicked on my project searching for the "Add Service Reference" menu item and cried myself to sleep... Please give us some good news; every release of your UI SDKs lately seems to be missing some obscure critical piece that just delays innovation and development.

What's New In C# 6.0

@MadsTorgersen: Thanks for your feedback. This request might be too late, but please consider making anonymous types like new { Name = "john", Age = 5 } serializable by generating a public parameterless constructor for them and also marking the type with the appropriate DataMember/Contract attributes. They are already public types and cannot be redefined at runtime, so why not make them serializable? This reduces code bloat astronomically.

What's New In C# 6.0

Um, what happened to the $ shortcut for dictionary initializers that have a string-based key? I get the squiggles when I try to use it.
This is a critical issue for data access types, as you noted in a previous blog. I am tired of creating one-off data classes that are just extensions of some back-end projection. The $ feature was a nice middle ground, since anonymous types still cannot be serialized properly.

Enterprise App Deployment for Windows and Windows Phone

...OR... you can just write your enterprise code in WPF (even web based) and not have to jump through 10,000 hoops to get it installed. Why would you make it so difficult to simply install an app, effectively making every other UI technology you have cannibalize the use of WinRT and apps in the enterprise?!? What does one gain by using WinRT for LOB? There is absolutely NO value proposition based on these deployment rules. If you only use store-compliant APIs, then the store is best for deploying; if you don't, then you are basically working three times as hard to make an application that will look (as of Windows 10) and feel just like its native counterpart. You are shackling software development to IT guys who are conservative, slow, and certainly don't have a passion for the technology. Do you expect a VP on the infrastructure team to approve such a process (maybe even have to train people!) just so I can build a cool reporting utility for my manager? This type of approach is such a strategic choice that it kills the ability of visionary technologists who are *not* in senior leadership positions to create enough of a groundswell to make WinRT's merits visible to the people who *are* in leadership positions. But there is no simple way to explain to leaders what the actual value proposition of WinRT in this scenario is, because its value is something only a visionary technologist could divine. See the problem? Please remove all these pointless restrictions, because they really are idiotic. People can already install and run programs from anywhere, and that is just not going to change.
I agree that there should be an API subset between store apps and normal apps; you can even put up a prompt or something every time a sideloaded app is run, but this madness has to stop. GET OUT OF YOUR OWN WAY!!! And let us make this happen.

The New Windows Phone Application Model

Anders Hejlsberg: Introducing TypeScript

!!!!!

Visual Studio Toolbox: Visual Studio 11 Beta with Jason Zander

Future directions for C# and Visual Basic

Windows 8 Running on ARM

!

Allen Wirfs-Brock and Chris Wilson: EcmaScript, JavaScript and the Web

I know JavaScript rules, but I just always feel like I'm stitching things together with it.

Allen Wirfs-Brock and Chris Wilson: EcmaScript, JavaScript and the Web

Will there ever be a new compiled language for the web? One that all the browsers will understand? I really wish that something else existed beyond JavaScript.

Where the Multitouch Devices Are pt. 2.
https://channel9.msdn.com/Niners/Moemeka
Burning Bootloaders Into AVRs Using Arduino

Intro: Burning Bootloaders Into AVRs Using Arduino

This instructable is the result of my failure with the Optiboot (8MHz) bootloader on an ATmega8. While trying that, the clock fuse was accidentally set to 16MHz, which prevented me from using my AVR: it would only boot up if I provided it an external 16MHz oscillator, which I didn't have. While looking for solutions, I finally stumbled upon this link!

This instructable uses two Arduinos, one as a programmer and the other as a pre-built Arduino circuit. This method is therefore a real help for those who have multiple Arduinos but no programmer such as a USBasp.

Now coming back to my ATmega8: since it required a 16MHz oscillator, and Arduinos have one soldered in at the respective pins for the ATmega328, wouldn't it be wise to use the ATmega8 in place of the ATmega328, because both have the same pin configuration? That solved my external oscillator issue, and hence I proceeded with everything as shown in this instructable.

I'd like to add that this procedure will work with all AVRs having the same pin configuration as the ATmega328. In other words, your AVR has to be pin compatible with the ATmega328. To my knowledge, this includes all the variants of ATmega8, ATmega328, ATmega48 and ATmega88. There may be others as well. If you know them, comment them down below.

Step 1: Gather Around Some Stuff

The best thing about this method for burning bootloaders is that it doesn't even require a single electronic component. All that it requires are the basics!

Requirements:
- Male to Male Jumper Wires - 6
- AVR Chip (in which the bootloader is to be burnt). I'll be proceeding with the instructable by taking the ATmega8 as an example. The AVR should be pin compatible with the ATmega328.
- Arduino UNO (or any Arduino with a 28-pin IC base). Make sure that its ATmega328 is removable and not SMD.
It will be used to hold your AVR and to burn a bootloader into it. I'll be referring to this board as Arduino-1 throughout this instructable.
- Another Arduino (any version capable of storing ArduinoISP). This Arduino board will be used as an ISP because I don't have one. If you do have an ISP, then simply hook it up in place of the other Arduino. I'll be calling it Arduino-2 throughout this instructable.

Now that you have all the prerequisites, let's move on and start tackling the problem head on!

Step 2: Replacing the Arduino IC

Pick up Arduino-1, remove its IC (ATmega328) and insert your AVR in its place. The IC can be easily removed using tweezers by applying leverage at one of its ends and slowly pushing the tweezers in below the IC. After that, insert your AVR into the IC base in the correct orientation.

Step 3: Using Arduino As an ISP

If you have an ISP beforehand, skip this step! If not, then proceed with this. The Arduino IDE has a lot of useful sketch examples pre-installed. One of them is the ArduinoISP sketch, which configures an Arduino in such a way that it acts as an ISP. Therefore, to make your Arduino an ISP, simply upload the ArduinoISP sketch to it. In our case, this sketch has to be uploaded to Arduino-2. The pin assignments of the Arduino when it acts as an ISP are shown above in the pictures. Refer to them for a better understanding of the wiring. Here is the code for ArduinoISP if you are unable to find it!

// This sketch turns the Arduino into a AVRISP
// using the following arduino pins:
//
// Pin 10 is used to reset the target microcontroller.
//
// By default, the hardware SPI pins MISO, MOSI and SCK are used
// to communicate with the target. On all Arduinos, these pins can be found
// on the ICSP/SPI header:
//
//               MISO °. . 5V (!) Avoid this pin on Due, Zero...
//               SCK   . . MOSI
//               RESET . . GND
//
// On some Arduinos (Uno,...), pins MOSI, MISO and SCK are the same pins as
// digital pin 11, 12 and 13, respectively. That's why many tutorials instruct
// you to hook up the target to these pins. If you find this wiring more
// practical, have a define USE_OLD_STYLE_WIRING. This will work even when not
// using an Uno. (On an Uno this is not needed).
//
// Alternatively you can use any other digital pin by configuring
// software ('BitBanged') SPI and having appropriate defines for
// PIN_MOSI, PIN_MISO and PIN_SCK.
//
// IMPORTANT: When using an Arduino that is not 5V tolerant (Due, Zero, ...)
// as the programmer, make sure to not expose any of the programmer's pins
// to 5V. A simple way to accomplish this is to power the complete system
// (programmer and target) at 3V3.
//
// Put an LED (with resistor) on the following pins:
// 9: Heartbeat   - shows the programmer is running
// 8: Error       - Lights up if something goes wrong (use red if that makes sense)
// 7: Programming - In communication with the slave
//

#include "Arduino.h"
#undef SERIAL

#define PROG_FLICKER true

// Configure SPI clock (in Hz).
// E.g. for an attiny @128 kHz: the datasheet states that both the high
// and low spi clock pulse must be > 2 cpu cycles, so take 3 cycles i.e.
// divide target f_cpu by 6:
//     #define SPI_CLOCK (128000/6)
//
// A clock slow enough for an attiny85 @ 1MHz, is a reasonable default:

#define SPI_CLOCK (1000000/6)

// Select hardware or software SPI, depending on SPI clock.
// Currently only for AVR, for other archs (Due, Zero,...),
// hardware SPI is probably too fast anyway.

#if defined(ARDUINO_ARCH_AVR)

#if SPI_CLOCK > (F_CPU / 128)
#define USE_HARDWARE_SPI
#endif

#endif

// Configure which pins to use:

// The standard pin configuration.
#ifndef ARDUINO_HOODLOADER2

#define RESET     10 // Use pin 10 to reset the target rather than SS
#define LED_HB    9
#define LED_ERR   8
#define LED_PMODE 7

// Uncomment following line to use the old Uno style wiring
// (using pin 11, 12 and 13 instead of the SPI header) on Leonardo, Due...

// #define USE_OLD_STYLE_WIRING

#ifdef USE_OLD_STYLE_WIRING

#define PIN_MOSI 11
#define PIN_MISO 12
#define PIN_SCK  13

#endif

// HOODLOADER2 means running sketches on the atmega16u2
// serial converter chips on Uno or Mega boards.
// We must use pins that are broken out:
#else

#define RESET     4
#define LED_HB    7
#define LED_ERR   6
#define LED_PMODE 5

#endif

// By default, use hardware SPI pins:
#ifndef PIN_MOSI
#define PIN_MOSI MOSI
#endif

#ifndef PIN_MISO
#define PIN_MISO MISO
#endif

#ifndef PIN_SCK
#define PIN_SCK SCK
#endif

// Force bitbanged SPI if not using the hardware SPI pins:
#if (PIN_MISO != MISO) || (PIN_MOSI != MOSI) || (PIN_SCK != SCK)
#undef USE_HARDWARE_SPI
#endif

// Configure the serial port to use.
//
// Prefer the USB virtual serial port (aka. native USB port), if the Arduino has one:
// - it does not autoreset (except for the magic baud rate of 1200).
// - it is more reliable because of USB handshaking.
//
// Leonardo and similar have an USB virtual serial port: 'Serial'.
// Due and Zero have an USB virtual serial port: 'SerialUSB'.
//
// On the Due and Zero, 'Serial' can be used too, provided you disable autoreset.
// To use 'Serial': #define SERIAL Serial

#ifdef SERIAL_PORT_USBVIRTUAL
#define SERIAL SERIAL_PORT_USBVIRTUAL
#else
#define SERIAL Serial
#endif

// Configure the baud rate:

#define BAUDRATE  19200
// #define BAUDRATE  115200
// #define BAUDRATE 1000000

#define HWVER 2
#define SWMAJ 1
#define SWMIN 18

// STK Definitions
#define STK_OK      0x10
#define STK_FAILED  0x11
#define STK_UNKNOWN 0x12
#define STK_INSYNC  0x14
#define STK_NOSYNC  0x15
#define CRC_EOP     0x20 //ok it is a space...
void pulse(int pin, int times);

#ifdef USE_HARDWARE_SPI
#include "SPI.h"
#else

#define SPI_MODE0 0x00

class SPISettings {
  public:
    // clock is in Hz
    SPISettings(uint32_t clock, uint8_t bitOrder, uint8_t dataMode) : clock(clock) {
      (void) bitOrder;
      (void) dataMode;
    };

  private:
    uint32_t clock;

    friend class BitBangedSPI;
};

class BitBangedSPI {
  public:
    void begin() {
      digitalWrite(PIN_SCK, LOW);
      digitalWrite(PIN_MOSI, LOW);
      pinMode(PIN_SCK, OUTPUT);
      pinMode(PIN_MOSI, OUTPUT);
      pinMode(PIN_MISO, INPUT);
    }

    void beginTransaction(SPISettings settings) {
      pulseWidth = (500000 + settings.clock - 1) / settings.clock;
      if (pulseWidth == 0)
        pulseWidth = 1;
    }

    void end() {}

    uint8_t transfer (uint8_t b) {
      for (unsigned int i = 0; i < 8; ++i) {
        digitalWrite(PIN_MOSI, (b & 0x80) ? HIGH : LOW);
        digitalWrite(PIN_SCK, HIGH);
        delayMicroseconds(pulseWidth);
        b = (b << 1) | digitalRead(PIN_MISO);
        digitalWrite(PIN_SCK, LOW); // slow pulse
        delayMicroseconds(pulseWidth);
      }
      return b;
    }

  private:
    unsigned long pulseWidth; // in microseconds
};

static BitBangedSPI SPI;

#endif

void setup() {
  SERIAL.begin(BAUDRATE);

  pinMode(LED_PMODE, OUTPUT);
  pulse(LED_PMODE, 2);
  pinMode(LED_ERR, OUTPUT);
  pulse(LED_ERR, 2);
  pinMode(LED_HB, OUTPUT);
  pulse(LED_HB, 2);
}

int error = 0;
int pmode = 0;
// address for reading and writing, set by 'U' command
unsigned int here;
uint8_t buff[256]; // global block storage

#define beget16(addr) (*addr * 256 + *(addr+1) )
typedef struct param {
  uint8_t devicecode;
  uint8_t revision;
  uint8_t progtype;
  uint8_t parmode;
  uint8_t polling;
  uint8_t selftimed;
  uint8_t lockbytes;
  uint8_t fusebytes;
  uint8_t flashpoll;
  uint16_t eeprompoll;
  uint16_t pagesize;
  uint16_t eepromsize;
  uint32_t flashsize;
} parameter;

parameter param;

// this provides a heartbeat on pin 9, so you can tell the software is running.
uint8_t hbval = 128;
int8_t hbdelta = 8;
void heartbeat() {
  static unsigned long last_time = 0;
  unsigned long now = millis();
  if ((now - last_time) < 40)
    return;
  last_time = now;
  if (hbval > 192) hbdelta = -hbdelta;
  if (hbval < 32) hbdelta = -hbdelta;
  hbval += hbdelta;
  analogWrite(LED_HB, hbval);
}

static bool rst_active_high;

void reset_target(bool reset) {
  digitalWrite(RESET, ((reset && rst_active_high) || (!reset && !rst_active_high)) ? HIGH : LOW);
}

void loop(void) {
  // is pmode active?
  if (pmode) {
    digitalWrite(LED_PMODE, HIGH);
  } else {
    digitalWrite(LED_PMODE, LOW);
  }
  // is there an error?
  if (error) {
    digitalWrite(LED_ERR, HIGH);
  } else {
    digitalWrite(LED_ERR, LOW);
  }
  // light the heartbeat LED
  heartbeat();
  if (SERIAL.available()) {
    avrisp();
  }
}

uint8_t getch() {
  while (!SERIAL.available());
  return SERIAL.read();
}

void fill(int n) {
  for (int x = 0; x < n; x++) {
    buff[x] = getch();
  }
}

#define PTIME 30
void pulse(int pin, int times) {
  do {
    digitalWrite(pin, HIGH);
    delay(PTIME);
    digitalWrite(pin, LOW);
    delay(PTIME);
  } while (times--);
}

void prog_lamp(int state) {
  if (PROG_FLICKER) {
    digitalWrite(LED_PMODE, state);
  }
}

uint8_t spi_transaction(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
  SPI.transfer(a);
  SPI.transfer(b);
  SPI.transfer(c);
  return SPI.transfer(d);
}

void empty_reply() {
  if (CRC_EOP == getch()) {
    SERIAL.print((char)STK_INSYNC);
    SERIAL.print((char)STK_OK);
  } else {
    error++;
    SERIAL.print((char)STK_NOSYNC);
  }
}

void breply(uint8_t b) {
  if (CRC_EOP == getch()) {
    SERIAL.print((char)STK_INSYNC);
    SERIAL.print((char)b);
    SERIAL.print((char)STK_OK);
  } else {
    error++;
    SERIAL.print((char)STK_NOSYNC);
  }
}

void get_version(uint8_t c) {
  switch (c) {
    case 0x80:
      breply(HWVER);
      break;
    case 0x81:
      breply(SWMAJ);
      break;
    case 0x82:
      breply(SWMIN);
      break;
    case 0x93:
      breply('S'); // serial programmer
      break;
    default:
      breply(0);
  }
}

void set_parameters() {
  // call this after reading parameter packet into buff[]
  param.devicecode = buff[0];
  param.revision   = buff[1];
  param.progtype   = buff[2];
  param.parmode    = buff[3];
  param.polling    = buff[4];
  param.selftimed  = buff[5];
  param.lockbytes  = buff[6];
  param.fusebytes  = buff[7];
  param.flashpoll  = buff[8];
  // ignore buff[9] (= buff[8])
  // following are 16 bits (big endian)
  param.eeprompoll = beget16(&buff[10]);
  param.pagesize   = beget16(&buff[12]);
  param.eepromsize = beget16(&buff[14]);

  // 32 bits flashsize (big endian)
  param.flashsize = buff[16] * 0x01000000
                    + buff[17] * 0x00010000
                    + buff[18] * 0x00000100
                    + buff[19];

  // avr devices have active low reset, at89sx are active high
  rst_active_high = (param.devicecode >= 0xe0);
}

void start_pmode() {

  // Reset target before driving PIN_SCK or PIN_MOSI

  // SPI.begin() will configure SS as output,
  // so SPI master mode is selected.
  // We have defined RESET as pin 10,
  // which for many arduino's is not the SS pin.
  // So we have to configure RESET as output here,
  // (reset_target() first sets the correct level)
  reset_target(true);
  pinMode(RESET, OUTPUT);
  SPI.begin();
  SPI.beginTransaction(SPISettings(SPI_CLOCK, MSBFIRST, SPI_MODE0));

  // See avr datasheets, chapter "SERIAL_PRG Programming Algorithm":

  // Pulse RESET after PIN_SCK is low:
  digitalWrite(PIN_SCK, LOW);
  delay(20); // discharge PIN_SCK, value arbitrarily chosen
  reset_target(false);
  // Pulse must be minimum 2 target CPU clock cycles
  // so 100 usec is ok for CPU speeds above 20 KHz
  delayMicroseconds(100);
  reset_target(true);

  // Send the enable programming command:
  delay(50); // datasheet: must be > 20 msec
  spi_transaction(0xAC, 0x53, 0x00, 0x00);
  pmode = 1;
}

void end_pmode() {
  SPI.end();
  // We're about to take the target out of reset
  // so configure SPI pins as input
  pinMode(PIN_MOSI, INPUT);
  pinMode(PIN_SCK, INPUT);
  reset_target(false);
  pinMode(RESET, INPUT);
  pmode = 0;
}

void universal() {
  uint8_t ch;

  fill(4);
  ch = spi_transaction(buff[0], buff[1], buff[2], buff[3]);
  breply(ch);
}

void flash(uint8_t hilo, unsigned int addr, uint8_t data) {
  spi_transaction(0x40 + 8 * hilo,
                  addr >> 8 & 0xFF,
                  addr & 0xFF,
                  data);
}

void commit(unsigned int addr) {
  if (PROG_FLICKER) {
    prog_lamp(LOW);
  }
  spi_transaction(0x4C, (addr >> 8) & 0xFF, addr & 0xFF, 0);
  if (PROG_FLICKER) {
    delay(PTIME);
    prog_lamp(HIGH);
  }
}

unsigned int current_page() {
  if (param.pagesize == 32) {
    return here & 0xFFFFFFF0;
  }
  if (param.pagesize == 64) {
    return here & 0xFFFFFFE0;
  }
  if (param.pagesize == 128) {
    return here & 0xFFFFFFC0;
  }
  if (param.pagesize == 256) {
    return here & 0xFFFFFF80;
  }
  return here;
}

void write_flash(int length) {
  fill(length);
  if (CRC_EOP == getch()) {
    SERIAL.print((char) STK_INSYNC);
    SERIAL.print((char) write_flash_pages(length));
  } else {
    error++;
    SERIAL.print((char) STK_NOSYNC);
  }
}

uint8_t write_flash_pages(int length) {
  int x = 0;
  unsigned int page = current_page();
  while (x < length) {
    if (page != current_page()) {
      commit(page);
      page = current_page();
    }
    flash(LOW, here, buff[x++]);
    flash(HIGH, here, buff[x++]);
    here++;
  }

  commit(page);

  return STK_OK;
}

#define EECHUNK (32)

// write (length) bytes, (start) is a byte address
uint8_t write_eeprom(unsigned int length) {
  // here is a word address, get the byte address
  unsigned int start = here * 2;
  unsigned int remaining = length;
  if (length > param.eepromsize) {
    error++;
    return STK_FAILED;
  }
  while (remaining > EECHUNK) {
    write_eeprom_chunk(start, EECHUNK);
    start += EECHUNK;
    remaining -= EECHUNK;
  }
  write_eeprom_chunk(start, remaining);
  return STK_OK;
}

// write (length) bytes, (start) is a byte address
uint8_t write_eeprom_chunk(unsigned int start, unsigned int length) {
  // this writes byte-by-byte,
  // page writing may be faster (4 bytes at a time)
  fill(length);
  prog_lamp(LOW);
  for (unsigned int x = 0; x < length; x++) {
    unsigned int addr = start + x;
    spi_transaction(0xC0, (addr >> 8) & 0xFF, addr & 0xFF, buff[x]);
    delay(45);
  }
  prog_lamp(HIGH);
  return STK_OK;
}

void program_page() {
  char result = (char) STK_FAILED;
  unsigned int length = 256 * getch();
  length += getch();
  char memtype = getch();
  // flash memory @here, (length) bytes
  if (memtype == 'F') {
    write_flash(length);
    return;
  }
  if (memtype == 'E') {
    result = (char)write_eeprom(length);
    if (CRC_EOP == getch()) {
      SERIAL.print((char) STK_INSYNC);
      SERIAL.print(result);
    } else {
      error++;
      SERIAL.print((char) STK_NOSYNC);
    }
    return;
  }
  SERIAL.print((char)STK_FAILED);
  return;
}

uint8_t flash_read(uint8_t hilo, unsigned int addr) {
  return spi_transaction(0x20 + hilo * 8,
                         (addr >> 8) & 0xFF,
                         addr & 0xFF,
                         0);
}

char flash_read_page(int length) {
  for (int x = 0; x < length; x += 2) {
    uint8_t low = flash_read(LOW, here);
    SERIAL.print((char) low);
    uint8_t high = flash_read(HIGH, here);
    SERIAL.print((char) high);
    here++;
  }
  return STK_OK;
}

char eeprom_read_page(int length) {
  // here again we have a word address
  int start = here * 2;
  for (int x = 0; x < length; x++) {
    int addr = start + x;
    uint8_t ee = spi_transaction(0xA0, (addr >> 8) & 0xFF, addr & 0xFF, 0xFF);
    SERIAL.print((char) ee);
  }
  return STK_OK;
}

void read_page() {
  char result = (char)STK_FAILED;
  int length = 256 * getch();
  length += getch();
  char memtype = getch();
  if (CRC_EOP != getch()) {
    error++;
    SERIAL.print((char) STK_NOSYNC);
    return;
  }
  SERIAL.print((char) STK_INSYNC);
  if (memtype == 'F') result = flash_read_page(length);
  if (memtype == 'E') result = eeprom_read_page(length);
  SERIAL.print(result);
}

void read_signature() {
  if (CRC_EOP != getch()) {
    error++;
    SERIAL.print((char) STK_NOSYNC);
    return;
  }
  SERIAL.print((char) STK_INSYNC);
  uint8_t high = spi_transaction(0x30, 0x00, 0x00, 0x00);
  SERIAL.print((char) high);
  uint8_t middle = spi_transaction(0x30, 0x00, 0x01, 0x00);
  SERIAL.print((char) middle);
  uint8_t low = spi_transaction(0x30, 0x00, 0x02, 0x00);
  SERIAL.print((char) low);
  SERIAL.print((char) STK_OK);
}

//////////////////////////////////////////
//////////////////////////////////////////

////////////////////////////////////
////////////////////////////////////
void avrisp() {
  uint8_t ch = getch();
  switch (ch) {
    case '0': // signon
      error = 0;
      empty_reply();
      break;
    case '1':
      if (getch() == CRC_EOP) {
        SERIAL.print((char) STK_INSYNC);
        SERIAL.print("AVR ISP");
        SERIAL.print((char) STK_OK);
      }
      else {
        error++;
        SERIAL.print((char) STK_NOSYNC);
      }
      break;
    case 'A':
      get_version(getch());
      break;
    case 'B':
      fill(20);
      set_parameters();
      empty_reply();
      break;
    case 'E': // extended parameters - ignore for now
      fill(5);
      empty_reply();
      break;
    case 'P':
      if (!pmode)
        start_pmode();
      empty_reply();
      break;
    case 'U': // set address (word)
      here = getch();
      here += 256 * getch();
      empty_reply();
      break;

    case 0x60: //STK_PROG_FLASH
      getch(); // low addr
      getch(); // high addr
      empty_reply();
      break;
    case 0x61: //STK_PROG_DATA
      getch(); // data
      empty_reply();
      break;

    case 0x64: //STK_PROG_PAGE
      program_page();
      break;

    case 0x74: //STK_READ_PAGE 't'
      read_page();
      break;

    case 'V': //0x56
      universal();
      break;
    case 'Q': //0x51
      error = 0;
      end_pmode();
      empty_reply();
      break;

    case 0x75: //STK_READ_SIGN 'u'
      read_signature();
      break;

    // expecting a command, not CRC_EOP
    // this is how we can get back in sync
    case CRC_EOP:
      error++;
      SERIAL.print((char) STK_NOSYNC);
      break;

    // anything else we will return STK_UNKNOWN
    default:
      error++;
      if (CRC_EOP == getch())
        SERIAL.print((char)STK_UNKNOWN);
      else
        SERIAL.print((char)STK_NOSYNC);
  }
}

Did this step already? Well, that was quick! Proceed to the next step.

Step 4: Wiring Up the Arduino

We are almost halfway there! Referring to the above wiring diagram, connect your Arduino-1 to Arduino-2. In this step, all that we're doing is connecting the pins of the AVR directly to Arduino-2. Note that Arduino-1 doesn't play any role other than supplying an external oscillator in this case. I'm only using it because it gives me female headers for the pins of the AVR inserted in it, and because it has built-in oscillator circuitry. The main schematic from which this setup has been derived is attached to this step. If you intend to use your own programmer rather than an Arduino and are feeling out of place, then don't worry: refer to the diagrams for ISP-to-Arduino connections, which are in this step as well as in the previous one.

Step 5: Burning the Bootloader

Considering you've done everything correctly up till now, we can proceed with burning the bootloader into our AVR.
Follow these steps to do so:
- Connect your PC to Arduino-2.
- Open the Arduino IDE, no matter which version, and do the following.
- Select the board whose bootloader is to be burnt into the AVR. For example, as shown in the picture, you'll choose Arduino UNO if you're burning the bootloader onto an ATmega328, because the Arduino UNO is based around the ATmega328 and therefore its bootloader will be the same.
- Set the COM port to Arduino-2's COM port. In my case, it was COM1. Yours might differ.
- Set the programmer to "Arduino as ISP".
- Hit Burn Bootloader under the Tools menu.

After clicking on "Burn Bootloader", the burning will begin. You should see the constant blinking of the Rx and Tx LEDs on Arduino-2. After about half a minute you should also get a message saying "Done Burning Bootloader".

I would like to add that I didn't burn any official board's bootloader onto my ATmega8! I wanted to use the internal oscillator of my ATmega8 because I don't have any oscillator, so I used a different bootloader. The method I used is covered over here. Keep in mind that the wiring used here is the same as the one used there, so you can keep the wiring you've done till now to proceed with that method. That concludes this step.

Step 6: Congratulations!

Phew! That was some work! Anyway, now your desired bootloader has been burnt into the inserted AVR successfully, and it is up to you whether you use it in combination with Arduino-1 or in other circuits. I was able to get my two ATmega8s into working condition and replace a busted ATmega328 of an Arduino UNO (Arduino-1). I have embedded a video of the ATmega328, in which I burnt the bootloader, programmed with the Blink sketch.

Wondering About Sketch Uploading?

I'd like to add that sketches can be uploaded to the AVR using the identical setup you used to burn the bootloader. However, instead of clicking on "Burn Bootloader", you use the "Ctrl+Shift+U" key combination to upload the opened sketch onto the AVR.
All this key combination does is tell the IDE to upload the sketch using the programmer. I'd like to write more, but I think that I've written all there is to write in this instructable. If you think I'm missing something, you're more than welcome to mention suggestions! I'd appreciate it if you support me on Patreon.

By: Utkarsh Verma

Thanks to Ashish Choudhary for lending his camera and to Abhishek Kumar for lending his Arduino (Arduino-1).

samsulhadi made it!

10 Discussions

7 months ago

I was able to burn the bootloader to my ATmega8 and also successfully tested the Blink sketch sample, all using the old Arduino IDE. If I were to use the latest version of the Arduino IDE, due to the updated libraries, will this still work, and what board should I choose?

Reply 4 months ago

Yes, it will work. I'm using Arduino IDE v1.8.5 and an ATmega8 AVR with a Duemilanove board as the AVR host, and an Arduino Uno as the ISP. Just make sure you select "Arduino NG or older" in the board setting and "ATmega8" as the processor. It works well for me.

Reply 7 months ago

In short, yes. But there are some things to look out for. This instructable relies on the board selection only to convey the bootloader, to be burnt to the MCU, to the Arduino IDE. So it would work if the bootloader you're trying to burn is official, like the Optiboot for the ATmega328 which I've burnt in this instructable. It comes along by default with the Arduino UNO; that's why I chose UNO. So, if you want to burn custom bootloaders, you'll have to switch back to an older Arduino IDE, since newer versions don't allow adding custom boards.

10 months ago

Hey, can't this be done with one Arduino (the SMD version)? And where do you even get a bootloaded ATmega328 in retail in India, let alone the non-bootloaded version? Please advise.

Reply 10 months ago

Yes, it is possible. You "might" require a 16MHz crystal oscillator, but that depends on whether your device has the clock fuse set to 16MHz. If it isn't, then you won't even need the oscillator.
So tell me about your device. Regarding the availability of electronic goods in India, it's really bad. That's why I always order my components from AliExpress. It has all the required stuff.

Reply 10 months ago

This would be a good place to refer to:

11 months ago

Fantastic write-up, I wish I was 1/10th as smart as you at age 16!!!

Reply 11 months ago

Thanks! It really encourages me to do more!

11 months ago

Nice work.

Reply 11 months ago

Thanks :)
https://www.instructables.com/id/Burning-Bootloaders-Into-AVRs-Using-Arduino/
STRTOFFLAGS(3)            BSD Programmer's Manual            STRTOFFLAGS(3)

NAME
     fflagstostr, strtofflags - convert between file flag bits and their
     string names

SYNOPSIS
     #include <unistd.h>

     char *
     fflagstostr(u_int32_t flags);

     int
     strtofflags(char **stringp, u_int32_t *setp, u_int32_t *clrp);

DESCRIPTION
     The fflagstostr() function returns a comma separated string of the file
     flags represented by flags.  If no flags are set, a zero length string
     is returned.  If memory cannot be allocated for the return value,
     fflagstostr() returns NULL.

     The value returned from fflagstostr() is obtained from malloc(3) and
     should be returned to the system with free(3) when the program is done
     with it.

     The strtofflags() function takes a string of file flags, as described
     in chflags(1), parses it, and returns the "set" and "clear" flags such
     as would be given as arguments to chflags(2).  On success, strtofflags()
     returns 0, otherwise it returns non-zero and stringp is left pointing
     to the offending token.

ERRORS
     The fflagstostr() function may fail and set errno for any of the errors
     specified for the library routine malloc(3).

SEE ALSO
     chflags(1), chflags(2), malloc(3)

HISTORY
     The fflagstostr() and strtofflags() functions first appeared in
     OpenBSD 2.8.

MirOS BSD #10-current          January 1,.          STRTOFFLAGS(3)
http://mirbsd.mirsolutions.de/htman/i386/man3/fflagstostr.htm
From: Hartmut Kaiser (hartmutkaiser_at_[hidden])
Date: 2003-12-17 11:28:23

Joel de Guzman wrote:
> Darren Cook wrote:
> > Joel de Guzman wrote:
> >> Robert Ramey wrote:
> >>> Would it be possible that spirit 1.6 also be included in boost 1.31
> >>> perhaps under different directory/namespace ?
> >
> > Hi Joel,
> > I didn't see you directly answer this: could boost/spirit and
> > boost/spirit.1.6 co-exist in boost 1.31?
>
> Possible? I wouldn't say no. However, there are lots of things that
> should be put into consideration. The first one that comes to my mind
> is namespace and directory structure. I do not think that putting
> 1.6.1 in a different namespace and directory is a good idea. That
> would hurt backward compatibility. v1.6 code should work as before.
>
> I'm sure there are other issues as well. However, I won't close my
> mind on the idea. If boost only had a *smart* configuration-based
> download such that when your compiler is VC6, a copy of Spirit 1.6 is
> sent instead, it would be ideal.
>
> I'd like to hear what Hartmut, Dan, Martin, etc. think about this idea.

The only thing I could think of is to include both versions of Spirit into boost and to wrap them with pp constants to ensure that only one version is actually tossed in:

Directory structure:

    boost
      spirit
        spirit_1_6_x   ... Version 1.6.x goes here
        spirit         ... Head version goes here

For _every_ Spirit header (spirit.hpp, spirit/core.hpp etc.) do something like:

boost/spirit.hpp:

    #if defined(BOOST_SPIRIT_USE_VERSION_1_6)
    // this is the only header, which needs to be renamed
    #include <boost/spirit/spirit_1_6_x.hpp>
    #else
    #include <boost/spirit/spirit.hpp>
    #endif

boost/spirit/core.hpp:

    #if defined(BOOST_SPIRIT_USE_VERSION_1_6)
    #include <boost/spirit/spirit_1_6_x/core.hpp>
    #else
    #include <boost/spirit/spirit/core.hpp>
    #endif

The drawback is that it would triple the file count! Does this make any sense?
Regards Hartmut

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/12/57801.php
A statement specifies an action to be performed. Except as indicated, statements are executed in sequence. In other words, we can say that a statement is the text the compiler will attempt to turn into executable instructions. In C, statements always end with a semicolon (;). In a source file, multiple statements can share a single line, and a single statement can span multiple lines.

Language C defines six types of statements and they are:

1. Label Statement

A label consists of a name followed by a colon (:) on a line by itself. Label names shall be unique within a function. Any statement may be preceded by a prefix that declares an identifier as a label name. Labels in themselves do not alter the flow of control, which continues unimpeded across them. Label names that are not referenced might be classified as redundant code. However, labels have uses other than as the destination of goto statements. They may also be used as break points when stepping through code using a symbolic debugger. Automatically generated code may also contain labels that are not jumped to. C does not provide any mechanism for declaring labels before they appear as a prefix on a statement. Labels may be referenced before they are declared. That is, it is possible to goto a label that has not yet been seen within the current function.

Label statements have three forms:

a) identifier: statement;
b) case constant-expression: statement;
c) default: statement;

The case and default labels shall appear only in switch statements; for their use, see the section on selection statements below.
A small program using an identifier label statement is given below.

Without any identifier label statement:

#include<stdio.h>
int main(void){
    int i;
    for(i=0; i<10; i++){
        printf("%d - Executed\n", i);
    }
    printf("Ending Program");
    return 0;
}

Output of Program:

0 – Executed
1 – Executed
2 – Executed
3 – Executed
4 – Executed
5 – Executed
6 – Executed
7 – Executed
8 – Executed
9 – Executed
Ending Program

With an identifier label statement:

#include<stdio.h>
int main(void){
    int i;
    for(i=0; i<10; i++){
        printf("%d - Executed\n", i);
        if(i==5){
            goto ending;
        }
    }
ending:
    printf("Ending Program");
    return 0;
}

Output of Program:

0 – Executed
1 – Executed
2 – Executed
3 – Executed
4 – Executed
5 – Executed
Ending Program

2. Compound Statements

A compound statement contains numerous statements that you enclose within curly brace ({}) punctuation. The statements inside a compound statement can be any kind of statement. A compound statement has two parts: (a) a declaration-list and (b) a statement-list. If there are declarations, they must come before any statements. The scope of each identifier declared at the beginning of a compound statement extends from its declaration point to the end of the block. It is visible throughout the block unless a declaration of the same identifier exists in an inner block. Storage is not allocated and initialization is not permitted if a variable or function is declared in a compound statement with storage class extern. The declaration refers to an external variable or function defined elsewhere. Variables declared in a block with the auto or register keyword are reallocated and, if necessary, initialized each time the compound statement is entered. These variables are not defined after the compound statement is exited. If a variable declared inside a block has the static attribute, the variable is initialized when program execution begins and keeps its value throughout the program. When editing someone else's code, always use the style used in that code.
#include<stdio.h>
int main(void){                 /* Start of 1st compound statement */
    int i;
    for(i=0; i<10; i++){        /* Start of 2nd compound statement */
        printf("%d - Executed\n", i);
    }                           /* End of 2nd compound statement */
    printf("Ending Program");
    return 0;
}                               /* End of 1st compound statement */

3. Expression Statements

The expression statement is the most important part of language C; if you can't understand expression statements, don't move forward until you do. An expression consists of a combination of operators and operands. (An operand, recall, is what an operator operates on.) The simplest expression is a lone operand, and you can build in complexity from there. Thus, we can say that an expression is anything that evaluates to a numeric value. Treating an expression as a statement simplifies the C syntax. However, this specification is needed to handle the resulting value. If a function call is evaluated as an expression statement for its side effects only, the discarding of its value may be made explicit by converting the expression to a void expression.

#include<stdio.h>
int main(void){
    int a, b, c;
    int x, y, z;
    a = b*3+c;    /* This is an expression statement */
    x = y-z;      /* This is an expression statement */
    return 0;
}

Thus, an expression is nothing but a valid combination of constants, variables and operators. Thus, 3, 3 + 2, c and a + b * c – d are all valid expressions.

4. Selection Statements

A selection statement selects among a set of statements depending on the value of a controlling expression. A selection expression is used to make one of two choices, and these may be driven by application requirements. Selection statements written in the C language have a well-defined syntactic and semantic meaning. Selection statements are used to choose one of several flows of control. There are two types of selection statement: if and switch.
4.1 IF – Selection Statements

The controlling expression of an if statement shall have a scalar type. The value of an if selection expression is used to make one of two choices. Values used in this way are generally considered to have a boolean role. Some languages require the controlling expression to have a boolean type, and their translators enforce this requirement. Some coding guideline documents contain recommendations that effectively try to duplicate this boolean type requirement found in other languages. In both forms, the first sub-statement is executed if the expression compares unequal to 0. In the else form, the second sub-statement is executed if the expression compares equal to 0. If the first sub-statement is reached via a label, the second sub-statement is not executed. As a general rule in C, anywhere you can use a simple statement, you can use any compound statement, which is just a number of simple or compound ones enclosed in {}. The ability to replace single statements by complex ones at will is one feature that makes C much more pleasant to use than many other programming languages.

#include <stdio.h>
int main(void){
    int i, j;
    for ( i = 0; i < 10; i++ ){
        printf("Outer loop executing. i = %d\n", i );
        for (j=0; j<3; j++ ){
            printf("Inner loop executing. j = %d\n", j );
            if(i==5){
                goto stop;    /* We have used a condition statement */
            }
        }
    }
    printf( "Loop Exited. i = %d\n", i );    /* This message does not print */
stop:
    printf( "Jumped to stop. i = %d\n", i ); /* Loop will finish when i = 5 */
    return 0;
}

Output of Program:

Outer loop executing. i = 0
Inner loop executing. j = 0
Inner loop executing. j = 1
Inner loop executing. j = 2
Outer loop executing. i = 1
Inner loop executing. j = 0
Inner loop executing. j = 1
Inner loop executing. j = 2
Outer loop executing. i = 2
Inner loop executing. j = 0
Inner loop executing. j = 1
Inner loop executing. j = 2
Outer loop executing. i = 3
Inner loop executing. j = 0
Inner loop executing. j = 1
Inner loop executing. j = 2
Outer loop executing. i = 4
Inner loop executing. j = 0
Inner loop executing. j = 1
Inner loop executing. j = 2
Outer loop executing. i = 5
Inner loop executing. j = 0
Jumped to stop. i = 5

4.2 Switch – Selection Statements

The controlling expression of a switch statement shall have integer type. A switch statement uses the exact value of its controlling expression, and it is not possible to guarantee the exact value of an expression having a floating type. For this reason implementations are not required to support controlling expressions having a floating type. A controlling expression in a switch statement having a boolean role might be thought to be unusual, an if statement being considered more appropriate. However, the designer may be expecting the type of the controlling expression to evolve to a non-boolean role, or the switch statement may have once contained more case labels. The expression of each case label shall be an integer constant expression, and no two of the case constant expressions in the same switch statement shall have the same value after conversion. Some sequences of case label values might be considered to contain suspicious entries or omissions. For instance, a single value that is significantly larger or smaller than the other values (an island), or a value missing from the middle of a contiguous sequence of values (a hole). While some static analysis tools check for such suspicious values, it is not clear to your author what, if any, guideline recommendation would be worthwhile. There may be at most one default label in a switch statement. Some coding guideline documents recommend that all switch statements contain a default label. There does not appear to be an obvious benefit for such a guideline recommendation. To adhere to the guideline, developers simply need to supply a default label and an associated null statement.
There are a number of situations where adhering to such a guideline recommendation leads to the creation of redundant code.

#include <stdio.h>

int main(void){
    int no;
    printf("\nEnter a number with digits from 0 to 9 (1-digit number): ");
    scanf("%d", &no);
    switch(no){
        /* ... case labels for the individual digits elided ... */
        default:
            printf("Kindly Enter the Digits Only\t");
            break;
    }
    return 0;
}

5. Iteration Statements
6. Jump Statements

to be continued…..
https://vineetgupta22.wordpress.com/2011/09/25/statements/
Problem Statement

Suppose you have an integer array. The problem "Segregate even and odd numbers" asks you to rearrange the array so that the even and odd numbers are separated into two segments: the even numbers shifted to the left side of the array and the odd numbers shifted to the right side of the array.

Example

arr[] = { 2, 4, 5, 1, 7, 8, 9, 7 }

2 4 8 1 7 5 9 7

Explanation: All even elements are placed before the odd elements, and the even elements keep the relative order they had in the given input.

Algorithm to Segregate even and odd numbers

1. Set i = -1, j = 0.
2. While j is not equal to n (where n is the length of the array):
   1. Check whether arr[j] is even; if it is:
      1. Do i++, and swap the values of arr[i] and arr[j].
   2. Increase the value of j by 1.
3. Print the array.

Explanation

We are given an integer array and asked to rearrange it into two parts, one of even numbers and one of odd numbers, working in place within the given array, such that the even numbers end up on the left side and the odd numbers on the right side. For this we check each array element: whenever we find an even number, we pull it to the left side of the array. After doing this with every even number, all the odd numbers automatically end up on the right side.

We traverse the array using two indices: j for the regular traversal and i for indexing the even numbers. We walk the array with j and check whether arr[j] is even. We increase i only when arr[j] is found to be even, so while no even number turns up we keep incrementing only j. When we do find an even number, we swap arr[j] with arr[i], which (when the two indices differ) holds an odd number. A swap therefore happens only when an even number is found, and it is either with an odd number or with the element itself. After all these operations we print the resultant array.

Code

C++ code to Segregate even and odd numbers

#include <iostream>
using namespace std;

void getArrangedEvenOdd(int arr[], int n)
{
    int i = -1, j = 0;
    while (j != n)
    {
        if (arr[j] % 2 == 0)
        {
            i++;
            swap(arr[i], arr[j]);
        }
        j++;
    }
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
}

int main()
{
    int arr[] = { 2, 4, 5, 1, 7, 8, 9, 7 };
    int n = sizeof(arr) / sizeof(int);
    getArrangedEvenOdd(arr, n);
    return 0;
}

2 4 8 1 7 5 9 7

Java code to Segregate even and odd numbers

public class rearrangeEvenOdd
{
    public static void getArrangedEvenOdd(int arr[], int n)
    {
        int i = -1, j = 0;
        while (j != n)
        {
            if (arr[j] % 2 == 0)
            {
                i++;
                int temp = arr[i];
                arr[i] = arr[j];
                arr[j] = temp;
            }
            j++;
        }
        for (int k = 0; k < n; k++)
            System.out.print(arr[k] + " ");
    }

    public static void main(String args[])
    {
        int arr[] = { 2, 4, 5, 1, 7, 8, 9, 7 };
        int n = arr.length;
        getArrangedEvenOdd(arr, n);
    }
}

2 4 8 1 7 5 9 7

Complexity Analysis

Time Complexity: O(n), where "n" is the number of elements in the array. We traverse the array until the index j equals n, that is, we traverse the array only once; hence linear time complexity.

Space Complexity: O(1), as no extra space is required. The algorithm itself takes only constant space, though the program as a whole takes linear space to hold the input.
https://www.tutorialcup.com/interview/array/segregate-even-and-odd-numbers.htm
The following line is needed to download the example FITS files used here.

from astropy.utils.data import download_file
from astropy.io import fits

image_file = download_file('', cache=True)

I will open the FITS file and find out what it contains.

hdu_list = fits.open(image_file)
hdu_list.info()

Filename: /Users/erik/.astropy/cache/download/py3/2c9202ae878ecfcb60878ceb63837f5f
No. Name     Type       Cards Dimensions Format
0   PRIMARY  PrimaryHDU 161   (891, 893) int16
1   er.mask  TableHDU   25    1600R x 4C [F6.2, F6.2, F6.2, F6.2]

Generally the image information is located in the PRIMARY block. The blocks are numbered and can be accessed by indexing hdu_list.

image_data = hdu_list[0].data

Your data is now stored as a 2-D numpy array. Want to know the dimensions of the image? Just look at the shape of the array.

print(type(image_data))
print(image_data.shape)

<class 'numpy.ndarray'>
(893, 891)

At this point, we can just close the FITS file. We have stored everything we wanted in a variable.

hdu_list.close()

If you don't need to examine the FITS header, you can call fits.getdata to bypass the previous steps.

image_data = fits.getdata(image_file)
print(type(image_data))
print(image_data.shape)

<class 'numpy.ndarray'>
(893, 891)

plt.imshow(image_data, cmap='gray')
plt.colorbar()

# To see more color maps
#

<matplotlib.colorbar.Colorbar at 0x1114a4e10>

Let's get some basic statistics about our image.

print('Min:', np.min(image_data))
print('Max:', np.max(image_data))
print('Mean:', np.mean(image_data))
print('Stdev:', np.std(image_data))

Min: 3759
Max: 22918
Mean: 9831.48167629
Stdev: 3032.3927542

To make a histogram with matplotlib.pyplot.hist(), I need to cast the data from a 2-D array to something one-dimensional. In this case, I am using ndarray.flatten() to return a 1-D numpy array.

print(type(image_data.flatten()))

<class 'numpy.ndarray'>

NBINS = 1000
histogram = plt.hist(image_data.flatten(), NBINS)

Want to use a logarithmic color scale?
To do so we need to load the LogNorm object from matplotlib.

from matplotlib.colors import LogNorm

plt.imshow(image_data, cmap='gray', norm=LogNorm())

# I chose the tick marks based on the histogram above
cbar = plt.colorbar(ticks=[5.e3, 1.e4, 2.e4])
cbar.ax.set_yticklabels(['5,000', '10,000', '20,000'])

[<matplotlib.text.Text at 0x1134c1a20>,
 <matplotlib.text.Text at 0x1134287f0>,
 <matplotlib.text.Text at 0x1139417f0>]

You can perform math with the image data like any other numpy array. In this particular example, I will stack several images of M13 taken with a ~10'' telescope.

I open a series of FITS files and store the data in a list, which I've named image_concat.

image_list = [ download_file(''+n+'.fits', cache=True) \
               for n in ['1','2','3','4','5'] ]

# The long way
image_concat = []
for image in image_list:
    image_concat.append(fits.getdata(image))

# The short way
#image_concat = [ fits.getdata(image) for image in image_list ]

Now I'll stack the images by summing my concatenated list.

# The long way
final_image = np.zeros(shape=image_concat[0].shape)
for image in image_concat:
    final_image += image

# The short way
#final_image = np.sum(image_concat, axis=0)

I'm going to show the image, but I want to decide on the best stretch. To do so I'll plot a histogram of the data.

image_hist = plt.hist(final_image.flatten(), 1000)

I'll use the keywords vmin and vmax to set limits on the color scaling for imshow.

plt.imshow(final_image, cmap='gray', vmin=2.e3, vmax=3.e3)
plt.colorbar()

<matplotlib.colorbar.Colorbar at 0x1166f0940>

This is easy to do with the writeto() method. You will receive an error if the file you are trying to write already exists. That's why I've set clobber=True.

outfile = 'stacked_M13_blue.fits'

hdu = fits.PrimaryHDU(final_image)
hdu.writeto(outfile, clobber=True)

WARNING: AstropyDeprecationWarning: "clobber" was deprecated in version 1.3 and will be removed in a future version. Use argument "overwrite" instead. [astropy.utils.decorators]

Determine the mean, median, and standard deviation of a part of the stacked M13 image where there is no light from M13. Use those statistics with a sum over the part of the image that includes M13 to estimate the total light in this image from M13.

Show the image of the Horsehead Nebula, but in units of surface brightness (magnitudes per square arcsecond). (Hint: the physical size of the image is 15x15 arcminutes.)

Now write out the image you just created, preserving the header the original image had, but add a keyword 'UNITS' with the value 'mag per sq arcsec'. (Hint: you may need to read the astropy.io.fits documentation if you're not sure how to include both the header and the data.)
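The stacking step and the background statistics asked for in the first exercise can be sketched with plain NumPy; the arrays below are synthetic stand-ins I made up, not the actual M13 frames:

```python
import numpy as np

# Synthetic stand-ins for fits.getdata() results: five noisy "frames"
# with a bright blob in the middle playing the role of the cluster.
rng = np.random.default_rng(0)
frames = [rng.normal(2000.0, 10.0, size=(64, 64)) for _ in range(5)]
for f in frames:
    f[28:36, 28:36] += 500.0          # the "cluster" region

# Stacking is just an element-wise sum over the list of 2-D arrays.
final_image = np.sum(frames, axis=0)

# Background statistics from a corner patch with no "cluster" light.
background = final_image[:16, :16]
bg_mean = np.mean(background)
bg_median = np.median(background)
bg_std = np.std(background)

# Estimated total light: sum over the cluster patch minus the
# background level expected for that many pixels.
patch = final_image[28:36, 28:36]
total_light = patch.sum() - bg_mean * patch.size
print(f"background mean ~ {bg_mean:.0f}, cluster light ~ {total_light:.0f}")
```

The same two lines — a corner-patch statistic and a background-subtracted sum — carry over directly once `frames` is replaced by the real `image_concat` list.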
http://www.astropy.org/astropy-tutorials/FITS-images.html
perlmeditation mstone <p> Most people know perl's range operator in its list context: </p> <p> <code> @digits = (0..9); </code> </p> <p> but in scalar context, the range operator acts like a flip-flop. The manpage will give you the gory details of how it calls the terms on either side, and by the time I was done reading, I had no idea why anyone would ever want such a thing. I filed it away in the back of my mind, though, and later, while trying to unroll the mysteries of asynchronous socket code, had an epiphany. </p> <p> Trying to loop over code that can fail is annoying. Error-handling code likes to branch, and loops prefer linear code. Reconciling the two tends to be work. It's much easier to put the code that can fail into the loop condition, because then you can write the body of your loop for data that passes all the essential sanity/hygiene tests. </p> <p> That idea leads to code like so: </p> <p> <code> while ($data = &func) { &munge ($data); } </code> </p> <p> but when you're dealing with systems that can fail on setup, you have to embed the loop in a conditional: </p> <p> <code> if (&setup) { while ($data = &advance) { &munge ($data); } &teardown; } else { ## life sucks } </code> </p> <p> which is still kinda clunky. </p> <p> That's when the lightbulb went on. </p> <p> The scalar range operator tests its left argument on the first and last passes through a loop, and its right argument on all the remaining passes. Therefore, I reasoned, one could put the setup and teardown code to the left, and the advancement code to the right. I tried it: </p> <p> <code> while (&setup_and_teardown .. &advance) { &munge (&get_data); } </code> </p> <p> and to my mild surprise, it worked. </p> <p> It still didn't send me to nirvana, though. The structure above demands shared variables among the functions called in the loop, and I have this screaming distaste for linking functions together with globals.
Fortunately, objects provide exactly the right kind of encapsulation for the job. </p> <p> The results follow, and if nothing else, help make some sense of the scalar range operator. </p> <readmore> <code> #!/usr/bin/perl -w package Loop_control; ## define some constants used in the simulation: $LIMIT = 5; ## maximum number of iterations $START_ERR = 0.25; ## 25% chance of failing to start $LOOP_ERR = 0.1; ## 10% chance of failing per iteration $FATAL = 0.15; ## 15% chance of fatal errors ## new (nil) : Loop_control_ref # # yer basic constructor. bless the ref, call init(), and return # the result. # # general note regarding style: i've outdented all the # print() statements that generate tracking output. it's a # personal thing.. i find s/^print/# print/ easier than sifting # through the code trying to find that one blasted debugging # statement. # sub new { print "-- Loop_control::new\n"; my $self = bless {}, shift; return ($self->init); } ## init (nil) : Loop_control_ref # # sets up a couple utility variables, but doesn't do anything # earth-shattering. # sub init { print "---- Loop_control::init\n"; my $self = shift; $self->{'state'} = 'new'; ## condition register $self->{'msg'} = ''; ## this object's $! return ($self); } ## updown (nil) : boolean # # the range operator calls this routine on the first and last # passes through the loop. this routine delegates control to # setup() or teardown(), based on the contents of the state # register. # sub updown { print "-- Loop_control::updown\n"; my $self = shift; if ($self->{'state'} eq 'new') { ## have we been here yet? $self->{'state'} = 'running'; return ($self->setup); ## no.. set things up } else { return ($self->teardown); ## yes.. tear things down } } ## setup (nil) : boolean # # set up the data source. this could be any procedure that # might fail, like opening a file or a network connection. # this toy version just fails randomly so you can see the # overall system work. 
# # this routine returns TRUE if the setup succeeds, thus # making the range operator test TRUE the first time it's # polled. # sub setup { print "---- Loop_control::setup - "; my $self = shift; if (rand() > $START_ERR) { $self->{'count'} = 1; ## trivial setup print "TRUE\n"; return (1); } else { print "FALSE - FAIL - FAIL - FAIL -\n"; $self->{'state'} = (rand() < $FATAL) ? 'fatal' : 'error'; $self->{'msg'} = 'failed during setup'; return (0); } } ## teardown (nil) : boolean # # shut down the data source. this routine terminates the loop, # but shouldn't fail in any way that will ruin the data. # # this routine returns FALSE, thus making the range operator # test FALSE as well, thus ending the loop. # sub teardown { print "---- Loop_control::teardown - FALSE\n"; return (0); } ## advance (nil) : boolean # # this routine fetches the next chunk of data. it can fail # in ways that will ruin the transaction, so once again we # simulate failure by rolling dice. # # this routine returns FALSE on success, which seems wierd # until you recall that the range operator is asking, # "have we hit a stopping point yet?" # # $self->{'data'} is a read-only inspection variable. it # does the same thing an accessor method get_data() would, # but doesn't require a function call. it's an indulgence # i grant myself when i'm damsure i can get away with it. # no code anywhere in this package reads $self->{'data'}, # so even if a user does screw around with it, their change # will have no effect on the object's behavior. # sub advance { print "-- Loop_control::advance - "; my $self = shift; if (rand() < $LOOP_ERR) { ## short-circuit on error print "TRUE - FAIL - FAIL - FAIL -\n"; $self->{'state'} = (rand() < $FATAL) ? 
'fatal' : 'error'; $self->{'msg'} = "failed during pass $self->{'count'}"; return (1); } $self->{'data'} = $self->{'count'}; if ($LIMIT > $self->{'count'}) { $self->{'count'}++; print "FALSE\n"; return (0); } else { $self->{'state'} = 'done'; $self->{'msg'} = 'normal termination'; print "TRUE\n"; return (1); } } package main; ## # # now for the simulation. we fill a list with numbers, then # iterate using that list as a queue. items that fail with # recoverable errors get pushed back on the queue for another # try, and items with fatal errors get dropped. # # the real point of this whole mess is to see the tracking # statements for each pass through the loop. you can see # the order in which functions are called, and the TRUE/FALSE # results that go back to the range operator each step of the # way. you'll see a TRUE (FALSE)+ TRUE FALSE sequence when # everything works, and the range operator maps that to the # sequence (TRUE)+ FALSE. # ## @list = (1..10); while (@list) { $i = shift @list; print "======== trying $i\n\n"; ## create a control object and run the loop @cache = (); $obj = new Loop_control; while ($obj->updown .. $obj->advance) { push @cache, $obj->{'data'} * $i; } ## then decide what to do with the results print "\n## $obj->{'msg'}: "; if ($obj->{'state'} eq 'done') { print join (', ', @cache), "\n\n"; } elsif ($obj->{'state'} eq 'error') { print "recoverable. re-queueing $i\n\n"; push @list, $i; } else { print "fatal error. giving up on $i\n\n"; } print "======== end $i\n\n"; } </code>
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=133899
This post by adamab originally appeared last Friday, July 15, 2011 on the AppFabric Team Blog. In the previous blog post on developing AppFabric Applications, we showed you how to create a simple AppFabric application. This app was an ASP.NET web site that used a SQL Database. In this blog post, we’ll use the AppFabric Application Manager to configure, deploy, and monitor that application. Here is a breakdown of what we will cover: - The AppFabric Player application has been imported (where the last blog post left off). Package can be downloaded here (AppFabricPlayer.afpkg). - Update the connection string to point to a SQL Azure database - Initialize the SQL Azure Database with application data - Deploy and start the application - View aggregate ASP.NET monitoring metrics at both web app and web page granularities Before we can do those things, we’ll sign into the AppFabric LABS Portal at using our Live Id. Since this is a limited CTP, only approved users will have access to the service, but all users can try the SDK and VS Tools on your local machines. To request access to the AppFabric June CTP, follow the instructions in this blog post. After signing in with an approved account, we click on AppFabric Services in the bottom left corner and then select Applications from the tree view. Now we can start with our first task, updating the connection string to the SQL database that is used by the AppFabric Player app. Users of the AppFabric June CTP are provided with a SQL Azure database at no charge. We will use this database. To get access to our SQL Azure connection string, we’ll click on the database in the main panel and then click View under Connection String. The resulting dialog lets us copy the connection string to the clipboard. Now we’re ready to go to the Application Manager. We select our namespace from the main content area and then click Application Manager on the ribbon. We can also reach the Application Manager directly by using a URL in this format:. 
We click on AppFabricPlayer to manage that app. Then we’ll click on Referenced Services in the Summary area. Then we’ll click on the VideoDb link to configure the database. Now we will update the connection string using the value we copied earlier. Note that the DatabaseName and the ServerConnectionString fields are separate, so you will have to remove the database name from the value before pasting it in.

Before:
Data Source=myservername;Initial Catalog=mydatabase;User ID=myuserid;Password=mypassword

After:
Data Source=myservername;User ID=myuserid;Password=mypassword

We update both fields and select Save. Next we will create and populate the SQL table that this app expects in the database. We’ll go to to do that. We login with the same credentials that appear in our connection string. We’ll click on New Query from the ribbon and paste in the SQL script below. We execute the script and the database is initialized.

Once the app is deployed and running, we are able to monitor important activity. To simulate activity we wrote a load test that hits the web page once every second or so. If we go back to the application page and click on Monitoring in the Summary area, we’ll see some interesting metrics. These metrics are aggregated at the application level, but if we drill down into the ASP.NET Web Application we can see more granular metrics. We click on Containers in the Summary area and then select Web1, which represents our one and only service group. We then click on PlayerWeb, which represents our ASP.NET Web Application. We then click on Monitoring from the Summary area.
https://azure.microsoft.com/pt-pt/blog/configuring-deploying-and-monitoring-applications-using-appfabric-application-manager/
The Javadoc documentation for the java.io package at Sun's Java site simply says:

Provides for system input and output through data streams, serialization and the file system.

The java.io package

Don't be misled by the simple declaration above into believing that the java.io package is not too complicated. On the contrary, it is the most comprehensive in the java package hierarchy, and constitutes a forest of classes. But that is not a deterrent to the intrepid java explorer. We will begin very simply with files and file access for reading and writing.

File IO

For file manipulation Java provides the following classes:

java.io.File
java.io.FileReader
java.io.FileWriter
java.io.FileInputStream
java.io.FileOutputStream

The File class encapsulates file details such as size, last modified date and so on. It abstracts the semantics of a folder as well as a file.

Listing 1

import java.io.File;   // for File I/O
import java.util.Date; // for converting time in millis to date

public class FileDetails {

    public FileDetails(String currentPath) { // constructor
        File path = new File(currentPath);   // create a file object
        String[] files = path.list();        // Get list of file/folder objects
        System.out.println("Name|Type|Modified|Size");
        for (int i = 0; i < files.length; i++) {
            System.out.println();
            File F = new File(files[i]);
            System.out.print(F.getName());
            String type = F.isDirectory() == true ? "Directory" : "File";
            System.out.print("|" + type);
            System.out.print("|" + new Date(F.lastModified()));
            if (type.equals("File")) {
                System.out.print("|" + F.length());
            }
        }
        System.out.println("\nListed " + files.length + " objects");
    }

    public static void main(String[] args) {
        new FileDetails("."); // run against current directory
    }
}

Listing 1 shows a program that captures the details of files and folders given a path - in this case, the path chosen is the current directory.
Here is the sample output:

Name|Type|Modified|Size
xml2html.java|File|Thu Dec 12 12:31:58 GMT+05:30 2002|729
stocks.xml|File|Sat Dec 07 15:32:22 GMT+05:30 2002|511
stocks.xsl|File|Sat Dec 07 15:32:44 GMT+05:30 2002|709
xml2html.class|File|Thu Dec 12 12:32:16 GMT+05:30 2002|1390
stocks.html|File|Sat Dec 07 15:34:54 GMT+05:30 2002|374
jstl_test.jsp|File|Tue Jan 07 16:43:12 GMT+05:30 2003|1168
spel.jsp|File|Tue Jan 07 16:34:56 GMT+05:30 2003|482
ejb|Directory|Sat Jan 18 09:26:46 GMT+05:30 2003
ant|Directory|Mon Jan 20 12:47:40 GMT+05:30 2003
jstl|Directory|Tue Jan 21 16:57:24 GMT+05:30 2003
jmail|Directory|Wed Jan 22 15:11:06 GMT+05:30 2003
collections|Directory|Mon Feb 03 12:12:52 GMT+05:30 2003
Calculator|Directory|Tue Feb 04 12:47:10 GMT+05:30 2003
FileDetails.java|File|Wed May 28 12:44:24 GMT+05:30 2003|754
FileDetails.class|File|Wed May 28 12:44:56 GMT+05:30 2003|1370
Listed 52 objects

The output list is truncated for reasons of space. The last two entries include the details of the file shown in Listing 1. Some of the other nice things that you can do with the File object are:

Make a directory, or delete a file or folder
Get the path name, absolute and canonical
Get parent details, or list the contents of a folder
Check for the existence of a file or folder
Check for properties such as 'hidden'
Filter file names for specified types such as 'java', 'pdf', 'html', and so on

For constraints of space and time, let's take leave of the File object, and poke around with reading and writing of files. Before we look at the details of coding, we need to clear up a few things first. Like C++, Java handles input/output as streams. A data stream is a sequence of bytes or characters. Data is streamed from one system to another over a network, or within a system from one drive to another, or from the system console. A stream is either byte-oriented or character-oriented. It is also possible to convert a byte stream to a character stream through the use of a bridge class.
The converted character stream may accept default platform encoding, or it may be set explicitly. The byte streams are of three basic kinds:

File Streams - FileInputStream, FileOutputStream
Buffered Streams - BufferedInputStream, BufferedOutputStream
Data Streams - DataInputStream, DataOutputStream

There are of course several other classes to suit different needs; it will suffice to discuss these classes as an introduction to the java.io package. The byte streams are rooted at the abstract InputStream and OutputStream classes. The concrete subclasses do the actual job of data transfer. We will look at some code examples.

Listing 2

// Using FileInputStream and FileReader classes
import java.io.*;

public class FileStreamReader {

    static String fileName = "FileStreamReader.java";

    // Using InputStream
    public FileStreamReader(FileInputStream fis) throws IOException {
        byte[] bites = new byte[1024];
        int bitesRead = fis.read(bites);
        System.out.println(bitesRead + " bytes read.");
    }

    // Using Reader
    public FileStreamReader(FileReader fr) throws IOException {
        char[] chars = new char[1024];
        int charsRead = fr.read(chars);
        System.out.println(charsRead + " chars read.");
    }

    public static void main(String[] args) {
        // Using InputStream
        try {
            FileInputStream fin = new FileInputStream(fileName);
            new FileStreamReader(fin);
            fin.close();
        } catch (IOException ioe) {
            System.err.println("Error reading file");
        }

        // Using Reader
        try {
            FileReader fr = new FileReader(fileName);
            new FileStreamReader(fr);
            fr.close();
        } catch (IOException ioe) {
            System.err.println("Error reading file");
        }
    }
}

Output:
1005 bytes read.
1005 chars read.

Listing 2 demonstrates the use of both the byte stream and character stream classes. A buffer is created to hold the bytes or chars as they are read in, and at the end the count is output to the console. It is more efficient to read buffered input, however, and the following code snippet shows how.
A buffered reader or input stream is used to read one line at a time.

BufferedReader br = new BufferedReader(new FileReader(fileName));
String line;

Ditto for BufferedInputStream. If you would experiment with Listing 2 above, just wrap the FileReader object inside a BufferedReader instance, and read off the lines one by one with the readLine() method, outputting each line to the console as you read:

while ((line = br.readLine()) != null) // EOF condition
{
    System.out.println(line);
}

That's it. Now you know all you ever needed to know about Java IO basics. Armed with this information, you may enter the IO forest to further explore on your own.
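Putting the pieces together, here is a hedged, self-contained sketch of the buffered line reading just described (the class name and helper method are mine, not from the article); it writes a small file, then counts its lines with a BufferedReader, calling readLine() exactly once per pass so no line is skipped:

```java
import java.io.*;

public class BufferedLineCount {

    // Count lines in a file using a BufferedReader wrapped around a
    // FileReader. readLine() returns null at end of file.
    public static int countLines(String fileName) throws IOException {
        int count = 0;
        try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
            while (br.readLine() != null) { // EOF condition
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        // Write a small sample file, then count its lines.
        File tmp = File.createTempFile("iodemo", ".txt");
        try (FileWriter fw = new FileWriter(tmp)) {
            fw.write("alpha\nbeta\ngamma\n");
        }
        System.out.println(countLines(tmp.getPath()) + " lines read."); // prints "3 lines read."
        tmp.delete();
    }
}
```

Reading in the loop condition and only there is the point: a second readLine() call inside the body would silently drop every other line.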
http://javanook.tripod.com/core/javabook_1_4.html
#include <lib_cloud.h>

Cloud object.

Retrieves the size in the vector vSize.
Retrieves the grid size. There are (x + 1) * (y + 1) * (z + 1) grid points.
Gets the density array.
Gets the density size.
Resizes the cloud object.
Gets the tool data.
Allocates the tool data.
Frees the tool data.
Gets the altitude of a cloud group.
Smooths borders.
Smooths all of the cloud object.
Clears the cloud object to chDensity.
Fills the plane.
Fills a sphere.
Checks for visibility.
Checks if the cloud object is locked.
Draws the cloud object.
Gets the plane index.
Sets the plane index.
Gets the plane position.
Sets the plane position.
Retrieves the cloud private data.
Sets the cloud draw hook.
https://developers.maxon.net/docs/Cinema4DCPPSDK/html/class_cloud_object.html
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)

Assume you have some registry key you want to delete in its entirety, meaning you want to delete it with all children and subtrees. A sample is depicted below (HKCU\MyDemo):

Chances are high you've tried to delete the key with the following piece of code (well, at least in some language or another using the RegDeleteKey function of advapi32):

using System;
using System.Runtime.InteropServices;

namespace RegDeleteTreeDemo
{
    class Program
    {
        [DllImport("advapi32.dll")]
        static extern int RegDeleteKey(uint hKey, string lpSubkey);

        static uint HKEY_CURRENT_USER = 0x80000001;

        static void Main(string[] args)
        {
            int res = RegDeleteKey(HKEY_CURRENT_USER, "MyDemo");
        }
    }
}

If you've tried this at home you'll know it isn't that easy. Effectively, it's like doing an rd on a non-empty directory. The Windows SDK mentions: "The subkey to be deleted must not have subkeys. To delete a key and all its subkeys, you need to enumerate the subkeys and delete them individually." and a sample is available that illustrates doing this.

There are however better alternatives if you really want to delete a whole key with all its descendants. There's a function called SHDeleteKey defined in SHLWAPI, but in Windows Vista this functionality has been added to advapi32 as well, using RegDeleteTree. A sample is shown below:

using System;
using System.Runtime.InteropServices;

namespace RegDeleteTreeDemo
{
    class Program
    {
        [DllImport("advapi32.dll")]
        static extern int RegDeleteTree(uint hKey, string lpSubkey);

        static uint HKEY_CURRENT_USER = 0x80000001;

        static void Main(string[] args)
        {
            int res = RegDeleteTree(HKEY_CURRENT_USER, "MyDemo");
        }
    }
}

It's as easy as this, and now the deletion works like a charm. So, time to get rid of recursive tree deletion stuff.
Warning: Extra care is recommended when working with this function; I'm not responsible for accidental deletions of things like HKCU\Software (haven't tried it myself, don't know whether it would work, expect it would, but don't want to know it at all).
http://community.bartdesmet.net/blogs/bart/archive/2006/12/11/Windows-Vista-_2D00_-Registry-tip_3A00_-RegDeleteTree.aspx
Using the Arachnio With Data.sparkfun.com

Introduction

Sparkfun has produced a lovely tool for cloud storage of data streams, data.sparkfun.com. It's a great fit for the Arachnio, since it only requires a TCP connection and a GET request to log a data record. The data can then be retrieved from the data.sparkfun.com website for later analysis. This is a core function for any kind of sensor network you might want to build with the Arachnio and the Arachnode board. This Instructable shows you how to do it, and it's really pretty easy. I'll be showcasing some of my development testing to show off a real application. In this case, I'd like to explore the up-time performance of the ESP8266 with standard AT command firmware and the decoupling capacitor arrangement. There have been a number of complaints on-line about the stability of the ESP8266, but little in the way of hard data. The current baseline Arachnio uses the same decoupling capacitor arrangement as the typical modules -- a single 10 uF capacitor. This is likely inadequate, as explained in this tutorial from Analog Devices. The next generation will have an improved setup, with a 1 uF and a pair of 100 nF caps joining the 10 uF one. I'll be using the older 0.9.5 firmware for initial testing, then I will move over to the 1.0.1 firmware to get a comparison along that axis as well. For this testing, I installed the Arachnio into an ArachnoProto to protect the pins and remove confounding factors.

Step 1: Tools, Materials, and Software

- One Arachnio
- One ArachnoProto
- Arduino IDE 1.6.0
- ITEAD WeeESP8266 library (21 Apr 2015 commit)
- data.sparkfun.com

Step 2: Setting Up Your Stream

The first step is to set up the data stream you want to upload to.
Sparkfun has made this super easy -- here are the steps:

- Go to
- Enter a title
- Describe what you are doing
- Choose whether to make the stream visible on the public stream list or hidden
- Enter field names -- one per piece of data; they should be descriptive
- Give the stream an alias suitable for inclusion in a URL; otherwise, you will have to use the public key in the URL
- Tag the stream (optional)
- Enter a location (optional)
- Hit Save

You'll then be taken to the new stream page. It will show you everything you need to interact with your new stream. Of particular note are the private and delete keys (blacked out in the attached image). These let you write to, manage, or destroy the stream, and therefore must be kept private. The page gives you the option to download the keys as a json file or have them emailed to you.

Near the bottom, the page will have example URLs that can be used to write to your stream. I've blacked the private key out of the example, but when you create your own, you can copy and paste it into your browser's URL window and then check the stream to see that the data you just posted has appeared.

Step 3: Arachnio Code

Here's the code to run on the Arachnio. Obviously, you will need to enter your own SSID, password, public, and private keys.
    #include "ESP8266.h"

    // replace with your WiFi connection info
    #define WLAN_SSID "xxxx"
    #define WLAN_PASS "yyyy"

    ESP8266 wifi(Serial1);

    void setup() {
      // put your setup code here, to run once:
      Serial.begin(115200);
      Serial1.begin(9600);
      while (!Serial);
      Serial.println("I live!");
      restartESP();
    }

    void loop() {
      static long startTime = millis();
      char getBuffer[256] = {'\0'};

      // check to see if the ESP8266 is still alive
      if (wifi.kick()) {
        wifi.createTCP("data.sparkfun.com", 80);
        long thisTime = millis();
        sprintf(getBuffer,
                "GET /input/[public key]?private_key=[private key]"
                "&baselineespup=%lu&baseline32u4up=%lu\r\n"
                "Host: data.sparkfun.com\r\n\r\n",
                (thisTime - startTime)/1000, thisTime/1000);
        Serial.print("Starting send at ");
        Serial.println(millis());
        wifi.send((const uint8_t *)getBuffer, strlen(getBuffer));
        Serial.print("Ending send at ");
        Serial.println(millis());
        Serial.print(getBuffer);
        delay(100);
        wifi.releaseTCP();
      } else {
        wifi.restart();
        Serial.println("ESP8266 died; restarting\n");
        delay(30000);
        restartESP();
        startTime = millis();
        Serial.println("");
      }
      delay(1000);
    }

The setup() function calls the restartESP() function to start up the ESP -- we'll cover that later. The loop() code gets on with checking whether the ESP is alive. The kick() function gives us a quick check of whether the ESP is still responding or not. If it is, we open up a TCP connection and assemble the request. The GET request is a pretty standard thing; expanding the escaped newlines, it looks like this:

    GET /input/[public key]?private_key=[private key]&baselineespup=%lu&baseline32u4up=%lu
    Host: data.sparkfun.com

The first value is the difference between the current system time and the system time the last time the ESP was started -- the ESP's up-time in seconds; the second is the current system time itself, the 32U4's up-time in seconds. After sending the GET request, we just close the connection. Waiting for and processing the response isn't necessary, so we ignore it and kill the connection.
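The same request can be tried by hand: filled in with real keys and values, it is just a URL (the field names here are the ones from the sketch above; the keys and values are placeholders):

```
http://data.sparkfun.com/input/[public key]?private_key=[private key]&baselineespup=123&baseline32u4up=456
```

Pasting a URL of this shape into a browser logs a single record, which is exactly what the Arachnio does over its TCP connection.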
If the call to kick() returns false, we wait for 30 seconds, then restart the ESP by calling restart() and then restartESP(), and reset the start time.

    void restartESP (void) {
      Serial.print("FW Version:");
      Serial.println(wifi.getVersion().c_str());

      if (wifi.setOprToStation()) {
        Serial.print("to station ok\r\n");
      } else {
        Serial.print("to station err\r\n");
      }

      // the remainder of this listing was garbled in the original;
      // the bodies below are reconstructed from the description that follows
      if (wifi.joinAP(WLAN_SSID, WLAN_PASS)) {
        Serial.print("join AP ok\r\n");
      } else {
        Serial.print("join AP err\r\n");
      }

      if (wifi.disableMUX()) {
        Serial.print("single connection ok\r\n");
      } else {
        Serial.print("single connection err\r\n");
      }
    }

Most of the stuff here is just diagnostics to make it easier to figure out what's going on when we have it plugged into a host computer. The key parts are where it sets itself to station mode, joins the access point it's told to, and disables the mux. We only need one connection at a time, so disabling the mux removes unnecessary complication.

You can see my ongoing results for the baseline here. I expect to have an updated one up and running tomorrow. Note that this code, as written, must be attached to a USB host. Therefore, the test code may be interrupted from time to time.

Good to see another one on this data-to-Sparkfun theme -- I did one too... So I just got a half dozen ESP8266 ESP-12 boards and will use this to convert my pressure sensor and temp data. Thanks!

Cool! I'm excited to see what people come up with using this.
http://www.instructables.com/id/Using-the-Arachnio-with-datasparkfuncom/
Queries a physical volume and returns all pertinent information.

Logical Volume Manager Library (liblvm.a)

    #include <lvm.h>

    int lvm_querypv (VG_ID, PV_ID, QueryPV, PVName)
    struct unique_id *VG_ID;
    struct unique_id *PV_ID;
    struct querypv **QueryPV;
    char *PVName;

Note: You must have root user authority to use the lvm_querypv subroutine.

The lvm_querypv subroutine returns information on the physical volume specified by the PV_ID parameter. The querypv structure, defined in the lvm.h file, contains the following fields:

    struct querypv {
        long ppsize;
        long pv_state;
        long pp_count;
        long alloc_ppcount;
        struct pp_map *pp_map;
        long pvnum_vgdas;
    };

    struct pp_map {
        long pp_state;
        struct lv_id lv_id;
        long lp_num;
        long copy;
        struct unique_id fst_alt_vol;
        long fst_alt_part;
        struct unique_id snd_alt_vol;
        long snd_alt_part;
    };

The PVName parameter enables the user to query from a volume group descriptor area on a specific physical volume instead of from the Logical Volume Manager's (LVM) most recent, in-memory copy of the descriptor area. This method should only be used if the volume group is varied off. The data returned is not guaranteed to be the most recent or correct, and it can reflect a back-level descriptor area.

The PVName parameter should specify either the full path name of the physical volume that contains the descriptor area to query, or a single file name that must reside in the /dev directory (for example, rhdisk1). This field must be a null-terminated string of 1 to LVM_NAMESIZ bytes, including the null byte, and must represent a raw or character device. If a raw or character device is not specified for the PVName parameter, the LVM will add an r to the file name in order to have a raw device name. If there is no raw device entry for this name, the LVM will return the LVM_NOTCHARDEV error code. If a PVName is specified, the volume group identifier, VG_ID, will be returned by the LVM through the VG_ID parameter passed in by the user.
If the user wishes to query from the LVM in-memory copy, the PVName parameter should be set to null. When using this method of query, the volume group must be varied on, or an error will be returned. Note: As long as PVName is not null, the LVM will attempt a query from a physical volume and not from its in-memory copy of data.

In addition to the PVName parameter, the caller passes the VG_ID parameter, indicating the volume group that contains the physical volume to be queried; the PV_ID parameter, the unique ID of the physical volume to be queried; and the address of a pointer of the type QueryPV. The LVM will separately allocate enough space for the querypv structure and the struct pp_map array, and return the address of the querypv structure in the QueryPV pointer passed in. The user is responsible for freeing the space by freeing the struct pp_map pointer and then freeing the QueryPV pointer.

The lvm_querypv subroutine returns a value of 0 upon successful completion. If the lvm_querypv subroutine fails, it returns one of the following error codes:

If the query originates from the varied-on volume group's current volume group descriptor area, one of the following error codes may be returned:

If a physical volume name has been passed, requesting that the query originate from a specific physical volume, then one of the following error codes may be returned:

This subroutine is part of Base Operating System (BOS) Runtime.

The lvm_varyonvg subroutine. List of Logical Volume Subroutines and Logical Volume Programming Overview in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs.
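The allocation-and-free contract described above can be sketched in plain C. The structures are cut down from the manual's listing (unique_id and lv_id are stand-ins here, since lvm.h is not available off AIX), and fake_querypv merely imitates the allocation lvm_querypv performs on the caller's behalf; the point is the documented free order.

```c
#include <stdlib.h>

/* Stand-ins for the lvm.h types referenced by the manual. */
struct unique_id { long word1; long word2; };
struct lv_id { struct unique_id vg_id; long minor_num; };

struct pp_map {
    long pp_state;
    struct lv_id lv_id;
    long lp_num;
    long copy;
    struct unique_id fst_alt_vol;
    long fst_alt_part;
    struct unique_id snd_alt_vol;
    long snd_alt_part;
};

struct querypv {
    long ppsize;
    long pv_state;
    long pp_count;
    long alloc_ppcount;
    struct pp_map *pp_map;
    long pvnum_vgdas;
};

/* Imitates the two separate allocations the LVM performs:
   the querypv structure plus a pp_map array of pp_count entries. */
struct querypv *fake_querypv(long pp_count)
{
    struct querypv *qpv = malloc(sizeof *qpv);
    qpv->pp_count = pp_count;
    qpv->pp_map = calloc((size_t)pp_count, sizeof *qpv->pp_map);
    return qpv;
}

/* The caller frees the pp_map array first, then the querypv itself. */
void free_querypv(struct querypv *qpv)
{
    free(qpv->pp_map);
    free(qpv);
}
```

After a successful real call, the same two-step release applies to the pointer returned through QueryPV.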
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/basetrf1/lvmquerz.htm
Use the Application page of project properties in Visual Studio to configure application settings for a project. To access this page, select a project node in Solution Explorer, select Project > Properties from the Visual Studio menu, and then select the Application tab.

Assembly name
Specifies the name of the assembly to be created. The default name is the project name, and the default extension is determined by the output type. If the build results in an executable assembly, the default extension is .exe. If it results in a class library, the default is .dll.

Default namespace
Specifies the namespace that will be used by default when you add new items to the project.

Target framework
Specifies the target .NET Framework, .NET Core, or .NET Standard version for the assembly. The .NET Framework versions installed on your system determine which versions are available for this setting. By default, this is set to the version selected when the project was created.

Output type
Specifies the application type: Windows Application, Console Application, or Class Library. The default depends on the project type.

Startup object
Specifies the entry point for the application. This can be a main routine specified with the Synergy MAIN statement, or it can be a class. If it is a class, the class must contain a method called "main" with the following signature (otherwise, a MISMAIN error will be reported):

    public static method main, void
        arg, [#]string
    proc
        ....

A MULTMAIN error will be reported if both a main method and a main routine (specified by a Synergy MAIN statement) exist.

Assembly Information
Displays the Assembly Information dialog box, which enables you to set global assembly attributes; these are stored in the AssemblyInfo file for the project.

Icon and manifest
This option enables you to specify an icon and manifest for the project. Use this option unless the project will have a resource file.

Icon
Specifies the icon (.ico) file that will be used for the application.
Manifest
Selects the manifest generation setting used when the application runs under User Account Control (UAC):

Resource file
To provide a Win32 resource file for the project, select this option and then specify the resource file (which can include an icon).
https://www.synergex.com/docs/vs/Project_Application_NET.htm
How to count If statements?

Page 1 of 1
2 Replies - 1154 Views - Last Post: 27 April 2013 - 05:10 AM

#1 How to count If statements?
Posted 27 April 2013 - 04:21 AM

First I will start by saying that this is homework. The homework is: "Write a program that reads two positive integer numbers and prints how many numbers p exist between them such that the remainder of the division by 5 is 0 (inclusive). Example: p(17,25) = 2." My question is how to count the numbers that are divisible by 5. I searched around and can't find anything that will count and print how many numbers will be printed. I don't know how to formulate the if, or whether there is another way, as presented in the example: p(17,25) = 2.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace _04.TwoInsDev5
    {
        class TwoInsDev5
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Enter two positive integer numbers:");
                int a = int.Parse(Console.ReadLine());
                int b = int.Parse(Console.ReadLine());
                if (a > 0 && b > 0)
                {
                    for (int i = a; i <= b; i++)
                    {
                        if (i % 5 == 0)
                        {
                        }
                    }
                }
                else if (a <= 0 || b <= 0)
                {
                    Console.WriteLine("Please enter positive numbers!");
                }
            }
        }
    }

Replies To: How to count If statements?

#2 Re: How to count If statements?
Posted 27 April 2013 - 04:28 AM

Can't find anything that will count? A variable will count if you start it at 0 and keep adding 1 to it. You could name that variable p.

#3 Re: How to count If statements?
Posted 27 April 2013 - 05:10 AM

You must take care of the scenario when a > b. Maybe have an if-statement to determine which number is greater.
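Putting the two replies together, a possible completed version of the program (a sketch, not the original poster's final code) looks like this:

```csharp
using System;

namespace _04.TwoInsDev5
{
    class TwoInsDev5
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Enter two positive integer numbers:");
            int a = int.Parse(Console.ReadLine());
            int b = int.Parse(Console.ReadLine());

            if (a <= 0 || b <= 0)
            {
                Console.WriteLine("Please enter positive numbers!");
                return;
            }

            // Handle a > b by swapping, as reply #3 suggests.
            if (a > b)
            {
                int tmp = a;
                a = b;
                b = tmp;
            }

            int p = 0;                 // the counter, as reply #2 suggests
            for (int i = a; i <= b; i++)
            {
                if (i % 5 == 0)
                {
                    p++;               // count each hit instead of doing nothing
                }
            }
            Console.WriteLine("p({0},{1}) = {2}", a, b, p);
        }
    }
}
```

For the inputs 17 and 25, the multiples of 5 in the range are 20 and 25, so the program prints p(17,25) = 2, matching the example in the assignment.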
http://www.dreamincode.net/forums/topic/319675-how-to-count-if-statements/page__pid__1842949__st__0
This article is in the Product Showcase section for our sponsors at CodeProject. These reviews are intended to provide you with information on products and services that we consider useful and of value to developers.

Originally posted on Jesse Liberty's blog.

Let's start by taking a look at the application we're going to build. The application opens with a form, and that form is populated by data binding.

It is tempting to draw a parallel in which Visual Studio maps to Xcode and Expression Blend maps to Interface Builder. This isn't wrong so much as incomplete, because you can create your entire interface in Visual Studio (and many folks do), but for more advanced UI development, Blend is the tool of choice. We could start off by creating the form itself in Blend (and some would argue we should), but we'll keep things simple by doing all our work for now in Visual Studio, introducing Blend in a following tutorial.

To begin, open Visual Studio, click on New Project, and in the left pane (Templates) choose Visual C# and then Silverlight for Windows Phone 7. On the right-hand side, choose the MVVM Light Toolkit. Give Visual Studio 2010 a few minutes to settle down and you'll find that Visual Studio 2010 and the MVVM Light Toolkit have cooperated to create a project with numerous files and folders.

You can see these in the Solution Explorer window, which is typically on the upper left-hand side of your development environment and which you can open either from the menu (View->Solution Explorer) or with the hot keys that are shown on the menu. Because all such windows can be opened from a menu, I will eschew walking you through each, and will only point out those menu choices which may be otherwise confusing.

For now, we'll ignore the Properties and Reference items (though feel free to explore on your own; just take a compass and plenty of rope and you'll be fine). The key files we care about right now are MainPage.xaml and, under the ViewModel folder, MainViewModel.cs.
The former, MainPage.xaml, has an associated code-behind file which you can expose with the very familiar turn-down. Double-click on MainViewModel.cs to populate the edit window. Your initial view will be a split pane with the Design view on one side and the code view on the other. The code view, in this case, will be populated with a good bit of Xaml.

We'll close the Xaml window for now to focus on the Design view. To close the code view, make sure the Design view is on the left (you can switch it with the double-arrow button between the panes) and then click the double arrowhead to close the code view. Finally, make sure that the Properties window is visible on the right (F4) and the toolbox is pinned open on the left (Control-Alt-X).

In the illustration below, I've labeled 7 areas. Let's go over them one by one:

- Area 1 is the toolbox. Note that at the upper right-hand corner (circled) are three symbols. The first allows you to
- Area 2 is the Application String and Area 3 is the Page Title string; we'll return to these shortly.
- Area 4 is the ContentPanel, and we'll be doing most of our design work here.
- Area 5 is the Solution Explorer, which is above the Properties window by convention (you can of course move any window to any position).
- The Properties window is divided into Area 6, where the name of the selected object is set and where you can switch the Properties window from Properties view to Events view (more on events later in this tutorial), and Area 7, where the properties are set (you can see some of the Text properties exposed in this illustration).

Note the arrows emanating from the design panel. The upper two are used to switch between Code and Design when one of the windows is hidden. The lower three split the window vertically or horizontally, and reveal (or hide) the right-hand (or lower) window respectively.

Above all of this is the menu, and of course the most important menu entry is Help, which offers a variety of ways to receive help.
To see one of its most powerful features at work, place the cursor anywhere in the RowDefinition keyword (approximately line 47) and press F1. The first time you press F1 you'll be asked if you want to use local or internet help. If you have an internet connection, by all means ask for online help. A browser window will open with help on the RowDefinition keyword. Sweet.

As of this writing, the on-line help file has tabs for each supported language. We'll use the C# tab (which will set the C# tab for the entire page, and which is sticky and thus persists to the next time you open the window). Each entry follows the pattern of identifying the class, the namespace, and the assembly (we'll cover both as we go). Syntax is demonstrated by language, and the Remarks section provides detailed information and links to related topics. The Examples section follows the Remarks section and provides copyable code to get you started. After the Examples come the Inheritance Hierarchy, showing where your class fits in, and notes on Thread Safety, Platform, and Versions, and finally a See Also section for related topics. You also have much of this information locally in Silverlight.chm -- a Windows Help file that was loaded when you installed Silverlight.

Let's return to the project and begin to create the form. For now we're going to do all the work in the Design view, though you'll see later that everything done in Design view instantly updates the Xaml view, and vice versa.

It is common to distinguish those controls used for layout from those used for interacting with the user, though the distinction is, in truth, arbitrary. In any case, there are quite a few layout controls, though the two we care most about are the Grid and the StackPanel. The Grid is the most powerful and (nearly) ubiquitous layout control -- something of a UITableView on steroids.

Click in the ContentPanel (area 4 above) and the edges of the Grid that was placed there for you will light up.
Move your cursor in the margin and lines will appear. Click, and horizontal lines become rows, vertical lines become columns. You'll want to create two columns (the left taking about 1/3 of the grid) and 8 rows of approximately equal size.

Click in the Grid and notice that the Properties window has a ColumnDefinitions property and a RowDefinitions property. Here you can make the spacing more exact. You can do so with precise sizes (e.g., 25px), with percentages, or with relative sizing (e.g., 2* and 3* will create two columns with the first taking 2/5 of the available space). I chose 1* (which you can shorten to *) and 2* for the columns, and seven rows of 1*. You can of course set these here instead of creating the rows and columns first in the designer.

Click on the Application Title or the Page Title (areas 2 and 3 above)

We'll use the left column for the prompts. Drag a TextBlock (used for displaying text) into the first row (row offset 0), first column, and place it more or less in the center. Then switch over to the Properties window and set the following properties:

Drag six more prompts onto the form, setting them identically except for the Text and the Row. The rows will be 1, 2, 3, 4, 5 and 7, and the Text will be

You can leave all the rest alone, except that under Text you'll want to click the Bold button, and you'll want to set the prompt names (or use the defaults, as the only reason to have meaningful names is when you need to address the objects programmatically, which we won't do with these labels) -- and don't forget to set the Grid.Row appropriately. The row you skipped will be used for the Male and Female RadioButtons, without a prompt.

The fact that so many of the properties are identical may be giving you the willies. Not to worry; we can fix this with Styles, a topic taken up in a future tutorial. The complete Xaml is below, to facilitate cut and paste.

Next, you'll want to add the input controls. Add TextBox objects to rows 0, 1, 3, 4, 5 and 7.
Set the following properties:

You skipped over the input control for row 2 (the third from the top) because you want to place three text boxes in that row (for city, state and zip, respectively). If you just set all three to the same row and column, they will appear on top of one another. What you want is to "stack" them -- but in this case not one on top of the other; rather, one next to the other. Drag a StackPanel onto the form and set its properties as follows:

Notice that controls within the StackPanel do not need Grid coordinates.

Finally, you'll want to add the two RadioButtons in row offset 6. Again create a StackPanel, and within the StackPanel place two RadioButton controls:

The GroupName creates a Radio Button Group, allowing the buttons to be mutually exclusive.

That is your UI. Before we go any further, if you haven't yet done so, run the application. You can do this by clicking Build->Rebuild Your First Application (or Control-Shift-B) and then clicking Debug->Run (F5), or you can skip the first step and just run, as that will force a rebuild. Your application will come up in the emulator, with populated fields! Hoo ha!

When you are done enjoying the first version of your app, stop debugging (press the Stop Debugging button on the toolbar, or Debug->Stop Debugging) but do not close the emulator. The emulator takes a while to "boot up" and you can leave it running; it will attach to your program each time you rebuild and re-run.

Unfortunately, we are rarely called upon to populate a form with known values. Somewhat more common is to populate the form from data, often data retrieved from a database, possibly via a web service. We won't worry about how to retrieve the data in this tutorial, but let's take the plunge into Data Binding -- the technology of telling the View how to obtain the data it needs.
Since we are doing this with MVVM, we'll have the ViewModel obtain the data from the Model, massage it as required by business rules, and then use binding to allow the View to have no code in the MainPage.xaml.cs file. (This greatly assists with testing.) To make this work we need to do two things: OK, three things, we need to... four things! Yes! Four things…

While in a "real" application you'll get your data from a data source such as a web service (which may in turn get the data from a database), for now we'll create an instance of a "Customer" class right in local code. Create a Customer.cs class in the Model folder that looks like this:

    using System;

    namespace Your_First_Application.Model
    {
        public class Customer
        {
            // this listing was truncated in the original; the property list
            // is reconstructed from the fields the rest of the article uses
            public string First { get; set; }
            public string Last { get; set; }
            // ... plus one automatic property for each remaining field:
            // Address, City, State, Zip, HomePhone, WorkPhone, Fax,
            // HomeEmail, WorkEmail, IsMale, Notes, CreditRating,
            // FirstContact, LastContact
        }
    }

These are called "automatic" properties. When you write

    public string First { get; set; }

the compiler treats it exactly as if you had written

    private string _secretVariable;
    public string First
    {
        get { return _secretVariable; }
        set { _secretVariable = value; }
    }

That is, it is as if you had created a backing store for the value (in this case the string _secretVariable) and the get accessor returned that value. The set accessor sets the value of the backing store to the value passed in to the set accessor. Here's how you use either form:

    First = "Mary";     // call the setter
    //....
    string fn = First;  // call the getter

The first line calls the setter and the string Mary is passed in as value. The second line represents intervening code. The third line uses the getter to retrieve "Mary" and assign it to the local string variable fn.

If we're using automatic properties, why not use public fields? Jon Galloway points out that this is a common question and that one of the better answers was given by Scott Guthrie.

With the properties in place we need a way to generate an instance. Best would be if we didn't have to have an instance to create one, and so we'll add a static method that returns a pre-filled instance.
Static methods can be called on the class rather than on an instance (as you'll see in just a moment).

    public static Customer GetCustomer()
    {
        var newCust = new Customer(
            "Martha",
            "Washington",
            "1640 Pennsylvania Avenue",
            "New York",
            "NY",
            "11214",
            "617-555-4663",
            "781-555-9675",
            "212-555-5353",
            "jesseliberty@jesseliberty.com",
            "none",
            false,
            "VIP - Treat nicely",
            700,
            new DateTime(1955, 07, 10),
            new DateTime(2010, 06, 06));
        return newCust;
    }

This method does nothing more than create an instance of the class, populate its fields, and then return that instance. You would never do this in production code, but it does give us an instance of the Customer class as if we had obtained it from a web service. All that is missing is the constructor that we're calling (one that takes a value for each property):

    public Customer(
        string first, string last, string address, string city,
        string state, string zip, string homePhone, string workPhone,
        string fax, string homeEmail, string workEmail, bool isMale,
        string notes, int creditRating,
        DateTime firstContact, DateTime lastContact)
    {
        // the signature and first assignments were garbled in the original
        // listing; they are reconstructed to match GetCustomer's argument order
        First = first;
        Last = last;
        Address = address;
        City = city;
        State = state;
        Zip = zip;
        HomePhone = homePhone;
        WorkPhone = workPhone;
        Fax = fax;
        HomeEmail = homeEmail;
        WorkEmail = workEmail;
        IsMale = isMale;
        Notes = notes;
        CreditRating = creditRating;
        FirstContact = firstContact;
        LastContact = lastContact;
    }

That's the Model. The job of the ViewModel is to manage this data, bind it to the View, and handle user input and user actions. We'll postpone a discussion of user input and user actions until a future tutorial, but let's create the ViewModel with public properties for each of the values in the Customer that we want to display in the view.
Here’s the complete MainViewModel.cs file using GalaSoft.MvvmLight; using Your_First_Application.Model; namespace Your_First_Application.ViewModel { public class MainViewModel : ViewModelBase { public string ApplicationTitle { get { return "Your First Application"; } } public string PageName { get { return "Customer"; } } private readonly Customer cust = Customer.GetCustomer(); public string Name { get { return cust.First + " " + cust.Last; } } public string Street { get { return cust.Address; } } public string City { get { return cust.City; } } public string State { get { return cust.State; } } public string Zip { get { return cust.Zip; } } public string Phone { get { return cust.WorkPhone; } } public string Fax { get { return cust.Fax; } } public string Email { get { return cust.WorkEmail; } } public bool IsMale { get { return cust.IsMale == true; } } public bool IsFemale { get { return !IsMale; } } public string Notes { get { return cust.Notes; } } } } The first two public properties are put in place by the MVVM Light Toolkit, the 11 at the bottom are the properties to which we’ll bind the 11 user controls. Notice that the values that we’re binding to are values on the customer object. How will the UI elements know what object has these properties? That is handled by the Data Context property of the control, or its container (grid) or the container’s container (the page). With MVVM Light Toolkit this work is done with the DAtaContext Locator object, which you’ll find was declared in the Xaml file, This declaration was created for you by the MVVM Light Toolkit and so you do not have to worry about it; it just works. It is time to bind the controls to the properties in the ViewModel. Return to the View and click on one of the TextBox controls. Delete the content and select binding. The dialog that opens will ask which public property you wish to bind to. The tiny gold barrel on the line with the Text property indicates that you have a known data source. 
The dialog box that opens has four tabs:

- Source is the data context (though you can have other sources, a topic for a later tutorial)
- Path is the property to which you'll bind
- Converter allows the binding object to convert the type and/or to format the data
- Options is where you pick one of the three types of binding: OneTime binding sets the binding and shuts down the binding connection; OneWay binding is for read-only data; TwoWay binding allows the user to update the data you are displaying. We'll look at TwoWay binding in an upcoming tutorial.

While drag and drop and setting properties work very well, you may find that your friends who do Silverlight programming do not take advantage of them. Rather, they write these simple UI designs directly in the Xaml. This is very much an artifact of the days in which coding the Xaml directly was the only option, but you may be interested in how to do such a thing. Easy, peasy. Return to the split screen and examine the Xaml that was generated. Change the Xaml and see how the UI changes. Change the UI and see how the Xaml changes.

Here is the complete Xaml for our page,

    <phone:PhoneApplicationPage x:
    <!-="{Binding ApplicationTitle}" Style="{StaticResource PhoneTextNormalStyle}" />
    <TextBlock x:
    </StackPanel>
    <!--ContentPanel - place additional content here-->
    <Grid x:Name="ContentGrid" Grid. Height="30" HorizontalAlignment="Right" Margin="5" Name="NamePrompt" Text="Full Name" VerticalAlignment="Center" FontWeight="Bold" Grid.
    <TextBlock FontWeight="Bold" Height="30" HorizontalAlignment="Right" Name="streetPrompt" Text="Street Address" VerticalAlignment="Center" Margin="5" Grid.
    <TextBlock FontWeight="Bold" Height="30" HorizontalAlignment="Right" Name="cityStateZipPrompt" Text="City, State, Zip" VerticalAlignment="Center" Margin="5" Name="FullName" Width="303" Height="70" HorizontalAlignment="Left" VerticalAlignment="Bottom" Margin="5,0,0,5" Grid.
<TextBox Name="Street" Width="303" Height="70" HorizontalAlignment="Left" VerticalAlignment="Bottom" Margin="5,0,0,5" Grid.Row="1" Grid. <StackPanel Orientation="Horizontal" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Margin="0" Grid.Row="2" Grid. <TextBox Name="City" Width="150" Height="70" HorizontalAlignment="Left" VerticalAlignment="Bottom" Margin="0,0,0,5" Text="{Binding City}" /> <TextBox Name="State" Width="74" Height="70" Margin="0,0,0,5" Text="{Binding State}" /> <TextBox Name="Zip" Width="93" Height="70" Margin="0,0,0,5" Text="{Binding Zip}" /> </StackPanel> <TextBox Height="70" HorizontalAlignment="Left" Margin="5,0,0,5" Name="Phone" VerticalAlignment="Bottom" Width="303" Grid.Column="1" Grid. Email}" /> <StackPanel Orientation="Horizontal" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Margin="5" Grid.Row="6" Grid. <RadioButton Name="Male" Content="Male" GroupName="Sex" IsChecked="{Binding IsMale}" FontSize="20" /> <RadioButton Name="Female" Content="Female" GroupName="Sex" IsChecked="{Binding IsFemale}" FontSize="20" /> </StackPanel> <TextBox Height="72" HorizontalAlignment="Left" Margin="5,0,0,5" Name="Notes" VerticalAlignment="Bottom" Width="290" Grid.Column="1" Grid. </Grid> </Grid> </phone:PhoneApplicationPage> An argument can certainly be made that too many of the properties in the ViewModel are nothing but pass-through wrappers for the same properties in Customer. You can avoid that by making a Customer object a public property of the ViewModel and then binding to that directly. 
If you prefer that style, you would want to change the ViewModel to look like this:

    public class MainViewModel : ViewModelBase
    {
        public string ApplicationTitle
        {
            get { return "Your First Application"; }
        }

        public string PageName
        {
            get { return "Customer"; }
        }

        private Customer cust = Customer.GetCustomer();

        public Customer Cust { get { return cust; } }

        public string Name { get { return cust.First + " " + cust.Last; } }
        public bool IsMale { get { return cust.IsMale == true; } }
        public bool IsFemale { get { return !IsMale; } }
    }

You'll need to update the Xaml file to bind to WorkPhone and WorkEmail (as the ViewModel is no longer translating for you). Finally, you'll need to bind explicitly to the Customer property for most of the fields, and to the ViewModel for the couple (Name and Male/Female) that use properties on the VM rather than on the Customer:

    public MainPage()
    {
        InitializeComponent();
        var vm = new MainViewModel();
        DataContext = vm.Cust;
        FullName.DataContext = vm;
        Male.DataContext = Female.DataContext = vm;
    }

We create an instance of the ViewModel and then set the DataContext of the View to the Cust property of the ViewModel. We then set the DataContext of FullName to the ViewModel instance, and finally we set Male.DataContext and Female.DataContext to the vm as well.

Mind Map

This is the kind of thing that people who like this kind of thing will like.
http://www.codeproject.com/Articles/136448/i-W-An-iPhone-Developer-s-First-Windows-Phone-Ap
Mike, I know it helps to get feedback, so here is some regarding the first issue of Overload. I think the content is first rate. As an experienced C programmer, but C++ novice, I found all the articles interesting, useful and well written. I would be happy with more of the same. On the production side, I would like to see more consistent use of fonts. Also, I personally don't like the use of 'script' type fonts in magazines (other than, say, as signatures). Finally, I would prefer to have all the book reviews together in one place. Not, I hasten to add, just because they are in CVu like that. I do genuinely find it preferable. In summary, an excellent first issue, but a few changes to cater for my own personal prejudices! Well done. Regards - Ian Cargill Thanks Ian. Many others who have seen Overload took a dislike to the numerous fonts. I hope you like the new look. Subject matter has now been grouped as much as possible, I hope you like it. Mike Toms Mike, Thank you very much indeed for your very interesting first issue of Overload. You asked for letters! The following question on C++ Streams immediately springs to mind, as an idea for inclusion into a future issue. Do you know if there is a "Streams" substitute for getch() and bioskey()? If you want to read a character without echo, or to get all entered keys in full character code and scancode form as in a line editor, it would be nice to just use the iostream library, instead of having to pull in the CONIO or BIOS library modules just for these functions. The very best of luck to Overload. Yours Sincerely - Peter Wippell To the best of my knowledge there is no such stream function. It is possible to produce one by making a custom streambuf to access the keyboard buffer directly. To accomplish this there are a number of problems to overcome, but I'll look into it for a future article. In the meantime, don't be too upset to bring getch() or bioskey() into your programs. 
Bioskey is a very small function and will only increase the program size by approximately 70 bytes. - Mike

Mike, Congratulations to you and your colleagues on the first edition of Overload. If it continues the way it has started out, it will be one of the best value-for-money magazines around. I found it very interesting and useful, and the larger format makes it easier reading than CVu, especially the code. In case Francis is reading (and he almost certainly is), I still love CVu. I feel sorry for CUG(UK) members who have not joined the Turbo C++ SIG, because the majority of articles are not Borland specific, but are good, solid standard C++. Since becoming a member of CUG(UK), I have always tried to program in standard C or C++ as this is the line pushed there; quite correctly, I believe. Portability is important and careful thought must be given to using special features. The article on streams looks like being very helpful. Despite studying several books there is still much I don't understand. For example, I wanted a program which would run a loop reading in a string and a value, and print them out. In simplified form it looks like this:

    #include <iostream.h>

    main()
    {
        char buf[100];
        double value;
        do {
            cin.get(buf, 100);  // cin >> doesn't work with spaces
            cin >> value;
            cout << buf << " " << value << endl << endl;
        } while (value != 0);
        return 0;
    }

The program goes into uncontrolled looping the second time round the loop. Perhaps you or another reader can help. By way of a contribution, I have a rough and ready boolean class which I have used. I will try and polish it up and send it in. Wishing you continued good fortune with the magazine.
Yours Sincerely - Jim Bartholomew

I hope we can continue to please you, Jim. I agree with your views on writing standard code; however, in order to exploit products such as OWL and Turbo Vision, non-standard code must be used.
The problem with your program is that cin >> value, whilst extracting the characters of the double number, leaves the carriage return in the stream. If you declare char scrap at the start and perform a cin.get(scrap) after the extraction, you will find that the CR is removed. This should cure your looping. P.S. You should also test the return status of the stream after input (see this issue's streams article). - Mike

Mike, RE: Overload Issue 1 - article on streams. Streams seem to be a really useful feature of C++ and it's a great pity that they are grossly underused. Your series of articles will do a great job in helping to put that right. If I may, I would like to add a little to the discussion on strstream in the first article. First, under "Problems with ostrstream": to get rid of a frozen string when you have finished with it, all you have to do is delete it. So the final lines of the example under automatic memory allocation might be amended to read:

    char *frozen = buffer.str();  // freeze the string
    cout << frozen;               // output to the console
    delete [] frozen;             // remove it from the heap

Used this way, ostrstream becomes an extremely useful replacement for both scanf() and malloc(), with much improved formatting facilities. Second, under "From-Memory reading": in the example, the removal of whitespace and the use of the get member function are not necessary. The same output is obtained by erasing those lines and adding ">> s1 >> s2;" to the main stringstream extraction line. The example below shows how you can read a string of several words separated by whitespace, if it is the final string in the buffer or there is a convenient terminator. Otherwise you would have to read word by word, as in the article.
    // From-Memory reading amended to read the final string
    #include <strstream.h>

    int main()
    {
        char memory_info[] = "100 200 22.995 hello fred";
        int i1, i2;
        float f1;
        char s[20];
        istrstream stringstream(memory_info, sizeof(memory_info));
        stringstream >> i1 >> i2 >> f1;
        // stringstream >> ws;  // to remove whitespace, if required
        stringstream.get(s, 20, '\0');
        cout << i1 << endl << i2 << endl << f1 << endl << s << endl;
        cin.peek();  // nearest I can get to getch(), but you have to hit return
        return 0;
    }

Good luck to Overload - Peter Wippel (again)

The delete [] frozen certainly works. I am paranoid about identifying the matching pairs of new/delete operators - how would I cope with the missing new statements? It is not too difficult to use:

    buffer.rdbuf()->freeze(0);

The initial streams articles are only intended to give a brief overview. More detail to follow! - Mike.
https://accu.org/index.php/journals/1358
When you think of apps developed in Swift using Xcode, do you think of Linux? Do you think of web apps? Do you think of Microsoft Exchange Servers? 😱 Probably not. At Jeff Bergier's work, he runs an app that uses all of these. Plus, it's all Swift! While moving from the safe world of Darwin and Foundation to the wild west of Linux, you may run into quite a few speedbumps. In this talk from Swift Language User Group, Jeff shares some of these with you so that your journey down this exciting new road will be easy!

Introduction (0:36)

I am a UX designer at a company called Riverbed. We do enterprise networking hardware that has nothing to do with mobile, iOS, or Apple. I learned Objective-C before Swift, and have been teaching myself iOS for about three years.

The App and How it Started (1:20)

My production server Swift app is a conference room scheduler for work. It started as a pure storyboard prototype. Storyboards are a great prototyping tool - you can throw together all your table views, click through them, and easily make videos that you can show so that people understand what you are trying to make, without any code. The storyboard turned into a Swift playground, because everyone at work wanted it to interface with Outlook. With Outlook, you need to use the Exchange API, which is XML-based. During a work hackathon, I was able to develop a real version of this app, written in Python on a framework called CherryPy. Because I wasn't happy with the Python, and because I wanted to learn server-side Swift, I rewrote it in my spare time. There are several server-side Swift frameworks, including one developed by IBM. I did not end up using the IBM framework, even though they had a really nice GUI tool to help with changes to the Linux server. I went with Perfect because it seemed easiest to get up and running.

Why Server-Side Swift? (7:01)

I wanted to learn JavaScript and server-side Swift.
Python works okay, but I think Xcode is great because of autocomplete and syntax checks. In server-side Swift you don't use Interface Builder, which reduces most of your crashes to none. Type safety is also great in Swift: it guards against passing unintended parameters into functions. Plus, Safari's debugging tools are similar to Xcode's.

How It's Built (7:01)

This is a modern web app in that the JavaScript loads the HTML. The JavaScript sends a POST request to the server. The server then sends back a JSON string, and the JavaScript converts the JSON string into HTML elements and displays them. I used Bootstrap and JavaScript, then I used some cookies to store session information. I also stored some AES credentials in the cookies so that I can decrypt a user's password later when they return. Like the back end with CherryPy, most web frameworks work in a similar way, where you have a router concept. I set up one that listens for all POST requests on "/", and then it goes into my Swift code. Perfect does not have a session manager, so I derived my own that sets the cookies and restores them with the request. Each request that comes in has a JSON payload that the Swift takes apart; it determines what step in the process the user is at and what data they've selected, communicates with the Exchange server, and then returns a new JSON string.

Forget About the .XcodeProj (11:02)

The framework does not use the Xcode project; it is just a nice-to-have so that you can edit the code in Xcode. I don't even check it into the repo, as it is totally disposable. We use the Swift Package Manager instead. For some reason it sets the OS build setting to 10.10, so if you use any of the newer APIs, you have to tell it to target 10.12. Also, because your web framework is going to need static files that go with it, you have to add a copy-files phase, or else the web server can't find them.
Foundation is Mostly Present (17:13)

I would still default to using Foundation types, even though Xcode has no good way of telling you whether they will work on Linux. Some of the bigger classes like NSURLSession were not in Perfect when I started, but now they're there. This is a little bit of code I wrote to round dates; you can round to the nearest 15 minutes. You can see that it looks like standard iOS Foundation code.

    extension Date {
        private static func roundedTimeInterval(from date: Date) -> TimeInterval {
            let dc = Calendar.current.dateComponents([.minute, .second], from: date)
            let originalMinute = Double(dc.minute ?? 0)
            let originalSeconds = Double(dc.second ?? 0)
            let roundTo = 15.0
            let roundedMinute = round(originalMinute / roundTo) * roundTo
            let interval = ((roundedMinute - originalMinute) * 60) - originalSeconds
            return interval
        }

        mutating func roundMinutes() {
            let timeInterval = type(of: self).roundedTimeInterval(from: self)
            self += timeInterval
        }
    }

How can you tell if a Foundation type is going to work when you are on Linux? You can start by going to Apple's GitHub pages on Foundation. If you see things that look like the following, that means it is probably working.

    public struct DateComponents : ReferenceConvertible, Hashable, Equatable, _MutableBoxing {
        public typealias ReferenceType = NSDateComponents
        internal var _handle: _MutableHandle<NSDateComponents>

        /// Initialize a `DateComponents`, optionally specifying values for its fields.
        public init(calendar: Calendar? = nil, timeZone: TimeZone? = nil, era: Int? = nil, ... ) {
            _handle = _MutableHandle(adoptingReference: NSDateComponents())
            if let _calendar = calendar { self.calendar = _calendar }
            if let _timeZone = timeZone { self.timeZone = _timeZone }
            if let _era = era { self.era = _era }
            ...
        }
    }

If you see this instead, this is the NSURLAuthenticationChallenge object, and it's not implemented. That's a bad sign. These crash at run time too.
    open class URLAuthenticationChallenge : NSObject, NSSecureCoding {
        static public var supportsSecureCoding: Bool {
            return true
        }

        public required init?(coder aDecoder: NSCoder) {
            NSUnimplemented()
        }

        open func encode(with aCoder: NSCoder) {
            NSUnimplemented()
        }
    }

Here is my first attempt at the same algorithm again, which seemed to be the much more normal way to do it (I wanted to round something to 15 minutes):

    import Foundation

    // get date and components
    var dc = Calendar.current.dateComponents(
        [.year, .month, .day, .hour, .minute, .second, .calendar, .timeZone],
        from: Date()
    )

    // get originals and do rounding
    let originalMinute = Double(dc.minute ?? 0)
    let roundTo = 15.0
    let roundedMinute = Int(round(originalMinute / roundTo) * roundTo)

    // modify components
    dc.minute = roundedMinute
    dc.second = 0

    // generate new date
    let roundedDate = dc.date!  // crashes on linux
    // fatal error: copy(with:) is not yet implemented: file Foundation/NSCalendar.swift, line 1434

I set minutes to roundedMinute and the seconds to 0, and then I ask the components for a new date, instead of the previous approach where I got a time interval and then added or subtracted it from the original. It turns out that this part crashes on Linux, because copy(with:) is not implemented on NSCalendar. You might run into little surprises like that.

Test on Linux Often (20:05)

You never know when you will get an NSUnimplemented or a "copy(with:) is not yet implemented". If you are really concerned, I would suggest setting up some CI stuff for at least every commit. I've been using an app called Veertu; it's sandboxed on the App Store, it's free, and it runs VMs in headless mode. You don't have to see Linux's horrible UI. It automatically downloads Linux and installs it for you.

Working with JSON is Much Easier (21:21)

You already know how to deal with NSJSONSerialization. In Perfect, most collections get a jsonEncodedString() method, so you can call that on your data directly.
Then every string and several other things have the opposite, so you can convert any string into objects, as long as it is real JSON. There are also valid errors it throws in the try statement, if you want to deal with those.

    import PerfectLib

    let data: [String : Any] = [
        "date" : "2016-01-01T12-12-00",
        "name" : "Billy",
        "age" : 22,
        "emails" : [
            "[email protected]",
            "[email protected]"
        ]
    ]

    let json = try data.jsonEncodedString()

Random is Hard (21:58)

arc4random_uniform() is not part of Linux, but I saw some systems where you could install it.

    #if os(Linux)
    import Glibc
    #else
    import Darwin
    #endif

    for i in 1 ... 5 {
        #if os(Linux)
        let randomNumber = random()
        #else
        let randomNumber = Int(arc4random_uniform(UInt32.max))
        #endif
        print("Round \(i): \(randomNumber)")
    }

This loop generates five random numbers and prints them out. If it is on Linux, it uses Glibc's random() function. This random number generator is totally useless. The easiest way I found to do random numbers is to open up access to /dev/random and read bytes off of it. It works on Mac and Linux, so you don't have to do arc4random separately. There are also a couple of libraries that exist, such as Turnstile and Crypto.

Avoid #if os(Linux) (24:25)

When you do this, you lose all help from Xcode. It can't do syntax checking, it can't do the most basic things, and it definitely can't tell you if the code will work. Basically, the code in between that block doesn't exist as far as Xcode is concerned. I'd recommend avoiding it at all costs and finding a solution that works on both platforms. The other thing: if Foundation is letting you down, which sometimes happens on Linux, treat whatever framework you chose as the new Foundation. There are lots of modules for Perfect, and it is very modular.

Threading (26:13)

Threading doesn't work. You can set your timer and schedule it, and you can fire it manually, and it will fire once and then never fire again.
I think this just has to do with Perfect; they have their own run loop going that's separate from what NSTimer is expecting. Perfect has its own threading framework, and you can create threads yourself, serial or concurrent, and dispatch work onto them.

Forced Unwrapping is Bad (28:31)

Forced unwrapping is bad. When you force unwrap something in someone's iOS app, it crashes for that user on that device. When you force unwrap something here on the server, it crashes for everybody.

Q&A (31:24)

Q: What were some of the other constraints when you were calling URIs?

Jeff: I prefer to write my code in a way that is cleaner, slower, and easier to read, by generating more objects instead of keeping them around for longer. I don't know how good the performance is, and it hasn't been tested under load. In general, some of the Perfect guys did performance comparisons between the various Swift web frameworks and the traditional web frameworks, and the Swift ones seemed to be doing pretty well. But I honestly haven't tested it for performance.

Q: How do you see server-side Swift in the next three to five years?

Jeff: I think right now it is way too early. I think we are at least a year away from even startup-y companies using it in production, and many years away for bigger companies.

Q: What are the trade-offs of using Swift instead of Python?

Jeff: I find that the autocomplete and such in Xcode are an immense help. I find the type checking to be immensely helpful. The way we wrote it in Python was not really object-oriented, just because everyone thought it would be. Python does work better with JSON-type data structures.

About the content: This content has been published here with the express permission of the author.
https://academy.realm.io/posts/slug-jeff-bergier-building-production-server-swift-app/
hi.. I am Sumit from Kolkata. I am a student... today is my 1st day with Java, and I am trying to run a simple program in Karel world. My code is:

    import stanford.karel.*;

    public class CheckerboardKarel extends SuperKarel {
        // You fill in this part
        public void run() {
            move();
            pickBeeper();
            move();
            turnLeft();
        }
    }

but when I click the run button, the error popped up in the console tab is:

    Exception in thread "main" java.lang.NullPointerException
        at acm.program.Program.main(Program.java:917)
        at stanford.karel.Karel.main(Karel.java:202)

I tried a lot to figure it out but I can't. I am using the "Eclipse" software. A screenshot of my monitor (untitled.JPG) is attached... someone help me please...
http://www.javaprogrammingforums.com/whats-wrong-my-code/13106-today-1st-day-java-i-am-screwed.html
PythonScript plugin - variables and memory

IIUC all globals of any script remain in memory, even when the script has ended. Different scripts have one common global namespace? So those are removed only after NPP restart. So, basically this means no automatic memory cleaning? But does this happen only in this situation - i.e. for unique global names - or is new memory also allocated constantly for the same global names? So I am curious, how does the PythonScript plugin manage memory? E.g. such a script:

    T = editor.getText()

or this script:

    def main():
        T = editor.getText()
    main()

What happens in memory in each case? I mean if I run the script over and over again.

- Claudia Frank:

"So I am curious, how does the PythonScript plugin manage memory?"

Basically it is doing the same as an interactive Python shell session. If the reference count of a variable is >0, then it is kept alive; otherwise the memory is released. Memory allocated by a variable inside a function does get released as soon as the function ends, as long as the variable is only used in local function scope. So, in this case,

    def main():
        T = editor.getText()
    main()

memory used by T is released once the main function ends, but if you would have used something like

    T = ''
    def main():
        global T
        T = editor.getText()
    main()

then memory is still used.

Cheers, Claudia

- Scott Sumner:

"What happens in memory in each case? I mean if I run the script over and over again."

For this case:

    T = editor.getText()

Running repeatedly will cause the previous memory held by T (if any) to be overwritten with new contents. After the final invocation of the script there will still be a T in memory, available at the PS console's >>> prompt, as well as to other scripts. And for this case:

    def main():
        T = editor.getText()
    main()

Similarly, running repeatedly will cause the previous memory held by main (if any) to be overwritten with new contents.
After the final invocation of the script there will still be a main in memory, available at the PS console's >>> prompt, as well as to other scripts. Note that in this case there will be no T available, unless you ran the first script before this one. Because T is local to the main function, it has no effect on the non-local T (from earlier).

Personal note: What I do here is name any variables outside of function scope with a (hopefully) unique prefix. I have done this ever since I created a bug (and a debugging nightmare for myself) when I used the same non-local variable name in two different scripts. For example, if my script is named ThisIsMyCoolPythonScript.py, I take the first letter of each word (TIMCPS), add a double underscore (TIMCPS__), and then use this as a prefix on names that will continue to live once the PythonScript ends. So I might take your / @Claudia-Frank's code snippets and turn them into:

    def TIMCPS__main():
        T = editor.getText()
    TIMCPS__main()

...and...

    TIMCPS__T = ''
    def TIMCPS__main():
        global TIMCPS__T
        TIMCPS__T = editor.getText()
    TIMCPS__main()

And really a side note, but related: When I need to test whether a script has been run previously, I do something like this:

    try:
        TIMCPS__main
    except NameError:
        pass  # has NOT been run previously
    else:
        pass  # has been run previously

Hope this helps.

- Scott Sumner:

"previous memory held by T (if any) to be overwritten with new contents"

What I meant here I could have said better... "overwritten" and "new contents" may be rather vague... let me try again: Running repeatedly will cause the memory occupied by a previous T (if any) to be released, and memory to be allocated to hold the new T.
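The release-on-rebind behavior described above can be observed directly with Python's weakref module. The sketch below is runnable in any ordinary CPython interpreter, not just inside Notepad++: the Buffer class and get_text() function are stand-ins invented here for editor.getText(), since the real editor object only exists inside the plugin, and the immediate reclamation relies on CPython's reference counting.

```python
import weakref

class Buffer:
    """Stand-in for the object returned by editor.getText() (an assumption)."""
    def __init__(self, text):
        self.text = text

def get_text():
    return Buffer("document contents")

# Case 1: function-local variable -- released as soon as the function returns.
watch = None
def main():
    global watch
    T = get_text()
    watch = weakref.ref(T)  # observe T without keeping it alive
main()
local_released = watch() is None
print("local released:", local_released)        # True

# Case 2: module-level variable -- survives until the name is rebound.
T = get_text()
watch = weakref.ref(T)
global_alive = watch() is not None
print("global still alive:", global_alive)      # True

T = get_text()  # "running the script again" rebinds T...
rebind_released = watch() is None
print("old global released:", rebind_released)  # True: old object freed
```

So a bare T = editor.getText() does not leak more memory on every run; it releases the previous text and holds only the latest copy, exactly as Scott's correction says.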
https://community.notepad-plus-plus.org/topic/15959/pythonscript-plugin-variables-and-memory
jGuru Forums

Posted By: shashikanth_sastry
Posted On: Friday, August 17, 2001 12:50 AM

I need to print a Swing form to a printer. The Swing form contains a JTextArea inside a JScrollPane (i.e., it has scrollbars), and the text area may contain a large amount of text. I want the entire text in the text area to be printed, but it is printing only the contents visible to the user. I have made the panel containing the rest of the GUI Pageable, made the text area Pageable as well, appended them to a Book object, and printed them - but it is not printing the full contents of the text area.

You need to retrieve the contents of the text area?

Posted By: Shaji_Kalidasan
Posted On: Friday, August 17, 2001 03:37 AM

Just make use of the following packages:

    javax.print
    javax.print.attribute
    javax.print.attribute.standard
    javax.print.event

A sample program from the demo is provided here for your convenience. Please also check the JPS API Doc for more information here.

    import java.io.*;
    import javax.print.*;
    import javax.print.attribute.*;
    import javax.print.attribute.standard.*;
    import javax.print.event.*;

    public class PrintPS {
        public static void main(String args[]) {
            PrintPS ps = new PrintPS();
        }

        public PrintPS() {
            /* Construct the print request specification.
             * The print data is Postscript which will be
             * supplied as a stream.  The media size
             * required is A4, and 2 copies are to be printed. */
            DocFlavor flavor = DocFlavor.INPUT_STREAM.POSTSCRIPT;
            PrintRequestAttributeSet aset = new HashPrintRequestAttributeSet();
            aset.add(MediaSizeName.ISO_A4);
            aset.add(new Copies(2));
            aset.add(Sides.TWO_SIDED_LONG_EDGE);
            aset.add(Finishings.STAPLE);

            /* locate a print service that can handle it */
            PrintService[] pservices =
                PrintServiceLookup.lookupPrintServices(flavor, aset);
            if (pservices.length > 0) {
                System.out.println("selected printer " + pservices[0].getName());

                /* create a print job for the chosen service */
                DocPrintJob pj = pservices[0].createPrintJob();
                try {
                    /* Create a Doc object to hold the print data.
                     * Since the data is postscript located in a disk file,
                     * an input stream needs to be obtained.
                     * BasicDoc is a useful implementation that will, if requested,
                     * close the stream when printing is completed. */
                    FileInputStream fis = new FileInputStream("example.ps");
                    Doc doc = new SimpleDoc(fis, flavor, null);

                    /* print the doc as specified */
                    pj.print(doc, aset);

                    /* Do not explicitly call System.exit() when print returns.
                     * Printing can be asynchronous, so it may be executing in a
                     * separate thread.
                     * If you want to explicitly exit the VM, use a print job
                     * listener to be notified when it is safe to do so. */
                } catch (IOException ie) {
                    System.err.println(ie);
                } catch (PrintException e) {
                    System.err.println(e);
                }
            }
        }
    }
http://www.jguru.com/forums/view.jsp?EID=478400
#include <dc1394_camera.h>

Inherits bj::Camera.

The DC1394Camera class provides an interface to FireWire (IEEE 1394) camera devices. Any type of FireWire camera can be accessed through this class.

A constructor. Opens a video device and initializes it using default values ("/dev/video1394/0", 320, 240, 0, 30, true).

A destructor.

Member functions (brief descriptions):
- [inline] Query the current Bayer tiling format.
- Query the length of PNM data excluding a header.
- Query the number of frames per second.
- Query the length of the PNM header.
- Query the height of input images.
- Query the current hue property.
- [protected] Camera features in a specified channel.
- Change the resolution.
- Set a Bayer tiling format (e.g. BAYER_NONE).
- Set a brightness property.
- Change a channel.
- Set a color property.
- Set a contrast property.
- Set a hue property.
- Set a whiteness property.
- Make the camera in a specified channel start to send images.
- Make the camera in a specified channel stop sending images.
- Query the camera type.
- Query the current whiteness property.
- Query the width of input images.
http://robotics.usc.edu/~boyoon/bjlib/d0/df5/classbj_1_1DC1394Camera.html
Front cover

IBM Cognos Business Intelligence V10.1 Handbook

Understand core features of IBM Cognos BI V10.1
Realize the full potential of IBM Cognos BI
Learn by example with practical scenarios

Dean Browne, Brecht Desmeijter, Rodrigo Frealdo Dumont, Armin Kamal, John Leahy, Scott Masson, Ksenija Rusak, Shinsuke Yamamoto, Martin Keen

ibm.com/redbooks

International Technical Support Organization
IBM Cognos Business Intelligence V10.1 Handbook
October 2010
SG24-7912-00

Note: Before using this information and the product it supports, read the information in "Notices" on page ix.

First Edition (October 2010)

This edition applies to Version 10.1 of IBM Cognos Business Intelligence.

© Copyright International Business Machines Corporation 2010. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Notices . . . ix
Trademarks . . . x
Preface . . . xi
  The team who wrote this book . . . xi
  Now you can become a published author, too! . . . xv
  Comments welcome . . . xv
  Stay connected to IBM Redbooks . . . xv

Part 1. IBM Business Analytics . . . 1

Chapter 1. Introduction to IBM Cognos Business Intelligence . . . 3
  1.1 IBM Business Analytics . . . 4
  1.2 Introduction to IBM Cognos BI . . . 4
    1.2.1 Easily view, assemble, and personalize information . . . 5
    1.2.2 Explore all types of information . . . 5
    1.2.3 Analyze facts and anticipate tactical and strategic implications . . . 5
    1.2.4 IBM Cognos BI user interfaces . . . 6

Chapter 2. Overview of the IBM Cognos Business Intelligence architecture . . . 9
  2.1 Enterprise class SOA platform architecture . . . 10
    2.1.1 IBM Cognos Platform server roles . . . 11
    2.1.2 IBM Cognos BI services . . . 16
  2.2 Open access to all data sources . . . 19
  2.3 Business intelligence for all . . . 20
  2.4 Common integrated security model . . . 20

Chapter 3. Business scenario and personas used in this book . . . 21
  3.1 Business scenario overview . . . 22
    3.1.1 Business questions to address . . . 23
    3.1.2 Information stored in the data warehouse of this company . . . 23
  3.2 Personas used in the scenarios in this book . . . 24
    3.2.1 Advanced Business User . . . 24
    3.2.2 Professional Report Author . . . 26
    3.2.3 Modeler . . . 26
    3.2.4 Administrator . . . 28
    3.2.5 Analyst . . . 28
    3.2.6 Business User . . . 30

Part 2. IBM Cognos metadata modelling . . . 31

Chapter 4. Create reporting packages with IBM Cognos Framework Manager . . . 33
  4.1 IBM Cognos Framework Manager overview . . . 34
    4.1.1 Reporting requirements and data access strategies . . . 34
    4.1.2 Metadata model . . . 35
    4.1.3 The IBM Cognos Framework Manager UI . . . 37
    4.1.4 Reporting objects . . . 39
  4.2 Build a model with IBM Cognos Framework Manager . . . 44
    4.2.1 Import metadata using Model Design Accelerator . . . 45
    4.2.2 Model organization . . . 56
    4.2.3 Verify query item properties and relationships . . . 59
    4.2.4 Import additional metadata . . . 68
    4.2.5 Verify the model . . . 78
    4.2.6 Verify the data . . . 81
    4.2.7 Specify determinants . . . 84
  4.3 Add business logic to the model . . . 89
    4.3.1 Add filters to the model . . . 90
    4.3.2 Add calculations to the model . . . 94
    4.3.3 Make the model dynamic using macros . . . 97
  4.4 Create dimensional objects for OLAP-style reporting . . . 100
    4.4.1 Create Regular Dimensions . . . 101
    4.4.2 Create Measure Dimensions . . . 110
    4.4.3 Define scope for measures . . . 112
  4.5 Create and configure a package . . . 117
    4.5.1 Analyze publish impact . . . 122
  4.6 Apply security in IBM Cognos Framework Manager . . . 124
    4.6.1 Object level security . . . 124
    4.6.2 Row level security . . . 125
    4.6.3 Package level security . . . 126
  4.7 Model troubleshooting tips . . . 127
    4.7.1 Examine the SQL . . . 127
    4.7.2 Object dependencies . . . 129
    4.7.3 Search the model . . . 130

Part 3. Business intelligence simplified . . . 133

Chapter 5. Business intelligence simplified: An overview . . . 135
  5.1 Information delivery leading practices . . . 136
    5.1.1 List reports . . . 138
    5.1.2 Crosstabs . . . 138
    5.1.3 Charts . . . 139
  5.2 Enabling access for more people . . . 158
  5.3 Business use case . . . 158

Chapter 6. Individual and collaborative user experience . . . 161
  6.1 Dashboard overview . . . 162
  6.2 Introduction to IBM Cognos Business Insight . . . 163
    6.2.1 The Getting Started page . . . 165
    6.2.2 Application bar . . . 165
    6.2.3 Dashboard layout area . . . 166
    6.2.4 Content pane . . . 166
    6.2.5 Widgets . . . 167
  6.3 Interaction with the dashboard components . . . 176
    6.3.1 Personalize content . . . 176
    6.3.2 Add new content to broaden the scope . . . 183
    6.3.3 Sort and filter data and perform calculations . . . 189
    6.3.4 Use advanced filtering . . . 197
    6.3.5 Add non-BI content to a dashboard . . . 205
    6.3.6 Work with report versions and watch rules . . . 207
  6.4 Collaborative business intelligence . . . 211
    6.4.1 Create annotations . . . 212
    6.4.2 IBM Lotus Connections activities . . . 214

Chapter 7. Self service interface for business users . . . 217
  7.1 Explore the IBM Cognos Business Insight Advanced interface . . . 218
    7.1.1 Page layers . . . 219
    7.1.2 Context filters . . . 221
    7.1.3 Insertable Objects pane . . . 222
    7.1.4 Page navigation . . . 226
    7.1.5 Work area . . . 227
    7.1.6 Properties pane . . . 227
  7.2 Choose a reporting style . . . 230
  7.3 Change existing reports . . . 230
    7.3.1 Sort data . . . 232
    7.3.2 Filter data . . . 237
    7.3.3 Perform calculations . . . 240
    7.3.4 Set the right level of detail for the analysis . . . 250
  7.4 Create content . . . 253
    7.4.1 Create a crosstab . . .
. . . . . . . . . . . . . . . . . . . . . 254 7.4.2 Create a chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262 7.4.3 Set conditional formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 7.4.4 Analyze the execution query path . . . . . . . . . . . . . . . . . . . . . . . . . . 283 7.4.5 Render output in various formats and print content . . . . . . . . . . . . 286 7.5 Search for meaningful information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290 7.6 Summarize data and create calculations . . . . . . . . . . . . . . . . . . . . . . . . 293 Contents v 7.6.1 Summarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293 7.6.2 Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295 7.7 Add filters to refine data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300 7.7.1 Filter reports for relational data sources . . . . . . . . . . . . . . . . . . . . . 300 7.7.2 Filter reports for dimensional data sources . . . . . . . . . . . . . . . . . . . 302 7.7.3 Suppress data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309 7.7.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 7.8 Add external data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314 7.8.1 External Data feature example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315 7.9 Create a package with the Self Service Package wizard . . . . . . . . . . . . 329 7.9.1 Create a package for Cognos PowerCubes . . . . . . . . . . . . . . . . . . 330 7.9.2 Create a package for SAP BW . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332 7.10 Create statistical calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332 7.10.1 IBM Cognos Statistics overview . . . . . . . . . . . . 
. . . . . . . . . . . . . . 333 7.10.2 IBM Cognos Statistics use case: Create an IBM Cognos Statistics report . . . . . . . . . . . . . . . . . . . . . 355 Chapter 8. Actionable analytics everywhere . . . . . . . . . . . . . . . . . . . . . . 361 8.1 Accessibility and internationalization. . . . . . . . . . . . . . . . . . . . . . . . . . . . 362 8.1.1 Enabling access for more people . . . . . . . . . . . . . . . . . . . . . . . . . . 362 8.1.2 Providing internationalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364 8.2 Disconnected report interaction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365 8.2.1 IBM Cognos Active Report overview. . . . . . . . . . . . . . . . . . . . . . . . 365 8.2.2 IBM Cognos Active Report features . . . . . . . . . . . . . . . . . . . . . . . . 365 8.2.3 IBM Cognos Active Report use case . . . . . . . . . . . . . . . . . . . . . . . 368 8.3 Interact with IBM Business Analytics using mobile devices . . . . . . . . . . 373 8.3.1 Extended device support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374 8.3.2 Simplified experience across all devices. . . . . . . . . . . . . . . . . . . . . 374 8.3.3 IBM Cognos Mobile use case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376 8.4 IBM Cognos Analysis for Microsoft Excel . . . . . . . . . . . . . . . . . . . . . . . . 380 8.4.1 Features of IBM Cognos Analysis for Microsoft Excel . . . . . . . . . . 381 8.4.2 IBM Cognos Analysis for Microsoft Excel use case . . . . . . . . . . . . 382 8.5 Business driven workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387 8.5.1 Enhanced event management . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387 8.5.2 Human task service use case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 Part 4. Enterprise ready platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401 Chapter 9. 
Enterprise ready performance and scalability . . . . . . . . . . . . 403 9.1 Overview of Dynamic Query Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404 9.1.1 What is Dynamic Query Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404 9.1.2 Why use Dynamic Query Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . 406 9.1.3 Technical overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406 9.2 Configuring Dynamic Query Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408 vi IBM Cognos Business Intelligence V10.1 Handbook 9.2.1 Creating a connection in IBM Cognos Administration . . . . . . . . . . . 408 9.2.2 Creating a package in IBM Cognos Framework Manager . . . . . . . 409 9.2.3 Transitioning to Dynamic Query Mode using IBM Cognos Lifecycle Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . 413 9.3 Query Service Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415 9.3.1 Query Service metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416 9.3.2 Manage the cache in IBM Cognos Administration . . . . . . . . . . . . . 417 9.3.3 Query Service settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421 9.3.4 Disabling the Query Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423 9.4 Analyzing queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424 9.4.1 What is Dynamic Query Analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . 424 9.4.2 Working with Dynamic Query Analyzer . . . . . . . . . . . . . . . . . . . . . . 425 Chapter 10. IBM Cognos system administration . . . . . . . . . . . . . . . . . . . 431 10.1 IBM Cognos Administration overview . . . . . . . . . . . . . . . . . . . . . . . . . . 432 10.1.1 IBM Cognos Administration capabilities . . . . . . . . . . . . . . . . . . . . 
432 10.1.2 The IBM Cognos Administration user interface. . . . . . . . . . . . . . . 437 10.2 Moving to IBM Cognos BI version 10.1 from a previous release . . . . . 449 10.2.1 Using IBM Cognos Lifecycle Manager to test the IBM Cognos environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449 10.2.2 Validating the target environment . . . . . . . . . . . . . . . . . . . . . . . . . 455 10.2.3 Executing target and source content. . . . . . . . . . . . . . . . . . . . . . . 458 10.2.4 Compare the output to ensure consistency. . . . . . . . . . . . . . . . . . 459 10.2.5 Analyzing the project status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 10.2.6 One-click comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 10.3 Using the administrative features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 10.3.1 Enhanced search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 10.3.2 Restricting the scheduling options . . . . . . . . . . . . . . . . . . . . . . . . 475 10.3.3 Intra-day scheduling window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481 10.3.4 Allowing users to persist personal database signons . . . . . . . . . . 481 10.4 Managing the environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 10.4.1 Metric tolerance thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490 10.4.2 Reacting to bottlenecks due to unexpected events. . . . . . . . . . . . 495 10.4.3 System trending . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 10.4.4 Consuming system metrics from external tools . . . . . . . . . . . . . . 498 10.5 Auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503 10.5.1 Configure the audit database . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503 10.5.2 Audit table definitions . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505 10.5.3 Audit levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506 10.5.4 Audit and logging for IBM Cognos BI services . . . . . . . . . . . . . . . 507 10.5.5 Setting audit levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507 10.5.6 Maintaining audit detail while troubleshooting. . . . . . . . . . . . . . . . 509 10.5.7 Audit scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511 Contents vii 10.5.8 Sample audit package. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521 10.5.9 Audit content package. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522 10.5.10 Audit extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525 Part 5. Complete IBM Business Analytics solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527 Chapter 11. Integrating IBM Cognos BI with IBM Cognos Business Analytics solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529 11.1 Overview of IBM Cognos Business Analytics solutions . . . . . . . . . . . . 530 11.1.1 IBM Cognos TM1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 11.1.2 IBM Cognos Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 11.1.3 IBM Cognos Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532 11.2 Business scenarios and roles to take advantage of IBM Business Analytics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533 11.3 Integrating IBM Cognos TM1 with IBM Cognos BI . . . . . . . . . . . . . . . . 534 11.3.1 Creating a data source and package . . . . . . . . . . . . . . . . . . . . . . 535 11.3.2 Objects used in the dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . 536 11.3.3 Configuration steps . . . . . . . . . 
11.3.4 Business case
11.4 Integrating IBM Cognos Planning Contributor with IBM Cognos BI
11.5 Integrating IBM Cognos Controller with IBM Cognos BI

Part 6. Appendixes

Appendix A. Additional material
Locating the web material
How to use the web material

Abbreviations and acronyms

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX®, ClearCase®, Cognos®, Cube Views®, DB2®, developerWorks®, IBM®, Informix®, InfoSphere™, Lotus®, PowerPlay®, Rational®, Redbooks®, Redbooks (logo)®, ReportNet®, System z®, Tivoli®, TM1®, WebSphere®

The following terms are trademarks of other companies:

Adobe, the Adobe logo, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Preface

IBM® Cognos® Business Intelligence (BI) helps organizations meet strategic objectives and provides real value for the business by delivering the information everyone needs while also reducing the burden on IT. This IBM Redbooks® publication addresses IBM Cognos Business Intelligence V10.1. You can use this book to:
- Understand core features of IBM Cognos BI V10.1
- Realize the full potential of IBM Cognos BI
- Learn by example with practical scenarios

This book uses a fictional business scenario to demonstrate the power of IBM Cognos BI. The book is primarily focused on the roles of Advanced Business User, Professional Report Author, Modeler, Administrator, and IT Architect.

© Copyright IBM Corp. 2010. All rights reserved.

The team who wrote this book

This book was produced by a team of specialists from around the world working in Ottawa, Canada, and remotely.

Dean Browne is a Product Manager for IBM Software, Business Analytics, in Philadelphia, Pennsylvania, U.S. Dean is responsible for IT value related features in IBM Cognos products. Before joining IBM Software Group 7 years ago, Dean worked as an IT infrastructure design and out-sourcing consultant. He designed global telecommunications and server infrastructures, standards, and out-sourced support processes for global chemical and pharmaceutical corporations.

Brecht Desmeijter is a Proven Practice Advisor in Bedfont, U.K. As part of the IBM Business Analytics, Cognos Software team, he writes extensively about the Cognos business solutions development life cycle and Business Analytics requirements. He has 4 years of experience in the IBM Cognos Business Intelligence field, and his areas of expertise include the IBM Cognos infrastructure and the Software Development Kit. He holds a degree in Computer Science from Hogeschool Gent, Belgium.

Rodrigo Frealdo Dumont is an IT Specialist at the IBM Brazil Global Delivery Center. He has four years of experience in Business Analytics applications. Currently he acts as a Cognos development lead of Business Analytics projects at the Business Analytics Center of Competency, and he also acts as a technical lead and subject matter expert of IBM Cognos for the Information and Data Management Center of Competence of the Application Services area in Brazil. His areas of expertise include reporting, data integration, and data modeling. Rodrigo holds a bachelor degree in Systems Analysis from Pontifícia Universidade Católica of Campinas. Rodrigo is an IBM Certified Designer for Cognos 8 BI Reports and IBM Certified Developer for Cognos 8 BI Metadata Models.

Scott Masson is an IBM Cognos Senior Product Manager. With over 11 years in business intelligence and information management, he has focused on the technology and infrastructure organizations need to drive better business performance. Author of the IBM Cognos System Management Methodology, he writes extensively around IBM Cognos products and how to optimize the administration of the Business Intelligence infrastructure.

Ksenija Rusak is a Client Technical Professional in IBM Croatia. She has 9 years of IT experience, including technical sales, consulting, implementation of business intelligence solutions, and customer support. Her areas of expertise include the IBM Cognos portfolio, reporting tools such as Microsoft® Reporting Services and Crystal Reports, and data warehousing implementations (Microsoft SQL Server DTS, IBM InfoSphere™ Warehouse). She is a member of the Community of Practice for Central and Eastern Europe responsible for the Cognos portfolio, working with local technical communities in providing technical sales support for major prospects and customers. She holds a degree in mathematical statistics and computer science.

John Leahy is a Proven Practice Team Leader with the IBM Cognos Business Analytics iApps Team. He writes extensively about IBM Cognos Financial Performance Management products and has over 9 years experience working with IBM Cognos products in various roles. John is an IBM Cognos Planning Certified Expert and an IBM Cognos TM1® Certified Developer. John holds a bachelor's degree in Business and Economics from Ursinus College.

Armin Kamal is an IBM Cognos Proven Practice Advisor for Business Analytics, working in Ottawa, Canada. He has 9 years of experience working with IBM Cognos products with a focus on metadata modeling and report design, and he has written extensively on IBM Cognos Framework Manager and IBM Cognos Report Studio. His areas of expertise include course development, technical writing, end-user education, and large-scale projects. He holds a degree in Communications and Psychology from The University of Ottawa and a diploma in Information Technology from ITI.

Shinsuke Yamamoto is an IT Specialist who joined IBM Japan in 2001. He implemented various IBM products, including AIX®, DB2®, WebSphere®, and Tivoli®. In 2007, he moved to IBM Systems Engineering in Japan, where he handled DB2 projects as a subject matter expert and published technical guides about DB2. He is currently the leader of the Cognos support team, focusing on Cognos architecture, design, and administration.

Chris McPherson is Product Manager responsible for IBM Cognos Framework Manager and Metadata at the IBM Canada Ottawa Lab. He holds a BA from the University of Western Ontario.

Martin Keen is a Consulting IT Specialist at the ITSO, Raleigh Center. He writes extensively about WebSphere products and service-oriented architecture (SOA). He also teaches IBM classes worldwide about WebSphere, SOA, and ESB. Before joining the ITSO, Martin worked in the EMEA WebSphere Lab Services team in Hursley, U.K. Martin holds a bachelor's degree in Computer Studies from Southampton Institute of Higher Education.

Figure 1 Ottawa team (left-to-right): Dean, Rodrigo, Ksenija, Scott, Martin, Shinsuke, and Armin

Special thanks to Chris McPherson for his written contributions to this book.

Thanks to the following people for their contributions to this project:
- Daniel Wagemann, Proven Practice Consultant, Business Intelligence, IBM Business Analytics
- Andrew Popp, IBM Cognos BI and PM Product Marketing and GTM Strategy, IBM Business Analytics
- Rebecca Hilary Smith, Senior Manager, IBM Cognos BI and PM Product Marketing, IBM Software Group
- Rola Shaar, Senior Product Manager, Business Analytics, IBM Cognos Software
- Wassim Hassouneh, Product Manager, IBM Business Analytics
- Andreas Coucopoulos, IBM Cognos 8 Platform Product Marketing and GTM Strategy, IBM Business Analytics
- Stewart Winter, Senior Software Developer, IBM Ottawa
- Robert Kinsman, Product Manager, Cognos BI, IBM Ottawa
- Bill Brousseau, Cognos Beta Management Team, IBM Ottawa
- Douglas Wong, Technical Solution Manager, IBM Ottawa
- Paul Glennon, Product Manager, IBM Ottawa
- Brett Johnson, Information Development Infrastructure Lead, IBM Software Group
- Jennifer Hanniman, Senior Product Manager, IBM Business Analytics
- Greg McDonald, Product Manager, IBM Business Analytics
- Michael McGeein, Senior Product Manager, IBM Business Analytics
- Mike Armstrong, Senior Manager, Cognos Platform Product Management, IBM Business Analytics
- Ronnie Rich, Product Manager, IBM Business Analytics
- Paul Young, Proven Practice Advisor, IBM Business Analytics
- James Melville, Pre-Sales - Financial Consolidations, IBM Business Analytics
- Doug Catton, Proven Practice Advisor, IBM Business Analytics

Stay connected to IBM Redbooks

- Find us on Facebook
- Follow us on Twitter
- Look for us on LinkedIn
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter
- Stay current on recent Redbooks publications with RSS Feeds

Part 1. IBM Business Analytics

Chapter 1. Introduction to IBM Cognos Business Intelligence
In this chapter, we introduce IBM Business Analytics and IBM Cognos BI and discuss the following topics:
- IBM Business Analytics
- Introduction to IBM Cognos BI

1.1 IBM Business Analytics

IBM Business Analytics includes the following products:
- IBM Cognos BI
- IBM Financial Performance and Strategy Management:
  - IBM Cognos TM1
  - IBM Cognos Planning
  - IBM Cognos Controller
  - IBM Cognos Business ViewPoint
- IBM Analytics Applications
- IBM Advanced Analytics (SPSS)
- IBM Cognos Express

1.2 Introduction to IBM Cognos BI

IBM Cognos BI provides a unified workspace for business intelligence and analytics that the entire organization can use to answer key business questions and outperform the competition. With IBM Cognos BI, users can:
- Easily view, assemble, and personalize information
- Explore all types of information from all angles to assess the current business situation
- Analyze facts and anticipate tactical and strategic implications by simply shifting from viewing to more advanced, predictive, or what-if analysis
- Collaborate to establish decision networks to share insights and drive toward a collective intelligence
- Provide transparency and accountability to drive alignment and consensus
- Communicate and coordinate tasks to engage the right people at the right time
- Access information and take action anywhere, taking advantage of mobile devices and real-time analytics
- Integrate and link analytics in everyday work to business workflow and process

Organizations need to make the most of a workforce that is increasingly driven to multi-task, network, and collaborate. IBM Cognos BI delivers analytics everyone can use to answer key business questions.

1.2.1 Easily view, assemble, and personalize information

Often, business users do not know how to get to the information that they need, and available tools might not provide the freedom to combine and explore information in the way they want.
IBM Cognos BI features allow business users to easily view, assemble, and personalize information to follow a train of thought and to generate a unique perspective. Using a single place to quickly see a view of their business, users can personalize content, build on the insights of others, and incorporate data from a variety of sources. These capabilities ensure that more people are engaged in providing unique insights and delivering faster business decisions. Giving business users greater self-service control reduces demands on IT and business intelligence systems.

1.2.2 Explore all types of information

IBM Cognos BI builds statistical capabilities directly into the reporting environment, so that a separate tool is not required. These functions allow business users to deliver reports that include statistical insight and validation and to distribute these reports to the larger business community.

1.2.3 Analyze facts and anticipate tactical and strategic implications

Business users need tools that let them accurately evaluate and identify the impact that different scenarios will have on the business and on the bottom line. IBM Cognos BI allows the business user to analyze facts and anticipate strategic implications by simply shifting from viewing data to performing more advanced predictive or what-if analysis. Understanding the scenarios that affect business enables the business user to make informed recommendations to the business and provides an increased competitive advantage.

1.2.4 IBM Cognos BI user interfaces

IBM Cognos BI includes web-based and Windows®-based user interfaces that provide a business intelligence experience that is focused upon the needs of different users.

IBM Cognos Business Insight

With IBM Cognos Business Insight, you can create sophisticated interactive dashboards using IBM Cognos content, as well as external data sources such as TM1 Websheets and CubeViews, according to your specific information needs.
You can view and open favorite dashboards and reports, manipulate the content in the dashboards, and email your dashboards. You can also use comments, activities, and social software such as IBM Lotus® Connections for collaborative decision making. For more information about using IBM Cognos Business Insight, see the IBM Cognos Business Insight User Guide.

IBM Cognos Report Studio

IBM Cognos Report Studio is a robust report design and authoring tool. Using IBM Cognos Report Studio, report authors can create, edit, and distribute a wide range of professional reports. They can also define corporate-standard report templates for use in IBM Cognos Query Studio and edit and modify reports created in IBM Cognos Query Studio or IBM Cognos Analysis Studio. This book does not cover use of IBM Cognos Report Studio, given the depth of features that it provides. For information about using IBM Cognos Report Studio, refer to the IBM Cognos Report Studio User Guide or the online Quick Tour.

IBM Cognos Query Studio

Using IBM Cognos Query Studio, users with little or no training can quickly design, create, and save reports to meet reporting needs that are not covered by the standard, professional reports created in IBM Cognos Report Studio. For information about using IBM Cognos Query Studio, see the IBM Cognos Query Studio User Guide or the online Quick Tour.

IBM Cognos Analysis Studio

With IBM Cognos Analysis Studio, users can explore and analyze data from different dimensions of their business. Users can also compare data to spot trends or anomalies in performance. IBM Cognos Analysis Studio provides access to dimensional, online analytical processing (OLAP), and dimensionally modeled relational data sources. Analyses created in IBM Cognos Analysis Studio can be opened in IBM Cognos Report Studio and used to build professional reports.
For information about using IBM Cognos Analysis Studio, see the IBM Cognos Analysis Studio User Guide or the online Quick Tour.

IBM Cognos Event Studio
In IBM Cognos Event Studio, you can create agents that monitor your data to detect occurrences of business events and notify decision makers when those events occur. For information about using IBM Cognos Event Studio, see the IBM Cognos Event Studio User Guide or the online Quick Tour.

IBM Cognos Metric Studio
In IBM Cognos Metric Studio, you can create and deliver a customized scorecarding environment for monitoring and analyzing metrics throughout your organization. Users can monitor, analyze, and report on time-critical information by using scorecards based on cross-functional metrics. For information about using IBM Cognos Metric Studio, see the IBM Cognos Metric Studio User Guide for Authors.

IBM Cognos Administration
IBM Cognos Administration is a central management interface that contains the administrative tasks for IBM Cognos BI. It provides easy access to the overall management of the IBM Cognos environment and is accessible through IBM Cognos Connection. For information about using IBM Cognos Administration, see the IBM Cognos Administration and Security Guide.

IBM Cognos Framework Manager
OLAP cubes are designed to contain sufficient metadata for business intelligence reporting and analysis. Because cube metadata can change as a cube is developed, IBM Cognos Framework Manager models the minimum amount of information needed to connect to a cube. Cube dimensions, hierarchies, and levels are loaded at run time. For information about using IBM Cognos Framework Manager, see the IBM Cognos Framework Manager User Guide.

Chapter 2. Overview of the IBM Cognos Business Intelligence architecture

In this chapter, we introduce the IBM Cognos Business Intelligence (BI) architecture and the IBM Cognos Platform. We discuss the various components that make up an IBM Cognos Platform deployment for business intelligence applications and how you can deploy them to meet application requirements. We discuss the following topics:
- Enterprise class SOA platform architecture
- Open access to all data sources
- Business intelligence for all
- Common integrated security model

2.1 Enterprise class SOA platform architecture

IBM Cognos BI delivers a broad range of business intelligence capabilities on an open, enterprise-class platform. The IBM Cognos Platform is built on a web-based service-oriented architecture (SOA) that is designed for scalability, availability, and openness. This n-tiered architecture is made up of three server tiers: the web tier, the application tier, and the data tier. The tiers are based on business function and can be separated by network firewalls. The IBM Cognos BI user interfaces are accessed through the web tier. All capabilities, including viewing, creating, and administering dashboards, reports, scorecards, analysis, and events, are accessed through web interfaces.

The IBM Cognos Platform provides optimized access to all data sources, including relational data sources and online analytical processing (OLAP), with a single query service. In addition, this query service understands and takes advantage of each data source's strengths by using a combination of open standards such as SQL99, native SQL, and native MDX to optimize data retrieval for all these different data providers.

Reliability and scalability were key considerations when designing the IBM Cognos Platform. Services in the application tier operate on a peer-to-peer basis, which means that no service is more important than another and that there are loose service linkages. Any service of the same type, on any machine in an IBM Cognos Platform configuration, can satisfy an incoming request. The dispatching (or routing) of requests is done in an optimal way, with automatic load balancing built into the system, which results in complete fault tolerance.

The IBM Cognos Platform also delivers the capabilities to manage business intelligence applications with centralized, web-based administration that provides a complete view of system activity as well as system metrics and thresholds, so that organizations can resolve potential issues before there is a business impact.

Figure 2-1 illustrates a typical tiered deployment: the presentation/web tier (IBM Cognos Gateway), the application tier (dispatchers with IBM Cognos Content Manager and IBM Cognos Report Server instances), and the data tier (the IBM Cognos content store, data sources, and OLAP sources), with firewalls, routers, and encryption between the tiers and a security namespace available to each tier.

2.1.1 IBM Cognos Platform server roles

To ensure optimal performance, IBM Cognos BI servers can be dedicated to specific roles. These server roles also define the tier within the architecture that an IBM Cognos BI server uses.

The IBM Cognos Gateway component manages all web communication for the IBM Cognos Platform; the IBM Cognos components that fulfill this role are referred to as the IBM Cognos Gateway. The workload on the IBM Cognos Gateway server requires minimal processing resources. For high availability or scalability requirements, you can deploy multiple redundant gateways along with an external HTTP load-balancing router.

Application tier: Server components
The application tier for the IBM Cognos Platform is made up of three main server components: IBM Cognos Dispatcher, IBM Cognos Report Server, and IBM Cognos Content Manager. Application tier servers are composed of a collection of loosely coupled Java™ and C++ services.

IBM Cognos Dispatcher
The IBM Cognos Dispatcher component is a lightweight Java servlet that manages (and provides communication between) application services. At startup, each IBM Cognos Dispatcher registers locally available services with the IBM Cognos Content Manager. IBM Cognos Dispatcher performs the load balancing of requests at the application tier. During the normal operation of IBM Cognos BI services, requests are load balanced across all available services using a configurable, weighted round-robin algorithm to distribute requests. You can tune the performance of the IBM Cognos Platform by defining how IBM Cognos Dispatcher handles requests and manages services.

Threads within the IBM Cognos Platform are managed by the type of traffic that they handle, which is referred to as high and low affinity. Affinity relates to the report service process that handled the original user request when multiple interactions need to occur to satisfy the request. A low-affinity request operates just as efficiently on any service, and low-affinity connections are used to process low-affinity requests. A high-affinity request is a transaction that can gain a performance benefit from a previously processed request by accessing cache. It can be processed on any service, but resource consumption is minimized if the request is routed back to the report service process that executed the original request. High-affinity connections are used to process absolute and high-affinity requests from the report services.

IBM Cognos configuration: A normal configuration for IBM Cognos Dispatcher is two IBM Cognos Report Server processes (see "IBM Cognos Report Server" below) per allocated processor and eight to 10 threads per processor, in one of the following configurations:
- Three low-affinity threads plus one high-affinity thread
- Four low-affinity threads plus one high-affinity thread

When configuring the IBM Cognos Platform server environment, you must also set a Java heap size for the Java-based services.
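The dispatching behavior described above, weighted round-robin distribution for low-affinity requests and sticky routing back to the original process for high-affinity requests, can be sketched in a few lines of Python. This is a generic illustration of the algorithm only; the `Dispatcher` class and its methods are invented for the example and are not IBM Cognos interfaces.

```python
import itertools

class Dispatcher:
    """Illustrative sketch of weighted round-robin dispatch with
    high/low request affinity (not actual IBM Cognos code)."""

    def __init__(self, services):
        # services: list of (name, weight) pairs; a higher weight means
        # the service receives proportionally more requests.
        expanded = [name for name, weight in services for _ in range(weight)]
        self._cycle = itertools.cycle(expanded)
        self._affinity = {}  # request key -> service that handled it first

    def route(self, request_key, high_affinity=False):
        # High-affinity requests go back to the original service so they
        # can benefit from its cache; low-affinity requests run anywhere.
        if high_affinity and request_key in self._affinity:
            return self._affinity[request_key]
        service = next(self._cycle)
        self._affinity[request_key] = service
        return service

d = Dispatcher([("report_server_1", 2), ("report_server_2", 1)])
first = d.route("req-42")                      # load balanced across services
again = d.route("req-42", high_affinity=True)  # routed back to the same service
assert first == again
```

A real dispatcher would also track service health and re-register services at startup; the sketch only shows why high-affinity routing minimizes resource consumption, as the follow-up request lands on the process that already holds the cached state.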
IBM Cognos Report Server
The main service that is responsible for application-tier processing is the report or query service (often referred to as the BIBus processes). The IBM Cognos BI reporting and query service is made up of two underlying components: the Java servlet-based IBM Cognos Dispatcher services, and the report services that are launched using the Java Native Interface (JNI).

Set the Java virtual machine (JVM) heap-size allocation for the IBM Cognos Platform so that Java memory is only as large as is necessary to accommodate the processing requirements of the Java-based services. This setting ensures that as much memory as possible is available to the IBM Cognos Report Service, which is not Java. You can determine the optimal Java heap size using Java garbage collection statistics.

IBM Cognos Dispatcher starts IBM Cognos Report Server processes dynamically as needed to handle the request load. An administrator can specify the maximum number of processes that these services can start, as well as the minimum number of processes that should be running at non-peak times. Configure the number of processes for IBM Cognos Report Server based on the available processor capacity. You can manage the number of threads per IBM Cognos BI reporting service process through the IBM Cognos Platform administration console by setting the number of high- and low-affinity connections.

IBM Cognos BI reporting service performance is closely tied to processor clock speed and throughput capabilities. The number of processors in a server and their clock rates are the two primary factors to consider when planning for additional IBM Cognos Report Server hardware capacity. For example, given two servers with an equal number of processors, configure the server with the significantly faster processor clock rate to run more report service processes. Similarly, you generally configure a server with four available processors to use more report service processes than a server with only two available processors. For more details, refer to the IBM Cognos BI Architecture and Deployment Guide.

IBM Cognos Content Manager
IBM Cognos Content Manager is the IBM Cognos Platform service that manages the storage of the following customer application data: security settings and configurations, server configuration settings, packages, dashboards, metrics, report specifications, and report output. You use IBM Cognos Content Manager to publish models, retrieve or store report specifications, handle scheduling information, and manage the IBM Cognos security namespace. IBM Cognos Content Manager maintains this information in a relational database that is referred to as the content store database.

A minimum of one IBM Cognos Content Manager service is required for each IBM Cognos Platform implementation. You can deploy multiple IBM Cognos Content Manager instances: one instance actively processes requests while the other instances remain in standby mode to provide high availability. Content Manager performance can benefit from the availability of high-speed RAM resources and typically requires one processor for every four processors that are allocated for IBM Cognos Report Server processing. IBM Cognos Content Manager server hardware can also scale vertically to provide increased throughput.

IBM Cognos BI data-source performance considerations
Query performance is typically bound to the performance of the server that hosts the data source for IBM Cognos BI. IBM Cognos BI can access many data sources, including relational and OLAP sources. Relational data sources that are tuned to meet the access requirements of the business community naturally perform better than those that are not. An experienced database administrator should monitor and tune relational database-management systems (RDBMS) to achieve optimum database performance.

IBM Cognos BI deployment options
The first step to creating a proper IBM Cognos BI environment begins with a successful installation. Use the IBM Cognos BI Installation and Configuration Guide for complete information about the installation and initial configuration process.

Server components within all three tiers can scale easily, either horizontally or vertically, to meet a variety of business intelligence processing requirements. At the web tier, the IBM Cognos Gateway system requirements are lightweight and can, therefore, handle large user loads with fewer system resources than other server tiers in the IBM Cognos Platform architecture. You can also deploy multiple IBM Cognos Gateway instances to meet requirements for additional user load, server availability, or other service level agreements (SLAs). At the application tier, IBM Cognos Report Server performs the heavy lifting within an IBM Cognos BI server deployment; both the IBM Cognos Report Server and IBM Cognos Content Manager components can scale to meet application-processing requirements. You can deploy multiple IBM Cognos Report Server instances to meet the processing requirements of large applications and user populations. For detailed information about IBM Cognos Platform architecture and server deployment options, refer to the IBM Cognos BI Architecture and Deployment Guide.

General guidelines
In general, decisions regarding the best methods for deploying IBM Cognos BI are driven by application complexity, user concurrency, throughput requirements, and systems management requirements such as availability and security. Deployment options and recommendations include:
- Use one server and operating system instance with a single or multiple IBM Cognos Platform instances, with the exception of IBM System z® deployments that have sufficient available resources. In such deployments, the IBM Cognos Report Server and IBM Cognos Content Manager are implemented in one or more WebSphere Application Server profiles or JVM instances.
- Configure one physical server or virtualization platform guest instance per IBM Cognos Platform application tier server instance.
- Do not over-subscribe CPU resources with IBM Cognos BI implementations, for example by using a configuration in which the total number of processors allocated among the virtual guest server instances for the IBM Cognos Platform exceeds the number of physical CPUs available on the physical server or LPAR. IBM Cognos Report Server performance is tied to overall system performance and, therefore, can be affected by processor clock speed and I/O performance.
- In general, use 2 GB RAM per CPU with IBM Cognos Content Manager and IBM Cognos Report Server instances.
- Use IBM Cognos Report Server with two BIBus processes per processor and five threads per process (four low affinity and one high affinity).
- Use separate servers for the RDBMS that is hosting the content store database, relational data sources, and OLAP data sources.
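Taken together, the sizing rules in these guidelines (two BIBus processes per allocated processor, five threads per process split four low affinity to one high affinity, and 2 GB of RAM per CPU) are simple multiplications. The helper below makes that arithmetic explicit; the function name and dictionary layout are invented for illustration.

```python
def size_report_server(processors):
    """Apply the sizing guidelines quoted in this section:
    2 BIBus processes per processor, 5 threads per process
    (4 low affinity + 1 high affinity), and 2 GB RAM per CPU."""
    processes = 2 * processors
    return {
        "bibus_processes": processes,
        "threads_total": 5 * processes,
        "low_affinity_threads": 4 * processes,
        "high_affinity_threads": processes,
        "ram_gb": 2 * processors,
    }

# A 4-processor application tier server under these guidelines
# runs 8 BIBus processes with 40 threads (32 low + 8 high affinity).
assert size_report_server(4)["bibus_processes"] == 8
assert size_report_server(4)["threads_total"] == 40
```

Note that five threads per process at two processes per processor works out to 10 threads per processor, consistent with the "eight to 10 threads per processor" dispatcher guideline quoted earlier in this chapter.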
2.1.2 IBM Cognos BI services
This section introduces the various services offered by IBM Cognos BI.

Agent service
The agent service is responsible for running agents. This service runs the agent conditions and creates and stores the generated event list. In addition to running agents, the agent service also runs two other types of specialized tasks: stored procedures using IBM Cognos Report Server, and web service tasks.

Annotation service
The annotation service enables the addition of commentary to reports using IBM Cognos Business Insight. These comments persist throughout different versions of the report.

Batch report service
The batch report service manages background requests to run reports and provides output on behalf of the monitor service.

Content manager service
The content manager service interacts with the content store. It performs object manipulation functions such as add, query, update, delete, move, and copy. It also handles the content store management functions, such as import and export.

Content manager cache service
The cache service enhances the overall system performance and IBM Cognos Content Manager scalability by caching frequent query results in each dispatcher.

Data movement service
The data movement service manages the execution of data movement tasks in Cognos BI. Data movement tasks, such as Builds and Job Streams, are created in the IBM Cognos Data Manager Designer and are published to IBM Cognos BI.

Delivery service
As the name implies, the delivery service delivers content. Email, news items, and report output that is written to the file system are examples of content that is handled by the delivery service. Part of this service is a persistent email queue that is in place to guarantee that the items are forwarded to the configured SMTP server.

Event management service
The event management service is the service that handles scheduling. Part of the scheduling aspect is the control over cancelling, suspending, and releasing scheduled tasks. The service determines which tasks to execute and forwards those tasks to the monitor service for execution. For tasks that have already entered the queue, requests to cancel, release, or suspend are forwarded from the event management service to the monitor service. The information found as part of the Upcoming Activities task in the IBM Cognos administration console is also provided by the event management service.

SDK requests: All SDK requests coming into IBM Cognos BI start with the event management services.

Graphics service
The graphics service produces graphics on behalf of the report service. Graphics can be generated in the following formats: raster, vector, Microsoft Excel XML, and Portable Document Format (PDF).

Human task service
The human task service enables the creation and management of human tasks. A human task, such as report approval, can be assigned to individuals or groups on an ad hoc basis or by any of the other services.

Index data service
The index data service provides basic full-text functions for storage and retrieval of terms and indexed summary documents.

Index search service
The index search service provides search and drill-through functions, including lists of aliases and examples.

Index update service
The index update service provides write, update, delete, and administration functions.

Job service
Before jobs can be executed, they must first be prepared, meaning that the steps of a job must be analyzed for issues such as circular dependencies in nested jobs and resolution of the run options that are part of the jobs. The job service completes these tasks and then sends the job to the monitor service for execution.

Log service
The log service creates log entries that are received from the dispatcher and other services. The log service is called regardless of which logging output is specified (for example, database, file, remote log server, and so forth).

Metadata service
The metadata service provides support for data lineage information that is displayed in Cognos Viewer, IBM Cognos Report Studio, IBM Cognos Query Studio, and IBM Cognos Analysis Studio. Lineage information includes information such as data source and calculation expressions.

Metric studio service
The metric studio service provides the IBM Cognos Metric Studio user interface for the purposes of monitoring and entering performance information.

Migration service
The migration service manages the migration from IBM Cognos Series 7 to IBM Cognos BI version 10.1.

Monitor service
The monitor service handles all of the requests set to run in the background, including scheduled tasks, reports that are set to run and then email the results, and jobs. Because the monitor service can receive more requests than can be executed, it also queues requests and waits for resources to become available for the required service. When a service indicates that there is sufficient bandwidth, the monitor service forwards the task to the appropriate service for execution. Because the monitor service handles all of the background tasks, writing history information about the individual task executions is the responsibility of the monitor service. The exceptions to this process are the history details for deployment and IBM Cognos Search indexing tasks, which are written directly to the content store using the IBM Cognos Content Manager component. The information found as part of the Current Activities task in the administration console is also provided by the monitor service.

Presentation service
The presentation service provides the display, navigation, and administration capabilities in IBM Cognos Connection. It also receives generic XML responses from other services and transforms them into an output format, such as HTML or PDF. Another function of the presentation service is to send the saved content when a request to view saved output is made.

Query service
The query service manages Dynamic Query Mode requests and returns the result to the requesting batch or report service.

Report data service
The report data service manages the transfer of report data between IBM Cognos BI and applications that consume report data, such as IBM Cognos Analysis for Excel, IBM Cognos Office Connection, and IBM Cognos Mobile.

Report service
The report service manages interactive report requests to run reports and provides the output for a user in IBM Cognos Business Insight or in one of the IBM Cognos studios. If a request to execute the report is made from inside Cognos Viewer, the request is handled by the report service.

System service
The system service is used by the dispatcher to obtain application configuration parameters and provides methods for interfacing with the locale strings that are supported by the application for multiple-language support.
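The monitor service described above follows a common queue-and-forward pattern: background requests are queued, and each task is forwarded to its target service only when that service signals that it has bandwidth, with the monitor recording execution history. The sketch below illustrates that generic pattern in Python; none of the names correspond to actual IBM Cognos interfaces.

```python
from collections import deque

class MonitorService:
    """Generic sketch of the queue-and-forward pattern: background
    tasks wait until the target service has free capacity."""

    def __init__(self, capacity):
        self.capacity = capacity   # max tasks the target service runs at once
        self.running = 0
        self.queue = deque()
        self.history = []          # the monitor records task executions

    def submit(self, task):
        self.queue.append(task)
        self._drain()

    def complete_one(self):
        # A running task finished, freeing capacity on the target service.
        self.running -= 1
        self._drain()

    def _drain(self):
        # Forward queued tasks only while the target service has bandwidth.
        while self.queue and self.running < self.capacity:
            task = self.queue.popleft()
            self.running += 1
            self.history.append(task)

m = MonitorService(capacity=2)
for t in ["report-a", "report-b", "report-c"]:
    m.submit(t)
assert m.history == ["report-a", "report-b"]  # the third task waits in the queue
m.complete_one()
assert m.history == ["report-a", "report-b", "report-c"]
```

The history list mirrors the monitor service's responsibility for writing execution history, while the bounded `running` counter mirrors waiting for the required service to report sufficient bandwidth.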
2.2 Open access to all data sources

Most organizations have data sitting in different systems and data sources, which makes gaining access to all data difficult. In addition, methods to ensure that users get complete access to data can be expensive. The IBM Cognos Platform provides the ability to deliver all information from wherever it resides and isolates the user from technological complexity. The IBM Cognos Platform provides a single point to access all data sources. This process removes barriers to information and hides source complexity from metadata modelers. With complete, optimized access to all information, all capabilities that are associated with IBM Cognos BI can access data as soon as any data source is made available to it. We discuss further details about these features in subsequent chapters of this book.

2.3 Business intelligence for all

IBM Cognos BI provides users with many options to access information, capabilities, and data. Each option uses common functions and interfaces. Business users can interact with and view all the data they need to make the best possible decision. Information consumers and executives can access reports, charts, and other business intelligence objects using dashboards from IBM Cognos Business Insight. Advanced Business Users and Professional Report Authors can use IBM Cognos Business Insight Advanced to create new IBM Cognos BI objects and to analyze information from any source and from new perspectives. Users familiar with office productivity tools can use IBM Cognos Analysis for MS Excel to blend the power of IBM Cognos BI information with the personal workspace of Excel. Users on the go can use IBM Cognos Mobile to stay connected to their IBM Cognos BI information, and executives can take IBM Cognos Active reports offline to keep working while out of the office. IBM Cognos BI also provides tools to enable equal access to business intelligence information for users with physical disabilities. With complete, optimized access to all information, IBM Cognos software ensures satisfaction and a successful business intelligence project that the organization will embrace and promote.

2.4 Common integrated security model

The IBM Cognos Platform provides integration with enterprise authentication and single sign-on (SSO) solutions and a central place to control access and authorization for all IBM Cognos BI objects.

Chapter 3. Business scenario and personas used in this book

This chapter describes the fictitious business scenario and personas that we use to demonstrate the IBM Cognos Business Intelligence (BI) functions and use cases described in this book. In this chapter, we discuss the following topics:
- Business scenario overview
- Personas used in the scenarios in this book

3.1 Business scenario overview

Fictional company used for this scenario and samples: This book uses the fictional Great Outdoors company and IBM-provided samples to describe business scenarios and product functionality. These samples are included with IBM Cognos software and install with the IBM Cognos samples database. For more information about how to install the IBM Cognos samples, refer to the IBM Cognos Installation and Configuration Guide.

The fictional Great Outdoors company began in October 2004 as a business-to-business company. The company built its business by selling products to other vendors. Recently, the Great Outdoors company expanded its business by creating a website to sell products directly to consumers. It does not manufacture its own products; rather, the products are manufactured by a third party and are sold to third-party retailers.

The Great Outdoors organization is made up of six companies. These companies are primarily geographically based, with the exception of GO Accessories, which sells to retailers from Geneva, Switzerland. The Great Outdoors company includes the following subsidiaries:
- GO Americas
- GO Asia Pacific
- GO Central Europe
- GO Northern Europe
- GO Southern Europe
- GO Accessories

Each of these subsidiaries sells camping equipment, golf equipment, mountaineering equipment, outdoor protection, and personal accessories; GO Accessories sells only personal accessories. Each of these regions has one or more branches.

Because the company has steadily grown into a worldwide operation over the last several years, it has been difficult for managers of the Great Outdoors company to run their branches while monitoring only performance indicators based on sales. The Great Outdoors company executives need a clear view of where the pain points exist in the sales process.

3.1.1 Business questions to address

Based on this scenario, chapters within this book focus on creating a dashboard that provides insight to address the following questions, helping the Great Outdoors company sales executives to make better decisions:
- Are we selling the right products?
- Have we been growing our profit margin?
- Have we had a considerable number of returns?
- How many units of a product should we buy in each period of the year?
- How does the performance of our business compare to last year?
- How does the performance of our business compare to what we planned?
- Can we add meaningful information about our competitors?
- How successful are our promotions?

3.1.2 Information stored in the data warehouse of this company

The information management team of the Great Outdoors company created a data warehouse with information about sales targets, sales, marketing (promotions, bundle sales, and item sales), distribution, satisfaction (customer, employee, and retailer), and human resources to solve this information gap. In this book, we focus on the Sales perspective in order to translate the data warehouse data into meaningful insights about the Great Outdoors company. In our scenarios, we focus on the following dimensions: organization, product, retailer, order method, and sales staff, and on metrics such as quantity, revenue, planned revenue, gross profit, gross margin, purchase charges and shipping from its warehouses, sales target, and return quantity.

3.2 Personas used in the scenarios in this book

To demonstrate the samples and business scenarios, this book refers to user roles, called personas, that represent people in a real Business Analytics deployment. The six personas are as follows:
- Advanced Business User
- Professional Report Author
- Modeler
- Administrator
- Analyst
- Business User

Each of these personas has different needs and expectations that must be delivered by the Business Analytics platform. A user can take on the role of one or more personas (for example, Advanced Business User and Professional Report Author). We describe scenarios that meet the business needs of these users.

3.2.1 Advanced Business User

This persona has a deep understanding of the business needs and a good understanding of technology. The Advanced Business User leads the interpretation of business requirements and creates reports to answer business questions. IBM Cognos BI addresses these business needs with a range of integrated tools, such as IBM Cognos Business Insight and IBM Cognos Business Insight Advanced. In this book, the Advanced Business User's name is Lynn Cope.
Persona’s needs The Advanced Business User has the following business needs: Get the right advice to senior management Self-sufficiency Look at the problem from different angles Needs tools that integrate seamless and allow full collaboration with colleagues Get things done quickly Trust the data 24 IBM Cognos Business Intelligence V10. The Advanced Business User leads the interpretation of business requirements and creates reports to answer business questions. such as IBM Cognos Business Insight and IBM Cognos Business Insight Advanced.1 Handbook . A user can take on the role of one or more personas (for example Advanced Business Users and Professional Report Author).2. this book refers to user roles. In this book.2 Personas used in the scenarios in this book To demonstrate the samples and business scenarios.3. We describe scenarios are that meet the business needs of these users. The six personas are as follows: Advanced Business User Professional Report Author Modeler Administrator Analyst Business User Each these personas has different needs and expectations that must be delivered by the Business Analytics platform. slicing and dicing the information.Solutions that can help this persona The following solutions can help the Advanced Business User meet the business needs: IBM Cognos Business Insight Create and change dashboards in order to organize the information. IBM Cognos Analysis for Microsoft Excel Perform flexible. and sorting. IBM Cognos Planning Insert and update plans and actual data. IBM Cognos Business Insight Advanced Easily create new reports. perform interactive exploration and analysis. apply filters. such as prompts. filters. interactive exploration and analysis. IBM Cognos Analysis Studio Easily perform interactive exploration and analysis on dimensional data models. interactive exploration and analysis of multidimensional data into Microsoft Excel. sort. IBM Cognos TM1 Insert and update plans and actual data Chapter 3. groups. 
add external data. IBM Cognos Mobile Consume reports and dashboards from a friendly interface on mobile devices. and advanced queries. using calculations. and sort and change the display type to discover meaningful details about the information and collaborate with insights about the business performance. apply filters. add calculation and statistics. IBM Cognos Metric Studio Align tactics with strategy and monitor performance with scorecards. group data. IBM Cognos Report Studio Create professional reports with advanced features. Business scenario and personas used in this book 25 . multiple objects. and create additional objects to existent reports. IBM Cognos Query Studio Easily create simple reports with relational and dimensional data models. 2 Professional Report Author This persona has a deep understanding of Cognos tools and creating reports based on business requirements. HMTL. conditional behaviors. 26 IBM Cognos Business Intelligence V10. and leading practices of data modeling to deliver the best data models to be used in IBM Cognos solutions.2. In this book. In this book. the Professional Report Author’s name is Lynn Cope. IBM Cognos Report Studio Create professional reports with advanced features. bursting. the Modeler’s name is John Walker.1 Handbook .3.3 Modeler This persona works closely with the Business Analyst to understand the business needs and to translate them in data models. and multi-language support. advanced queries. databases. multiple objects. Apply filters and sort and change the display type to display meaningful details about the information. 3.2. IBM Cognos Event Studio Create events to monitor business performance. such as prompts. 
Persona’s needs The Professional Report Author has the following business needs: Scale to meet the needs of different types of users Quality content regardless of locale or environment Streamlined development environment Enhanced collaboration with business users Solutions that can help this persona The following solutions can help the Professional Report Author meet the business needs: IBM Cognos Business Insight Create and change dashboards to organize the information. The Modeler has a deep understanding of technology. check the execution path of the queries. IBM Cognos Connection Add data sources and connections to databases to be used with IBM Cognos Framework Manager and IBM Cognos Transformer. IBM Cognos Data Manager Create jobs to Extract. IBM Cognos Query Studio Create simple queries to check if the model is working as expected. define filters. and advanced queries. IBM Cognos Planning Create dimensional cubes (OLAP) for planning purposes. such as prompts. IBM Cognos Report Studio Create professional reports with advanced features. IBM Cognos PowerPlay® Studio Perform interactive exploration and analysis on dimensional data models. IBM Cognos Administration Manage security of the packages and data sources. Transform. multiple objects. Chapter 3. slicing and dicing the information to check if the model is working as expected. IBM Cognos Transformer Create dimensional cubes with security filters to improve performance of the Business Analytics solution. and Load data to the database data model. 
Business scenario and personas used in this book 27 .Persona’s needs The Modeler has the following business needs: Complete and consistent information Fewer iterations of models Ability to develop and change quickly Solutions that can help this persona The following solutions can help the Modeler meet the business needs: IBM Cognos Framework Manager Rapidly create relational and dimensional models (Dimensionally Modeled Relational) through a guided workflow-driven modeling process. IBM Cognos TM1 Create dimensional cubes (in memory OLAP) for planning purposes. configure data multi-language support and security filters. upgrades. printers. and downtime Solutions that can help this persona The following solutions can help the Administrator meet the business needs: IBM Cognos Administration Monitor server resources. In this book. They are consumers of reports and dashboards that were created by an Advanced Business User or a Professional Report Author. detailed reports and statistical analysis to support management decisions. styles.2. manage content. The personas described here might also perform ad-hoc analysis. content store database connection.5 Analyst This persona uses dashboards and reports when connected to the network (mobile computer or mobile phone) or when not able to access the network to provide consolidated. 3. LDAP preferences and start the services to run IBM Cognos. dispatchers.1 Handbook . The Analyst also collaborates with colleagues to provide insight about Great Outdoors business performance. security. and assuring that the IBM Cognos services running and performing properly. IBM Cognos Lifecycle Manager Manage and make the transition from prior versions of IBM Cognos to the latest version. In this book. configuring it. Persona’s needs The Administrator has the following business needs: Application installation. the Administrator’s name is Sam Carter. and set search indexes. the Analyst is named Ben Hall. 28 IBM Cognos Business Intelligence V10. 
data sources.4 Administrator This persona is responsible for installing the overall IBM Cognos solution. IBM Cognos Configuration Set server URLs. Remaining personas: The remaining personas are not advanced users.2. portlets. configuration. and life cycle Manage complex environments Visibility into processes and activities Limit costly maintenance.3. ports. distribution lists. and create additional objects to existent reports. add external data. IBM Cognos Query Studio Easily create simple reports with relational and dimensional data models. and Testing and Statistical Process Control—to display the best insights about the business performance. interactive exploration and analysis of multidimensional data into Microsoft Excel. filters. groups.Persona’s needs The Analyst has the following business needs: Analyze large or complex data sets Explore data from new perspectives and dimensions Identify relationships and trends Freedom to apply specific styles and formatting to results Solutions that can help this persona The following solutions can help the Analyst meet the business needs: IBM Cognos Business Insight Create and change dashboards to organize the information. and sorting. to apply filters. IBM Cognos Report Studio Create professional reports with advanced features. and to sort and change the display type to discover meaningful details about the information. add calculation and statistics. sort. IBM Cognos Analysis for Microsoft Excel Perform flexible. slicing and dicing the information. multiple objects and advanced queries. IBM Cognos Business Insight Advanced Easily create new reports. IBM Cognos Statistics Perform three different kinds of statistical calculations—Distribution of Data. perform interactive exploration and analysis. using calculations. IBM Cognos Analysis Studio Easily perform interactive exploration and analysis on dimensional data models. group data. Data Analysis. Chapter 3. Use the dashboards for interactive exploration and analysis. 
Business scenario and personas used in this book 29 . such as prompts. apply filters. Collaborate with insights. IBM Cognos Planning Insert and update plans and actual data. My Inbox of IBM Cognos Connection Store and open report views from previous executions of a report. IBM Cognos TM1 Insert and update plans and actual data 30 IBM Cognos Business Intelligence V10.1 Handbook . Persona’s needs The Business User has the following business needs: Access anywhere No investment in training or software Simple and intuitive interface Solutions that can help this persona The following solutions can help the Business User meet the business needs: IBM Cognos Business Insight Consume dashboards and reports to help her to make decisions and take actions based on accurate analytical data. the Business User is named Betty Black.6 Business User This persona uses dashboards and reports that have been created specifically for this persona to understand aspects of the performance for this persona’s area of the Great Outdoors company. 3.2. In this book. IBM Cognos TM1 Insert and update plans and actual data.IBM Cognos Mobile Consume reports and dashboards from a friendly interface on mobile devices. IBM Cognos Planning Insert and update plans and actual data. IBM Cognos Connection Receive scheduled reports. 31 . 2010. All rights reserved.Part 2 Part 2 IBM Cognos metadata modelling © Copyright IBM Corp. 1 Handbook .32 IBM Cognos Business Intelligence V10. 4 Chapter 4. This chapter is not intended as a replacement for formal training on IBM Cognos Business Intelligence (BI) metadata modelling. The recommended training for IBM Cognos Framework Manager is IBM Cognos Framework Manager: Design Metadata Model. 2010. 
Create reporting packages with IBM Cognos Framework Manager

This chapter provides an overview of IBM Cognos Framework Manager and illustrates several general modelling concepts through practical exercises. Because metadata modeling is the foundation of business intelligence reporting on relational data sources, it is critical that the proper training and experience be gained to ensure a successful IBM Cognos BI project.

In this chapter, we discuss the following topics:
- IBM Cognos Framework Manager overview
- Build a model with IBM Cognos Framework Manager
- Add business logic to the model
- Create dimensional objects for OLAP-style reporting
- Create and configure a package
- Apply security in IBM Cognos Framework Manager
- Model troubleshooting tips

4.1 IBM Cognos Framework Manager overview

IBM Cognos Framework Manager is the metadata model development environment for IBM Cognos BI. It is a Windows-based client tool that you can use to create simplified business presentations of metadata that are derived from one or more data sources. You can also use it to add dimensional information to relational data sources, which allows for OLAP-style queries. This type of model is known as a Dimensionally Modeled Relational (DMR) model. With IBM Cognos Framework Manager, you can publish that metadata to IBM Cognos BI in the form of a package.

In this section, we discuss items that you need to consider before beginning metadata modeling projects. This information familiarizes you with the IBM Cognos Framework Manager user interface (UI) and terminology, metadata model design, and report package delivery to IBM Cognos BI.

4.1.1 Reporting requirements and data access strategies

Before creating an IBM Cognos Framework Manager project, it is important for the modeler to understand the reporting requirements. Reviewing sample or mock reports that meet the business needs is a good start, followed by identifying which data sources contain the information that is required. This knowledge allows the modeler to make better decisions regarding data access strategies.

The modeler also needs to consider the following types of questions:
- Are those data sources appropriate for reporting? Is it a transactional system or the preferred reporting database structure, known as a star schema data warehouse or data mart?
- How fresh does the data need to be? Will your reporting occur by the hour, day, week, or month, for example?

The answers to these types of questions can dramatically affect the data access strategy that you choose. For example, if you require up-to-the-minute data in your reports, then going with the transactional database might be the only option. This choice, however, can increase the metadata modeling work drastically, because transactional systems are typically quite complex. If the intervals of data freshness are greater, then using data warehouses or data marts that are refreshed at the required interval is a better choice. IBM Cognos bases its algorithms around industry-standard star schema designs that consist of fact tables and related dimension tables.

Also ask the following questions before creating an IBM Cognos Framework Manager project: What type of business logic needs to be implemented? Are there specific calculations, filters, columns, grouping, sorting, or security requirements? These types of questions allow you to investigate what can be done at the data source level rather than in the IBM Cognos Framework Manager model. As a general rule, it is better to push more processing to the data source, because vendor databases are typically optimized for those types of operations. For example, it is better to off-load processing to the extract, transform, and load (ETL) process that populates a warehouse to avoid that processing time when running reports.

For larger warehouses that have slower response times due to the sheer volume of data, consider using some form of materialization, in which views are created with pre-aggregated results. This method can reduce response times dramatically as well, because the data is already calculated and aggregated.

You can also use OLAP sources, which can bypass the requirement for metadata modeling in IBM Cognos Framework Manager. OLAP sources provide the added bonus of dimensional analysis and reporting, which allows users to navigate through the data and to apply powerful dimensional functions.

Give special attention to planning and scope before you embark on a business intelligence project to avoid rework down the road.

4.1.2 Metadata model

Before we continue, we need to define a metadata model in the context of IBM Cognos BI. A metadata model is a collection of metadata that is imported from a database. It describes the tables, columns, and relationships in the database and is used to generate appropriate SQL requests to the database when reports or analysis are run. This metadata is published as a package to the IBM Cognos BI portal, as shown in Figure 4-1.

Figure 4-1 Metadata model workflow

In most cases, the metadata is altered in IBM Cognos Framework Manager to ensure predictable results when reporting and to meet reporting and presentation requirements. The overall goal of modeling the metadata is to create a model that provides predictable results and an easy-to-use view of the metadata for authors and analysts. The model can hide the structural complexity of underlying data sources and provide more control over how data is presented to IBM Cognos BI users. You can also choose which data to display to users and how that data is organized.

4.1.3 The IBM Cognos Framework Manager UI

Figure 4-2 shows the IBM Cognos Framework Manager UI.

Figure 4-2 IBM Cognos Framework Manager user interface

The user interface includes the following panes:
- The Project Viewer pane, by default, is on the left side of the window and provides an easy way to access all your project's objects in a tree format.
- The Project Info pane is the center pane, which is the main work area, and provides access to the project's objects through various methods. The three tabs in this pane (Explorer, Diagram, and Dimension Map) allow you to create, edit, configure, or delete objects. You use each of these tabs throughout this chapter.
- The Properties pane, by default, is located in the bottom middle of the window and allows you to configure various properties for any of the project's objects. This pane also provides a search utility (second tab) and an object dependencies utility (third tab). Simply drag an object (and its children if it has any) to the top panel, or select the object or one of its children in the top panel, and view the dependent objects in the bottom panel. This is very useful when you want to change an object and assess the impact that the change will have on other objects in the model.
- The Tools pane, by default, is located on the right side of the window and provides several useful tools. You can use it to switch the project language quickly, to view project statistics, and to perform common tasks for selected objects.

You can also detach and rearrange the Project Viewer, Properties, and Tools panes. All panes can be hidden except the Project Info pane. To restore a pane, use the View menu or use the toggles on the toolbar.

4.1.4 Reporting objects

In IBM Cognos Framework Manager, there are several objects with which you interact in either the Project Viewer or the Project Info panes. For simplicity, we examine the objects in the Project Viewer pane, shown in Figure 4-3, with the exception of relationships.

Figure 4-3 Model objects in the Project Viewer pane

The Project Viewer pane includes the following reporting objects:
- Query subjects, of the following types:
  – A Data Source query subject, shown in Figure 4-4, maps to a corresponding object in the data source and uses a modifiable SQL statement to retrieve the data. This object is identifiable by the small database icon in the top-right corner.

  Figure 4-4 Data Source query subjects

  – A Model query subject, shown in Figure 4-5, maps to existing metadata in the model.

  Figure 4-5 Model query subjects

  – A Stored Procedure query subject executes a database stored procedure to retrieve or update the data. Its icon appears the same as a Data Source query subject. These types of query subjects are beyond the scope of this book.
- A Query item, shown in Figure 4-6, is contained within a query subject and maps to a corresponding object in the data source.

Figure 4-6 Query items

- A Regular Dimension object, shown in Figure 4-7, contains descriptive and business key information and organizes the information in a hierarchy, from the highest level of granularity to the lowest, allowing for OLAP-style queries.

Figure 4-7 Regular Dimension

- A Measure Dimension object, shown in Figure 4-8, is a collection of facts for OLAP-style queries.

Figure 4-8 Measure Dimension

- A Shortcut object, shown in Figure 4-9, is a pointer to an underlying object that can act as an alias or reference.

Figure 4-9 Shortcuts
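Returning briefly to the materialization idea from section 4.1.1: a view built over pre-aggregated results lets report queries read summary rows instead of scanning every fact row. The sketch below shows the idea in miniature, using SQLite through Python purely for illustration; the table, column, and view names are invented stand-ins rather than the GOSALESDW schema, and because SQLite recomputes views on each query, a real warehouse would use its vendor's materialized view feature or an ETL-built summary table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A simplified stand-in for a warehouse sales fact table.
cur.execute("CREATE TABLE sales_fact (product_key INTEGER, quantity INTEGER, sale_total REAL)")
cur.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)",
                [(1, 2, 20.0), (1, 3, 30.0), (2, 1, 15.0)])

# Materialization in spirit: expose pre-aggregated results so that report-time
# queries touch one row per product instead of every detail row.
cur.execute("""
    CREATE VIEW sales_summary AS
    SELECT product_key, SUM(quantity) AS quantity, SUM(sale_total) AS sale_total
    FROM sales_fact
    GROUP BY product_key
""")

rows = cur.execute("SELECT * FROM sales_summary ORDER BY product_key").fetchall()
print(rows)  # [(1, 5, 50.0), (2, 1, 15.0)]
```

The response-time gain in a real warehouse comes from the summary being computed once, ahead of time, rather than per report run.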
Shortcuts are especially useful for star schema groupings.

- A Namespace object, shown in Figure 4-10, is an organizational container that also uniquely identifies the objects that it contains. For example, two different namespaces can both contain a shortcut called Products without causing a naming conflict in the model.

Figure 4-10 Namespaces

- A Folder object, shown in Figure 4-11, is an organizational container for various model objects. Query item folders are also available to organize items within a query subject.

Figure 4-11 Folder

Other objects are available, such as model filters and calculations, but we discuss those later.

The Project Info pane on the Diagram tab includes the following relationships:
- A relationship, shown in Figure 4-12, explains how the data in one query subject relates to the data in another.

Figure 4-12 Relationship

- A scope relationship, shown in Figure 4-13, exists between Measure Dimensions and Regular Dimensions to define the level at which the measures are available for reporting.

Figure 4-13 Scope relationship
You might also work with the following types of objects:
- The Data Sources folder, shown in Figure 4-14, contains the data sources that are used in the project. The data sources are definitions containing the pertinent information that IBM Cognos BI requires to connect to the underlying data sources. The data sources are defined in IBM Cognos Connection.

Figure 4-14 The Data Source folder

- The Parameter Maps folder, shown in Figure 4-15, contains parameter maps that allow for data or some other model value substitution at run time. These parameter maps are useful when trying to dynamically affect the way that the model behaves when reports are run.

Figure 4-15 The Parameter Maps folder

- The Packages folder, shown in Figure 4-16, contains packages that are published to IBM Cognos BI to make model metadata available to authors.

Figure 4-16 The Packages folder

4.2 Build a model with IBM Cognos Framework Manager

In the modeling scenario for this chapter, John Walker is the company modeler who is creating a basic sales model based on the Great Outdoors data warehouse. John's requirements are to create a reporting package for IBM Cognos BI that allows authors and analysts to query the data source for sales information by product, time, and order methods and to compare the sales figures to sales target values.
John has already looked at various report samples and interviewed users to better understand how he needs to present the data to the authors. Some authors want to perform only basic relational queries against the data source. Other authors, particularly analysts, want the ability to navigate through the data to better understand how the business is doing and where it is being affected positively and negatively. To that end, John will deliver the following packages, both based on the relational data source:
- A package for basic relational queries
- A package for OLAP-style queries

We provide the final result model for this chapter as a reference in the additional material that is supplied with this book.

4.2.1 Import metadata using Model Design Accelerator

The Model Design Accelerator is a graphical utility that is designed to guide both novice and experienced modelers through a simplified modeling process. The Model Design Accelerator applies IBM Cognos leading practices to produce single star schemas quickly. You can create multiple star schemas by using the Model Design Accelerator several times. Then, you can link the results together. You can add features to the model using standard IBM Cognos Framework Manager functionality.

The following steps describe how to import metadata using the Model Design Accelerator:

1. Open IBM Cognos Framework Manager (see Figure 4-17), and click Create a new project using Model Design Accelerator.

Figure 4-17 IBM Cognos Framework Manager welcome panel

2. Enter an appropriate project name, in this example GO Sales, and specify the location for the project, as shown in Figure 4-18, and then click OK. If the specified folder does not exist, you are prompted with a message asking if you want to create one; click OK.

Figure 4-18 New Project dialog box

3. Select the design language for the project, in this example English, and then click OK.

4. Select the GOSALESDW data source, which was already created by the IBM Cognos BI administrator, as shown in Figure 4-19, and then click Next.

Figure 4-19 Metadata wizard: Select Data Source dialog box
Modeler permissions: If given appropriate permission, modelers can also create their own data source connections, either through the Metadata wizard by clicking the New button or directly in IBM Cognos Administration by selecting Configuration, then Data Source Connections. Refer to the IBM Cognos BI Administration and Security Guide for details about how to create a data source connection.

5. In the list of objects, expand GOSALESDW, then Tables, and then select the following tables:
– GO_TIME_DIM
– SLS_ORDER_METHOD_DIM
– SLS_PRODUCT_DIM
– SLS_PRODUCT_LOOKUP
– SLS_PRODUCT_TYPE_LOOKUP
– SLS_PRODUCT_LINE_LOOKUP
– SLS_SALES_FACT

6. Click Continue.

7. The IBM Cognos Framework Manager User Guide window opens, displaying information about the Model Design Accelerator. The information in this window explains the steps to create a model using the Model Design Accelerator. You can close this window.

8. In the Model Accelerator pane, right-click the Fact Table query subject in the center of the pane, and click Rename. Type Sales Fact to rename the fact query subject, and then press Enter. The result displays, as shown in Figure 4-20.

Figure 4-20 Model Accelerator pane

9. In the Explorer tree pane, expand gosalesdw, then SLS_SALES_FACT, as shown in Figure 4-21. Select the measures that follow, and then drag those measures to the Sales Fact query subject, shown in Figure 4-22 on page 50.

Figure 4-21 Model Design Accelerator: Explorer Tree

Figure 4-22 Model Design Accelerator with query items added to Sales Fact

10. Rename New Query Subject 1 to Products.

11. In the Explorer tree pane:
a. Expand the SLS_PRODUCT_LINE_LOOKUP table, and drag the PRODUCT_LINE_EN data item into the Products query subject.
b. Expand the SLS_PRODUCT_TYPE_LOOKUP table, and drag the PRODUCT_TYPE_EN data item into the Products query subject.
c. Expand the SLS_PRODUCT_LOOKUP table, and drag the PRODUCT_NAME data item into the Products query subject.

The Relationship Editing Mode for: Products dialog box opens. See Figure 4-23. This dialog box opens because IBM Cognos Framework Manager cannot determine the relationship between the SLS_PRODUCT_LOOKUP table and the SLS_SALES_FACT table. You need to establish the relationship yourself.

Figure 4-23 Model Design Accelerator Relationship Editing dialog box

12. Ctrl-click SLS_PRODUCT_LOOKUP PRODUCT_NUMBER and SLS_PRODUCT_DIM PRODUCT_NUMBER.

13. Click the Create a Model Relationship icon in the top-left corner of the dialog box. The Modify the Relationship dialog box opens. See Figure 4-24.

Figure 4-24 Modify the Relationship dialog box

14. Click OK, and then click OK again to close the Relationship Editing Mode dialog box.

The SLS_PRODUCT_LOOKUP table has an entry for each product for each language, which results in a many-to-many relationship with the PRODUCT table. After you generate the basic model, you will eventually add a filter to filter out all non-English product names, thus creating a one-to-many relationship.

15. In the Explorer Tree, expand SLS_PRODUCT_DIM, and add the following items to the Products query subject, as shown in Figure 4-25:
– PRODUCT_KEY
– PRODUCT_LINE_CODE
– PRODUCT_TYPE_KEY
– PRODUCT_TYPE_CODE
– PRODUCT_NUMBER
– PRODUCT_IMAGE
– INTRODUCTION_DATE
– DISCONTINUED_DATE

Figure 4-25 New Relationship and added query items to the Products query subject

16. Rename New Query Subject 2 to Time.

17. In the Explorer Tree pane, expand the GO_TIME_DIM table, click DAY_KEY, and then Shift-click WEEKDAY_EN. Then drag the selected items to the Time query subject. The results display. See Figure 4-26.

Figure 4-26 New relationship and newly added query items to the Time query subject

18. Rename New Query Subject 3 to Order Methods.

19. In the Explorer tree pane, expand the SLS_ORDER_METHOD_DIM table, and add the following items to the Order Methods query subject, as shown in Figure 4-27:
– ORDER_METHOD_KEY
– ORDER_METHOD_CODE
– ORDER_METHOD_EN

Figure 4-27 New relationship and added items to the Order Methods query subject

20. Click Generate Model, and then click Yes to the message. The Model Design Accelerator creates a model based on your selections. When complete, the model is visible in the IBM Cognos Framework Manager UI. See Figure 4-28.

Figure 4-28 IBM Cognos Framework Manager with newly created model from Model Design Accelerator

4.2.2 Model organization

For IBM Cognos BI, it is recommended to model in layers. The typical approach is to have three root namespaces, which represent each of the layers:
– One for the data source objects
– One for model query subjects that are used to either model or consolidate metadata that is found in the data source query subjects
– One for the final presentation view, which typically consists of shortcuts

The namespace layers can use any naming convention that makes sense to you. You might see naming conventions such as Physical View, Business View, and Presentation View, which represent the views, or such as Foundation Objects Layer, Consolidation Layer, and Presentation Layer.
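As an aside on the language filter mentioned in the relationship steps above: a lookup table with one row per product per language multiplies join results until a filter restricts it to a single language. The sketch below uses SQLite through Python with invented rows; only the shape mirrors SLS_PRODUCT_LOOKUP, and in the actual model the filter would be defined in IBM Cognos Framework Manager rather than written as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One row per product per language, mirroring the shape of SLS_PRODUCT_LOOKUP.
cur.execute("CREATE TABLE product_lookup (product_number INTEGER, language TEXT, product_name TEXT)")
cur.executemany("INSERT INTO product_lookup VALUES (?, ?, ?)",
                [(1, 'EN', 'Trail Tent'), (1, 'FR', 'Tente Trail'),
                 (2, 'EN', 'Camp Stove'), (2, 'FR', 'Rechaud Camp')])
cur.execute("CREATE TABLE product_dim (product_number INTEGER)")
cur.executemany("INSERT INTO product_dim VALUES (?)", [(1,), (2,)])

# Unfiltered join: every product matches once per language, so rows multiply.
unfiltered = cur.execute("""
    SELECT COUNT(*) FROM product_dim d
    JOIN product_lookup l ON l.product_number = d.product_number
""").fetchone()[0]

# With a language filter, each product matches exactly one lookup row,
# restoring the one-to-many behavior the model expects.
filtered = cur.execute("""
    SELECT COUNT(*) FROM product_dim d
    JOIN product_lookup l ON l.product_number = d.product_number
    WHERE l.language = 'EN'
""").fetchone()[0]

print(unfiltered, filtered)  # 4 2
```

Without the filter, any fact joined through the lookup would be counted once per language, which is exactly the double-counting risk of the many-to-many relationship.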
The Model Design Accelerator follows the layered modeling approach and automatically creates a three-layered version of the model that we created in the previous section, as shown in Figure 4-29.

Figure 4-29 Project Viewer illustrating multiple layer approach

Physical View

The Physical View, shown in Figure 4-30, contains the data source query subjects as they were when imported from the data source. After data source query subjects are modified, extra calls to the database for metadata might be required at run time. However, you might have to make some exceptions. For example, you might choose to create a model query subject to act as an alias for a dimension table that relates back to a fact query subject.

Figure 4-30 Physical View layer

Business View

The Business View, shown in Figure 4-31, is the ideal location to implement filters, calculations, and any other business logic that you might require. It might not always be the case, but as a general rule, this is the layer to apply logic, and all query items in this layer are given user-friendly names for report authors and analysts. The Model Design Accelerator uses the consolidation methodology. This layer also acts as an insulation layer for reports: if the underlying data source changes, you can remap the Business View to the correct items in the Physical View without affecting reports.

Figure 4-31 Business View layer

Presentation View

The Presentation View, shown in Figure 4-32, typically contains star schema groupings, which are logical groupings of fact and related dimension query subject shortcuts. In our model, when more than one star schema grouping is involved, namespaces are used to contain the shortcuts, because facts can share one or more dimensions. Because namespaces provide uniqueness, a dimension shortcut with the same name can exist in multiple namespaces, allowing authors to query across multiple fact tables using a shared dimension. You can easily create separate packages for different reporting needs based on this layer.

Figure 4-32 Presentation View layer

4.2.3 Verify query item properties and relationships

After importing metadata into IBM Cognos Framework Manager, it is recommended that you verify and edit certain property settings and determine if the correct relationships are in place to meet the reporting requirements.

Examine query item properties

Object properties allow you to add additional information, such as descriptions and screen tips, and query item properties (shown in Figure 4-33) let you edit the behavior of the query items, such as configuring the data format. For example, should numeric values be displayed as currency, a percentage, or just a number with comma separators?

Figure 4-33 Query items properties

You need to examine two properties closely after import to ensure that they are set as expected:
– The Usage property
– The Regular Aggregate property

For example, as shown in Figure 4-33, SALES_ORDER_KEY was imported as an Int32 data type and had its Usage property set to Fact. This item is not a fact, but rather is a key; thus, you expect to see the Usage property set to Identifier. This issue occurs because the item is not indexed in the database. If this key is to be used in a relationship or filter, it is better that the field is indexed in the database. Examining this setting allows you to have relevant conversations with the database administrator.

The following rules explain how the Usage property is applied during import:
– Numeric, date, datetime, time-interval, or non-indexed columns are set as Facts.
– Key, index, or any indexed columns are set as Identifiers.
– Strings and BLOBs are set as Attributes.

The Regular Aggregate property for numeric facts (measures) describes how measures should be aggregated and defaults to Sum. Sum is correct for most measures (such as SALE_TOTAL and QUANTITY); however, in some instances, you might want to set the property to Average, Minimum, Maximum, and so on.

Another set of important properties to take advantage of is the Prompt Info properties, shown in Figure 4-34.

Figure 4-34 Prompt Info properties

You can specify the type of prompt that you want generated in the studios or the default prompt type for IBM Cognos Report Studio. The default is for the server to determine the prompt type based on data type. The remaining properties are used to improve performance by causing automatic retrieval through indexes while still displaying user-friendly selection values.

Here is a brief description of the remaining properties:
– Cascade on Item Reference is for cascading prompts in Report Studio, where, for example, the list of Product Type choices is restricted to those within a selected Product Line.
– Use Item Reference identifies the default value that a manually created Report Studio prompt uses in the query's filter. If this field is left blank, it defaults to the values that are returned by the query item to which it belongs. For example, for Query Studio to display ORDER_METHOD_EN values but use ORDER_METHOD_CODE in the query's filter, set the Use Item Reference property in ORDER_METHOD_EN to ORDER_METHOD_CODE.
– Display Item Reference identifies the default value that a manually created Report Studio prompt displays for a particular query item. For example, for a Report Studio generated prompt on ORDER_METHOD_EN to display a list of ORDER_METHOD_EN names but retrieve data through the ORDER_METHOD_CODE, set the Display Item Reference property for ORDER_METHOD_CODE to ORDER_METHOD_EN.
– Filter Item Reference identifies the value that an IBM Cognos generated prompt uses to filter a query, as shown in Figure 4-35. For example, to see a list of ORDER_METHOD_EN names while using ORDER_METHOD_CODE in a prompt, set the Filter Item Reference property to ORDER_METHOD_CODE.

Figure 4-35 Filter Item Reference property

Examine relationships

Relationships are maintained in the Object Diagram or Context Explorer. As a quick side topic, Context Explorer, shown in Figure 4-36, is another useful UI element in IBM Cognos Framework Manager.

Figure 4-36 Context Explorer
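Before moving on, the Prompt Info behavior described above, displaying one value while filtering on another, can be mimicked outside of IBM Cognos. In this sketch (SQLite through Python; the order-method rows are invented), the user-facing list pairs each friendly name with its code, and the actual query filters on the code, which is typically the indexed column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE order_method_dim (order_method_code INTEGER, order_method_en TEXT)")
cur.executemany("INSERT INTO order_method_dim VALUES (?, ?)",
                [(1, 'Fax'), (2, 'Telephone'), (3, 'Web')])

# What the prompt displays to the user: friendly names paired with codes.
choices = cur.execute(
    "SELECT order_method_en, order_method_code FROM order_method_dim ORDER BY order_method_en"
).fetchall()

# The user picks 'Web'; the query filters on the code behind that choice,
# so the database can satisfy the filter through an index on the code column.
picked_code = dict(choices)['Web']
row = cur.execute(
    "SELECT order_method_en FROM order_method_dim WHERE order_method_code = ?",
    (picked_code,)
).fetchone()
print(row[0])  # Web
```

This is the same trade the Display Item Reference and Filter Item Reference properties make: readable values in the prompt, index-friendly values in the generated filter.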
You can launch this window by selecting one or more query subjects, right-clicking one of the selected items, and then clicking Launch Context Explorer, as shown in Figure 4-36 on page 63. You can edit items directly in this window. For larger models, this feature is useful when working with a subset of the model.

Now, back to relationships. When verifying relationships, you must ensure that the appropriate relationships exist to meet your reporting needs, and you must decide if you require optional or mandatory cardinalities:
– Mandatory cardinality is represented by a 1, as in 1..1 or 1..n, and generates an inner join in the SQL.
– Optional cardinality is represented by a 0, as in 0..1 or 0..n, and generates an outer join in the SQL. Optional cardinalities require more processing but might be needed to return the desired results.

Cardinality is used by IBM Cognos BI to determine which query subjects are facts and which are dimensions in the context of a query. In short, fact query subjects have only 1..n or 0..n cardinalities attached, and dimension query subjects have only 1..1 or 0..1 cardinalities attached. This determination is important, in particular, when querying from multiple fact tables through a shared dimension. By identifying which query subjects are facts, IBM Cognos BI can aggregate the facts properly and not lose records from either fact table. It is important to model as a star schema so that there is no ambiguity about the nature of a query subject.
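The join consequence of mandatory versus optional cardinality is easy to see with two toy tables (SQLite through Python; the rows are invented): an inner join, as generated for 1..1 or 1..n, drops dimension members that have no facts, while an outer join, as generated for 0..1 or 0..n, keeps them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE product_dim (product_key INTEGER, product_name TEXT)")
cur.executemany("INSERT INTO product_dim VALUES (?, ?)", [(1, 'Tent'), (2, 'Lantern')])
cur.execute("CREATE TABLE sales_fact (product_key INTEGER, sale_total REAL)")
cur.execute("INSERT INTO sales_fact VALUES (1, 100.0)")  # no sales for the Lantern

# Mandatory cardinality -> inner join: unsold products disappear from the result.
inner = cur.execute("""
    SELECT d.product_name, SUM(f.sale_total)
    FROM product_dim d
    JOIN sales_fact f ON f.product_key = d.product_key
    GROUP BY d.product_name
    ORDER BY d.product_name
""").fetchall()

# Optional cardinality -> outer join: unsold products remain, with NULL totals.
outer = cur.execute("""
    SELECT d.product_name, SUM(f.sale_total)
    FROM product_dim d
    LEFT OUTER JOIN sales_fact f ON f.product_key = d.product_key
    GROUP BY d.product_name
    ORDER BY d.product_name
""").fetchall()

print(inner)  # [('Tent', 100.0)]
print(outer)  # [('Lantern', None), ('Tent', 100.0)]
```

Which behavior is correct depends on the report: a sales summary may be fine without the Lantern, but an inventory-coverage report needs the outer join so that zero-sales products still appear.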
To edit a relationship, double-click the relationship line in the Diagram pane or the Context Explorer to open the Relationship Definition dialog box, as shown in Figure 4-38.

Figure 4-38 Relationship Definition dialog box

Here, you can change the cardinality on either end of the relationship, change the query items the join is based on, or create a compound definition with additional business logic by editing the expression located at the bottom of the dialog box.

4.2.4 Import additional metadata

Our modeler, John Walker, has created a model that contains sales facts along with its related dimensions (Products, Time, and Order Methods). However, this model also requires sales target information. Products and Time are both shared (conformed) dimensions for this fact table; therefore, using the Model Design Accelerator does not make sense in this case. So, he will use the manual import process to bring in this additional table as metadata in the model.

Importing SAP BW metadata: Just as a quick side note for those who want to import SAP BW metadata, in IBM Cognos Framework Manager, each SAP BW time hierarchy is depicted as an individual level. When a structure is imported into IBM Cognos Framework Manager, time-dependent hierarchies now reflect hierarchy or structure changes automatically. IBM Cognos Report Studio users can use these structures to report on and compare levels that are valid for a specific time period.

To import additional metadata manually, follow these steps:

1. In the Project Viewer, right-click the gosalesdw namespace under the Physical View, and then click Run Metadata Wizard as shown in Figure 4-39.

Figure 4-39 Run Metadata wizard

2. Ensure that Data Sources is selected, and click Next.
3. Select the GOSALESDW data source, and click Next.
4. Locate and expand the Tables folder, select SLS_SALES_TARG_FACT as shown in Figure 4-40, and click Next.

Figure 4-40 Metadata wizard: Select Objects dialog box

5. Select Between each imported query subject and all existing query subjects in the model, as shown in Figure 4-41.

Figure 4-41 Metadata wizard: Generate Relationships dialog box

Let us take a moment to discuss this dialog box. In the first section, you can request to create relationships among either the objects being imported, the objects being imported and the existing objects in the model, or both. When importing several additional tables that have relationships to each other and to objects already in the model, select the "Both" option in the wizard. By selecting this option, you do not need to create the relationships manually after import, or you can edit specific relationships after import to meet your needs.

In the second section, in general, use the "Use primary and foreign keys" option for relational data from the same database. Typically, the database will have primary and foreign keys defined. The other two options in this section are often used when importing from a different database.

In the third section, by default, an import converts outer joins to inner joins for performance reasons. You can choose to generate outer joins if that meets your business needs. Another option is to enable or disable fact detection. If this option is disabled, all relationships will be 1..1 to 1..1.

6. Click Import, and then click Finish.
7. Double-click the gosalesdw namespace, and click the Diagram tab to view the new query subject in the diagram, shown in Figure 4-42, and to examine any relationships.

Figure 4-42 New Sales Target query subject with no relationships defined

Notice that no relationships were generated. In this case, the model should have the following relationships for sales targets:
– One to the time dimension
– One to the product dimension

The reason there is no relationship to the product dimension is that PRODUCT_KEY is not a primary key in the table, and it is not indexed in the database. The same issue applies to the time dimension: MONTH_KEY is not a primary key, and it is not indexed in the database.
You can create the relationships manually in IBM Cognos Framework Manager, but the keys should be indexed in the database to improve performance.

8. In the Project Viewer, select MONTH_KEY from GO_TIME_DIM, and select MONTH_KEY from SLS_SALES_TARG_FACT.
9. Right-click one of the selected items, point to Create, and then click Relationship as shown in Figure 4-43.

Figure 4-43 Create Relationship

10. The Relationship Definition dialog box opens. The relationship is configured as desired, as shown in Figure 4-44: the dimension is on the 1..1 side, and the fact is on the 1..n side. Click OK.

Figure 4-44 Relationship Definition dialog box

The Diagram pane now shows the new relationship, as shown in Figure 4-45.

Figure 4-45 New relationship in Diagram pane

11. Using the same process, create a relationship between SLS_PRODUCT_DIM and SLS_SALES_TARG_FACT on PRODUCT_TYPE_KEY, as shown in Figure 4-46.

Figure 4-46 New relationship in Diagram pane

Now that the new sales target object is imported and relationships are created, you need to update the Business View and Presentation View manually.

12. In the Project Viewer, right-click the Business View namespace, point to Create, and then click Query Subject.
13. In the Name box, type Sales Target Fact, and click OK. A Query Subject Definition window for a new model query subject opens, shown in Figure 4-47.
14. Under Available Model Objects, expand Physical View, gosalesdw, SLS_SALES_TARG_FACT, drag SALES_TARGET to the Query Items and Calculations pane, and then click OK.

Figure 4-47 Query Subject Definition window for new Sales Target Fact model query subject

The Presentation View is organized to have separate namespaces for each fact shortcut and its related dimensions. You can use the star schema groupings feature to create the grouping for the new Sales Target Fact query subject.

15. Click the new Sales Target Fact query subject in the Business View namespace, Ctrl-click Products and Time (these are related dimensions), right-click one of the selected items, and then click Create Star Schema Grouping.
16. In the Create Star Schema Grouping dialog box, shown in Figure 4-48, type Sales Target in the Namespace field, and then click OK.

Figure 4-48 Create Star Schema Grouping dialog box

17. Drag the Sales Target namespace to the Presentation View.
18. Right-click the Presentation View namespace, point to Create, and then click Namespace.
19. Name the namespace Sales, and then drag the Sales Fact, Products, Time, and Order Methods shortcuts into the Sales namespace, as shown in Figure 4-49.

Figure 4-49 Presentation View with star schema groupings

To this point, we used the Model Design Accelerator to create one star schema grouping, examined query item properties and relationships, and imported additional metadata, which was then organized into an additional star schema for presentation purposes. Next, we describe using techniques to validate the model and the data itself.

4.2.5 Verify the model

You can verify independent model objects and their children, or you can verify the entire model by verifying the model's root namespace, in this case called Model. To verify the model:

1. Right-click the root model namespace, and click Verify Selected Objects. The Verify Model - Options dialog box opens, as shown in Figure 4-50.

Figure 4-50 Verify Model - Options dialog box

In this window, you can select the types of items that you want to include in the validation process.
In this example, we verify the entire model.

2. Click Verify Model. In this case, the model is quite simple and presents no issues, as shown in Figure 4-51.

Figure 4-51 Verify Model Results dialog box

3. Click Close.

In other instances where issues are found, you can choose to repair the objects, if applicable, or open the items in the model to edit their definitions to resolve any issues. To assist you in understanding the nature of the highlighted issue and some possible actions, you are provided with links to the appropriate sections of the documentation.

You can verify the objects in the model, and you can also use Model Advisor, which is an automated tool that applies rules based on current modeling guidelines and identifies areas of the model that you need to examine. To run the Model Advisor, use the following steps:

1. Right-click the namespace that you want to examine. In this example, we use gosalesdw.
2. Click Run Model Advisor. The Model Advisor dialog box opens, as shown in Figure 4-52. Here you can select or deselect items to be tested.

Figure 4-52 Model Advisor dialog box, Options tab

3. Click Analyze. The results display as shown in Figure 4-53.

Figure 4-53 Model Advisor results

In the results, you can identify items that fit certain categories and click the icon under the Action column to view the object or objects in the Context Explorer. You can also click the more information links to read the documentation regarding specific modeling recommendations.

4. Click Close.

4.2.6 Verify the data

It is also critical that you test each of your query subjects, alone and in conjunction with other query subjects, to ensure that authors will receive predictable results. In this example, our modeler John Walker tests Sales Targets against the Time dimension and tests the Products dimension against Sales to ensure that the numbers come back correctly. John Walker renamed all the query items in the Business View to user-friendly names and organized them in a logical manner.

To verify the data:

1. In the Project Viewer, in the Business View, select Year (formerly CURRENT_YEAR), select Month (formerly MONTH_EN) from the Time dimension, and select Sales Target (formerly SALES_TARGET) from Sales Target Fact.
2. Right-click one of the selected items, and click Test.
3. In the Test Results dialog box, select Auto Sum to emulate the default IBM Cognos BI behavior, which is to auto group and summarize in the studios. Then, click Test Sample, and examine the results, as shown in Figure 4-54.

Figure 4-54 Test Results dialog box
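One pitfall that this kind of data validation catches is double counting: when a fact stored at month grain is joined to a time table stored at day grain, the repeating month key multiplies the fact. A small Python sketch of the problem, and of what a Group By determinant conceptually does — all table contents here are invented for illustration:

```python
# Time "dimension" at day grain: MONTH_KEY repeats once per day.
time_dim = [
    {"day_key": 20100101, "month_key": 201001},
    {"day_key": 20100102, "month_key": 201001},
    {"day_key": 20100103, "month_key": 201001},
]
# Fact stored at month grain.
sales_target = [{"month_key": 201001, "sales_target": 1000.0}]

def join_sum(dim_rows, fact_rows):
    # Sum the fact measure after joining on month_key.
    return sum(
        f["sales_target"]
        for d in dim_rows
        for f in fact_rows
        if d["month_key"] == f["month_key"]
    )

# Naive join: the single fact row matches all three day rows.
inflated = join_sum(time_dim, sales_target)   # triple counted

# What a Month determinant with Group By achieves, conceptually:
# collapse the dimension to one row per month_key before joining.
month_grain = list({d["month_key"]: {"month_key": d["month_key"]}
                    for d in time_dim}.values())
correct = join_sum(month_grain, sales_target)

print(inflated, correct)  # 3000.0 1000.0
```

The real mechanism in IBM Cognos BI is a generated Group By clause, not a Python dictionary, but the effect on the totals is the same.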
John Walker knows that the overall total for Sales Target for all time and products is 4,205,368,540 (over four billion), yet the results show several rows all adding up to well over four billion dollars. This issue is the result of double counting that occurs because the Month Key in the Time dimension repeats once for every day in the month. Because there is a relationship on the Month Key, determinants must be defined for the underlying query subject in the Physical View layer to ensure that this double counting does not occur. We discuss this issue in the next section, 4.2.7, "Specify determinants" on page 84.

In this next scenario, John Walker conducts a similar test against Products:

4. In the Project Viewer, in the Business View namespace, select Product Line (formerly PRODUCT_LINE_EN), select Product Name (formerly PRODUCT_NAME) from Products, and select Revenue (formerly SALE_TOTAL) from Sales Fact.
5. Right-click one of the selected items, and click Test.
6. In the Test Results dialog box, select Auto Sum, and click Test Sample. The results display as shown in Figure 4-55.

Figure 4-55 Test results dialog box

7. Click Close.

Again, these results are questionable, because overall summaries in the studios double count Revenue values once for each product language in the data source. This issue is resolved by applying a filter in 4.3, "Add business logic to the model" on page 89.

Validating the data is an important process. Validate your data often and as new items are imported or modeled to ensure predictable results.

4.2.7 Specify determinants

As discussed in the previous section, double counting can occur in scenarios where a relationship based on a key that is not unique and repeats in the data is used in a query. Specifying determinants resolves this issue. Determinants are a feature of IBM Cognos BI that are typically used to provide control over granularity when aggregating. They are required for dimensions that are connected to facts at levels of granularity that have repeating keys. You can also use them to prevent the distinct clause on unique keys, for BLOB data types in the query subject, and to improve performance on dimensionally modeled relational (DMR) metadata. In this section, we discuss double counting, which is the most common of the issues. For more detailed information about determinants, see the IBM Cognos Framework Manager User Guide.

To specify determinants to prevent double counting:

1. In the Project Viewer, in the Physical View namespace, double-click the GO_TIME_DIM query subject, and then click the Determinants tab, as shown in Figure 4-56.

Figure 4-56 Query Subject Definition dialog box, Determinants tab

During import, only one determinant was detected, based on the table's primary key. Because this key is unique, all items in the query subject are an attribute of the determinant key. However, there are other levels in this data, such as the MONTH_KEY, that require a group by clause to prevent double counting of facts. To that end, we create the appropriate determinants for this query subject and apply the Group By setting.

2. Right-click pk, click Rename, type Day, and then press Enter.
3. Under the Determinants pane, click Add. New Determinant displays below Day in the Determinants pane.
4. Right-click New Determinant, click Rename, type Year, and then press Enter.
5. With the focus still on Year, drag CURRENT_YEAR from the Available items pane to the Key pane, and select the Group By check box beside Year.
6. Click the Up Arrow key on the right to move Year above Day.
7. Repeat the steps to create a Quarter determinant, with Key = QUARTER_KEY, and select the Group By check box beside Quarter.
8. With the focus still on Quarter, drag CURRENT_QUARTER into the Attributes box. Attributes are any items that are associated with the determinant key.
9. Repeat the steps again to create a Month determinant, with Key = MONTH_KEY, and select the Group By check box beside Month.
10. With the focus still on Month, drag the following items into the Attributes box:
– CURRENT_MONTH
– MONTH_NUMBER
– MONTH_EN

When used in a report, the determinant configuration is implemented in the generated SQL. For example, if CURRENT_MONTH is used in a report, you find a Group By clause on the MONTH_KEY in the SQL.
Figure 4-57 shows the results.

Figure 4-57 Determinants tab with multiple determinants specified

12. Click OK.

To test this change, conduct the same test from the previous section to see if the correct Sales Target values are returned.

13. In the Project Viewer, in the Business View, select Year (formerly CURRENT_YEAR), select Month (formerly MONTH_EN) from the Time dimension, and select Sales Target (formerly SALES_TARGET) from Sales Target Fact.
14. Right-click one of the selected items, and click Test. In the Test Results dialog box, select Auto Sum.
15. Click Test Sample, and examine the results, shown in Figure 4-58.

Figure 4-58 Test Results after determinants are specified

The values now appear correctly, and double counting is prevented.

16. Click Close.

This model also includes an SLS_PRODUCT_DIM dimension that has a relationship on PRODUCT_TYPE_KEY, which is also a repeating, non-unique key, just like the MONTH_KEY in GO_TIME_DIM. If you are feeling adventurous, try applying determinants for the SLS_PRODUCT_DIM query subject. The results should display as shown in Figure 4-59.

Figure 4-59 Determinants specified for SLS_PRODUCT_DIM

4.3 Add business logic to the model

Again, typically business logic such as filters and calculations is applied in the Business View layer. However, there are instances where it makes sense to implement items in the Physical View, where the cost of performance and maintenance is low.

One example of this is the requirement we saw earlier to apply a filter on the SLS_PRODUCT_LOOKUP query subject, which had multiple rows for the same product to support multiple languages. Although adding this filter goes against the general rule of thumb, the filter must be applied a level lower than the Business View layer, because the filter is required only when Product Name is included in the report. For example, does it make sense to apply a product name language filter when only querying Product Line? If the filter were placed in the Products model query subject in the Business View, the filter would be applied in every instance an item from that query subject was used, regardless of whether it was required. The performance hit of scanning an extra table unnecessarily in such cases is greater than the performance hit that can occur from an additional metadata call to the database.

Another example might be the requirement to implement aliases in the Physical View to control query paths between objects. If model query subjects are used to create these aliases or to consolidate multiple underlying query subjects, then applying filters and calculations on these query subjects makes sense, because they are not data source query subjects.

4.3.1 Add filters to the model

Filters come in two forms in IBM Cognos Framework Manager:

– Embedded filters are created within query subjects, and their scope is restricted to that query subject. They are appropriate when the filter is intended for just one query subject or dimension. Embedded filters can be converted to stand-alone filters after they are created.
– Stand-alone filters are filters available across the model. They are appropriate when required in multiple query subjects or dimensions, or to make commonly used filters readily available for authors.

The first filter John Walker applies is a filter on a data source query subject. In the following steps, we apply an embedded filter to the SLS_PRODUCT_LOOKUP data source query subject to filter on English values:

1. In the Project Viewer, in the gosalesdw namespace under the Physical View, double-click SLS_PRODUCT_LOOKUP.
2. Click the Filters tab, and then click Add.
3. In the Name box, type Language Filter. In the Available Components pane, double-click PRODUCT_LANGUAGE.
4. Click in the Expression Definition pane at the end of the expression, and then type = 'EN'. The result displays as shown in Figure 4-60.

Figure 4-60 Filter Definition dialog box
5. Click OK. The results display as shown in Figure 4-61.

Figure 4-61 Embedded filter

Notice the Usage column beside the Name column. Each filter has a Usage setting with the following options:

– Always: The filter is applied in all instances, regardless of whether the filtered query item is in the query.
– Design Mode Only: This option limits the amount of data that is retrieved when testing in IBM Cognos Framework Manager or at report design time (when authoring reports in IBM Cognos Query Studio, IBM Cognos Report Studio, and IBM Cognos Business Insight Advanced).
– Optional: The filter is not mandatory, and users can choose to enter a filter value or leave it blank. (This option applies only to filters that use a prompt value or macro.)

6. Click the Test tab, and click Test Sample. The results display as shown in Figure 4-62.

Figure 4-62 Test results with filter applied

7. Click OK.

We conduct the same test that we conducted in the previous section between Products and Sales Fact in the Business View to ensure that the results are as expected.

8. In the Project Viewer, in the Business View namespace, select Product Line (formerly PRODUCT_LINE_EN), select Product Name (formerly PRODUCT_NAME) from Products, and select Revenue (formerly SALE_TOTAL) from Sales Fact.
9. Right-click one of the selected items, and click Test. In the Test Results dialog box, select Auto Sum, and then click Test Sample. The results display as shown in Figure 4-63.

Figure 4-63 Test results dialog box

The overall summary totals in the studios for Product Name are now accurate and are not double counted for each product name language in the data source.

10. Click Close.

We illustrate an example of a stand-alone filter in 4.3.3, "Make the model dynamic using macros" on page 97.

4.3.2 Add calculations to the model

You can create calculations to provide report authors with values that they regularly use. For example, you might want to include a product break even value as one of the measures in the Sales Fact query subject, in the form of Quantity * Unit Cost. Calculations can use query items, parameters, and functions.

There are two types of calculations:

– If you want to create a calculation specifically for one query subject or dimension, you can embed the calculation directly in that object. For query subjects, this calculation can be done for either data source query subjects or model query subjects. However, it is recommended that you apply calculations in model query subjects wherever possible. This allows for better maintenance and change management.
– Create a stand-alone calculation when you want to apply the calculation to more than one query subject or dimension. Stand-alone calculations are also valuable if you need to do aggregation before performing the calculation. This aggregation can be accomplished by changing the stand-alone calculation's Regular Aggregate property to Calculated.

To add an embedded calculation to a model query subject:

1. In the Project Viewer, in the Business View, double-click Sales Fact.
2. Click Add in the bottom-right corner.
3. In the Name box, type Break Even Point.
4. Under Available Components, click the Model tab, and then double-click Quantity in the Available Components pane. Click the Functions tab, expand Operators, and then double-click the multiplication operator (*). Notice there is a description of the function in the Tips pane, as shown in Figure 4-64. Then, on the Model tab, double-click Unit Cost. The following results display:

[Business View].[Sales Fact].[Quantity] * [Business View].[Sales Fact].[Unit Cost]

Figure 4-64 Calculation Definition dialog box

5. Click Test Sample (the blue triangle button above the Name box) to verify that the calculation works.
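As an aside on the two calculation types, aggregation order matters: summing a row-level product is not the same as multiplying aggregated operands, which is the behavior a stand-alone calculation gives you when its Regular Aggregate property is set to Calculated. A rough Python sketch, with invented numbers:

```python
# Invented fact rows with Quantity and Unit Cost.
rows = [
    {"quantity": 10, "unit_cost": 2.0},
    {"quantity": 5,  "unit_cost": 4.0},
]

# Embedded calculation: computed per row, then summed by the query.
sum_of_products = sum(r["quantity"] * r["unit_cost"] for r in rows)

# "Calculated" aggregation: aggregate each operand first, then calculate.
product_of_sums = (sum(r["quantity"] for r in rows)
                   * sum(r["unit_cost"] for r in rows))

print(sum_of_products, product_of_sums)  # 40.0 90.0
```

For a measure like Break Even Point, the row-level result is usually what is wanted; the aggregate-then-calculate form matters for values such as ratios, where calculating on pre-aggregated operands gives the meaningful total.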
Figure 4-65 Break Even Point calculation 4. In this example. Macros are enclosed by the number sign (#) character.3 Make the model dynamic using macros You can modify query subjects and other model properties to control dynamically the data that is returned using a combination of session parameters. Click OK. A session parameter returns session information at run time (for example. mapping a set of keys (source) to a set of substitution values (target). we show one example that incorporates all three elements in an embedded filter. we import a parameter map. A parameter map is a two-column table. The parameter map substitutes a user’s runLocale session parameter with a language code value found in the database. These items can be used to return data dynamically from specific columns or rows or even from specific data sources. and macros.3. that are to be evaluated at run time. This substitution is wrapped in a macro that also encloses the substitution value in single quotation marks because the filter expects a string value. such as EN for English or FR for French. Create reporting packages with IBM Cognos Framework Manager 97 .7. parameter maps. A macro is a fragment of code that you can insert within filters. and so on. calculations. The new calculated query item displays in the Sales Fact model query subject as shown in Figure 4-65. properties. Chapter 4. runLocale or account. Let us quickly examine each piece.UserName). Authors and analysts want to be able to see product names in their language based on their local settings. For the scope of this book. we alter the filter that was created for the SLS_PRODUCT_LOOKUP query subject. To alter the filter. For this exercise. follow these steps: 1.txt. click Browse. You can type in the values if you support only a small set of languages. 98 IBM Cognos Business Intelligence V10. 2. Click OK. 3. and then click Next. click Language_lookup. Click Finish. 4. In the Name box. and then click Open. 
The same mapping applies for other languages and their locales. next to the Filename box. point to Create. The Create Parameter Map wizard opens. type Language_lookup. Figure 4-66 Parameter Map values Note that en-us (and all other English variants) map to EN. we use a text file that has the mappings already entered. or you can import the values from a file. Click Import File. Navigate to <IBM Cognos BI install location>\webcontent\ samples\models. You can also base the parameter map on query items within the model. In the Project Viewer. 5. Then.1 Handbook . right-click Parameter Maps.To implement a macro to change a filter dynamically at run time. and then click Parameter Map. The values in the file are imported as shown in Figure 4-66. and then double-click runLocale to add it to the expression. Create reporting packages with IBM Cognos Framework Manager 99 . You can also find this function in the Functions folder under Available Components.) in the Source column for the Language Filter. and then expand Parameter Maps. double-click the SLS_PRODUCT_LOOKUP query subject. remove the EN portion of the expression. and then type sq(. The macro now requires a value to pass to the parameter map for substitution.Place the cursor right after the first macro tag (#). expand Session Parameters. and then click the ellipsis (. 11. and place the cursor after the equal sign (=).. 12. Click the Filters tab. which is the function for a single quotation mark.Double-click Language_lookup to add it to the expression as shown in Figure 4-67. Use the following syntax: [gosalesdw]. indicated by a red squiggly underline. 10.6. In this case.[SLS_PRODUCT_LOOKUP]. Figure 4-67 Expression definition with macro Notice that the parameter map is enclosed in the macro tags (#) automatically. 9..Under Available Components. In the Expression definition pane. click the Parameters tab. Chapter 4. Under Available Components. 
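How the three pieces combine at run time can be sketched in Python — a conceptual stand-in for Framework Manager's macro evaluation, not real Cognos code; the dictionary and helper names below are invented for illustration:

```python
# Hypothetical stand-ins for the pieces Framework Manager evaluates at run time.
language_lookup = {  # parameter map: runLocale (source) -> language code (target)
    "en": "EN", "en-us": "EN", "fr": "FR", "fr-fr": "FR",
}

def sq(value):
    # The sq() macro function wraps a value in single quotation marks,
    # escaping any embedded quotes by doubling them.
    return "'" + str(value).replace("'", "''") + "'"

def resolve_filter(run_locale):
    # Conceptual expansion of the filter built in the steps that follow:
    #   [gosalesdw].[SLS_PRODUCT_LOOKUP].[PRODUCT_LANGUAGE]
    #       = #sq($Language_lookup{$runLocale})#
    code = language_lookup[run_locale.lower()]
    return ("[gosalesdw].[SLS_PRODUCT_LOOKUP].[PRODUCT_LANGUAGE] = "
            + sq(code))

print(resolve_filter("en-US"))
# [gosalesdw].[SLS_PRODUCT_LOOKUP].[PRODUCT_LANGUAGE] = 'EN'
```

A French user with runLocale fr-fr would get `= 'FR'` from the same model, which is the whole point: one filter definition, many locales.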
Now we alter the filter to use the parameter map:

6. In the gosalesdw namespace under the Physical View, double-click the SLS_PRODUCT_LOOKUP query subject.
7. Click the Filters tab, and then click the ellipsis (...) in the Source column for the Language Filter. The Filter Definition window opens.
8. In the Expression definition pane, remove the EN portion of the expression, and place the cursor after the equal sign (=).
9. Under Available Components, click the Parameters tab, and then expand Parameter Maps.
10. Double-click Language_lookup to add it to the expression, as shown in Figure 4-67. Notice that the parameter map is enclosed in the macro tags (#) automatically.

Figure 4-67 Expression definition with macro

The macro now requires a value to pass to the parameter map for substitution. In this case, the runLocale session parameter is passed to the parameter map.

11. Under Available Components, expand Session Parameters, and then double-click runLocale to add it to the expression. Use the following syntax:

[gosalesdw].[SLS_PRODUCT_LOOKUP].[PRODUCT_LANGUAGE] = #$Language_lookup{$runLocale}#

This syntax still shows that there is an error, indicated by a red squiggly underline. The value that the parameter map returns needs to be wrapped in single quotation marks.

12. Place the cursor right after the first macro tag (#), and then type sq(, which is the function for a single quotation mark. You can also find this function in the Functions folder under Available Components.
13. Place the cursor just before the last macro tag (#), and then type ). The final syntax displays as shown in Figure 4-68.

Figure 4-68 Macro expression for Language Filter

Notice the results in the Tip pane. This pane shows what the macro resolves to based on the current runLocale. In this case, it is en, which is substituted for EN in the filter.

14. Click OK, click the Test tab, and then click Test Sample to ensure that all values come back in your local language.

4.4 Create dimensional objects for OLAP-style reporting

The IBM Cognos Framework Manager product allows you to create dimensionally modeled relational (DMR) models, which allow for navigational functionality and access to dimensional functions in studios that support dimensional functions, such as IBM Cognos Report Studio and IBM Cognos Business Insight Advanced. DMR refers to the dimensional information that a modeler supplies for a relational data source to allow for OLAP-style queries.

Dimensional information is defined through the following objects:
– Regular Dimensions
– Measure Dimensions
– Scope Relationships

A dimensionally modeled layer can be applied to any metadata in star schema format. When your metadata is in star schema format, you can provide hierarchy information to dimensions and measure scope for each Regular Dimension created.

4.4.1 Create Regular Dimensions

Regular Dimensions consist of one or more user-defined hierarchies. Each hierarchy consists of the following components:
– Levels
– Keys
– Captions
– Attributes

Level information is used to roll up measures accurately when performing queries or analyses. Regular Dimensions require that each level have a key and caption specified, and that the caption be a string data type. These items are used to generate members in the studios data trees (where applicable) and retrieve the members at run time.

The following steps show one example of how to create a Regular Dimension. In this case, we model a Time dimension.

1. In the Project Viewer, create a new namespace under Model called Dimensional View.
2. Right-click Dimensional View, point to Create, and then click Regular Dimension. The Dimension Definition window opens.
3. In the Available items pane, expand Business View, Time, and then drag Year (formerly CURRENT_YEAR) into the Hierarchies pane.
4. Right-click the top Year in the hierarchy column, click Rename, type Time, and then press Enter. Rename Year(All) to Time (All).
5. If needed, rename CURRENT_YEAR to Year.
expand Business View Time.Edit the expression to display as follows: cast([Business View]. In the Name box. In the Hierarchies pane. click the Year level in the bottom pane.. 9.[Time]. Then. The Year level now has a business key and member caption assigned as shown in Figure 4-71.[Year].) in the Role column. and then click Add in the bottom-right corner. char(10)) 104 IBM Cognos Business Intelligence V10. select _memberCaption as the role. and change the expression as follows: cast([Business View].Add a new item for this level called Quarter Caption with the following expression.12.Assign the _memberCaption role to Quarter Caption. because it has no parent keys. Notice that the _businessKey role is already assigned to Day Key in the bottom pane. 20.[Quarter (numeric)].Drag Month to the bottom-right pane. 13. and rename it to Day. because Day Key is an identifier as it is the primary key in the underlying table. drag Quarter Key under the Year level in the Hierarchies pane.. then leave this check box clear.1 Handbook . set it as the _memberCaption.Rename the level to Quarter.From the Available items pane.Drag Day Date to the bottom-right panel. 14. If they are.[Time]. 17. and then click Unique Level as shown in Figure 4-72. 19. Figure 4-72 Unique Level check box The Unique Level check box indicates that the keys in the levels above the current level are not necessary to identify the members in a level. and then assign a Role of _businessKey to Quarter Key in the bottom pane. and then assign the _businessKey role to Month Key in the bottom pane. rename it to Month. Note that Quarter (numeric) was formerly CURRENT_QUARTER.) for the Source column for Day Caption to edit its definition. and then rename it to Day Caption.Click the ellipsis (. char(1)) 15. 18.Drag Day Key below the Month level in the Hierarchies pane. 16.Drag Month Key below the Quarter level in the Hierarchies pane.. and then select the Unique Level check box.[Time]. 
The top level does not need this setting.[Day Date]. cast([Business View]. If your business keys. For these types of functions to work. Create reporting packages with IBM Cognos Framework Manager 105 . the order of the data must be correct and consistent.21. Figure 4-73 Time dimension hierarchy completed For some dimensions. captions. or attributes are sortable so that there is a logical order to the data. For example. select Unique Level as shown in Figure 4-73. you can use the Member Sort feature for Regular Dimensions to ensure the correct structure for you data. ensure specific sorting of the data in all scenarios to take advantage of dimensional functions that navigate the data.Click OK. Chapter 4. such as the Time dimension. you might want to use the Lag function. which allows you to view the current month and the previous month by lagging one month from the current month in a calculation. under Sorting Options. and Always (OLAP compatible) as shown in Figure 4-74. select Metadata. Figure 4-74 Dimension Definition.1 Handbook . and then. Data. Member Sort tab 106 IBM Cognos Business Intelligence V10.22.Click the Member Sort tab. Click Detect to detect which item will be used in the Level Sort Properties pane to sort the data.Click OK. which is correct.23. and then rename New Dimension to Time as shown in Figure 4-76. Member Sort tab For more information about the Member Sort feature. Figure 4-75 Dimension Definition. the business keys for the levels are used. refer to the IBM Cognos Framework Manager User Guide. In this case. Create reporting packages with IBM Cognos Framework Manager 107 . as shown in Figure 4-75. 24. Figure 4-76 New Regular Dimension for Time Chapter 4. 1 Handbook . Figure 4-77 Time dimension expanded in Project Viewer 108 IBM Cognos Business Intelligence V10.The new Time dimension is now complete and when expanded in the Project Viewer displays as shown in Figure 4-77. if you are feeling adventurous. 
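The three caption expressions defined in the steps above follow one pattern: cast each non-string source item to a fixed-length string so that it can serve as a member caption. Collected in one place for reference (the item names are the ones used in this example; adjust them to match your own model, and note that the exact rendering of a date cast to char depends on the underlying database):

```sql
-- Year Caption: integer year to a 4-character string
cast([Business View].[Time].[Year], char(4))

-- Quarter Caption: single-digit quarter number to a 1-character string
cast([Business View].[Time].[Quarter (numeric)], char(1))

-- Day Caption: date to a 10-character string
cast([Business View].[Time].[Day Date], char(10))
```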
Again, if you are feeling adventurous, create a dimension for Products as shown in Figure 4-78 and for Order Methods as shown in Figure 4-79. We include the results in the model that we provide with this book.

Figure 4-78 Products dimension configuration

Figure 4-79 Order Methods dimension configuration

4.4.2 Create Measure Dimensions

A Measure Dimension is a logical collection of facts that enables OLAP-style analytical querying and is related to Regular Dimensions within scope. To create Measure Dimensions:

1. Right-click Dimensional View, point to Create, and then click Measure Dimension.
2. In the Model Objects pane, expand Business View > Sales Fact.
3. Click Quantity, Shift+click Break Even Point, and then drag all the selected measures to the Measures pane as shown in Figure 4-80. Click OK, and rename the new Measure Dimension to Sales Fact.

Figure 4-80 Sales Measure Dimension

4. Repeat the steps to create a Measure Dimension called Sales Target Fact that is based on the Sales Target Fact data source query subject in the Business View. Select only the Sales Target measure. The results display as shown in Figure 4-81.

Figure 4-81 Sales Fact and Sales Target Fact Measure Dimensions expanded in the Project Viewer

4.4.3 Define scope for measures

Measure Dimensions are related to Regular Dimensions through scope relationships that define at what levels a measure is in scope. A scope relationship is created automatically between a dimension and a measure dimension whose underlying query subjects have a valid join relationship defined, and scope relationships are required to achieve predictable roll ups. However, the underlying join relationships are still required to generate the SQL that is sent to the data source.

To define the scope of a measure or group of measures, you can use the Dimension Map in the Project Info pane. You can also create, edit, and delete Regular Dimensions and Measure Dimensions in this pane, and you can double-click the relationships to edit scope. However, in our example, we use the Dimension Map because it is a central location to define scope easily for all measures.

To define the scope for the Sales Fact and Sales Target Fact Measure Dimensions:

1. Double-click the Dimensional View namespace to give it focus. Notice the scope relationships as shown in Figure 4-82.

Figure 4-82 Scope relationships

2. In the middle pane, click the Dimension Map tab, and then click Sales Fact in the Measures pane, as shown in Figure 4-83.

Figure 4-83 Sales Fact scope

All levels in all dimensions are highlighted, indicating that they are all currently in scope, which is correct for the Sales Fact measures. However, this is not the case for Sales Target Fact. We set the appropriate scope for the Sales Target measure, which is at the Month level for the Time dimension and the Product Type level for the Products dimension. Sales Target Fact is also not in scope at all for the Order Methods dimension.

3. Click Sales Target Fact. In the middle pane, under Time, click the Month level, and then on the toolbar click the Set Scope icon as shown in Figure 4-84.

Figure 4-84 Set Scope toolbar icon

4. Click Product Type in the Products dimension, and then click Set Scope on the toolbar.
5. Click the Order Methods dimension, and then on the toolbar click the Remove Scope icon (shown in Figure 4-85).

Figure 4-85 Remove Scope toolbar icon

As shown in Figure 4-86, the Day level for the Time dimension and the Product Name level for the Products dimension are no longer highlighted and are out of scope for Sales Target Fact. The Order Methods dimension is completely out of scope.

Figure 4-86 Scope settings for Sales Target Fact

Now that the dimensional objects are created and scope is set, we create a presentation view for authors and analysts:

6. In the Project Viewer, right-click Presentation View, point to Create, and then click Namespace. Name the new namespace Query.
7. Create another namespace, and name it Analysis. These namespaces are used to organize the relational and dimensional presentation views.
8. Drag the Sales Target and Sales namespaces into the Query namespace. The results display as shown in Figure 4-87.

Figure 4-87 Presentation View organized

9. Next, create star schema groupings for the DMR objects and place them in the Analysis namespace.
10. In the Dimensional View, right-click Sales Fact, and then click Create Star Schema Grouping. The result displays as shown in Figure 4-88. The Measure Dimension and all related Regular Dimensions are selected automatically based on the scope relationships.

Figure 4-88 Create Star Schema Grouping dialog box

11. Click OK, and then repeat these steps to create a star schema grouping for Sales Target Fact.

A package must contain all the information that a specific user or group of users needs to create reports. Each report can contain information from a single package only.
12. Drag the two new star schema grouping namespaces to the Analysis namespace. The final Presentation View displays as shown in Figure 4-89.

Figure 4-89 Final Presentation view

4.5 Create and configure a package

Up to this point, we have imported, validated, modeled, and organized metadata to address the requirement to report and analyze sales and sales target information by products, time, and order methods. This modeling was done for both traditional relational reporting (query) and OLAP-style reporting (analysis) using DMR. The next step is to publish packages to IBM Cognos Connection to make both the query and analysis packages available to authors and analysts.

You create packages, which are a subset of your model, to make metadata available to users. To create packages:

1. In the Project Viewer, right-click the Packages folder, point to Create, and then click Package.
2. In the name box, type GO Sales DW (Query), and click Next.
3. Clear the Model namespace, expand Presentation View > Query, and select Sales Target and Sales as shown in Figure 4-90. Click Next.

Figure 4-90 Create Package, Define objects dialog box

4. Choose the set of functions that you want to be available in the package, and click Next. By default, all function sets are included unless you specify which function set is associated with a data source in the Project Function List dialog box, located in the Project menu.
5. Clear Enable model versioning, and click Finish. Model versioning allows you to keep multiple copies of a model on the server, which can be useful if you want to publish a new version of the model but want existing reports to continue using an earlier copy. Note that all new reports that you author always use the most recent version of a published model, and opening an existing report in one of the studios will also cause the report to become associated with the latest version of the model.
6. When prompted to launch the Publish Package wizard and publish your package to the server, click Yes.
7. In the Select Location Type dialog box, select a location where you want to publish your package. We leave the default location of Public Folders, but you can browse to any location in IBM Cognos Connection to which you have write access. Click Next.
8. To verify your package before publishing, ensure that the "Verify the package before publishing" option is selected. This option is checked by default. When this option is selected, IBM Cognos Framework Manager performs a validation of your package and alerts you to any potential issues. Click Next.
9. Specify who has access to the package. Use the Add and Remove buttons to specify which groups or roles will have access to the package. We discuss security in more detail in 4.6, "Apply security in IBM Cognos Framework Manager" on page 124. When you have finished adding security, click Publish.
10. Click Finish to close the Publish Wizard. Informational warnings that might display indicate that items in the package reference other items in the model and that those items are included in the package as well but are hidden from authors.
11. Click Close.
12. Repeat these steps to create and publish a package called GO Sales DW (Analysis). Select only the Sales Target Fact and Sales Fact namespaces in the Analysis namespace as shown in Figure 4-91.

Figure 4-91 Create Package, Define objects dialog box

The packages are now available in IBM Cognos Connection, as shown in Figure 4-92, and are ready for use by authors in one of the studios.

Figure 4-92 IBM Cognos Connection, Public Folders

Figure 4-93 shows the GO Sales DW (Query) package in IBM Cognos Report Studio.
Figure 4-93 IBM Cognos Report Studio, Insertable Objects pane

This package displays query subjects and query items for use in studios that support relational-only packages. Figure 4-94 shows the GO Sales DW (Analysis) package in IBM Cognos Report Studio.

Figure 4-94 IBM Cognos Report Studio, Insertable Objects pane

This package allows for OLAP-style queries on a relational data source. With this package, you can work with hierarchies, levels, members, attributes, and measures.

4.5.1 Analyze publish impact

Before you publish your package, you can see how the changes that you make to a model will affect the package and any reports that use it. Reports that are authored using the package can be impacted by changes that you make to the model, and you can see details about each change that was made to the package and which reports are affected by a specific change. It is important to note that changes to the model do not impact reports until the package is republished. Reports use the published package, so your changes do not have any impact until the package is republished.

For example, adding a new object to the model will not affect any existing reports. Changing the name of a query item in the model or deleting a query item does affect a report: the report definition will not be valid because the query item that it references is not in the package definition.

The analysis is done on objects that a package uses directly as well as the underlying objects. For example, suppose that you have a model query subject that references a data source query subject. If you change the model query subject, it appears as a modified object in the analysis results. If you change the underlying data source query subject, it also appears as a modified object in the analysis results.

The following types of objects are analyzed:

Query subjects
Query items
Measures
Regular dimensions
Measure dimensions
Hierarchies
Levels
Stand-alone filters
Stand-alone calculations

To analyze a publish impact:

1. In the Project Viewer, click a package that was published.
2. From the Actions menu, click Package > Analyze Publish Impact.
3. The results of the analysis display, and you can choose to search for dependent reports. You can then take whatever action is necessary.
4. When you are finished, click Close to close the Analyze Publish Impact dialog box.

4.6 Apply security in IBM Cognos Framework Manager

This section discusses security at a high level. We do not implement security in our model directly but discuss generic steps about how to apply security. In IBM Cognos Framework Manager, security is a way of restricting access to metadata and data. There are three different types of security in IBM Cognos Framework Manager:

Object level security allows you to secure an object directly by allowing or denying users access to the object.
Row level security allows you to create a security filter and apply it to a specific query subject. This level of security controls the data that is shown to users when they build and run their reports.
Package level security allows you to apply security to a package and identify who has access to that package.

Each type of security relies on users, groups, and roles to define access. Before you add security in IBM Cognos Framework Manager, ensure that security was set up correctly in IBM Cognos BI.

4.6.1 Object level security

You can apply metadata security directly to objects in a model. When you add object-based security, you apply a specific user, group, or role directly to the object. In doing so, you choose to make the object visible to the selected users, groups, or roles, or to keep it hidden from all users. For example, in your project you might have a Salary query subject. You might want this query subject visible to a Manager role but not visible to an Employee role. If you do not set object-based security, all objects in the model are visible to everyone who has access to the package. After you set security for one object, you must set it for all objects; you can set security for all objects by setting security on the root namespace.

When you apply security to a parent object, all of the child objects inherit the security settings. The object inherits the security that was defined for its parent object, and when you explicitly allow or deny access to an object, you override the inherited setting. If a user is a member of multiple groups or roles, and one group is allowed access to an object while another is denied access, the user will not have access to the secured object. In cases of conflicting access, the denied group or role membership has priority.

To apply object level security:

1. Click the object that you want to secure, and from the Actions menu, click Specify Object Security.
2. Select the users, groups, or roles that you want to change. You can also click Add to add new users, groups, or roles.
3. Specify security rights for each user, group, or role by completing one of the following steps:
– To deny access to a user, group, or role, select Deny next to the name of the user, group, or role. Remember that Deny takes priority over Allow.
– To grant access to a user, group, or role, select Allow.
4. Click OK.

To remove object level security from the model:

1. In the middle pane, click Explorer.
2. Select any of the security objects that you want to remove from the model, and click Delete.
3. Click OK.

4.6.2 Row level security

You can restrict the data that is returned by query subjects in a project by using security filters. A security filter controls the data that is shown to users when they author their reports. For example, sales managers at the Great Outdoors company want to ensure that Camping Equipment sales representatives see only orders that relate to the Camping Equipment product line. To accomplish this, create and add members to Sales Managers and Camping Equipment Reps groups. Then apply a security filter to the Products query subject to restrict the representatives' access to camping equipment data.

Multiple groups or roles: If a user belongs to multiple groups or roles, the security filters that are associated with these roles are joined together with ORs. If a group or role is based on another group or role, the security filters are joined together with ANDs.

To specify row level security:

1. Click the query subject with which you want to work, and from the Actions menu click Specify Data Security.
2. To add new users, groups, or roles, click Add Groups. In the Select Users and Groups window, add users, groups, or roles as required, and click OK.
3. If you want to base the group on an existing group, click the Based On column.
4. If you want to add a filter to a group, in the Filter column, click either Create/Edit Embedded filter or Insert from Model.
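As a sketch of the Camping Equipment scenario above, an embedded security filter on the Products query subject for the Camping Equipment Reps group could be a single comparison such as the following (the item path is illustrative; use the corresponding query item from your own model):

```sql
[Business View].[Products].[Product line] = 'Camping Equipment'
```

Because the filter is associated with the Camping Equipment Reps group only, members of that group see just the camping equipment rows, while other groups, such as Sales Managers, remain unfiltered.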
These options allow you to either select an existing filter from your model to use or define the expression for a new filter.

4.6.3 Package level security

Package access refers to the ability to use the package in one of the IBM Cognos BI studios or to run a report that uses the package from IBM Cognos Connection. Users without these permissions are denied access, although they can still view saved report outputs if they have access to the reports. You can also grant administrative access to packages for those users who might be required to republish a package.

You define package level security during the publish process the first time the package is published. To modify access to your package after it has been published:

1. In the Project Viewer, click the package that you want to edit, and from the Actions menu click Package > Edit Package Settings to invoke IBM Cognos Connection in a new window.
2. In IBM Cognos Connection, click the Permissions tab.
3. Add or remove groups or roles as required.
4. After you modify the package access permissions, click OK to return to IBM Cognos Framework Manager.

4.7 Model troubleshooting tips

This section provides some tips for troubleshooting your models.

4.7.1 Examine the SQL

When testing query objects in IBM Cognos Framework Manager, you can view the SQL that is generated for the query on the Query Information tab. Viewing this information can be a useful way to verify expected results and can be a valuable troubleshooting technique to help you debug a model. In particular, you can verify query paths and join conditions and determine whether elements of the query are being processed locally by comparing the Cognos and Native SQL. Items that appear in the Cognos SQL but are not replicated in the Native SQL indicate that additional processing of the data is required on the IBM Cognos servers. This issue might be due to unsupported functions in the vendor database.

To test a query and examine the generated SQL:

1. Right-click a query subject or multiple selected query items, and click Test.
2. Test the results, and then click the Query Information tab to view the SQL generated for the query, as shown in Figure 4-95. Any warning or error messages that were generated are recorded here.

Figure 4-95 Query Information tab

You can also select the Response sub-tab in the Query Information tab to view the request and response sequence to and from the data source.
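As a simplified, hypothetical illustration of the Cognos SQL versus Native SQL comparison, suppose a caption expression uses a cast that the vendor database cannot process. The table and column names below are invented for the sketch:

```sql
-- Cognos SQL (what IBM Cognos plans to retrieve and compute)
select T.CURRENT_YEAR,
       cast(T.CURRENT_YEAR as char(4)) as YEAR_CAPTION
from   GO_TIME_DIM T

-- Native SQL (what is actually sent to the database)
select T.CURRENT_YEAR
from   GO_TIME_DIM T
```

In this sketch, the cast appears only in the Cognos SQL, so the string conversion would be performed locally on the IBM Cognos servers rather than in the database.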
To view the request, the response, and any warning messages that were generated, click the Response tab, as shown in Figure 4-96.

Figure 4-96 Response tab

4.7.2 Object dependencies

You can easily find objects that depend on other objects or show the dependencies of a child object. To determine object dependencies:

1. In the Project Viewer, click an object.
2. From the Tools menu, click Show Object Dependencies. The objects that depend on the selected object display under Dependent objects.
3. If the selected object has children and you want to see the dependencies for a child object, click the plus sign (+) beside the object that contains the child object, and then click a child object under the parent object. The objects that depend on the child object display under Dependent objects.

To show the object identifier for the dependent objects, select Show Object ID.

You can also show object dependencies using the following methods:

Use the Project Viewer by right-clicking an object and selecting Show Object Dependencies.
Use the Context Explorer window by right-clicking an object and selecting Show Object Dependencies.
Use the Analyze Publish Impact window by clicking the Show Dependencies icon under Actions in the row that contains the object.

4.7.3 Search the model

When working with a large project, it can be difficult to locate the objects that you need to complete a task. Use the Search tab to find objects quickly by applying different search criteria, such as the location, the class, a condition, or a property. To search the model:

1. In the Tools pane, click the Search tab. If the Tools pane is not visible, from the View menu, click Tools. You can drag the Tools pane to the bottom of the IBM Cognos Framework Manager window and resize it to have a better view of the search results.
2. Click the double down arrow button to show the search criteria boxes.
3. In the Search String field, type the text that you want to find. The search is not case sensitive; you can use uppercase, lowercase, or mixed case strings. Valid wildcard characters are an asterisk (*) and a question mark (?).
4. In the "Condition" list, select a condition to apply to the search string. The Condition box determines how the Search string value is matched with text in the model and contains a list of possible search conditions. If you want to search using wildcard characters, use the equals condition.
5. In the "Search in" list, select the part of the model hierarchy that you want to search.
6. In the "Class" list, select the single class of objects that you want to search.
7. In the "Property" list, select the type of property that you want to search. The (All Properties) property searches all properties. The Object Name property restricts the search to the name of each object. The Text Properties property searches the set of properties that contain text strings, such as Description or Screen Tip, but not the object name.
8. Click Search. The results are listed at the bottom of the Search tab.
9. To see an object in the diagram, right-click an object in the Search tab, and click Locate in Diagram.
10. To see an object in the Project Viewer, click an object in the Search tab.

After you do one search, the Subset check box becomes available. If you select Subset, the next search operates on the existing search results. The Subset option is cleared after each search.

Now that you have seen the basics of developing a simple IBM Cognos Framework Manager model, it is a good idea to augment this knowledge with formal training to ensure the success of your IBM Cognos BI project. The model that we describe in this chapter is just a small subset of the more robust sample models that ship with the product. For the remainder of the book, we use the sample models that ship with the product to demonstrate authoring business intelligence reports and analysis in IBM Cognos BI.

Part 3 Business intelligence simplified

© Copyright IBM Corp. 2010. All rights reserved.
Chapter 5. Business intelligence simplified: An overview

In Chapter 4, "Create reporting packages with IBM Cognos Framework Manager" on page 33, we discussed metadata modelling, which creates packages to include all the information that you need for reporting purposes. In this chapter, we discuss how to deliver this information to users to help them answer key business questions.

Companies today have various applications that produce large amounts of data. In addition, they have access to information outside their companies, such as data about their competition and information from websites. It can be a challenge to transform all this data into complete, meaningful information, therefore providing business intelligence. Business users want the freedom to combine and explore information, to have a unique perspective on data, and at the same time to collaborate with other members of the team.

As an introduction to how to achieve a successful business intelligence solution, we discuss the following topics in this chapter:

Information delivery leading practices
Enabling access for more people
Business use case

5.1 Information delivery leading practices

The IBM Cognos Business Intelligence (BI) product provides a unified, interactive workspace for business users to create their own view on data by combining all types of information, and to personalize content to provide unique insights and deliver faster business decisions. With IBM Cognos Business Insight, business users can use flexible dashboards and reports that provide an at-a-glance view of business performance with a simple and visually compelling way of presenting data. Business users can customize existing dashboards and change them in a way that answers their questions, or they can build completely new dashboards. The IBM Cognos Business Insight workspace is designed to allow a user to focus on an area for analysis.
You can include any report in IBM Cognos Business Insight. However, production reports, which are typically static in nature, and long, detailed reports with which the user needs to interact at length should not be included in an IBM Cognos Business Insight workspace.

From the user perspective, a dashboard needs to be an uncluttered display of relevant data. To achieve this goal, consider the following recommended approaches:

Use reports that focus on data that is of interest to the user. Reports that are added to a dashboard should be focused and summarized.

Use atomic-level reports or purpose-built parts. When creating a new report for a dashboard, build reports at an atomic level, so that a report contains one object, such as a list or chart. Keep reports and report parts that are candidates for an IBM Cognos Business Insight workspace in a specific series of folders so that they can be found easily.

Use saved output or views. With IBM Cognos Business Insight, you can use saved output as part of the workspace. If saved output is included in a dashboard, set the dashboard to open with the latest version. Users can change the version that they want to see, or execute the report.

Change the default names of parts or components. Each of the parts or components in a report has a default name. For lists, the default name is typically List1 or, if two lists are in the report, List1 and List11. Prompts do not have a default name and, as such, do not display in the Content pane unless you give them a name. A good practice is to change the default names to names that are more meaningful to users, because the default names can confuse the consumer.

Turn off the headers and footers. This option typically renders smaller results that fit better in a workspace.

Use mobile support. When using a web page in a workspace, try using the URL and adding /m to the URL to get the mobile version (for example, www.ibm.com/m).

Do not overwhelm users with charts. When using a chart to represent the data, display only the data that is relevant for the users. Also, avoid multi-color backgrounds or third dimensions added to bars or lines unless these features provide meaningful information.

Avoid overlapping hidden widgets, and use appropriately sized widgets. Widgets can get "hidden" under other widgets, so try not to overlap them. Do not make a widget bigger than it needs to be, and use the "Fit all Widgets to Window" option after you add all the widgets to the dashboard.

Use multiple dashboards. If you want to see multiple dashboards at the same time, use the multi-tab option of the web browser.

Use My Favorites. For quick access to the dashboards that you use on a regular basis, add them to My Favorites. After you save them, you can open the favorite dashboard or report from the Getting Started page or the Content tab.

Use workarounds for dashboard printing. You cannot print a dashboard directly from IBM Cognos Business Insight. You can save a report widget to a PDF file and then print the PDF file. Alternatively, you can press Ctrl+P to use the web browser to print the information that is displayed on the screen. For that purpose, you need to open IBM Cognos Business Insight in chrome mode (with the web browser showing toolbars and menus).

Reuse prompts.
You can use one prompt to apply the same filter to more than one report on the workspace, as long as the reports share the same caption or dimension. For example, if a report contains a date prompt, the other reports that have dates can respond to a year selected from the first report. Alternatively, you can use that prompt to apply a filter to that report only.

You can create several different types of reports and then use them in the dashboard. In the next sections, we discuss each of these reports in detail.

5.1.1 List reports

A list report is a type of report that displays detailed information, such as product orders or a customer list. A list report shows data in rows and columns, where each column shows all the values for that item in the database. You can also group data in a list report by one or more columns, add summaries, or include headers or footers to provide additional information. Figure 5-1 shows an example of a simple list report.

Figure 5-1 Simple list report

5.1.2 Crosstabs

Like list reports, crosstab reports (also called matrix reports) show data in rows and columns. However, the values at the intersection points of rows and columns show summarized information rather than detailed information. Measures define what data is reported, such as revenue, quantity, or return quantity. You can add any data that can be aggregated to the body of the crosstab as a measure, and you can report on more than one measure. Figure 5-2 shows a crosstab report.

Figure 5-2 Crosstab report

You can create a nested crosstab by adding more than one item to the rows or columns, as in Figure 5-2.

5.1.3 Charts

Charts are the visual representation of quantitative information. They reveal trends and relationships between values that are not evident in lists or crosstab reports. For example, you can create a report that visually compares actual sales and planned sales or that shows the percentage share of product lines in the total revenue of the company.
IBM Cognos BI includes the following chart types: Column charts Line charts Pie charts Bar charts Area charts Point charts Combination charts Scatter charts Bubble charts Bullet charts Gauge charts Pareto charts Progressive charts Quadrant charts Marimekko charts Radar charts Win-loss charts Polar charts In the next sections. Column charts present data using vertical objects. you can create a report that visually compares actual sales and planned sales or that shows the percentage share of product lines in the total revenue of the company. Chapter 5. Figure 5-4 Stacked bar chart 140 IBM Cognos Business Intelligence V10. Figure 5-3 Simple column chart You can use more complex bar or column charts to display part-to-whole relationships as a stacked bar chart. as shown in Figure 5-4.Figure 5-3 shows a simple column chart.1 Handbook . Line. Figure 5-5 Line chart Chapter 5. area. Business intelligence simplified: An overview 141 . and point charts Line charts are similar to column charts. this chart is a good choice. but instead of using columns they plot data at regular points and connect them by lines. For example. If you are interested only in a trend line but not individual values or when you are comparing many data series. Figure 5-5 shows the distribution of gross profit over all the months in a year for different order methods. 142 IBM Cognos Business Intelligence V10. Figure 5-6 Area chart As for column and bar charts. Typically. you can use complex area charts (stacked area charts) to show the relationship of parts to the whole.1 Handbook . it is best not to use stacked line charts because they are difficult to distinguish from unstacked line charts when there are multiple data series. instead of having lines.You can accomplish the same result with the area chart where. Figure 5-6 shows the line chart shown in Figure 5-6 as an area chart. areas below the lines are filled with colors or patterns. Figure 5-7 shows an example of a point chart. 
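The report and chart types in this section all start from the same underlying step: aggregating detail rows by one or two attributes and summarizing a measure at each intersection. The following plain-Python sketch (the product lines, years, and numbers are invented for illustration; this is not Cognos data) shows that list-to-crosstab aggregation:

```python
from collections import defaultdict

# Detail rows, as a list report would show them (invented sample data).
rows = [
    {"product_line": "Camping Equipment", "year": 2010, "revenue": 120.0},
    {"product_line": "Camping Equipment", "year": 2011, "revenue": 135.5},
    {"product_line": "Golf Equipment",    "year": 2010, "revenue": 80.0},
    {"product_line": "Golf Equipment",    "year": 2011, "revenue": 95.0},
    {"product_line": "Golf Equipment",    "year": 2011, "revenue": 10.0},
]

# Crosstab: summarize the measure at each row/column intersection.
crosstab = defaultdict(float)
for r in rows:
    crosstab[(r["product_line"], r["year"])] += r["revenue"]

print(crosstab[("Golf Equipment", 2011)])   # 105.0
```

A crosstab report renders exactly this kind of keyed summary as a row-by-column grid, and most of the chart types below are alternative visual encodings of the same summary.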
Another variation of a line chart is a point chart. A point chart is similar to a line chart, but the points on the chart are not connected with lines; just the data points are shown.

Figure 5-7 Point chart

Combination charts

Combination charts are a combination of the charts mentioned previously. They plot multiple data series using columns, areas, or lines all within one chart. They are useful for highlighting relationships between the various data series. For example, Figure 5-8 shows a combination chart that displays revenue and gross profit for marketing campaigns.

Figure 5-8 Combination chart

Scatter, bubble, and quadrant charts

Scatter and bubble charts plot two measures along the same scale. Their purpose is to show correlations between two sets of data (measures). For example, Figure 5-9 shows the relationship between quantity sold and return quantity for each product line.

Figure 5-9 Scatter chart

Using this chart makes it easy to discern the following patterns:
- Linear trends: Either positive trends (points are going up from left to right in a pattern that looks like a line) that indicate a positive correlation between two measures, or a negative trend (points are going down from left to right)
- Non-linear trends: Points in a pattern that forms a curved line, indicating positive or negative correlation
- Randomness: Points arranged randomly, indicating that there is no correlation between two measures
- Concentrations: Points appear in a particular area of the chart, for example in the upper-left corner, which indicates many products or product lines with a small number of items sold and a high number of returns
- Exceptions: Points stand out from the remaining pattern, indicating anomalies, as in the previous chart example

In this chart, the Outdoor Protection product line has a higher number of returns than other product lines.

Bubble charts are similar to scatter charts but contain one additional piece of information: the size of the bubble represents a third measure. You can create more complex bubble (or scatter) charts by adding a fourth measure to the chart by specifying that the data point appears in different colors based on that measure. For example, Figure 5-10 shows a chart with a correlation between Unit Sale Price and Unit Cost. The size of the bubbles shows the Gross Profit, and the color of the bubbles shows whether the quantity is less than 1,000,000 (yellow), between 1,000,000 and 20,000,000 (red), or greater than 20,000,000 (green). Note that bubble charts are not supported for Microsoft Excel output.

Figure 5-10 Bubble chart

Quadrant charts are in fact bubble charts with a background that is divided into four equal sections; you can change the size of the quadrants. Legacy quadrant charts use baselines to create quadrants, and current default charts use colored regions. You can use quadrant charts to present data that can be categorized into quadrants, such as a strengths, weaknesses, opportunities, and threats (SWOT) analysis. These charts are usually used for showing financial data. Figure 5-11 shows an example of the quadrant chart.

Figure 5-11 Quadrant chart

Pie and donut charts

Pie and donut charts are used to show the relationship of parts to the whole by using segments of a circle. To show actual values, a stacked bar chart or a column chart (as shown in Figure 5-4 on page 140) provides a better option. Also, pie charts are not a good choice if you have measures that have zero or negative values. Note that reports in PDF format or HTML format show a maximum of 16 pie charts.

Figure 5-12 Pie chart
Figure 5-12 shows a pie chart showing proportions of advertising costs.

Bullet charts

Bullet charts are one variation of a bar chart. A bullet chart shows a primary measure in comparison to one or more other measures, as in Figure 5-13. It also relates the compared measures against colored regions in the background that provide additional qualitative measurements, such as good, satisfactory, and poor.

Figure 5-13 Bullet chart

Because they deliver compact information and do not need too much space on a dashboard, you can add bullet charts to other report objects, such as list reports, as shown in Figure 5-14.

Figure 5-14 Combination of a bullet chart and a list report

Gauge charts

Gauge charts (also known as dial charts or speedometer charts) are similar to bullet charts in that they also compare multiple measures, but they use needles to show values. Reading a value from a gauge chart is as easy as reading a value on a dial, and each value is compared to a colored data range. Gauge charts are a better option than a bullet chart in the case where you need to compare more than two values (measures). Figure 5-15 shows how to compare three measures (product cost, planned revenue, and revenue) on the same gauge chart. These charts are usually used to show the KPIs in executive dashboards. Note that PDF output and HTML output of reports are limited to show up to 16 gauge charts, and these charts are not available for Microsoft Excel output.

Figure 5-15 Gauge chart

Pareto charts

Pareto charts rank categories from the most frequent to the least frequent. They include a cumulation line that shows the percentage of the accumulated total of all the columns or bars. You can use these charts for quality control data, so that you can identify and reduce the primary cause of problems. You can create before and after comparisons of Pareto charts to show the impact of corrective actions. Figure 5-16 shows an example of a Pareto chart showing the gross profit (in millions) for regions by product lines.

Figure 5-16 Pareto chart (gross profit in millions by product line and region, with a cumulation line from 0% to 100%)

Progressive column charts

Progressive charts (or waterfall charts) are a variation of column or stacked charts, with each segment of a single stack displaced vertically from the next segment. Individual segment height is a percentage of the respective column total value. Progressive charts, as well as stacked bar or column charts and pie charts, are useful for emphasizing the contribution of the individual segments to the whole. Figure 5-17 shows the contribution of each Product Line to Gross Profit (in millions). These charts are not supported for Microsoft Excel output.

Figure 5-17 Progressive chart (gross profit in millions by product line)

Marimekko charts

Marimekko charts are stacked charts in which the width of a column is proportional to the total of the column's values. Figure 5-18 shows the return quantity of returned items for product lines by order methods.

Figure 5-18 Marimekko chart
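The cumulation line on a Pareto chart and the column widths of a Marimekko chart are both share-of-total computations. A minimal sketch with invented gross-profit figures (not the Great Outdoors data):

```python
# Invented gross-profit totals per product line (millions).
totals = {"Camping Equipment": 600.0, "Golf Equipment": 250.0,
          "Outdoor Protection": 100.0, "Mountaineering Equipment": 50.0}

# Pareto: sort descending, then accumulate as a percentage of the grand total.
ordered = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
grand = sum(totals.values())  # 1000.0
running, cumulation = 0.0, []
for name, value in ordered:
    running += value
    cumulation.append((name, round(100.0 * running / grand, 1)))
print(cumulation[0])   # ('Camping Equipment', 60.0)

# Marimekko: each column's width is its share of the grand total.
widths = {name: value / grand for name, value in totals.items()}
print(widths["Golf Equipment"])   # 0.25
```

The last entry of the cumulation list always reaches 100%, which is why the Pareto line in Figure 5-16 ends at the top-right of the chart.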
Radar charts

Radar charts compare several values along multiple axes that start at the center of the chart, forming a radial figure. These charts are useful if you want to compare a couple of variations against the same set of variables or to compare multiple measures. This type of chart is also useful for spotting anomalies or outliers. Figure 5-19 shows an example of a radar chart that compares revenue by product lines for different retailers.

Figure 5-19 Radar chart

Win-loss charts

Win-loss charts are microcharts that use the following measures:
- The default measure
- The win-loss measure

The win-loss measure is the measure or calculation that you define, for example the months that have revenue over a certain threshold. You can also define the default measure, if necessary. You can use these charts for visualizing the win-loss trends, as shown in Figure 5-20.

Figure 5-20 Win-loss chart

Polar charts

Polar charts are circular charts that use values and angles to show information as polar coordinates. For example, Figure 5-21 shows the revenue and quantity for each product line. The distance along the radial axis represents quantity, and the angle around the polar axis represents revenue.

Figure 5-21 Polar chart

Baselines and trendlines

Baselines or trendlines provide additional details on a chart. Baselines are horizontal or vertical lines that cut through a chart to indicate major divisions in the data. Each baseline represents a value on an axis. For example, you can add a baseline to show a sales target (see Figure 5-22) or a break-even point.

Figure 5-22 Example of baseline added to a chart

Baseline options (depending on the type of chart where baselines are available) include:
- Numeric value: Static numeric value
- Mean: Statistical mean plus or minus a number of standard deviations based on all charted data values on the specified axis
- Percentile (%): Specified percentile
- Percent on Axis (%): Percentage of the full range of the axis

Trendlines graphically illustrate trends in data series and are commonly used when charting predictions. A trendline is typically a line or curve that connects or passes through two or more points in the series. You can add trendlines to bar, line, bubble, area, and scatter charts. Figure 5-23 shows an example of adding a polynomial trendline to a chart displaying revenue by product lines over time to see the trend.

Figure 5-23 Example of a trendline added to a chart

The following trendlines are available:
- Linear, for data values that increase or decrease along a straight line at a constant rate (for example, revenue that increases over a time period)
- Polynomial, for data values that both increase and decrease (as in the example in Figure 5-23)
- Logarithm, for data values that increase or decrease rapidly and then level out
- Moving average, for data values that fluctuate, when you want to smooth out the exceptions to see trends
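Several of these annotations are plain computations. The sketch below uses invented monthly revenue figures (not Cognos code or data) to show the win-loss measure as a threshold test, the Mean baseline as the mean plus k standard deviations, and a linear trendline as a least-squares fit:

```python
from statistics import mean, stdev

revenue = [10.0, 12.0, 9.0, 15.0, 11.0, 18.0]   # invented monthly values

# Win-loss measure: +1 for months over a threshold, -1 otherwise.
threshold = 11.5
win_loss = [1 if v > threshold else -1 for v in revenue]
print(win_loss)        # [-1, 1, -1, 1, -1, 1]

# "Mean" baseline: statistical mean plus k standard deviations.
k = 1
upper_baseline = mean(revenue) + k * stdev(revenue)

# Linear trendline: least-squares fit y = a + b*x over the series.
n = len(revenue)
xs = range(n)
b = (n * sum(x * y for x, y in zip(xs, revenue)) - sum(xs) * sum(revenue)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
a = mean(revenue) - b * mean(xs)
print(round(b, 2))     # 1.23, a positive slope: revenue trends upward
```

A polynomial or moving-average trendline replaces the straight-line fit with a higher-order fit or a windowed average, but the idea of summarizing the series with a fitted curve is the same.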
5.2 Enabling access for more people

IBM Cognos BI includes features to create reports that are more accessible to people with a physical disability, such as restricted mobility or limited vision. Major accessibility features in IBM Cognos Business Insight are:

- Use of command keys or shortcut keys to navigate through the workspace. Shortcut keys directly trigger an action. IBM Cognos Business Insight uses both Microsoft Windows navigation keys (such as F1 for online help or Ctrl+C and Ctrl+V for copy and paste) and application-specific shortcut keys. For a complete list of supported shortcut keys, refer to IBM Cognos Business Insight User Guide.
- IBM Cognos Business Insight supports your system's display settings, such as a high-contrast display.
- IBM Cognos Business Insight uses Web Accessibility Initiative - Accessible Rich Internet Applications (WAI-ARIA). WAI-ARIA ensures that people with limited vision can use screen-reader software along with a digital speech synthesizer to listen to displayed information.

5.3 Business use case

Chapter 3, "Business scenario and personas used in this book" on page 21, describes the fictitious Great Outdoors company scenario that we use throughout this book. The Great Outdoors company sells products from third-party manufacturers to resellers. In the following chapters, we address business questions such as the following ones:

- What are the Great Outdoors company's lowest selling products? Are we making a profit by selling these products?
- Are we increasing our gross profit (margin)? Can we compare gross profit by all the Great Outdoors company regions? And by all product lines? Which product lines are best performing so we can concentrate on them in all regions?
- We need a deeper insight into the profitability of our campaigns and information about achieved revenue and profit. What are the most successful campaigns? Maybe we can focus on them in all Great Outdoors company regions?
- How are we doing compared to the plan (actual versus planned)? Can we compare our revenue with the planned revenue?
Can we add some visual representation in a form of charts to get an immediate insight just by taking a quick view of a report? Could we add some information about our competitors?

- Can we include data from external sources, such as the internet, to our dashboards? Or information about our competitors that is publicly available? Or reports from a stock exchange or currency exchange rates from a bank internet site?
- How successful are our promotions? The marketing department is responsible for organizing campaigns and promotions.
- Do we have a considerable number of product returns? Can we see the quantity of returned products by product lines? Can we compare that with the quantity that was sold to see the percentage of returned items? Do we have some outliers among products with a higher percentage of returns? Maybe we need to consider another manufacturer for these products?
- How many units of a product should I buy for each period of the year? Can we predict how many units of each product the Great Outdoors company has to buy to satisfy the needs of the market? Can we make that prediction based on the historical data?
- How is the performance of our business against last year? We want a report on which we can compare current revenue data with the data from previous years.

Chapter 6. Individual and collaborative user experience

In this chapter, we introduce IBM Cognos Business Insight, the web-based interface that allows you to build sophisticated, interactive dashboards that provide insight and that facilitate collaborative decision making. In this chapter, we discuss the following topics:
- Dashboard overview
- Introduction to IBM Cognos Business Insight
- Interaction with the dashboard components
- Collaborative business intelligence

© Copyright IBM Corp. 2010. All rights reserved.

6.1 Dashboard overview

Dashboard is a term that is used commonly in the context of business analytics and that is a popular way of presenting important information. Different users have different understandings of what a dashboard is and how it should look. For business users, a dashboard is the key to understanding trends or spotting anomalies in performance. Nevertheless, properties that are in common for all dashboards can be summarized in the following key features:

- At-a-glance view of business performance: A dashboard is a visual display of the most important information about business performance. Information is consolidated and arranged in a way that makes it easy to monitor.
- Assembling information from various different sources: Dashboards combine data from various different data sources (enterprise resource planning systems, customer relationship management, data warehouses, different data marts, and so forth) to give users a complete view on business performance.
- Visibility on non-business intelligence content: In addition to a variety of reports, dashboards can contain non-business intelligence data, such as websites or RSS feeds.
- Interactivity and personalization: A dashboard is more than just a static set of reports. It has to be intuitive and interactive to allow business users to personalize content to fit their needs, based on their business needs. Business users can use a free-form layout to add dashboard elements such as reports, images, or textual objects. In addition, they can interact with reports to sort or filter data, to add additional calculations, and to change list or crosstab reports to a chart or vice versa, or they can narrow the scope of the data with filters such as slider filters.
- Proactivity and collaboration: Business users can take action directly from within the dashboard using collaboration and workflow integration. They can collaborate with team members to make decisions.

No read-only dashboards: You cannot create a read-only version of a dashboard.
6.2 Introduction to IBM Cognos Business Insight

IBM Cognos Business Insight is a web-based user interface that allows you to open or edit a dashboard or to create a dashboard. If a business user has permission to access a particular dashboard, that user can also make changes to it.

You can launch IBM Cognos Business Insight using one of the following methods:
- From the IBM Cognos Business Intelligence (BI) Welcome page
- From IBM Cognos Connection, by clicking New dashboard or by clicking the hyperlink of an existing dashboard object
- From the Launch menu in IBM Cognos Connection and IBM Cognos Administration
- Directly in a web browser, by entering a URL using the following format: ...cgi?b_action=icd

You can open an IBM Cognos Business Insight interface in two modes:
- Chrome mode
- Chromeless mode

Chrome mode includes the toolbars and menus of a web browser, and chromeless mode does not include these elements. When you launch IBM Cognos Business Insight directly in a web browser by entering a URL, it opens in chrome mode. If you use any of the other options to launch IBM Cognos Business Insight, it opens in chromeless mode.

Figure 6-1 shows the user interface.

Figure 6-1 Business Insight user interface

The user interface has the following components:
- A Getting Started page that displays when you launch IBM Cognos Business Insight
- An application bar
- A dashboard layout area
- A content pane that includes the Content and Toolbox tabs
- Widgets and filters
For example.1 The Getting Started page Figure 6-2 shows the page that opens when you launch IBM Cognos Business Insight. Figure 6-2 The Getting Started page You can complete the following activities from this page: Create a new dashboard Open an existing dashboard View and open favorite dashboards from Favorites View how-to videos that provide an overview to Business Insight When you select any of the options on the Getting Started page. Individual and collaborative user experience 165 . 6.2 Application bar The application bar displays the name of the current dashboard and contains the icons for different actions that you can perform in the dashboard layout area. create a dashboard. disable it using the My Preferences menu option. 2. or you can add filters to narrow the scope of the data (sliders or select value filters). It is includes the following tabs: The Content tab displays IBM Cognos content in a hierarchy of folders and subfolders with dashboards that you can open and reports that you can add to a workspace. otherwise. For the complete list of available icons. you can enable or disable the display of information cards and refresh the display to get the current content.3 Dashboard layout area The dashboard layout area is the workspace on which you can combine data from different sources to gain insight into your business. Within the Content tab. 6. You can add various widgets with BI content (lists. images. refer to IBM Cognos Business Insight User Guide. and select value filters 166 IBM Cognos Business Intelligence V10. and RSS feeds). this view is unavailable The Toolbox tab includes the following types of widgets that are provided by IBM Cognos Business Insight: – Widgets that can add additional content to a business user’s workspace. or RSS feeds – Widgets that allow you to filter already added content. list. sliders.4 Content pane The content pane contains all that objects that you can add to a workspace. crosstabs. In addition. non-BI content (text. 
6.2.3 Dashboard layout area

The dashboard layout area is the workspace on which you can combine data from different sources to gain insight into your business. You can add various widgets with BI content (lists, crosstabs, or chart reports) or non-BI content (text, images, HTML pages, and RSS feeds), or you can add filters to narrow the scope of the data (sliders or select value filters).

6.2.4 Content pane

The content pane contains all the objects that you can add to a workspace. It includes the following tabs:

- The Content tab displays IBM Cognos content in a hierarchy of folders and subfolders, with dashboards that you can open and reports that you can add to a workspace. This content is the same content as in IBM Cognos Connection, with My Folders (personal content and dashboards) and Public Folders (content that is of interest for many business users). Within the Content tab, you can filter the entire content using one of the following criteria:
  - All Content (default): Displays all the content that is available and that is supported in IBM Cognos Business Insight
  - My Favorites: Displays dashboards and reports that are marked as Favorite
  - My Folders: Displays the contents only from My Folders
  - Search: Displays the result of the search after a search is performed; otherwise, this view is unavailable
- The Toolbox tab includes the following types of widgets that are provided by IBM Cognos Business Insight:
  - Widgets that can add additional content to a business user's workspace, such as images, HTML pages, or RSS feeds
  - Widgets that allow you to filter already added content: sliders and select value filters

You can display content in thumbnail, list, or tree view. In addition, you can enable or disable the display of information cards and refresh the display to get the current content. For the complete list of available icons, refer to IBM Cognos Business Insight User Guide.

6.2.5 Widgets

Widgets are containers for all objects that you can add to the workspace. When you select a widget or it is in focus, an on demand toolbar displays, as shown in Figure 6-3.

Figure 6-3 A widget with the on demand toolbar (a bar chart of revenue and gross profit by campaign name)

For business users, widgets allow the interaction and manipulation with the content that they contain, whether it is a report or a filter. Depending on the type and content of a widget, a variety of toolbar options are available. You can change the manner in which content displays in a widget, and you can specify the title, language, how the links in a widget are opened, and so forth.

Widgets can communicate with other widgets. For example, if you have two report widgets that are created on the same dimensionally-modelled data source, when the data in one report is changed, the second report is updated based on user interactions in the first report. Alternatively, you can use the slider filter to filter the data dynamically in one or more report widgets.

There are two types of widgets inside IBM Cognos Business Insight:
- Content widgets
- Toolbox widgets

In the following sections, we discuss each of these widgets and when and how you can use them.

Content widgets

You can use content widgets when adding IBM Cognos content from the Content tab to a workspace. This section describes the content widgets.

Report widget

Business users use the report widget to add reports or individual report parts (for example, a list, crosstab, or chart) to a workspace. Report parts are smaller and provide consolidated information for business users. It is leading practice to use report parts whenever possible instead of entire reports to improve the layout and usability of dashboards. If you add an entire report with several report parts to a dashboard, all report parts, including the header and the footer, are added to a single widget, which is not the best choice for a dashboard.

A report widget includes the following reports:
- IBM Cognos Report Studio
- IBM Cognos Query Studio
- IBM Cognos Analysis Studio
- IBM Cognos Metric Studio
- Report views and saved report output versions
- Report objects that contain prompts, drill through, or the report specification

If IBM Cognos Metric Studio is installed and configured as part of your IBM Cognos BI environment, the content displays as a list of metrics for the selected item. Each metric in the list has a hyperlink that opens the individual metric in IBM Cognos Metric Studio. For any other IBM Cognos Metric Studio content that you add, historical data for the metric displays in a form of a bar chart.

Communication note: IBM Cognos PowerPlay Studio report content does not interact with the slider filter and select value filter widgets.
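The widget-to-widget communication described above can be pictured as a publish/subscribe relationship: one widget broadcasts a selection, and listening widgets refresh themselves. The following is a hypothetical sketch of the pattern only, with invented class and data names; it is not IBM Cognos code:

```python
# Hypothetical sketch: a filter widget broadcasts a value and report
# widgets that listen re-filter their visible rows.
class ReportWidget:
    def __init__(self, rows):
        self.rows = rows              # (product_line, revenue) detail rows
        self.visible = list(rows)

    def on_broadcast(self, product_line):
        self.visible = [r for r in self.rows if r[0] == product_line]

class FilterWidget:
    def __init__(self):
        self.listeners = []           # widgets that subscribe to this filter

    def select(self, value):
        for widget in self.listeners:
            widget.on_broadcast(value)

report = ReportWidget([("Camping Equipment", 120.0), ("Golf Equipment", 80.0)])
filter_widget = FilterWidget()
filter_widget.listeners.append(report)

filter_widget.select("Golf Equipment")
print(report.visible)   # [('Golf Equipment', 80.0)]
```

In this picture, the communication notes in this section simply mean that some widget types (PowerPlay Studio and TM1 content) do not register as listeners for the filter widgets.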
If business users do not have a need for the most current data in some reports, they can use report output versions in report widgets. Users can choose to view the saved report output versions (by default, it is the latest saved output version) or the live version of the report. Users can also create watch rules based on specific conditions and thresholds for a given report version (see 6.3, “Work with report versions and watch rules” on page 207).

Support for reports in HTML format: IBM Cognos Business Insight supports only report versions that are saved in HTML format.

You can change several properties of a report widget using the widget Actions Menu button. For example, you can change the title of a widget, the maximum number of rows per page, or the location. For details about all properties that are available, refer to the IBM Cognos Business Insight User Guide. We use some of these actions in examples later in this chapter.

PowerPlay widget

If IBM Cognos PowerPlay Studio is installed and configured as part of your IBM Cognos BI environment, you can navigate to IBM Cognos PowerPlay Studio content in the Content tab and add IBM Cognos PowerPlay Studio reports to a dashboard using this widget. When added to a workspace, an IBM Cognos PowerPlay Studio report displays in HTML format, but you can also view the report in PDF format. Widget-to-widget communication is also not supported for this widget.

You can take the following standard IBM Cognos PowerPlay Studio actions on this widget:
- Switch between crosstab and indented crosstab display
- Select a chart to display data
- Swap rows and columns
- Hide and show categories
- Create a calculation by using rows or columns
- Rank categories
- Zero suppression
- 80/20 suppression
- Custom exception highlighting
- Custom subsets
- Drill through

TM1 widget

If IBM Cognos TM1 is installed and configured as part of your IBM Cognos BI environment, you can add applications that are developed in TM1 to a workspace.

Communication note: TM1 widgets do not interact with the slider filter and select value filter widgets. By default, TM1 Cube Viewer widgets listen to each other.
You can add the following TM1 content to a dashboard:
- TM1 Websheet: Displays a spreadsheet with the TM1 data that you can view in a web browser
- TM1 Cube View: Displays a view of a TM1 cube
- IBM Cognos TM1 Contributor: Displays a web page with a URL that points to a TM1 Contributor Web Client

TM1 objects are displayed in HTML format in a dedicated TM1 Viewer widget with TM1 toolbar buttons on top of the widget, as shown in Figure 6-4.

Figure 6-4 TM1 widget

The TM1 Navigation Viewer is incorporated in the Content pane and is not available as a separate widget. The entire TM1 content is located in a folder in the Content pane with two main folders at the highest level of the tree, which is Applications and Views, as shown in Figure 6-5. The Views folder contains original TM1 Cubes and TM1 Cube Views objects, and the Applications folder has more sub-folders.

Figure 6-5 Example of TM1 Navigation Viewer

You can add only the individual TM1 content objects (that is, TM1 Websheet objects, Cube Views, or TM1 Contributors) to a workspace, not the entire folders.
Toolbox widgets

In this section, we describe the toolbox widgets, which you can use either to add additional information or to filter the content of existing widgets in the workspace.

Web page widget

This widget displays HTML-based content, such as a web page, on a dashboard.

Using the web page widget: You must add the web page URL to the trusted domain list as defined in the IBM Cognos Configuration tool.

Image widget

This widget displays an image on the dashboard. The image must be a single file that is accessible by a URL, not a web page. The image can also be used as a link. For example, you can configure the image widget to broadcast a specified URL in the web page widget or a new browser window when the image is clicked, as shown in Figure 6-6.

Using the image widget: You must add the image URL to the trusted domain list as defined in the IBM Cognos Configuration tool.

RSS feed widget

This widget displays the content of a Really Simple Syndication (RSS) or an Atom news feed that is specified by a URL address. The specified URL must point to a valid RSS or Atom feed and not a web page, because the valid RSS feed link opens an XML file. The RSS or Atom channel includes a list of links to specific web pages. You can specify how these links open in the web page widget or whether the web page widget listens to broadcasts from the RSS feed widget automatically.

Using the RSS feed widget: You must add the RSS or Atom feed URL to the trusted domain list as defined in the IBM Cognos Configuration tool.

My Inbox widget

This widget displays an RSS feed of a user's secure approval requests, ad-hoc tasks, and notification requests from My Inbox in IBM Cognos Connection.

Text widget

You can use this widget to display text on the dashboard.

Select value filter widget

You can use this widget to filter report data dynamically on the report widgets that you added to a workspace previously, as shown in Figure 6-7.

Figure 6-7 Select value filter widget

Select value filter widgets are useful in a situation when you have several reports on a dashboard that show data by a variety of locations, product lines, subsidiaries, or customers. With these filters, you can narrow the scope of data; for example, you can filter on the product line or region, which makes reports easier to read.

When adding a select value filter, you can select data items that you can filter with the corresponding report widget to which the items belong. In addition, you can choose whether users can select single or multiple values in the filter widget. You can specify the list of values that you want in a select value filter, for example just some specific product lines or years.

Selecting data for the filter: It is not possible to select one data item for more than one filter.

You can also filter the reports on data items that are not shown in the report. However, the report must be authored in a way that allows this type of filtering: you must include the data item or items that you want to filter on in the report query. Therefore, additional data items must exist in the initial query (but do not have to display on the chart or crosstab) and in this separate query, and you must name the filter _BusinessInsight_. For example, a bar chart shows returned quantity by product lines; because the report was authored in the manner that we described previously, you can filter the chart by years, as shown in Figure 6-8.

Figure 6-8 Filter report based on data that is not displayed
Advanced business users or report authors can create reports and basic dashboards for a group of business users to include all information that is necessary for that group of users to work. sort data or perform additional calculations. you can also choose the data items on which to filter reports. Then. Users can complete a wide variety of tasks quickly and easily.3. you can view and interact with reports. you can filter data based on values that are not displayed on report widgets.1 Handbook . These needs might include rearranging the layout. This type of filtering is especially useful when filtering on a data range (see Figure 6-9) or numeric items. business users can personalize the dashboards to fit specific needs. such as revenue and quantity. In addition. if business users have permission to access a particular dashboard. Depending on the settings of the slider filter. Figure 6-9 Slider filter widget As with a select value filter. sorting data easily to see how 176 IBM Cognos Business Intelligence V10. in the workspace that opens (either an empty dashboard or a dashboard that contains widgets). Thus.1 Personalize content When launching IBM Cognos Business Insight. so they can make use of the free-form layout and can rearrange reports or add new reports. you can choose whether to open an existing dashboard or create a dashboard.3 Interaction with the dashboard components Dashboards created with IBM Cognos Business Insight allow business users an integrated business intelligence experience together with collaborative decision making. 6. Regardless of your selection. you can add and rearrange new widgets. Open the IBM Cognos Connection using the following URL: 2. Later. but is missing in the current version of a dashboard. marketing data. She begins by opening the current version of the GO Sales Dashboard. she interacts with the reports. To begin this scenario: 1..measures are ranked from highest to lowest values. you can create a new existing dashboard. 
including data that is relevant for the users. She uses the IBM Cognos Business Insight interface to create and change dashboards. and removing reports that are redundant. as shown in Figure 6-10. sales forecasting. and click GO Sales Initial Dashboard. and searching for an additional report and adding it to a workspace. “Business scenario and personas used in this book” on page 21. and adds filters to allow users to narrow the scope of data. On the My Actions pane click Create my dashboards to open a Getting Started Page of IBM Cognos Business Insight. creates additional calculations. On this page. Lynn Cope is an Advanced Business User in the Great Outdoors company. making some changes on the layout. Individual and collaborative user experience 177 . Our goal is to create a dashboard for Great Outdoors company executives and business users that combines all the relevant information that is needed to gain better insight into business performance of the company. Chapter 6. Her role is to enable senior management to have access to all relevant information in a dashboard. In this scenario. Open the Business Insight folder. and external data. The GO Sales Initial Dashboard opens. shown in Figure 6-11.1 Handbook . Figure 6-11 GO Sales Initial Dashboard opens in Business Insight 178 IBM Cognos Business Intelligence V10.Figure 6-10 Getting Started Page: Open an existing dashboard 3. Click Open. 4. click Show Titles as shown in Figure 6-12. You can show or hide the titles of all widgets on a dashboard. Right-click. By taking a closer look at data in a report. you can rearrange the layout of a dashboard. In the Edit Dashboard Style window. select it. To better understand the information shown in each report. By default. You usually want titles hidden so they do not take much space on a dashboard. you should see the cursor in a shape shown in Figure 6-13 on page 180. and then click Edit Dashboard Style. the titles are hidden. Figure 6-12 Turning on the titles 5. 
4. To better understand the information shown in each report, turn on the titles on the widgets. By default, the titles are hidden; you usually want titles hidden so that they do not take much space on a dashboard. You can show or hide the titles of all widgets on a dashboard. To show titles:
   a. On the Application bar, click Layout and Style, and then click Edit Dashboard Style.
   b. In the Edit Dashboard Style window, click Show Titles, as shown in Figure 6-12.
Figure 6-12 Turning on the titles

Showing titles: It is not possible to show the titles of just selected widgets.

5. By taking a closer look at the data in a report, you can rearrange the layout of a dashboard. Change the places of the Gross profit by Region and Revenue Planned versus Actual widgets. To move a widget, select it. Then, while hovering over the Application bar, you should see the cursor in the shape shown in Figure 6-13 on page 180. Drag the widget to another location on the dashboard. Dotted guidelines display on the dashboard when you insert, move, or resize widgets. These lines provide a visual aid to assist you in aligning widgets.
Figure 6-13 Moving a widget
6. Rearrange the widgets so that they do not overlap.
7. Notice that you have reports that show almost the same data and that you need space on the dashboard for additional reports. To make room for additional reports, remove the Return Quantity by Products and Order Methods report. Click the widget, click Widgets Action, and then click Remove from Dashboard. When prompted, click Remove, as shown in Figure 6-14.
Figure 6-14 Deleting a widget
8. Next, change the display type for the Return Quantity by Product Lines report. Right-click the widget, and select Change Display Type → Column Chart, as shown in Figure 6-15. Using a column chart instead of a pie chart makes it easier to compare values for different product lines.
Figure 6-15 Changing the display type
9. Turn off the widget titles in the same way as described in Step 4 on page 179, but clear the "Show titles" option.

The dashboard should now look as shown in Figure 6-16.
Figure 6-16 Modified dashboard

6.3.2 Add new content to broaden the scope
You can add new widgets to a dashboard by dragging them from the Content or Toolbox tabs. Using the same method, you can add reports, report parts, TM1 objects, metric lists or individual metrics, or, in fact, any object described in "Widgets" on page 167.

You can use the IBM Cognos Business Insight enhanced search feature to find and add relevant content to the dashboard. This feature is a full-text search similar to popular search engines.

Index note: IBM Cognos content must be indexed before you perform a search.

When using the search capability, keep in mind the following rules:
- Search results include only the entries for which you have permission to access at the time of the last index update.
- Searches look for matching prompts, titles, headings, row names, column names, and other key fields.
- Searches include word variations automatically. For example, if you enter camp, results also include camps and camping.
- Searches are not case-sensitive. For example, searching for report and Report returns the same result.
- When using more than one word in a search, the result includes entries that contain all of the search keywords and entries that contain only one of the search keywords. To modify this type of search, use the following operators as you use them in other search engines:
  – A plus sign (+)
  – A minus sign (-)
  – Double quotation marks (" ")
- If a search term matches a specific item on a dashboard, the dashboard is included in the search results, but the individual item is not included.

When the search is complete, the results are ranked according to the search term match relevance.
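The operator rules listed above (implicit OR across keywords, a required term with +, an excluded term with -, quoted phrases, and case-insensitive matching) can be approximated in a few lines. The sketch below only illustrates the semantics; it is not the IBM Cognos search implementation, and its word-variation handling is limited to simple substring matching.

```python
# Illustrative sketch of the search semantics described above -- not the
# actual IBM Cognos search engine. Bare keywords match any entry (OR),
# "+" marks a required term, "-" excludes a term, and double quotation
# marks group an exact phrase. Matching is not case-sensitive.
import re

def matches(query: str, text: str) -> bool:
    text_lc = text.lower()
    # Pull out quoted phrases first, then split the rest into bare terms.
    phrases = re.findall(r'"([^"]+)"', query)
    rest = re.sub(r'"[^"]+"', " ", query).split()

    required = [t[1:].lower() for t in rest if t.startswith("+")]
    excluded = [t[1:].lower() for t in rest if t.startswith("-")]
    optional = [t.lower() for t in rest if t[0] not in "+-"]
    optional += [p.lower() for p in phrases]

    if any(t in text_lc for t in excluded):
        return False
    if not all(t in text_lc for t in required):
        return False
    # Entries that contain all keywords or only one of them are included.
    return not optional or any(t in text_lc for t in optional)

print(matches('camp -golf', "Camping Equipment report"))               # True
print(matches('+revenue "product line"', "Revenue by product line"))   # True
```

Because the match is a plain substring test, "camp" incidentally also matches "camps" and "camping", loosely mirroring the word-variation rule.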
The ranked results display as shown in Figure 6-17.
Figure 6-17 Result of the search in IBM Cognos Business Insight

Search note: In addition to the IBM Cognos content, you can search annotations and IBM Lotus Connections dashboard activities.

After a search is complete, you can refine the search results using the following filters (see Figure 6-18):
- Result: Shows only report parts or hides report parts
- Type: The type of IBM Cognos object, such as dashboards, reports, or queries
- Part: The type of report part, such as crosstab, list, or pie chart
- Date: The year of creation
- Owner: The owner of the object
- Metadata: The metadata or packages that were used to create this object
Figure 6-18 Refine search option

Now, back to our scenario. Lynn Cope made changes to a dashboard, and it now looks as shown in Figure 6-16 on page 183. She wants to add a report that contains marketing data for the Great Outdoors company promotions, including gross profit information. To add this report:
1. Locate the Search window in the upper-right corner of the IBM Cognos Business Insight user interface, type promotion revenue, and press Enter.
2. Refine the search by clicking Refine Search. A window opens next to the search results, as shown in Figure 6-19. Go to the Metadata section, and click GO Data Warehouse (query) to narrow the result set.
Figure 6-19 Search for objects containing "promotion revenue"
3. Among the search results, locate the Promotion Data (Revenue vs Gross Profit) report, and drag it onto a dashboard.
4. To close the search and return to the standard Content view, click Search Results for "promotion revenue" → All Content, as shown in Figure 6-20.
Figure 6-20 Closing a search
5. To change the color palette of the report, click Change color palette → Jazz on the widget toolbar, as shown in Figure 6-21.
Figure 6-21 Changing the color palette of the widget
The dashboard now looks as shown in Figure 6-22.
Figure 6-22 Modified dashboard

6.3.3 Sort and filter data and perform calculations
Apart from changes in the visual display of data in reports, you can interact with report widgets and apply custom sorting and ranking, you can add basic calculations using data in the report, and you can filter data. We describe these features in this section.

Sorting data
Sorting organizes data in either ascending or descending order, based on an alphabetical or numeric value. In IBM Cognos Business Insight, you can sort lists, crosstabs, and charts. Sorting is useful when you want to see, for example, revenue data from the highest to the lowest: you can sort on a column that lists revenue in descending order.

When sorting data, consider the following rules:
- For crosstab reports with sorting applied in IBM Cognos Report Studio, the sorted information does not display in the information bar in the report widget. However, with IBM Cognos Analysis Studio, IBM Cognos Business Insight Advanced, or IBM Cognos Query Studio, the sorted information displays in the information bar in the report widget.
- Sorting by value is not supported on the outer edges of a nested crosstab, on nested measures, or on rows and columns based on single dimensional members in relational crosstabs.
- Sorting by label is not available in crosstab reports for summary rows or columns.

In this scenario, Lynn Cope wants to use the possibility to sort the data in a report on a dashboard. For the Revenue and sales target by region report, she sorts the Revenue column to display the regions with the highest revenue at the top of the report. This sorting makes it easier for senior management to identify the best performing regions.

To sort this data:
1. On the Revenue and sales target by region report, click the information bar to see the current sorting on the report (as shown in Figure 6-23). Notice that the report is sorted in ascending order by the label Branch region.
Figure 6-23 Information bar with current sorting status
2. To sort the report on the Revenue column in descending order, click the Revenue column, as shown in Figure 6-24. Then, click Sort Descending on the toolbar.
Figure 6-24 Sorting column in a report
The report now looks as shown in Figure 6-25.
Figure 6-25 Report with sorted column

Adding simple calculations
In IBM Cognos Business Insight, you can perform basic calculations for list and crosstab reports using data from one or more report items (for example, to divide the values from two columns). IBM Cognos Business Insight includes the following calculations:
- Sum (+)
- Difference (-)
- Multiplication (*)
- Division (/)
- Difference (%)

Performing more complex calculations: If you need to perform more complex calculations, click Do More to open the report in IBM Cognos Business Insight Advanced.

Calculation results: The results of a calculation are not stored in the underlying data source. Instead, the results are always based on the current data in the data source. IBM Cognos Business Insight reruns the calculation each time the report is refreshed.

Lynn Cope wants to modify the Promotion Data (Revenue vs Gross Profit) report to convert the report to a list report, to add one additional column (Gross Profit Margin = Gross Profit / Promotion Revenue), and to filter the report to obtain only the campaigns that are the most profitable. To modify the report:
1. Go to the Promotion Data (Revenue vs Gross Profit) report. First, convert the chart to a list report: on the widget toolbar, click Change Display Type → List Table.
2. Next, to add the calculated column, follow these steps:
   a. Select the source columns by clicking the "Promotion revenue" column, pressing the Ctrl key on the keyboard, and clicking the "Gross Profit" column. Then, click Calculate → Gross Profit/Promotion Revenue, as shown in Figure 6-26, to insert an additional column with a default name of Gross profit/Promotion revenue.
Figure 6-26 Perform simple calculation
   b. To rename the column, right-click the column, and click Rename. Enter Gross Profit Margin as the name. The report now looks as shown in Figure 6-27.
Figure 6-27 Report with added calculated column
   c. To move the newly created column to the last position in a list report, right-click it, and click Move Right on the menu.
3. Next, narrow the data in the report to display only the campaigns with a high Gross Profit Margin (for example, >0.4). Right-click the Gross Profit Margin value for the Extreme Campaign (value 0.41119418), and click Filter >= 0.41119418. The report now looks as shown in Figure 6-28.
Figure 6-28 Promotion Data report after filtering
4. Resize the report widget.
5. Click Actions Menu, and click Save to save this version of the dashboard.

Note that the changes that you made are saved with the dashboard when you save it, but the original reports are not changed. When you save a dashboard for the first time, a copy of each report widget is created for the saved dashboard. After you open and change the report (for example, you apply a sort or add a calculation), the changes are saved in this copy. If you want to revert to the original report, use the Reset option on the widget Actions Menu button. When the report content is reset, any changes that you made to the content are lost.

Using the Reset option: The Reset option is not available for saved output reports or for reports where the original report was deleted or disabled.

Filtering
Filtering is a way to narrow the scope of data in reports by removing unwanted data. As shown in the previous example, only the data that meets the criteria of the filter displays. You can only filter data by selecting values from a report; you cannot type the value manually.

Applying more detailed filtering: To apply more detailed filtering to the report, click Do More to open the report in IBM Cognos Business Insight Advanced.
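The calculated column and threshold filter from the scenario amount to a derived ratio plus a cutoff test. The following minimal sketch shows the same arithmetic; the campaign figures are made up for illustration and are not the Great Outdoors sample data.

```python
# Sketch of the Gross Profit Margin calculation and the >= 0.41119418
# filter from the scenario. The figures below are illustrative only.
campaigns = [
    {"campaign": "Extreme", "promotion_revenue": 1_000_000.0, "gross_profit": 411_194.18},
    {"campaign": "Other",   "promotion_revenue":   800_000.0, "gross_profit": 200_000.0},
]

# Gross Profit Margin = Gross Profit / Promotion Revenue (the added column)
for row in campaigns:
    row["gross_profit_margin"] = row["gross_profit"] / row["promotion_revenue"]

# Keep only the most profitable campaigns, as the widget filter does
threshold = 0.41119418
profitable = [r for r in campaigns if r["gross_profit_margin"] >= threshold]
print([r["campaign"] for r in profitable])  # ['Extreme']
```

As in the dashboard, the derived margin is recomputed from the current rows each time, rather than being stored back into the source data.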
You can find the information about all the filters that are applied to the report on the information bar. In the case of our previous example, the applied filters look as shown in Figure 6-29.
Figure 6-29 Information bar displaying applied filters

Note that if you apply a filter or sort to data in a table report that is changed to a chart, the information bar displays the filter and sort information in the chart. However, you cannot filter on chart data in the report widget by using the filter actions from the report widget toolbar or context menu.

Users can filter the data on reports using one of the following options:
- Prompt: You receive prompts to select the parameter values before the report runs. Based on the parameter values that you select, the report is filtered.
- Filter in an individual report widget using filter actions: You can filter the individual report widget using filter actions on numeric and non-numeric values.
- Slider or select value filter: We discuss advanced filtering using slider filters and select value filters in the next section.

When filtering on values that are non-numeric in a list or in crosstab reports, you can use the Include or Exclude conditions; all selected values are included in the condition. You can select multiple non-numeric values (in list reports within the same column, and in crosstabs in column or row headings) on which to filter, for example the name of the campaign, as shown in Figure 6-30 on page 196.
Figure 6-30 Filtering non-numeric data

For numeric data, you can use conditions (for example >, <, >=) if only one value is selected, or you can use Between and Not Between if two values are selected (see Figure 6-31).
Figure 6-31 Filtering numeric data

In the case of compound reports that consist of more report parts, if all parts share the same query, a filter applied to one report part is also applied to the other. If the query is not shared, the filter is applied only to the selected report part within the report widget.

If you want to remove a filter from a report widget, on the information bar, click the delete icon next to the filter that you want to remove, as shown in Figure 6-32.
Figure 6-32 Removing filter

Note that you can delete only filters that are created using one of the following methods:
- The filter button
- The filter context menu
- The slider filter
- The select value filter
You cannot remove filters applied in IBM Cognos Analysis Studio, IBM Cognos Query Studio, or IBM Cognos Report Studio in this manner.

6.3.4 Use advanced filtering
Filtering data in the report widget using a slider widget or a select value filter widget filters data in all reports that communicate with that particular filter. When you select a value on a filter widget, the report widget refreshes to display the filtered data items that you selected. By default, if you have a select value filter for regions, it filters all reports that have regions as a data item; it filters data only on those reports that communicate with that filter.

By default, widgets communicate with each other. Based on the type of interaction, the following types of widgets are available:
- A source widget is a widget that is broadcasting information. Filter widgets broadcast information (sending the data based on your input or selection). Image and RSS feed widgets are also source widgets. For example, an image widget can broadcast a specified URL in a web page widget when the image is clicked.
- A target widget is a widget that is listening to other widgets. The results of actions in the source widgets are shown in the associated target widgets.

Report widgets can be both source and target widgets. For example, report widgets can interact with each other and with filter widgets. By default, two report widgets listen to each other: if they are based on the same dimensionally-modelled data source and if the report contains items from the same hierarchy, drilling in one report widget affects a drill in the other report widget.

If you do not want a target widget to receive information from any or all source widgets, you must disable the communication in the target widget. You can also choose to disable some widget events while leaving other widget events enabled. For example, you might want a widget to listen to filter events and to not listen to drill events from another widget.

Using filter widgets
Filter widgets are especially useful if you have several reports on a dashboard that share the same data items. In our business scenario, Lynn Cope wants to add a select value filter for regions to make filtering easier for the users of the dashboard. After adding a select value filter, Lynn notices that the filter is impacting one report that she does not want to filter. She needs to modify the communication between these widgets to change this behavior.

The Great Outdoors company Sales Dashboard currently looks as shown in Figure 6-33.
Figure 6-33 Dashboard before adding filter widget

To modify the dashboard to change the communication between widgets:
1. Drag Select Value Filter from the Toolbox tab to the dashboard. The Properties - Select Value Filter window opens. Select Region; you can filter on the Region data item for the Revenue Planned versus Actual and Gross Profit by Region reports, as shown in Figure 6-34. Leave the default setting for the remainder of the options, and click OK.
Figure 6-34 Select Value Filter properties window
The widget opens on the dashboard (see Figure 6-35).
Figure 6-35 Select a value filter widget by region
2. Select values for Central Europe, Northern Europe, and Southern Europe, and click Apply. Note that the Revenue Planned versus Actual and Gross Profit by Region reports refresh and now display data just for the selected regions, as shown in Figure 6-36.
Figure 6-36 Dashboard with filter applied for the region
3. You do not want to filter the data on the Revenue Planned versus Actual report, so you can remove filtering on that report widget. Click Action, and then click Listen to Widget Events.
4. Scroll down to Select Value Filter, and clear that option, as shown in Figure 6-37. Now, this widget will not communicate with the select value filter widget.
Figure 6-37 Listen for Widget Events window
5. Go to the filter widget, select Americas and Asia Pacific, and click Apply. The Gross Profit by Region report is filtered again and now shows data for these two regions. However, the Revenue Planned versus Actual report remains the same, because it is not listening to the filter widget anymore (see Figure 6-38).
Figure 6-38 Filtering report after changes in Listening for Widget Events properties
6. You want the Revenue Planned versus Actual report to display data for all regions, so you remove the filtering that was applied previously with the select value filter widget. Go to the information bar, and remove the filter as shown in Figure 6-39.
Figure 6-39 Removing filter from a report widget

The dashboard now looks as shown in Figure 6-40.
Figure 6-40 Modified dashboard

6.3.5 Add non-BI content to a dashboard
In addition to IBM Cognos BI content, you can add non-BI content, such as images, web pages, text, or RSS feeds, to a dashboard. In our business scenario, Lynn Cope wants to include stock exchange reports and news from various websites. To add non-BI content:
1. Drag the RSS Feed widget from the Toolbox pane to the dashboard.
2. In the Properties - RSS Feed window, shown in Figure 6-41, enter the following URL:
ibm.com/press/us/en/rssfeed.wss?keyword=null&maxFeed=&feedType=RSS&topic=80
Assure that the Atom feed URL (*.ibm.com in this case) is added to the trusted domain list that is defined in the IBM Cognos Configuration tool.
Figure 6-41 Add RSS Feed to a dashboard
3. Click OK. The widget is added to the dashboard (see Figure 6-42).
Figure 6-42 RSS Feed widget

6.3.6 Work with report versions and watch rules
Usually, reports are run directly against the underlying data source so that they reflect the latest data. However, at times, reports can use older data for comparisons. For example, you might want to see older data to compare monthly revenue for a region before and after features are added. Also, if reports are running against a data warehouse that is refreshed once daily, you do not need reports that are executed multiple times during working hours on the same data set. In these types of scenarios, you can use the report output versions in report widgets. Report outputs are saved when the report runs in the background. For the output versions of the reports, you can choose to view the saved report output versions (by default, it is the latest saved output version) or to view the live version of the report, as illustrated in Figure 6-43.
Figure 6-43 Report version options

Reports in HTML format: Only report versions saved in HTML format are supported in IBM Cognos Business Insight.

Alternatively, you can define watch rules to monitor events of interest. The watch rule sends an alert when a specific condition in a saved report is satisfied.
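The choice just described, between reusing a saved output version and running the report live, is at bottom a freshness check against the warehouse refresh cycle. The sketch below is purely illustrative: the function names and the once-daily interval are assumptions mirroring the example above, and this is not how IBM Cognos implements report versions.

```python
# Illustrative sketch of the report-version trade-off described above:
# reuse the saved output while the data warehouse has not been refreshed,
# and run the report live only when the saved version is stale.
from datetime import datetime, timedelta

REFRESH_INTERVAL = timedelta(days=1)  # warehouse refreshed once daily

def run_report_live() -> str:
    return "<html>fresh report</html>"  # stand-in for a live query

def report_output(saved_at: datetime, now: datetime, saved_html: str) -> str:
    if now - saved_at < REFRESH_INTERVAL:
        return saved_html               # saved output version is still current
    return run_report_live()            # re-run against the data source

saved = datetime(2011, 5, 2, 8, 0)
print(report_output(saved, datetime(2011, 5, 2, 17, 0), "<html>saved</html>"))  # <html>saved</html>
```

The point of the sketch is simply that, with a once-daily refresh, every request within the same day can be served from the saved HTML output without re-querying the warehouse.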
Watch rules are based on event conditions that are evaluated when the report is saved. Watch rules can generate one of the following types of alerts:
- Send the report by email
- Publish a news item
- Send a notification to the task inbox

Enabling watch rules: The report owner must enable watch rules for the report to allow users to add a watch rule for the report. For the details, refer to the IBM Cognos Connection User Guide.

The next example shows how to use watch rules in the business use case of the Great Outdoors company. Lynn Cope created a list report with the campaigns by product lines and the gross profit. She noticed the negative Gross Profit in the case of the Extreme Campaign for the Outdoor Protection product line. To enable other users to monitor that result and to take measures if necessary, she wants to add a watch rule to that value.

To add a watch rule:
1. Open IBM Cognos Business Insight.
2. Open the Business Insight Source Reports folder, navigate to the folder where you imported the deployment archive that we provided with the additional materials accompanying this book, and add the Campaigns by product lines report to the dashboard.
3. Click Actions Menu, click Versions, select the Create New option, and inspect the options that are available.
4. To add a watch rule to the negative Gross Profit value for the Extreme Campaign, right-click the intersection of Gross Profit and Outdoor Protection Extreme Campaign, and click Alert Using New Watch Rule, as shown in Figure 6-44. A window opens where you can specify the rule.
Figure 6-44 Add new watch rule
5. Select the "Send an alert based on thresholds" option. Leave the performance pattern as "High values are good." Enter the value 10000 in the first box and 0 in the second (see Figure 6-45). Click Next.
Figure 6-45 Watch rule specification
6. A window opens where you can specify an alert type. You can set up a watch rule to send different alerts depending on the performance status of a condition (good, average, and poor). Set the alert to send an email in the case of average performance and to publish a news item in the case of good performance. Make a selection as shown in Figure 6-46.
Figure 6-46 Alert type specification
7. Define the list of users that you want to receive the email by clicking Edit the options for Send a notification.
8. Define the headline and text of the news item by clicking Edit the options for Publish a news item. Click Next.
9. Enter the following text as the name for the watch rule: Gross Profit for Outdoor Protection has met a threshold condition
10. Click Finish.

The watch rule is added, and you can view it if you click Watch New Versions on the report widget toolbar, as shown in Figure 6-47.
Figure 6-47 Watch new versions menu

6.4 Collaborative business intelligence
Collaboration plays an important role in decision making and resolving any business issues. Creating reports and dashboards and analyzing data are tasks that are performed by individual users. However, when it comes to making business decisions based on that information, a team of users is typically involved.
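Before moving on, note that the threshold watch rule configured in the previous section amounts to classifying the saved value against two boundaries and picking an alert action. The sketch below uses the scenario's boundaries (10000 and 0) and alert choices; treating poor performance as an email notification is an assumption, because the scenario configures only the average and good cases. This is illustrative code, not the IBM Cognos implementation.

```python
# Illustrative sketch of the threshold watch rule from the scenario:
# boundaries 10000 and 0, performance pattern "high values are good",
# email on average performance, news item on good performance.
def performance_status(value: float, high: float = 10000.0, low: float = 0.0) -> str:
    if value >= high:
        return "good"
    if value >= low:
        return "average"
    return "poor"

def alert_for(value: float) -> str:
    status = performance_status(value)
    actions = {
        "good": "publish a news item",
        "average": "send an email",
        "poor": "send an email",  # assumption: the scenario leaves poor unspecified
    }
    return f"{status}: {actions[status]}"

# The negative Gross Profit for the Extreme Campaign falls below 0 -> poor
print(alert_for(-5342.0))   # poor: send an email
print(alert_for(25000.0))   # good: publish a news item
```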
212 IBM Cognos Business Intelligence V10.4.1 Create annotations Comments or annotations allow users to collaborate with other members of the team on the content of an individual report on the dashboard.Users can share a dashboard with other colleagues using various methods: Email a link to the dashboard using the Email Link option on the Actions Menu button. such as low sales figures for a product that was recently released and has been on the market for a few months. Note that you cannot print the entire dashboard. Export individual reports to any of the following formats: – – – – – PDF Microsoft Excel 2007 Microsoft Excel 2002 CSV XML In addition. you can use Ctrl+P to use the web browser printing. they cannot access it. To print the entire dashboard. This option opens Adobe® Reader with a PDF version of a report with full data and a preview of how that data will print. comments can be a reminder to investigate low sales results in a particular region or an explanation of some anomalies in data. These users can also add further comments about that report. Print individual reports to PDF format using the Print as PDF option on the Actions Menu button of a report widget. These comments are visible to other users who view the same report.1 Handbook . For example. You can achieve collaboration using one of the following methods: Annotations IBM Lotus Connections activities 6. you can collaborate with other users while creating reports or monitoring dashboards in IBM Cognos Business Insight. Your email client opens with a message that is populated with the dashboard name in the subject line and the link to the dashboard in the message body. Send a URL in an instant message or put the URL in a document using the Copy Link to Clipboard option on the Actions Menu button. Figure 6-48 Comments on individual cells in report Chapter 6. You can comment live reports and saved output versions. 
You can add comments to the following elements:
- Reports or report parts
- Data items in reports and report parts
- Individual cells in list and crosstab reports

You can add comments by selecting the required report cell or report widget and clicking Comment in the widget toolbar. All users who can access the report can see comments that are added to it.

Adding or editing comments: To add or edit comments, users must have execute access for live reports and read and traverse access for saved output versions.

The value of a cell is added to the comment automatically by default. The comment is added to the cell, not to the value, so the original value stays in the comment after the report is refreshed. In the example shown in Figure 6-48, when the report is refreshed with data and the cell value changes (perhaps the percentage is significantly lower), the value that was current when the comment was written remains in the comment.

A comment is specific only to the cell in the current report. If you have another report that has the same cell (in the previous example, the same cell is the Percentage of customers who returned a product with the reason listed as Wrong product shipped), that report does not include the comment added previously. These reports are not linked and do not share the comments.

If there are multiple comments for the same cell or report widget, they display in reverse chronological order. As shown in Figure 6-48, for each comment you can see the user's name, the date, and the time the comment was written.

You can add, edit, or delete comments during the current dashboard session. You cannot edit or delete the comments added by other users.

Dashboard note: When the dashboard is closed, it is no longer possible to edit or delete comments from that session.

When printing a PDF version of a report, or exporting a report to PDF or Excel output, the comments are included.

6.4.2 IBM Lotus Connections activities

One step further from collaborating by using comments is setting up activities in a web-based collaboration tool. IBM Lotus Connections is a collaboration service that allows users to interact in an online location where they can create and share ideas and resources. Users can post messages, share files, link to websites or their dashboards, and create and assign to-do items. Users can include stakeholders or other interested parties involved in the decision-making process.

Because activities are integrated with IBM Cognos Business Insight, users can use activities for collaborative decision-making in a single place. For example, users can use activities to post a link to their IBM Cognos Business Insight dashboard so that other users can use it for future analysis, or to track and audit decisions and initiatives.

When you want to collaborate with other members of the team to resolve an issue or to perform an investigation, start a dashboard activity from the application bar (see Figure 6-49) to create an IBM Lotus Connections activity that is connected to that particular dashboard. After that, you can work with the activity in IBM Lotus Connections.

Figure 6-49 Start a dashboard activity in Business Insight

In IBM Lotus Connections you can complete the following activities:
- Add members to an activity and change the access to an activity
- Add entries to an activity, for example additional files or bookmarks
- Add to-do items and assign them to activity members
- Add comments
- Complete to-do items or mark an activity as complete

In IBM Cognos Business Insight, you can view the list of activities that are started for that dashboard. For each activity, the activity title, the name of the user who performed the last update, and the date and time of the update are shown, and the activity priority and due date are reported if they are set up. If you expand an activity by clicking More, a summary of the last three updates and the activity goal displays. When clicking an activity or a specific entry within it, the activity opens in IBM Lotus Connections.

Chapter 7. Self service interface for business users

This chapter provides an overview of the features of IBM Cognos Business Insight Advanced, including statistics services and lineage and search features. Executing the step-by-step instructions that we include in this chapter, based on fictitious business scenarios, you can become familiar with IBM Cognos Business Insight Advanced and how it can address real business situations.

In this chapter, we discuss the following topics:
- Explore the IBM Cognos Business Insight Advanced interface
- Choose a reporting style
- Change existing reports
- Create content
- Search for meaningful information
- Summarize data and create calculations
- Add filters to refine data
- Add external data
- Create a package with the Self Service Package wizard
- Create statistical calculations

© Copyright IBM Corp. 2010. All rights reserved.

7.1 Explore the IBM Cognos Business Insight Advanced interface

IBM Cognos Business Insight Advanced is a web-based tool that is used by advanced business users and professional report authors and analysts to create and analyze reports, crosstabs, and charts. This tool allows users to work with both relational and dimensional data sources, and to take advantage of the interactive exploration and analysis features while they build reports. The interactive and analysis features allow them to assemble and personalize the views to follow a train of thought and generate unique perspectives easily. Its interface is intuitive to allow the minimum investment in training.
Objective of this chapter: The objective of this chapter is to give an overview of the major features of IBM Cognos Business Insight Advanced. This chapter does not include all the features. For more information, refer to the IBM Cognos Business Insight Advanced User Guide.

IBM Cognos Business Insight Advanced allows users to create reports using relational or dimensional styles, as well as external data, and allows them to show their data in lists, crosstabs, and charts. However, it is important that you choose a reporting style that helps users make the most of their data and avoid mixing dimensional and relational concepts; otherwise, reports can display unpredictable results.

The interface consists of the following key areas (see Figure 7-1):
- Page layers
- Context filters
- Insertable Objects pane
- Page navigation
- Work area
- Properties pane

Figure 7-1 IBM Cognos Business Insight Advanced user interface (the figure labels the Insertable Objects pane, Page layers area, Context filter area, Properties pane, Work area, and Page navigation)

Using IBM Cognos Business Insight Advanced: IBM Cognos Business Insight Advanced is not a replacement for IBM Cognos Query Studio or IBM Cognos Analysis Studio. If you have reports that were created in these studios, you must recreate a new version of the reports in IBM Cognos Business Insight Advanced if you want to use this studio for those reports.

7.1.1 Page layers

The Page layers area is used to create sections or page breaks in reports. For example, to analyze the Gross Profit metric by region on separate pages for each year, you must add the Time dimension to the Page layers area. When you add a dimension level or dimension members in this area, notice that one block with the current selection of the hierarchy is created in your report, as shown in Figure 7-2. To change the section, click the arrows in the page navigation.
Figure 7-2 Adding Page layers

Page layers: The Page layers configuration is applied to the entire report.

7.1.2 Context filters

The Context filters area is used to filter reports for separate contexts of information. For example, to analyze the Gross Profit metric by region but only for web sales, you can add the order method web to the Context filter. When you add a hierarchy or members of a hierarchy in this area, you will notice that one block with the context selection is created in your report, as shown in Figure 7-3.

Context filters: The Context filters configuration applies only to the selected object. If you need to apply the same Context filters for two or more objects, you must select each object and then add the desired dimension member.

Figure 7-3 Adding Context filters

Adding a dimension member: You cannot add the same dimension level or members of a dimension both in the Page layers and Context filter areas, because one configuration suppresses the other. When you add a dimension member from a hierarchy that is used in the other configuration, IBM Cognos Business Insight Advanced removes the prior configuration automatically. For example, if you have Year in the Page layers section and you add Quarter to Context filters, the Page layers configuration is reset and the Quarter is placed in the Context filter section.

7.1.3 Insertable Objects pane

The Insertable Objects pane contains the objects that you can add to the reports. These objects are grouped in the following tabs:
- Source: This tab shows the data model. What you see in this tab depends on the selection that you made in the Insertable Objects Source toolbar.
- Toolbox: This tab contains all the objects that you can add to your report to improve the readability, separate contents, and create labels.

As shown in Figure 7-4, the Source toolbar contains shortcuts to set properties that impact the behavior of the report when you insert data from a dimensional model, and to allow users to add external data, such as spreadsheets or comma-separated value (CSV) files.

Figure 7-4 Insertable Objects Source toolbar

View Members Tree

For dimensional models or Dimensionally Modeled Relational (DMR) models, this view displays measures folders, measures, and dimensions. Inside each dimension, you can find dimensional members and metrics, or query subjects and their query items. Exploring this data model, the user can see live data (Figure 7-5) and use this data to create reports easily by dragging the items into one of the areas, such as the Work, Page layers, or Context filter areas.

View Member Tree options: The View Member Tree options are not displayed when users select a package that contains only a relational data model.

Figure 7-5 View Member Tree displaying members

View Metadata Tree

The content of this view depends on the data model that is displayed. If you expand a dimensional data model (Figure 7-6), this view displays these items:
- Folders
- Namespaces
- Measure folders
- Measures
- Dimensions
- Hierarchies
- Levels

Figure 7-6 Dimensional data source displayed on View Metadata Tree
Large reports: Do not include large reports in dashboards (IBM Cognos Business Insight workspace). labels. and hyperlinks (Figure 7-8). These icons allow users to scroll down. and improve the layout. images.1 Handbook .4 Page navigation The icons in this page navigation area become enabled if the report retrieves more than one page. Figure 7-9 Page navigation on first page of report 226 IBM Cognos Business Intelligence V10.1. create new columns with calculations. such as lists. scroll up.Toolbox tab This tab shows objects that allow users to display data. this area shows live data and the user can interact with it. this area does not show live data. charts. What to do if you see an asterisk (*) character: The following behavior can happen when you use IBM Cognos Transformer cubes or SAP Business Warehouse (BW) data sources.7.6 Properties pane This pane displays the formatting options that are available for a selected object in a report. Ancestor button The Properties pane has a icon called Ancestor. crosstabs. which allows users to select any part of a selected object. Self service interface for business users 227 .1. such as separate currencies in the calculation or roll-up 7. crosstab. or chart. one of the following conditions was detected: An unknown currency A value with an unknown or questionable unit of measure. If you see an asterisk character (*) in a list.1. and layout components. Chapter 7. If the Page Preview in the View menu is enabled. for example. to change the background of the rows inside a crosstab. You typically use this icon for layout purposes.5 Work area This area contains all the objects that are dropped on the report. If Page Design is enabled. such as lists. for example. 228 IBM Cognos Business Intelligence V10. Figure 7-10 shows the display in the Ancestor properties if the user selects a crosstab cell.1 Handbook .When you select one object and then select the Ancestor icon. a crosstab inside a table. 
this feature is useful when you need to find an object inside another object. all levels above the selected object displays. Figure 7-10 Ancestor properties Also. and XML Tools: Allow Cognos to check the report’s specification. open. suppress data. Microsoft Excel. and report properties Edit: Cut. PDF configuration. and configure the number of rows that is displayed on the Work area when Page Preview is set Structure: Set group configuration. Figure 7-11 Top toolbar menu The following list describes several of the options for each of the menu items: Blue bullet (upper-left corner): Create. swap rows and columns. sort. and delete commands View: Switch between Page Design and Page Preview. such as the interface behavior (Figure 7-11). CVS. copy. enable and disable toolbars and visual aids. show and copy the specification. add headers and footers to reports. This interface also displays a menu at the top of the window that allows users to configure more advanced features. PDF. create calculations. change summarization criteria. and show dimensional analysis features. manage external data. and Drill options Style: Set styles and conditional formatting to objects Run: Allow users to run the report in various output types: HTML. and configure advanced options of the interface behavior Chapter 7. and save reports.Toolbar and menu The Business Insight Advanced interface shows a toolbar with shortcuts for commonly used features. Self service interface for business users 229 . paste. such as Insert Children. Data: Set configurations to filter. and convert lists to pivots. Explore. To make this possible and easy. an Advanced Business User of a GO Americas subsidiary. sort.1 Handbook .2 Choose a reporting style Before users start authoring reports. Do you think about your data as a number of dimensions intersecting at cells? If yes. 230 IBM Cognos Business Intelligence V10. Lynn decides to use this existing report as a base for her analysis.7. 
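The asterisk (*) behavior described above follows a general rule: when a roll-up would combine values with different units (for example, two currencies), a summed number would be meaningless, so a marker is shown instead. The sketch below illustrates that rule only; it is not IBM Cognos code, and the function name and data are invented:

```python
# Hedged illustration of the mixed-unit roll-up rule (invented data).
def roll_up(values):
    """Sum (amount, unit) pairs; return '*' if the units do not match."""
    units = {unit for _, unit in values}
    if len(units) > 1:
        return "*"  # unknown or questionable unit of measure
    total = sum(amount for amount, _ in values)
    return f"{total} {units.pop()}"

print(roll_up([(100, "USD"), (250, "USD")]))  # 350 USD
print(roll_up([(100, "USD"), (250, "EUR")]))  # *
```

In the real product the marker is produced by the data source roll-up, not by the report author; the point is only that a single-currency roll-up yields a total while a mixed one cannot.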
7.2 Choose a reporting style

Before users start authoring reports, it is imperative that they choose the reporting style that they will adopt in working with data: relational or dimensional. The reporting style determines how to insert, sort, filter, and customize reports. Mixing both reporting styles in one report can cause unpredictable results.

When choosing the reporting style, users must answer the following questions:
- Do you think about your data as tables and columns? If yes, use the relational reporting style.
- Do you think about your data as a number of dimensions intersecting at cells? If yes, use the dimensional reporting style.

Resource information: You can use the relational reporting style even when you create reports that are based on a dimensional data source. For more information about leading practices when using relational or dimensional reporting styles, refer to the Relational Reporting Style and Dimensional Reporting Style sections in the IBM Cognos Report Studio User Guide.

7.3 Change existing reports

Lynn Cope, an Advanced Business User of a GO Americas subsidiary, needs to analyze which product lines result in the greatest number of returns so that the GO Americas management can focus on these product lines to decrease the number of returns. Lynn knows that there is one chart inside the IBM Cognos workspace of the Great Outdoors company that already displays the return quantity by product lines, so Lynn decides to use this existing report as a base for her analysis.

To make this possible and easy, the IBM Cognos interface provides a seamless integration between the IBM Cognos Business Insight and IBM Cognos Business Insight Advanced products. Using this integrated approach, Lynn follows these steps:
1. Open the IBM Cognos workspace of Great Outdoors by clicking My Folders, Business Insight, GO Sales Dashboard_6.4 on IBM Cognos Connection.
2. Find the chart for which she is looking, and click Do more on the top of the report's frame (Figure 7-12).

Figure 7-12 Business Insight and Business Insight Advanced integration: Modifying an existing report

After Lynn clicks Do more, the IBM Cognos Business Insight Advanced interface replaces the IBM Cognos Business Insight interface and shows the selected report.

7.3.1 Sort data

With the IBM Cognos Business Insight Advanced interface open, Lynn makes improvements to the report to meet her needs. First, she changes the sorting configuration for the chart to allow her to see the product lines in the chart sorted by the return quantity of their products. To execute this task:
1. Click in the chart area to see the chart components (see Figure 7-13).

Figure 7-13 Selecting the chart area

2. Click Product line in the Series area (Figure 7-14).
3. On the top toolbar, click Sort, and then click Edit Layout Sorting.
4. In the Sort type section, click Descending.
5. Click Intersection (tuple), and click the ellipsis (...), which opens a window that allows you to insert a tuple, as shown in Figure 7-14.

Figure 7-14 Advanced Layout Sorting option: Set Sorting window

6. Click GO Data Warehouse (analysis), Sales and Marketing (analysis), Returned items, Returned items (Figure 7-15).

Figure 7-15 GO Data Warehouse package tree: Expanding Returned items folder

7. Click the Return Quantity data item under the Returned items metrics folder, and drag it onto the Intersection members and measures area (Figure 7-16).

Figure 7-16 After the Return quantity member has been added

8. Click OK. The Set Sorting window opens again.
9. Click OK to close the Set Sorting window.

After executing these steps, Lynn determines that Outdoors Protection is the product line that has the worst performance in terms of the return quantity of all Great Outdoors subsidiaries, as shown in Figure 7-17.

Figure 7-17 Report after sorting has been performed
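What the Set Sorting tuple accomplishes is simply ordering the chart's series members by the value of a measure, largest first. The sketch below shows the same idea outside Cognos; the product lines and numbers are hypothetical, not taken from the Great Outdoors sample data:

```python
# Illustrative sketch only (invented figures): order series members
# (product lines) by a measure (Return quantity), descending.
return_quantity = {
    "Outdoors Protection": 61000,
    "Camping Equipment": 59000,
    "Personal Accessories": 32000,
    "Golf Equipment": 11000,
}

# Sort the series members by the measure value, largest first.
sorted_series = sorted(return_quantity, key=return_quantity.get, reverse=True)
print(sorted_series)  # the worst-performing product line comes first
```

With descending order, the product line with the highest return quantity appears as the first bar in the chart, which is exactly what Lynn sees in Figure 7-17.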
7.3.2 Filter data

To meet her business needs, Lynn needs to filter the report to display only GO Americas and 2007 (current year) totals.

Using Context filters

Because Lynn is working with a dimensional data model, she can easily filter her data using the Context filter feature by following these steps:
1. Navigate to GO Data Warehouse (analysis), Sales and Marketing (analysis), Sales, Organization (Figure 7-18).
2. Click GO Americas under the Organization dimension, and then drag it onto the Context filter area.

Figure 7-18 Adding a Context filter

After performing these steps, Lynn notices that, in fact, the performance of the Outdoors Protection product line and the performance of the Camping Equipment product line in terms of return quantity are extremely close, as shown in Figure 7-19.

Figure 7-19 Report after GO Americas Context filter is applied

The current report shows the total of Return Items for all of the years (2004, 2005, 2006, and 2007). To have the best insight about the return quantity for the current scenario, Lynn needs to filter the report to show only the data for the current year (2007). To achieve this result, she follows the same steps to apply another Context filter for 2007 (Figure 7-20):
1. Navigate to GO Data Warehouse (analysis), Sales and Marketing (analysis), Sales.
2. Drag 2007 under the Year dimension to the Context filter area.

Figure 7-20 Report after 2007 Context filter is applied

Now, Lynn can determine that, in fact, only one product line, the Camping Equipment product line, has an extremely high number of returns.

7.3.3 Perform calculations

If we think about the business problem, a product line with a high number of returns for its products does not necessarily mean a large percentage of returns based on the number of products sold for that product line. To find more meaningful information, Lynn needs to calculate the percentage of returns against the number of products sold. Lynn can create this calculation easily by using the Query Calculation object in the Insertable Objects pane, as shown in the following steps:
1. Click Query Calculation inside the Insertable Objects pane.
2. Drag this object to the same place as the Return quantity metric - Default measure (y-axis), as shown in Figure 7-21 on page 241.
3. Click Calculated measure. In the Name field, type % of Items Return.

Measures: To create this calculation, use the following measures: one measure from the Sales folder and one measure from the Returned Items folder. Do not set the Measure Dimension. For more information about how to use the Measure Dimension, refer to the IBM Cognos Business Insight Advanced User Guide.
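The reasoning behind this calculation can be shown with a toy example. The sold and returned figures below are invented for illustration (they are not the sample database's values); they show how the ranking by absolute returns can differ from the ranking by return rate, which is why the Return quantity / Quantity measure is worth building:

```python
# Hypothetical figures: a line with more returns in absolute terms
# can still have a lower return *rate* than a smaller line.
sold =     {"Camping Equipment": 2_500_000, "Outdoors Protection": 400_000}
returned = {"Camping Equipment": 45_000,    "Outdoors Protection": 14_000}

# The report's new measure: Return quantity / Quantity, per product line.
pct_returned = {line: returned[line] / sold[line] for line in sold}

# Camping Equipment has more returns in absolute terms...
assert returned["Camping Equipment"] > returned["Outdoors Protection"]
# ...but a smaller share of its sold items comes back (1.8% vs 3.5%).
assert pct_returned["Camping Equipment"] < pct_returned["Outdoors Protection"]
```

This is why sorting by the raw Return quantity and sorting by % of Items Return can rank the product lines differently.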
240 IBM Cognos Business Intelligence V10.3 Perform calculations If we think about the business problem. 3. use the following measures: One measure from the Sales folder One measure from the Returned Items folder Do not set the Measure Dimension. type % of Items Return. Lynn can create this calculation easily by using the Query Calculation object in the Insert Objects pane. 2. Drag this object to the same place as the Return quantity metric . a Product line with a high number of returns for its products does not necessarily mean a large percentage of returns based on the number of products sold for the product line. Lynn needs to calculate the percentage of returns against the number of products sold.7. Click Query Calculation inside the Insert Objects pane. as shown in the Figure 7-21 on page 241. Self service interface for business users 241 .Figure 7-21 Creating a query calculation Chapter 7. Drag the [Return quantity1] measure to the Expression Definition area (see Figure 7-22).4. Open the Data Items tab.1 Handbook . Figure 7-22 Inserting a member from the Data items tab into Expression Definition 242 IBM Cognos Business Intelligence V10. expand the Operators folder. Self service interface for business users 243 .5. Open the Function tab. and drag the forward slash character (/) into the Expression Definition area (Figure 7-23). Figure 7-23 Inserting operator into Expression Definition Chapter 7. 1 Handbook . Drag [Quantity] to the Expression Definition (see Figure 7-24).6. Figure 7-24 Inserting [Quantity] item into Expression Definition 244 IBM Cognos Business Intelligence V10. 7. and navigate to Go Data Warehouse (analysis) Sales and Marketing (analysis) Sales Sales fact. Open the Source tab. Figure 7-25 Starting Expression Definition validation Chapter 7. To validate the expression. Self service interface for business users 245 . click the Validate icon at the top of the window (see Figure 7-25 and Figure 7-26).8. 
Figure 7-26 Validation results 246 IBM Cognos Business Intelligence V10.1 Handbook . Under the Intersection (tuple). Figure 7-27 Report with the new percent of Returned items metric After performing these steps. she follows these steps: 1. 5. 6.).. To solve this issue. and then click Edit Set Sorting.8% of its items returned (see Figure 7-27). Lynn determines that the Camping Equipment product line has 1.. Chapter 7. Click % of Returned items. click Return quantity. 2. Click Product line in the Series area. click the Sort icon. On the top toolbar. In Intersection members and measures. Self service interface for business users 247 . 4. Click the Calculate members and measures tab. 3. Lynn realizes that the sorting is not working as expected. Click the left arrow.After these changes. 7. click the ellipsis (. Click OK to close the Members window.1 Handbook .8. 248 IBM Cognos Business Intelligence V10. Figure 7-28 Members window after the Return quantity member has been removed 9. 10.Click OK to close the Set Sorting window. Click the right arrow (see Figure 7-28). After performing these steps. Self service interface for business users 249 . Figure 7-29 Report after sorting change to percent of Returned items Chapter 7. the chart shows the bars in the correct order (see Figure 7-29). 7.3. Figure 7-30 Drill down on the product line 250 IBM Cognos Business Intelligence V10. and the report is updated to show the product types (see Figure 7-30). now Lynn needs to discover which brands have the highest percentage of returned items.4 Set the right level of detail for the analysis To complete her analysis. To drill down into the data. She uses the drill-down and drill-up features as follows: 1.1 Handbook . double-click Camping Equipment. In the Series section. Click in the chart area. On the Set Sorting window. click Descending. Figure 7-31 Drill down on the product type 3. c. b. Click Edit Set Sorting. but she needs to fix the sorting of the report. 
Self service interface for business users 251 . d. After this action.2. Chapter 7. double-click again on the Lanterns product type (see Figure 7-31). To fix the sorting (as shown in Figure 7-32 on page 252): a. click the Sort icon. click the <#children(Lanterns)#> data expression. e. On the top toolbar. After this action. Lynn is satisfied with the results. . Click the right arrow.1 Handbook . click the ellipsis (.f. Click % of Returned items. i. g. Figure 7-32 Set Sorting properties for percent of Returned items 252 IBM Cognos Business Intelligence V10. Click OK to close the Set Sorting window. To the right of the Intersection (tuple) field. Click OK to close the Members window.. k.). l. j. h. Click the Calculated members and measures tab. Click Intersection (tuple). After performing these steps. Figure 7-33 Report after drilling down. Lynn has a report that shows the products that have a higher percentage of returned items (Figure 7-33). Self service interface for business users 253 . Lynn Cope. an Advanced Business User of the GO Americas subsidiary. Chapter 7. ordered by percent of Returned items 7. needs to create two reports that answer the following questions: Are we selling the right products? How does this year’s performance compare with the prior year’s performance? Lynn inserts these reports in the Great Outdoors company workspace.4 Create content Now. 4.1 Handbook . 3. 2. Navigate to Cognos Public Folders Samples Models. Click GO Data Warehouse (analysis) to create a report that is based on this package. Lynn uses IBM Cognos Business Insight Advanced and follows these steps: 1. To create this report. Lynn decides to create a list that shows the bottom ten product sales by region.7. launch Business Insight Advanced (see Figure 7-34). The Select a package window opens.1 Create a crosstab To answer the first question. In IBM Cognos Connection. 
Figure 7-34 Opening IBM Cognos Business Insight Advanced from IBM Cognos Connection 254 IBM Cognos Business Intelligence V10. Click Create new to create a report (Figure 7-35). Self service interface for business users 255 . Figure 7-35 Welcome window of IBM Cognos Business Insight Advanced Chapter 7.4. Chart. Crosstab. crosstabs.1 Handbook .5. 256 IBM Cognos Business Intelligence V10. IBM Cognos Business Insight Advanced provides a flexible approach to develop reports. and lists on the report. Figure 7-36 Default report types on IBM Cognos Business Insight Advanced Flexibility: Even if a you initially selected a List. Click the Crosstab report type (Figure 7-36). you can include other charts. or Financial report type. 6. double-click Double-click to edit text (Figure 7-37). First. rename the text header to Bottom 10 product sales by region. Figure 7-37 Inserting a report name Chapter 7. To insert the text. Self service interface for business users 257 . navigate to Go Data Warehouse (analysis) Sales and Marketing (analysis) Sales Products. Figure 7-38 Switch to View Metadata Tree 8.7. 10. 11. click View Metadata Tree to see the data model structure instead of live data (Figure 7-38). Drag the Quantity metric under Sales Fact to the Columns area. 258 IBM Cognos Business Intelligence V10.Drag the Product level to the Rows area. navigate to Go Data Warehouse (analysis) Sales and Marketing (analysis) Sales Sales fact. On the Insertable Objects pane.In the Insertable Objects pane.1 Handbook . 9. On the Insertable Objects pane. After performing these steps. Figure 7-39 Crosstab after adding columns and rows 12. Chapter 7.Click any product in the rows. 13. Self service interface for business users 259 . Lynn has a list of all products with their quantities (Figure 7-39).Click the Explore icon of the toolbar. Figure 7-40 Using Bottom 10 feature 260 IBM Cognos Business Intelligence V10. 
and then click Bottom 10 based on Quantity (Figure 7-40).14.1 Handbook .Click Top or Bottom. Click the Ancestor icon. Lynn is able to see the bottom 10 performing products (Figure 7-41). Chapter 7. Click Specified text. she can add the text for objects using the Summary text property.). Using the acessibility features of IBM Cognos Business Intelligence Advanced. 4. 2. Figure 7-41 Report after Bottom 10 function has been applied Including accessibility features Now. Click the ellipsis (. Self service interface for business users 261 . 3. To include accessibility features: 1. Click the Summary text property. rows. 5..After performing these steps. 6. Click Crosstab. Click anywhere in the crosstab (columns. Lynn wants to insert a summary text item to be used by screen readers for the crosstab. or metrics) area.. 2. Lynn uses IBM Cognos Business Insight Advanced to access a relational data source of Great Outdoors data and follows these steps: 1. Lynn can save this report to use it in the Great Outdoors company workspace in IBM Cognos Business Insight Advanced. and then insert the summary text for the chosen language (Figure 7-42): – Default text: Bottom 10 product sales by region – Spanish text: Los 10 productos menos vendidos por region Figure 7-42 Adding localized text for internationalization Now.1 Handbook .4. which compares historic information from the current year (2007) and the prior year (2006). 7. To create this report. The Select a package window opens.7.2 Create a chart Lynn needs too answer the second question asked earlier: How does the performance of this year compare with the prior year? She decides to create a chart that shows a line chart. If you want to insert summarization text for multiple languages. 262 IBM Cognos Business Intelligence V10. In IBM Cognos Connection. Navigate to Cognos Public Folders Samples Models. launch Business Insight Advanced. insert the summarization text. In the Default text field. click Add. select the language. 8. 
3. Click GO Data Warehouse (query) to create a report that is based on this package.
4. Click Create New to create a report.
5. Click the List report type.
6. Drag the following items to the list:
   a. Go Data Warehouse (query) > Sales and Marketing (query) > Sales > Time > Month
   b. Go Data Warehouse (query) > Sales and Marketing (query) > Sales > Sales fact > Revenue
7. Click in one column of the list.
8. On the top toolbar, click Insert Chart (Figure 7-43).

Figure 7-43 Creating a chart that is based on a list

9. In the Insert Chart window, in the left pane, click Line.
10. In the right pane, click Clustered Line with Circle Markers (Figure 7-44).

Figure 7-44 Selecting a chart type

11. Click OK.
12. Click in the list, and in the Properties pane, click the Ancestor icon.
13. On the top menu bar, navigate to Edit, and click Delete (Figure 7-45).

Figure 7-45 Deleting option

14. Drag the Revenue metric from Default measure (y-axis) to Series (primary axis), as shown in Figure 7-46.

Figure 7-46 Move Revenue metric from Default measure (y-axis) to Series (primary axis)

After performing these steps, Lynn has a line chart showing the total Revenue of all Great Outdoors subsidiaries by month (see Figure 7-47).

Figure 7-47 Line chart displaying Revenue by Month

To separate the data by years, Lynn follows these steps:
1. In the Insertable Objects pane, navigate to Go Data Warehouse (query) > Sales and Marketing (query) > Sales (query) > Time.
2. Drag Year to the Series (primary axis) area, as a child of Revenue, as shown in Figure 7-48.

Placing a data item: You can choose where to place a data item if there is a data item already in the same area, for example, Years followed by Revenue. When you place a data item inside one of these areas, the following behaviors are expected:
– The entire area flashes. Dropping the item replaces the existing item.
– A flashing black bar displays on the right or left side of the data item. If you place the data item in this situation, the item is included as a new stack of data.
– A flashing black bar displays on the top or bottom side of the data item. If you place the data item in this situation, the item is included as a nested data item.

Figure 7-48 Adding a child member of Revenue in the Series area

After the report refreshes, it displays the Revenue by Year. With this view, Lynn can easily compare the Revenue trends between the years (see Figure 7-49).

Figure 7-49 Revenue by Year

To simplify the presentation, and because Lynn wants to analyze the differences between the current year and the prior year only, she filters the report as follows:
1. Click the chart.
2. On the top toolbar, click the Filters icon, and then click Edit Filters.
3. Click the Add icon. A small window opens to allow users to select if they want to create a complex filter (with AND and OR clauses) or a simple filter that is based on one data item.
4. Ensure that Custom based on data item is selected.
5. In the drop-down list box under the radio button, click Year, and then click OK.
6. In the Filter Condition - Year window, change the operator to filter by values that are greater than or equal to (>=) 2006, as shown in Figure 7-50.

Figure 7-50 Setting Year filter

7. Click OK, and then click OK again.
Override: To allow other users to show another range of years for comparison, Lynn also selects the “Prompt for values when report is run in viewer” option. The prompt displays only when a user runs this report in IBM Cognos Connection or IBM Cognos Business Insight. Notice that when the report is displayed in IBM Cognos Business Insight Advanced, the prompt does not display.

After performing these steps, the report shows the comparison between 2007 (current year) and 2006 (last year), which was the requirement (see Figure 7-51). Lynn also provides flexibility to other users who want to use this report by allowing them to select other years for analysis.

Figure 7-51 Report showing the Revenue comparison between current and prior years

7.4.3 Set conditional formatting

Lynn needs to create a report to deliver to all Great Outdoors subsidiaries that gives highlights about which product lines match their sales targets for the month and which product lines do not. She decides to create a crosstab that compares Revenue with Sales target, is grouped by Product line and Product type, and is filtered by the current year. To create this report, Lynn uses IBM Cognos Business Insight Advanced to access a dimensional data source for the Great Outdoors data by following these steps:

1. In IBM Cognos Connection, launch IBM Cognos Business Insight Advanced.
2. Navigate to Cognos Public Folders > Samples > Models. The Select a package window opens.
3. Click GO Data Warehouse (analysis) to create a report that is based on this package.
4. Click Create New to create a new report.
5. Click the Crosstab report type.
6. In the Insertable Objects pane, click the View Member Tree icon.
7. In the Insertable Objects pane, navigate to Go Data Warehouse (analysis) > Sales and Marketing (analysis) > Sales > Products.
8. Click Camping Equipment under Products.
9. With the Shift key pressed, click Golf Equipment under Products (last item).
10. Drag the selected members to the crosstab (see Figure 7-52).
11. In the Insertable Objects pane, navigate to Go Data Warehouse (analysis) > Sales and Marketing (analysis) > Sales > Time.

Figure 7-52 Adding multiple dimension members to a crosstab
12. Drag the 2007 member under Time to the Columns area (see Figure 7-53).

Figure 7-53 Adding a member with its children

13. In the Insertable Objects pane, navigate to Go Data Warehouse (analysis) > Sales and Marketing (analysis) > Sales > Sales fact.
14. Drag Revenue under Sales fact and place it as a child of the Year members in the Columns area (under the Year column), as shown in Figure 7-54.

Figure 7-54 Adding a child member in the column

15. In the Insertable Objects pane, navigate to Go Data Warehouse (analysis) > Sales and Marketing (analysis) > Sales target > Sales target fact.
16. Drag Sales target, which is under Sales target fact, to the right of Revenue, as shown in Figure 7-55.

Figure 7-55 Adding a nested member in a column

17. Lynn notices that the Revenue and Sales target values do not appear under the 2007 column (total). To correct this situation, she drags the same metrics under the quarters total column, as shown in Figure 7-56.

Figure 7-56 Total line after inserted Revenue and Sales target metrics

18. Lynn notices that there are columns without data in any rows, so she decides to suppress the null values. She can suppress the null values easily by following these steps:
   a. On the top toolbar, click the Suppress icon.
   b. Click Suppress Columns Only (see Figure 7-57).

Figure 7-57 Suppress null data

After performing these steps, Lynn has a crosstab report, which displays Revenue and Sales target by Product lines and 2007 quarters (Figure 7-58). But to meet her requirements, she needs to include the metrics’ values for Product type.

Figure 7-58 Report with crosstab displaying Revenue and Sales target by Product line and 2007 quarters

To include this new level on the report, Lynn can opt for one of three approaches: Expand members (Figure 7-59), Next Level Down (Figure 7-60), or nest the new column by dragging it to the right of Product line (Figure 7-61).

Figure 7-59 Using Expand members feature
Figure 7-60 Using Next Level Down feature
Figure 7-61 Nesting a column

For this report, Lynn decides to use the Next Level Down approach. To implement this approach, she performs the following steps:
1. Click one of the members in the Rows area.
2. On the top toolbar, click the Explore icon.
3. Click Create Next Level Down (Figure 7-62).

Figure 7-62 Inserting a Next Level Down

This approach expands only one row per click. If you want to expand all five Product lines, click each Product line and follow these steps.

Expand Member: To use the Expand Member approach, select one member in the Rows or Columns area, click the Explore button on the top toolbar, and click Expand Member.

After Lynn performs these steps, the report shows Product line and Product type in the Rows area.

To make the analysis for the executives easier, Lynn decides to add a calculated column that shows the percentage between current Revenue totals and Sales targets. She can easily include this column by using the calculation features that are included in IBM Cognos Business Insight Advanced and by following these steps:
1. In the Columns area, click Revenue.
2. Press Shift, and click Sales target.
3. On the top toolbar, click the Insert Calculation icon.
4. Click % (Revenue, Sales target).
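The pre-built % (Revenue, Sales target) calculation is simply revenue expressed as a percentage of its target. A minimal sketch of the same arithmetic, together with the 90%, 100%, and 120% thresholds that Lynn later uses for conditional styling (the sample figures and band names here are hypothetical, not part of Cognos):

```python
# Sketch of the "% (Revenue, Sales target)" calculation and the
# 90/100/120 threshold bands used for conditional styling.
# Band names and sample figures are hypothetical.
def percent_of_target(revenue, target):
    """Revenue expressed as a percentage of its sales target."""
    return 100.0 * revenue / target

def band(pct):
    """Assign a percentage to one of the conditional-style intervals."""
    if pct < 90:
        return "below 90%"
    if pct < 100:
        return "90% to 100%"
    if pct < 120:
        return "100% to 120%"
    return "above 120%"

pct = percent_of_target(950_000, 1_000_000)
print(round(pct, 1), band(pct))
```

Applied to a real crosstab, every Product line row would fall into one of the four bands, which the conditional style then renders visually.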
Notice that the formulas that are shown use your data and reflect the formula that IBM Cognos Business Insight Advanced creates (see Figure 7-63).

Figure 7-63 Inserting a calculated column using pre-built calculations

Lynn also performs the same steps for the 2007 column. After Lynn performs these steps, the report is displayed with Revenue, Sales target, and the percentage calculation between these two columns by Product line and Product type (see Figure 7-64).

Figure 7-64 Report after adding calculations

Executives need to make decisions and give answers quickly. To improve the readability of this report, Lynn decides to configure conditional formatting for the percent column, which helps executives to differentiate good and bad performance quickly. To implement this conditional formatting, Lynn follows these steps:
1. Click one of the values in the % (Revenue, Sales target) column.
2. On the top menu, click Style > Conditional Styles (Figure 7-65).
3. In the Conditional Styles window, click the Add icon.
4. Click New Conditional Style.
5. Click % (Revenue, Sales target), and click OK.

Figure 7-65 Setting a new conditional style for % (Revenue, Sales target) metric

6. In the New Conditional Style window, in the Name field, type % target.
7. In the Conditional Style - Numeric Range window, click Add in the lower-left corner of that window. Type 90%, and click OK.
8. Click Add. Type 100%, and click OK.
9. Click Add. Type 120%, and click OK.
10. Change the values to the drop-down list box options that are displayed in the Style column, as shown in Figure 7-66. To change the style, click the pencil icon on the right side of the style drop-down list box.
11. Click OK twice to exit the Conditional Styles windows.

Figure 7-66 Add intervals to a Conditional Style

Condition style options: Users can use the pre-built styles or create their own styles for a condition.

After Lynn performs these steps, the report displays, as shown in Figure 7-67.

Figure 7-67 Example of one Product line’s values after applying a conditional style

To improve the readability of the report, Lynn wants to change the label of the new percentage columns. She follows these steps for the percent columns under the quarters and the total columns:
1. Right-click the column and click Edit Data Item Label (Figure 7-68).
2. In the Data item name field, type % of target.
3. Repeat these steps for the calculation under the 2007 column.

Figure 7-68 Changing the label of a column

Now, Lynn needs to deliver this report to the six Great Outdoors subsidiaries with a clear separation between the performance of each region. To separate the information for each subsidiary, she drags the Organization member to the Page layers area, as shown on Figure 7-69. Finally, she renames the report to Sales Revenue x Target.

Figure 7-69 Adding a member on the Page layers area

After all these steps, Lynn can deliver reports comparing the current revenue and sales targets for 2007 for each subsidiary (Figure 7-70 and Figure 7-71).

Figure 7-70 Report to be delivered to GO Accessories subsidiaries
Figure 7-71 Report to be delivered to GO Americas subsidiaries

7.4.4 Analyze the execution query path

The Lineage feature of IBM Cognos makes it easy for report authors and business analysts to examine the origin of the data that is used in the reports and their query paths. Lynn Cope, an Advanced Business User, wants to understand how the Planned Revenue metric is calculated. She uses the Lineage feature to help her identify what calculation was applied to create the metric and if there is a filter applied for it.

Lineage feature: You can use the Lineage feature when executing a report on IBM Cognos Viewer.

To analyze the execution query path:
1. In IBM Cognos Connection, launch Business Insight Advanced.
2. Navigate to Cognos Public Folders > Samples > Models. The Select a package window opens.
3. Click GO Data Warehouse (analysis) to create a new report that is based on this dimensional package.
4. Click Create New to create a new report, and then click List.
5. In the Insertable Objects pane, navigate to Go Data Warehouse (analysis) > Sales and Marketing (analysis) > Sales > Sales fact, and then right-click Planned Revenue.
6. Click Lineage (Figure 7-72).

Figure 7-72 Selecting the Lineage feature

The Business View window opens. In this view, you can see an overview of the data’s definition (Figure 7-73). With this view, users can click each item in the diagram to see its metadata and definition.

Figure 7-73 Lineage: Business View window

7. Click the Technical View tab. The Technical View tab shows detailed metadata of the data item. On the Technical View, advanced business users and professional report authors can analyze the filters and calculations that are applied to the data (see Figure 7-74).

Figure 7-74 Lineage: Technical View

Clicking the Planned revenue metric, Lynn notices that it is calculated based on the formula Unit price * Quantity (see Figure 7-75).

Figure 7-75 Lineage: Planned revenue formula

7.4.5 Render output in various formats and print content

IBM Cognos supports various types of output formats, such as HTML, PDF, Microsoft Excel 2002, Microsoft Excel 2007, delimited text (CSV), and XML. All of these formats are available in IBM Cognos Business Insight Advanced. Ben Hall, an Analyst for the Great Outdoors company, wants to export Microsoft Excel and PDF formats for the Sales Revenue x Target report that Lynn Cope created.
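The formula that the Lineage feature surfaced, Planned revenue = Unit price * Quantity, can be sanity-checked with a few hypothetical numbers; this sketch is only an illustration of the lineage, not Cognos code:

```python
# The Planned revenue formula surfaced by the Lineage feature:
# Planned revenue = Unit price * Quantity (hypothetical figures).
def planned_revenue(unit_price, quantity):
    return unit_price * quantity

print(planned_revenue(25.0, 400))
```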
Render in Microsoft Excel format

To render the report in Microsoft Excel format, on the top menu, click Run > Run Report - Excel 2007 (Figure 7-76).

Figure 7-76 Selecting Run option

IBM Cognos generates a new Excel file with the same name as the report and splits the content of the pages into tabs in Microsoft Excel (Figure 7-77 and Figure 7-78).

Figure 7-77 Report rendered in Microsoft Excel: First page
Figure 7-78 Report rendered in Microsoft Excel: Second page

Render in PDF format

To render the report in PDF format, click Run > Run Report - PDF (see Figure 7-79).

Figure 7-79 Report rendered in PDF format

Save reports locally

IBM Cognos Business Insight Advanced and IBM Cognos Report Studio provide a feature that allows you to save and open reports on local hard drives. This feature can be useful when backing up and managing reports using a configuration management tool, such as IBM Rational® ClearCase®.

To save a report, click the blue bullet on the top menu, and click (Local) Save As (see Figure 7-80).

Figure 7-80 Saving a report locally

Feature requirements

To save and open reports locally, users must register the LFA.dll on Microsoft Windows. To enable (Local) Save As for IBM Cognos Business Insight Advanced, follow these steps:
1. Obtain the LFA.dll from your IBM Cognos Business Insight administrator. The dynamic link library (DLL) is located in the bin directory where IBM Cognos Business Insight is installed.
2. Open a command prompt window to the location of the LFA.dll file.
3. Register the LFA.dll file by typing the following command: regsvr32 LFA.dll
4. In IBM Cognos Business Insight Advanced, from the Tools menu, click Options.
5. Click the Advanced tab.
6. Select the “Allow local file access” option, and then click OK.
7. Close and restart IBM Cognos Business Insight Advanced. The menu items (Local) Open and (Local) Save As display in the File menu.
8. In your browser, set your computer and the IBM Cognos Business Insight server as trusted sites.

If you have problems running this feature, check the ActiveX configuration for your browser.
Internet Explorer: This feature is supported only for Internet Explorer, because it is based on ActiveX technology. To use this feature, you must enable ActiveX in your browser.

7.5 Search for meaningful information

For dimensional data sources, online analytical processing (OLAP) data sources, and dimensionally modeled relational (DMR) data sources, you can perform a member search in IBM Cognos Business Insight Advanced to find the data that you need for your report quickly.

Lynn Cope, the Advanced Business User for the Great Outdoors company, has to create a report for senior management that contains quantity, product cost, profit margin, and revenue figures for all Seeker products in their portfolio. The senior managers need this information for their meeting with the manufacturer of these products. Lynn is unsure to which product line these products belong, so she uses the Search option in IBM Cognos Business Insight Advanced to find them.

To use the Search option to find meaningful information:
1. Launch Business Insight Advanced, open the GO Sales Cube package, and then click Create New.
2. Click Crosstab, and then click OK.
3. On the Source tab, navigate to GO Sales Cube > Measures, and then select Quantity sold, Product cost, Profit margin %, and Revenue (press Ctrl to select all members).
4. Drag the selected items to the Columns area of the crosstab.
5. Right-click the Products dimension, and click Search, as shown in Figure 7-81.

Figure 7-81 Search option on a menu

Now, perform a search on the Product dimension to find the Seeker products.
6. In the Member Search window, enter the keyword Seeker. Leave the option Starts with any of these keywords selected, but click Search all descendants to include searching on all levels of the Product dimension (see Figure 7-82).

Figure 7-82 Search options

7. Click Search. The new Search tab opens with the results of the search, as shown in Figure 7-83.

Figure 7-83 Results of the search

You can browse the hierarchy to explore members at lower levels, or you can directly add members from the Search tab to a report.

8. Select all members (press Ctrl), and drag them to the Rows area. The end report displays, as shown in Figure 7-84.

Figure 7-84 Report containing results of the search

You save time with this method because, instead of inserting all of the products into a report and adding a filter, you can search for them quickly and insert them from here.

7.6 Summarize data and create calculations

IBM Cognos Business Insight Advanced provides a range of summarization functions and calculations that can be applied to reports to help advanced business users get the best insight from their data.

7.6.1 Summarization

IBM Cognos Business Insight Advanced provides the following summarization functions to users:
– Total
– Count
– Average
– Minimum
– Maximum

The first calculation applied depends on what is set on the data model. Nonetheless, users can create new summarized columns and rows using these functions if it makes sense to them.

To apply a summarization function, follow these steps:
1. Click one member of a set or data item (depending on the type of data source that you use) to create a summarization column or row.
2. On the top toolbar, click the Summarize icon.
3. Click the desired summarization function (Figure 7-85).

Figure 7-85 Inserting a summarization column

After you follow these steps, a new column is created on the right side of the crosstab with the Average title (Figure 7-86).

Figure 7-86 Report after an Average summarization function is applied

Create reports using a dimensional data source

Reports that are based on a dimensional data source always apply summarization to sets. If the user does not turn on the Create Sets for Members option in the Insertable Objects pane (Figure 7-87), the user is unable to insert additional summarization columns and rows. In the case of relational data sources, the summarization functions are applied to the nodes of the crosstabs, to charts, and to the entire list.

Figure 7-87 Create Sets for Members option

7.6.2 Calculation

IBM Cognos Business Insight Advanced provides several calculations for users. Several of the calculations that are available for relational and dimensional data sources differ.

Calculations available for relational data sources

When using a relational data source, users can apply a range of calculations, such as addition, subtraction, multiplication, and division of a column, percentage, and difference. Several of these calculations require a number value if the calculation involves one measure only. The number can be provided when the user clicks the Custom option. If the columns or rows are measure items, users can also apply Round, Round up, Round down, and Absolute calculations. In addition, IBM Cognos Business Insight Advanced allows users to deal easily with strings and remove blank spaces (remove trailing spaces), as well as truncate based on a number of characters (First ? characters, Last ? characters). Figure 7-88 and Figure 7-89 show examples of applied calculations.

Figure 7-88 List report showing truncate and round calculations
Figure 7-89 Crosstab report showing addition calculation

Calculations available for dimensional data sources

When using dimensional data sources, users can apply a range of calculations, such as addition, subtraction, multiplication, and division of a column, percent, and difference. Several of these calculations need a number value if the calculation involves one measure only, which is set when the user clicks the Custom option. IBM Cognos Business Insight Advanced does not display the calculations for strings. It does, however, provide a capability to perform calculations across rows and columns. Users can calculate the percent of the difference between the first product line in terms of revenue and all other product lines easily, which is an advantage for the dimensional approach. Figure 7-90 and Figure 7-91 show examples of how to work with the calculations.

Figure 7-90 Revenue percent of Base (Revenue, Personal Accessories) calculation
Figure 7-91 Percent difference (Planned revenue, Revenue), division, and subtraction calculations

Example

Lynn Cope, an Advanced Business User of the Great Outdoors company, wants to make a quick comparison between the top product type in the Personal Accessories product line and the other product types in the same line using a dimensional data source. To create a percent of base calculation, she follows these steps:

1. Create a simple report that groups Revenue by Product types under the Personal Accessories product line using a crosstab (Figure 7-92).

Figure 7-92 Simple report

2. Click the Revenue measure.
3. On the top toolbar, click the Sort icon.
4. Click Descending (Figure 7-93).

Figure 7-93 Sorting the Revenue column in descending order

5. Click the first member of the Rows area (Eyewear), and then click the Revenue measure and press Ctrl.
6. Right-click one of the selected items, click Calculate, and then click % of Base (Revenue, Eyewear), as shown in Figure 7-94.

Figure 7-94 Creating a percent of Base calculation

After performing these steps, Lynn can see the percent of the Revenue of all product types against the Eyewear revenue (Figure 7-95).

Figure 7-95 Percent of Revenue of all types against Eyewear type Revenue

Resource: For more information about how to work with calculations, refer to the IBM Cognos Business Insight Advanced User Guide.

7.7 Add filters to refine data

To create reports that meet clients’ expectations, provide accurate information, and avoid unpredictable results, report authors, advanced business users, and analysts must understand how to use filters. Also, they must understand the differences when a filter is applied in a report that uses dimensional data sources as opposed to relational data sources. This section describes the behavior differences between the two data source types.

7.7.1 Filter reports for relational data sources

Consider the following points when filtering reports for relational data sources.

Summary and detailed filters

When creating a report, which is based on a relational data source, users can filter the data that is retrieved in the queries and apply the filter before or after auto aggregation. To apply a filter before or after auto aggregation, open the Edit Filters dialog box, and make the appropriate selection in the Application section (see Figure 7-96).

Figure 7-96 Creating a filter in a report that is based on a relational data source

Resource: For more information about leading practices on filtering, refer to the Focusing Relational Data and Focusing Dimensional Data sections of the IBM Cognos Report Studio Guide.
Combined filters

A complex filter is a combination of two or more filters creating AND or OR logic. With this feature, advanced business users can create advanced filtering expressions easily, as illustrated in Figure 7-97. For example, the following list shows several commonly used filter expressions:
– Include x, y and Exclude x, y: Focus on data that is based on the selection, or exclude the selection from the results.
– Greater and Lower: Retrieve data that is Lower than (<), Lower than (<) or Equal (=), Greater than (>), or Greater than (>) or Equal (=) a specific value.
– Between x and y and Not Between x and y: Retrieve data that is between or not between selected values.
– Include Null and Exclude Null: Include or exclude null values for the selected column.

Figure 7-97 Complex filtering expression created using Combined Filter feature

Expressions: To create AND, OR, and NOT expressions with parentheses, click the expressions that you want to place inside the parentheses with the Shift key pressed, and select OR, AND, or NOT.

Filtering features

You can use the Filter menu (shown in Figure 7-98 on page 302) to create filter expressions easily.

Figure 7-98 Filter features menu

7.7.2 Filter reports for dimensional data sources

Use filters to remove unwanted data from reports. Data is retrieved from the database only if it meets the filter criteria. When creating a report, which is based on a dimensional data source, you can filter only by members and measures. When working with dimensional data sources, users can focus the data that is retrieved in the queries using several options:
– Specifying members instead of data items during report development
– Applying filters within a set
– Using the filter function
– Using context filters
– Using the Explore menu features

Using any other filtering options available on IBM Cognos Business Insight Advanced can cause unpredictable results.

To specify a member, the user clicks one or more members of a dimension and drags them to the report object (list, crosstab, or chart), as shown in Figure 7-99.

Figure 7-99 Inserting members in a crosstab

Create Sets for Members option: If the Create Sets for Members option is enabled, a new set is created for the selection of members, allowing the user to summarize and create calculations with the data. If the users know what they want to see and will always use the same set, they need to choose this option.
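Before moving further into dimensional filtering, note that the relational filter expressions above (Include/Exclude, Greater and Lower, Between, null handling) and their AND combination boil down to simple predicates. A hypothetical sketch, not Cognos syntax:

```python
# Sketch of commonly used filter expressions and an AND combination.
# Values and helper names are hypothetical, not Cognos expression syntax.
def between(lo, hi):
    return lambda v: v is not None and lo <= v <= hi

def exclude_null():
    return lambda v: v is not None

def and_all(*preds):
    # Combined filter: every predicate must hold (AND logic).
    return lambda v: all(p(v) for p in preds)

keep = and_all(exclude_null(), between(100, 500))
values = [None, 50, 250, 500, 900]
print([v for v in values if keep(v)])
```

An OR combination would use `any` instead of `all`, mirroring the AND/OR choice in the Combined Filter feature.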
Focus reports using members

You can specify the members who you want to see in a report using the View Member Tree on the Insertable Objects pane toolbar.

View Member Tree option: Using the View Member Tree on the Insertable Objects pane toolbar is a quick option to focus the data that the users want to see.

Applying filters to members within a set

Filters on dimensional reports need to be applied to a Set of members. Filtering the members in a Set is not the same as relational detail or summary filters. To apply filters in a Set:
1. Click the Set that you want to filter.
2. On the top toolbar, click Explore.
3. Click Filter Set.

Figure 7-100 shows the Set Filter Condition window. In this window, you can select the kind of filter that you want to apply.

Figure 7-100 Applying a filter within a Set

You have the following options for filtering:
– Caption: Filter by the member caption value, which is indexed data.
– Property: Filter by a descriptive data value, which is not indexed data.
– Intersection: Filter by an intersection of members and metrics (tuple) that you define.
After applying a filter, you can verify that the filter logic was applied, or you can change the logic that was applied, by clicking Explore > Edit Set (see Figure 7-101).

Figure 7-101 Visualizing the Set Definition

Context filters

When working with dimensional data, you can use context filters, or slicer filters, to focus your report on a particular view of the data quickly. Context filters differ from other filters. When you filter data, members that do not meet the filter criteria are removed from the report. A context filter does not remove members from a report. Instead, their values are filtered, and you see blank cells. Any summary values in the report are recomputed to reflect the results that are returned by the context filter. Changing the context changes only the values that appear. It does not limit or change the items in the rows or columns.

For example, the following crosstab contains product lines in the rows, years in the columns, and revenue as the measure. We want to filter the values to show us the revenue for only web orders from Asia Pacific. To change the context, you drag Asia Pacific and web from the source tree to the Context filter section of the overview area. The crosstab then shows the revenue for only Asia Pacific and web. The members that are used as the context filter appear in the report header when you run the report.

Guideline: When creating context filters, use only members from hierarchies that are not already projected on an edge of the crosstab, and use only one member per hierarchy.

Explore features

IBM Cognos Business Insight Advanced also provides several ways to filter dimensional data using the Explore button:
– Top or Bottom filters: Focus data on the items of greatest significance to your business question (for example, Top 5 Sales’ Performers or Bottom 10 Clients’ Revenue).
– Exclude and Include Member: Exclude members from the current set or the initial set.
– Drill down and drill up: Display parents or children of the selected value in the dimension hierarchy.

If the set definition has more than one level, for instance, consider a crosstab with a Top 3 filter applied (Figure 7-102).

Figure 7-102 Crosstab with a Top 3 filter applied

If you exclude a member from the initial set, the crosstab applies the Top 3 filter again and excludes the selected member (Figure 7-103).

Figure 7-103 Exclude a member from the initial set
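The defining property of a context (slicer) filter, as described above, is that it does not remove rows or columns; it only recomputes the measure for the chosen slice. A minimal sketch with hypothetical fact rows (not the Great Outdoors sample data):

```python
# Sketch of a context (slicer) filter: the row members stay in place,
# only the measure values are recomputed for the slice. Hypothetical data.
facts = [
    {"product": "Tents", "region": "Asia Pacific", "channel": "web", "revenue": 100},
    {"product": "Tents", "region": "Americas", "channel": "web", "revenue": 250},
    {"product": "Lanterns", "region": "Asia Pacific", "channel": "fax", "revenue": 80},
]

def crosstab_rows(facts, products, context):
    """Keep every product row; sum revenue only over the context slice."""
    out = {}
    for p in products:
        out[p] = sum(
            f["revenue"]
            for f in facts
            if f["product"] == p and all(f[k] == v for k, v in context.items())
        )
    return out

# Slice on Asia Pacific and web: the Lanterns row remains, but its
# value drops to 0 (shown as a blank cell in the report).
print(crosstab_rows(facts, ["Tents", "Lanterns"],
                    {"region": "Asia Pacific", "channel": "web"}))
```

Contrast this with an ordinary filter, which would remove the Lanterns row entirely instead of blanking its value.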
Figure 7-104 Exclude a member from the current set Using custom filters When report developers need to create complicated logic for filtering in a dimensional data source. and the crosstab shows only two values on the edge on which the Exclude logic was applied (Figure 7-104).If you exclude a member from the current set.000 (this condition hides Outdoor Protection from the results). they can create customized expressions to filter members on reports. the Top 3 filter is kept. Lynn must replace the default row or column with a new expression.000. Self service interface for business users 307 . wants to filter a crosstab to show only the Product lines that have Revenue greater than USD5. Figure 7-105 Initial result for the report: Revenue by Product line for 2007 Chapter 7. Lynn Cope. who is a Professional Report Author. Figure 7-105 shows the report. select Other expression. on the Toolbox tab. [Revenue] > 5000000) 4. as shown in Figure 7-106. Figure 7-106 Creating a filter expression 308 IBM Cognos Business Intelligence V10.[Products]. type the following expression: filter([Sales (analysis)]. 2. type Filtered Product line. 3. drag Query Calculation to the same place as Product line (Rows area). In the Insertable Objects pane. In the Expression Definition section.1 Handbook .[Product line]. In the Name field. Click OK.[Products]. and then click OK.Lynn follows these steps: 1. the user notices that there are many rows with no values on the report.After performing these steps.3 Suppress data When a user works with dimensional data sources and creates an analysis nesting dimensions (in a crosstab or chart). Chapter 7. even if they do not have values for the metrics. Lynn notices that the Outdoor Protection product line is removed from the results (Figure 7-107). IBM Cognos returns all the members of the hierarchy. When working with dimensional data sources.7. Figure 7-107 Final result for the report: Revenue by Product line for 2007 7. 
After performing these steps, Lynn notices that the Outdoor Protection product line is removed from the results (Figure 7-107).

Figure 7-107 Final result for the report: Revenue by Product line for 2007

7.7.3 Suppress data

When a user works with dimensional data sources and creates an analysis that nests dimensions (in a crosstab or chart), the user notices that there are many rows with no values on the report. When working with dimensional data sources, IBM Cognos returns all the members of the hierarchy, even if they do not have values for the metrics.

To avoid this scenario, IBM Cognos Business Insight Advanced provides the Suppress data feature, which hides all the rows or columns (or both) that do not have data for the intersections, as shown in Figure 7-108 and Figure 7-109.

Figure 7-108 Report without the Suppress data feature applied

Figure 7-109 Report with the Suppress data feature applied

The rows with all null values are removed.

Suppress feature: When using the Suppress feature, calculations are always performed before the suppression.

7.7.4 Example

Lynn Cope, an Advanced Business User for the Great Outdoors company, wants to create a report to show data for the Camping Equipment Product line, for 2007 and 2006, and for GO Americas only. To create this report:
1. In IBM Cognos Connection, launch Business Insight Advanced.
2. Click Create New to create a new report. The Select a package window opens.
3. Navigate to Cognos > Public Folders > Samples > Models.
4. Click GO Data Warehouse (analysis) to create a new report that is based on this package, and then click the Crosstab icon.
5. In the Insertable Objects pane, navigate to Go Data Warehouse (analysis) > Sales and Marketing (analysis) > Sales > Products.
6. Drag Camping Equipment under Products to the Rows area.
7. Drag Time under the Sales folder to the Columns area.
8. Drag Revenue under Sales fact to the Measures area (Figure 7-110).

Figure 7-110 Initial report showing the Revenue for all Camping Equipment members by year

9. Click 2006, press Ctrl, and click 2007.
10. On the top toolbar, click the Filters icon.
11. Click Include 2006, 2007 (Figure 7-111).

Figure 7-111 Filtering a report based on a selection

Figure 7-112 Revenue by Camping Equipment product types, 2006 and 2007 years only
After performing these steps, the report is displayed with the Revenue totals for 2006 and 2007, grouped by Camping Equipment product types (Figure 7-112). However, Lynn wants to filter this report to show information for GO Americas only, so she decides to slice the report for GO Americas by creating a Context filter, as shown in Figure 7-113.

Figure 7-113 Final report filtered for GO Americas scenario

Resource: For more information about how to work with multiple types of filters, refer to the IBM Cognos Business Insight User Guide or contact IBM Cognos Education services.

7.8 Add external data

IBM Cognos Business Insight and IBM Cognos Report Studio allow users to integrate external data, such as spreadsheets, into their reports. To be successful when creating reports using external data, users need to follow the workflow that is shown in Figure 7-114:
1. Prepare your external data file for import.
2. Import your external data and link your data with your enterprise data.
3. Create reports with your external data.
4. Determine whether to share the reports.

Figure 7-114 Workflow: How to work with external data

Prepare your external data file for import

Advanced business users, professional report authors, and analysts must know their external data (the enterprise data to which they are trying to connect to make their analysis) and the objective of their analysis.

Maximums: The maximum file size that a user can import is 2.5 MB, with a maximum of 20,000 rows. A user can import a maximum of one external data source file per package. The IBM Cognos modeler can override these governors in IBM Cognos Framework Manager.

Import your external data

This step depends on the data source. If the user wants to merge external data with a dimensional data source, the user must create a list report and link the external data source to the content of the list report (Figure 7-128 on page 328 shows an example). When users import external data using this feature, the users cannot store the data on the server. Only a link for the local data source is created.
If the user wants to merge external data with a relational data source, the user can link the external data directly with the enterprise data source or to a list report. In either case, a package with a new data model merging the data source of the report and the external data is created.

Create reports with your external data file

After IBM Cognos Business Insight Advanced creates the package, users can create their reports with the new data source in the same manner as with regular packages. Users can create reports with their data and perform many operations, such as creating crosstabs, lists, and charts, applying sorting, summarizing data, and grouping and adding calculations.

Determine whether to share the reports

After you create a report using external data, you usually save the report in your My Folders folder. If you want to share the report, you must maintain the report to keep it current. Also, the people who are to see the report need to obtain the file that is used by the external data source for their computers and have it located in the same location so that IBM Cognos BI can find the source file. Another option is to place the source file on a shared drive that IBM Cognos BI can access and create the external data source based on that location. With this second method, it is easier to share reports, because you do not have to distribute your file to each person with whom you want to share a report.

Resource: For more information about working with the External Data feature, refer to the IBM Cognos Business Insight Advanced User Guide.

7.8.1 External Data feature example

Lynn Cope, an Advanced Business User of the Great Outdoors company, needs to create a catalog report with product sizes and quantity available in English and French units, grouped by Product line and Product type. Lynn has received a spreadsheet with the translation of the units to English and French. She wants to use this information to build her report, because she knows it is not available in the data warehouse. She can easily create this report in IBM Cognos Business Insight Advanced using the External Data feature.
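The link Lynn is about to create joins her spreadsheet to the enterprise rows on a matching size-code column. Conceptually it behaves like the lookup below (all file contents, column names, and values here are invented stand-ins; Cognos performs the join inside the generated package, not in user code):

```python
import csv
import io

# Stand-in for the external spreadsheet: size-code translations.
external_csv = io.StringIO(
    "PRODUCT_SIZE_CODE,PRODUCT_SIZE_EN,PRODUCT_SIZE_FR\n"
    "1,Small,Petit\n"
    "2,Medium,Moyen\n"
)
translations = {row["PRODUCT_SIZE_CODE"]: row for row in csv.DictReader(external_csv)}

# Stand-in rows from the enterprise "Product information" list report.
enterprise_rows = [
    {"Product": "Star Dome", "Product size code": "1", "Quantity": 120},
    {"Product": "Star Gazer 2", "Product size code": "2", "Quantity": 80},
]

# Link on the matching size-code columns, as in the New Link mapping.
linked = [
    {**row,
     "PRODUCT_SIZE_EN": translations[row["Product size code"]]["PRODUCT_SIZE_EN"],
     "PRODUCT_SIZE_FR": translations[row["Product size code"]]["PRODUCT_SIZE_FR"]}
    for row in enterprise_rows
]
```

This is why the linked columns must match (the "Linking columns" note later in the steps): the lookup key on both sides has to contain the same values.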
Create the External Data package

Because Lynn wants to merge external data with a dimensional data source, she needs to create a list report with the data that she wants to be available in the external data package. To create the External Data package:
1. In IBM Cognos Connection, launch Business Insight Advanced.
2. Click Create New to create a new report. The Select a package window opens.
3. Navigate to Cognos > Public Folders > Samples > Models.
4. Click GO Data Warehouse (analysis) to create a new report based on this dimensional package, click List, and then click OK.
5. If required, click View Metadata Tree.
6. In the Insertable Objects pane, drag the following data items to the list (Figure 7-115 on page 317):
   – From Go Data Warehouse (analysis) > Sales and Marketing (analysis) > Sales > Products > Products:
     • Product line
     • Product type
     • Product
   – From Go Data Warehouse (analysis) > Sales and Marketing (analysis) > Sales > Products > Products details:
     • Product key
     • Product number
     • Product size code
     • Product size
   – From Go Data Warehouse (analysis) > Sales and Marketing (analysis) > Sales > Sales fact:
     • Quantity
7. Save the report in the My Folders folder with the name Product information.
8. Click the Manage External Data icon (Figure 7-115).

Figure 7-115 Simple list report with Product data and Quantity values

9. Click Browse and choose the location of the external data (Figure 7-116 on page 318). This location can be on the local machine or on a network share. The following extensions are supported:
   – Microsoft Excel (.xls) spreadsheet software files
   – Tab-delimited text (.txt) files
   – Comma-separated value (.csv) files
   – XML (*.xml) files

   Here, users can specify which data they want to include on their reports. The users need to specify a namespace to use. The namespace provides a unique name to associate with the data items that the users import. It appears in the data tree of the Source tab in the Insertable Objects window and is used to organize the data items. By default, the namespace is the imported file name without the extension. If you change the default name for the namespace, you are prompted to select the external data file each time that you run the report. To avoid this step, select the "Allow the server to automatically load the file" check box.

Figure 7-116 Selecting an external data source

10. Select the external data file, and then click Open.
11. Click Next.
12. In the Existing report section, click the ellipsis (...).
13. Browse to the My Folders folder.
14. Click the Product information report.
15. Click Next twice.
16. Click New Link (Figure 7-117).
17. In the External data list, click PRODUCT_SIZE_CODE.
18. In the Existing report list, click Product size code.

Figure 7-117 Mapping the external data against an IBM Cognos data source

Relational data source: When using the External Data feature with a relational data source, users do not need to create a report to link the data. They can link the external data to the relational IBM Cognos package.

Linking columns: Before you create the data mapping, make sure that the columns that will be linked match (for example, product size code).

19. In the Existing query subject items section, click Some values exist more than once (Figure 7-118).

Figure 7-118 Setting mapping options

20. Click Finish.
21. Set the location of the Package to My Folders, and name it: Go Data Warehouse (analysis) External Data with Dimensional.
22. Click Save.
23. On the Manage External Data window, click the ellipsis (...), as shown in Figure 7-119.

Figure 7-119 Manage External Data window

24. Click Publish. A message displays with information about the new package that will be created (see Figure 7-120). This new package consists of two query subjects:
   – One subject accesses the external data.
   – The other subject accesses the dimensional data that is extracted from the Product information report.

Figure 7-120 External data source information message

25. Click OK. After the package is created, the package that is used for the current report is changed, and the new package appears in the Insertable Objects pane (see Figure 7-121).

Figure 7-121 External Data package

26. On the top toolbar, click the New icon, and then click List.
27. Click No to saving the existing report, because it is saved already.
28. Drag the following data items from the Insertable Objects pane (Figure 7-122):
   – From Go Data Warehouse (analysis) External Data > product_size > Product information:
     • Product line
     • Product type
     • Product

Figure 7-122 Including Product information from the query subject created with the dimensional data

29. Drag the following data items from the Insertable Objects pane (Figure 7-123):
   – From Go Data Warehouse (analysis) External Data > product_size > product_size:
     • PRODUCT_SIZE_EN
     • PRODUCT_SIZE_FR

Figure 7-123 Including Product size information from the query subject created with the external data

30. From Go Data Warehouse (analysis) External Data > product_size > Product information, add Quantity (Figure 7-124).

Figure 7-124 Including Quantity measures from the query subject created with the dimensional data

31. In the list, click Product line, Product type, Product, PRODUCT_SIZE_EN, and PRODUCT_SIZE_FR.
32. On the top toolbar, click the Group/Ungroup icon (Figure 7-125).

Figure 7-125 Grouping list columns

33. In the list, click Product line.
34. On the top toolbar, click the Sort icon.
35. Click Edit layout sorting (Figure 7-126).

Figure 7-126 Edit Layout Sorting option

36. Drag each member from the Data items area to the Groups section, as shown in Figure 7-127.

Figure 7-127 Setting Grouping & Sorting configuration

37. Click OK.

After performing these steps, the report shows the information from both external data and enterprise data sources, ordered by Product line, Product type, and Product (Figure 7-128).

Figure 7-128 Report showing external and enterprise data ordered by Product line, Product type, and Product

7.9 Create a package with the Self Service Package wizard

In Chapter 4, "Create reporting packages with IBM Cognos Framework Manager" on page 33, we discussed metadata modeling and how to create a reporting package in IBM Cognos Framework Manager. Normally, you must create and publish a package from IBM Cognos Framework Manager. For certain online analytical processing (OLAP) sources, however, you can create packages in IBM Cognos Connection, and they will be listed in Public Folders or My Folders.

Figure 7-129 Enabling self-service package capability for a data source
You must meet two prerequisites to perform this task:
– The user must have execute permissions for the Self Service Package wizard capability. Select IBM Cognos Administration, navigate to the Security tab, and click Capabilities > Capability > Self Service Package Wizard. For details about setting the permission, refer to the IBM Cognos Administration and Security Guide.
– You must enable the self-service package capability for the data source; it is a property of a data source in IBM Cognos Administration. For SAP Business Information Warehouse (SAP BW) and IBM Cognos PowerPlay Studio PowerCube data sources, on the Connection tab, you have to enable the option Allow personal packages (see Figure 7-129).

7.9.1 Create a package for Cognos PowerCubes

If you have an IBM Cognos PowerPlay Studio PowerCube as a data source and you want to use it in one of the IBM Cognos Studios for creating reports or analysis, you first must create a package. To create a package for a PowerCube data source, follow these steps:
1. Open IBM Cognos Connection, and in the upper-right corner, click the New Package icon, as shown in Figure 7-130.

Figure 7-130 Creating a new package from IBM Cognos Connection

2. Select the data source that you want to add from a list. The data source must have the self-service package capability enabled to be listed as a data source in the Self Service Package wizard (see Figure 7-129 on page 329). In our example, the data source is a Sales and Marketing Cube that is part of the IBM Cognos samples, so click Sales and Marketing Cube (see Figure 7-131).

Figure 7-131 Select a data source for a package

3. Enter the name for the package (leave the default name, Sales and Marketing Cube). Select a location for the package; the default location for packages is My Folders, but you can change that here if you click Select another location. Click OK, and then click Next.
4. You can define null-suppression options here, as shown in Figure 7-132. By default, all options are checked:
   – Allow null suppression: Enables suppression.
   – Allow multi-edge suppression: Allows the studio user to suppress values on more than one edge.
   – Allow access to suppression options: Allows the studio user to choose which types of values will be suppressed, such as zeros or missing values.

Figure 7-132 Null suppression options for a Cognos PowerCube

5. If you want to use Dynamic Query Mode with the data source, select Use Dynamic Query Mode.
6. Click Finish. A new package for a Cognos PowerCube is added to IBM Cognos Connection, as shown in Figure 7-133. Any of the IBM Cognos Studios can use this package for reporting.

Figure 7-133 New package for a Cognos PowerCube

7.9.2 Create a package for SAP BW

For SAP BW data sources, the prerequisites are the same, but creating a package has additional SAP BW-specific steps. To add an SAP BW data source in a package, follow these steps:
1. In IBM Cognos Connection, click the New Package icon in the upper-right corner.
2. Select the data source, and click Next.
3. Type the name for the package, and click Next.
4. Select the objects that you want to include. The number of objects that you can select is limited: by default, you can select a maximum of two cubes and five info queries. You can change these settings, but be aware that the longer an SAP BW import takes, the more time the server spends processing the request, which might have an impact on its performance for other applications. For details about how to set these parameters, refer to the IBM Cognos Administration and Security Guide.
5. Select the languages to include in the package.
6. To have objects in the model organized in the same way that they are organized in Business Explorer Query Designer, click Enhance the package for SAP BW organization of objects.
7. To import SAP BW queries that contain dual structures, and to use the structures in IBM Cognos queries to control the amount and order of information that your users see, click Enable SAP BW Dual Structures support.
8. Specify the object display name.
9. Click Finish.
10. When the "Package successfully created" message appears, you can edit variable properties (click Edit the SAP BW variable properties for the package after closing this dialog) or click Close to finish creating the package. A new package for SAP BW will be added to IBM Cognos Connection.

7.10 Create statistical calculations

IBM Cognos Statistics, powered by IBM SPSS, provides analysts with the ability to distribute reports with statistical insight to the larger business community, further expanding the breadth of reporting capabilities provided by IBM Cognos software.
Whether you are obtaining additional insight into key business variables or predicting future outcomes, IBM Cognos Statistics provides the necessary fact-based statistical evidence to support key organizational decisions. The statistical capabilities that are provided by IBM Cognos Statistics are powered by the trusted, market-leading IBM SPSS statistical engine, enabling you to make the most of best-in-class analytics within your organization.

7.10.1 IBM Cognos Statistics overview

In this section, we introduce and provide a use case of IBM Cognos Statistics. Because IBM Cognos Statistics is seamlessly integrated into IBM Cognos Report Studio, analysts no longer need to extract standardized trusted data from their business intelligence (BI) data warehouse into a separate tool to analyze and report on statistical information, saving valuable time. Analysts can assemble reports containing statistical information easily and distribute the information across the enterprise. IBM Cognos Statistics is easy to use for existing IBM Cognos Report Studio authors, because it uses IBM Cognos Report Studio objects and provides a convenient wizard interface.

In this section, we introduce each statistical function and provide sample images that were created from the Great Outdoors sales company data. For more information about any of these features, see the IBM Cognos Report Studio User Guide.

Descriptive Statistics

Descriptive Statistics quantitatively summarize a data set. For an overall sense of the data being analyzed, you can show descriptive statistics along with more formal analyses.

Basic Descriptive Statistics

Descriptive tables describe the basic features of data in quantitative terms:

Summary descriptive statistics: One value
In this table, we use Salary as the Analysis variable and Employee as the Case variable (see Figure 7-134).

Figure 7-134 Summary descriptive statistics: One value

Notes regarding Figure 7-134:
Mean: The arithmetic mean is the sum of samples divided by the number of cases.
Std. Deviation: A measure of dispersion around the mean.
N: The number of cases, observations, or records.
Median: Half of the cases fall above the median, and half of the cases fall below the median.
Minimum: The smallest value of a numeric variable.
Maximum: The largest value of a numeric variable.

You can see that the average salary of this company is 49,147.18 and that the standard deviation is extremely high: 31,189.664.

Summary descriptive statistics: Multiple values
In the table in Figure 7-135, we use Salary and Bonus value as the Analysis variables and Employee as the Case variable. You can see that the standard deviation of Bonus value is lower than the standard deviation of Salary in this company.

Figure 7-135 Summary descriptive statistics: Multiple values

Descriptive statistics by grouping variable
In the table in Figure 7-136, we use Salary as the Analysis variable, Employee as the Case variable, and Country as the Grouping variable. You can compare the salary of each country.
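The summary measures defined above are all one-liners over the raw values. A sketch with invented salaries (the report's 49,147.18 and 31,189.664 come from the actual sample data, which the SPSS engine computes inside Cognos):

```python
import statistics

salaries = [32000, 41000, 45500, 49000, 52000, 58000, 66500]  # illustrative only

n = len(salaries)                      # N: number of cases
mean = statistics.mean(salaries)       # sum of samples / number of cases
stdev = statistics.stdev(salaries)     # sample std. deviation (dispersion around the mean)
median = statistics.median(salaries)   # half the cases above, half below
lo, hi = min(salaries), max(salaries)  # minimum and maximum
```

A high standard deviation relative to the mean, as in Figure 7-134, signals that the salaries are widely spread rather than clustered near the average.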
Figure 7-136 Descriptive statistics by grouping variable

Histogram
Histograms display the range of variable values in intervals of equal length. You can use a histogram to summarize the frequency of observations graphically. Figure 7-137 uses Unit price as the Analysis variable, Product as the Case variable, and Country as the Grouping variable. The unit price distribution for all the products of this company displays.

Figure 7-137 Histogram

Boxplot
A boxplot, which is also known as a "box-and-whisker" chart, is a convenient way to show groups of numerical data, such as these types:
– Minimum and maximum values
– Upper and lower quartiles
– Median values
– Outlying and extreme values

Figure 7-138 uses Gross profit as the Analysis variable, Retailer name as the Case variable, and Region as the Grouping variable. You can see the Gross profit distribution of retailers in each region and that VIP Department Stores is an excellent retailer in the U.S.

Figure 7-138 Boxplot (annotated with: extreme value, outlying value, whisker, 75th percentile, median, 25th percentile)

Q-Q Plot
You can create a quantile-quantile (Q-Q) plot to chart the quantiles of a variable's distribution against a distribution of your choice, including the normal distribution.
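Both the boxplot elements and the Q-Q plot's quantiles come from the order statistics of the sample. A sketch of the key boxplot values, using invented profit figures and the conventional 1.5 × IQR whisker rule (an assumption; the exact quartile and whisker method used by the SPSS engine can differ slightly):

```python
import statistics

profits = [12, 15, 17, 18, 20, 21, 22, 24, 25, 60]  # illustrative; 60 is an outlier

q1, q2, q3 = statistics.quantiles(profits, n=4)  # 25th, 50th (median), 75th percentiles
iqr = q3 - q1                                    # interquartile range: the "box"
lower_fence = q1 - 1.5 * iqr                     # whiskers extend to the last points
upper_fence = q3 + 1.5 * iqr                     # inside these fences
outliers = [x for x in profits if x < lower_fence or x > upper_fence]
```

Points beyond the fences are plotted individually as outlying (and, further out, extreme) values, which is how a single exceptional retailer stands out in Figure 7-138.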
Figure 7-139 uses Salary as the Analysis variable and Employee as the Case variable. You can see that several of the high-salaried and low-salaried employees are out of range in the normal distribution in this company.

Figure 7-139 Q-Q Plot

Means comparison
You can compare the means of two or more groups to determine whether the difference between the groups is statistically significant, that is, whether the difference is due to something other than random chance. You can use two types of statistical objects in means comparison.

One-Sample t-Test
The One-Sample t-Test tests the probability that the difference between the sample mean and a test value is due to chance. Probabilities of .05 or less are typically considered significant. The One-Sample t-Test provides two types of results: one type is One-Sample Statistics (Figure 7-140), and the other type is One-Sample Test (Figure 7-141).

In this table, we use Revenue as the Analysis variable, Product line as the Grouping variable, and 30000000 as the Test value. Check the Sig. (significance) values in the One-Sample Test: you can see that Camping Equipment, Personal Accessories, and Golf Equipment do not differ significantly compared to the Test value, but Mountaineering Equipment and Outdoor Protection differ significantly.

Figure 7-140 One-Sample Statistics

Figure 7-141 One-Sample Test

One-Way ANOVA
You can use One-Way ANOVA to assess whether groups of means differ significantly. Probabilities of .05 or less are typically considered significant. ANOVA assumes that there is homogeneity of variance, that is, that the variance within each of the groups is equal. You can check for homogeneity of variance by using the Levene's test.

In this table, we use Salary as the Dependent variable, Branch region as the Independent variable, and Employee as the Case variable. One-Way ANOVA provides various kinds of results. For example, this test provides these three tables and one chart (see Figure 7-142, Figure 7-143, Figure 7-144, and Figure 7-145). The Multiple Comparisons table shows the salary difference for each country.

Figure 7-142 ANOVA

Figure 7-143 Multiple Comparisons

Figure 7-144 Homogeneous subsets

Figure 7-145 Means Plots
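The statistic behind the One-Sample Test above is t = (sample mean − test value) / (s / √n); the Sig. column is then the probability of observing a |t| that large by chance. A sketch with invented revenue values (only the 30,000,000 test value is taken from the example):

```python
import math
import statistics

# Illustrative sample values; the report's figures come from the cube data.
revenues = [25_000_000, 28_500_000, 31_000_000, 29_750_000, 33_250_000]
test_value = 30_000_000  # the Test value used in the example

n = len(revenues)
mean = statistics.mean(revenues)
s = statistics.stdev(revenues)  # sample standard deviation

# t statistic: distance of the sample mean from the test value,
# measured in standard-error units.
t = (mean - test_value) / (s / math.sqrt(n))
df = n - 1  # degrees of freedom used for the significance (Sig.) lookup
```

A small |t| (as here) means the sample mean is within ordinary sampling noise of the test value, so the difference would not be flagged as significant.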
Nonparametric tests
You use nonparametric tests to compare frequencies in categorical data. You test for significant differences between observed frequencies and expected frequencies in data that does not have a normal distribution.

One-Way Chi-Square Test
One-Way Chi-Square Tests, which are also known as chi-square goodness-of-fit tests, compare observed frequencies against expected frequencies using data from a single categorical variable. In this table, we use Branch region as the Analysis variable, Vacation days taken as the Count variable, and Employee as the Case variable. The One-Way Chi-Square Test provides the types of results that are shown in Figure 7-146 and Figure 7-147. You can see that Central Europe is the region whose employees take the most vacation.

Figure 7-146 Frequencies by Branch region

Figure 7-147 Test Statistics

Two-Way Chi-Square Test
Two-Way Chi-Square Tests, which are also known as chi-square tests of independence, compare observed frequencies against expected frequencies using data from two categorical variables. In this table, we use Level of education as Analysis variable 1, Previously defaulted as Analysis variable 2, and Customer ID as the Case variable. The Two-Way Chi-Square Test provides various types of results (Figure 7-148, Figure 7-149, and Figure 7-150). You can see that the Pearson Chi-Square is significant (<0.05), which means that there is a significant difference between the default rates of customers with differing levels of education.

Figure 7-148 Case Processing Summary

Figure 7-149 Crosstabulation

Figure 7-150 Chi-Square Tests

Correlation and Regression
Correlation and regression analysis let you examine relationships between variables.

Basic Correlation
Basic Correlation is a measure of association between two variables. The existence of a correlation does not imply causality, but simply helps you to understand the relationship. Basic Correlation provides these kinds of results (see Figure 7-151, Figure 7-152, and Figure 7-153).
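Both chi-square tests above reduce to the same statistic, chi-square = sum over cells of (observed − expected)² / expected. The one-way (goodness-of-fit) case with equal expected frequencies looks like this sketch (counts are invented):

```python
# Illustrative observed counts per region; the goodness-of-fit test
# compares them against equal expected frequencies.
observed = [30, 20, 28, 22]
expected = [sum(observed) / len(observed)] * len(observed)  # 25 per region

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1  # degrees of freedom for the significance lookup
```

For the two-way test, the only change is that the expected cell counts come from the row and column totals (row total × column total / grand total), with df = (rows − 1) × (columns − 1).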
This table uses Unit price as Analysis variable 1, Quantity as Analysis variable 2, and Product as the Case variable. You can see that the Pearson Correlation is 0.904, which means that there is a strong positive relationship between the two variables.

Figure 7-151 Basic Correlation chart

Figure 7-152 Basic Correlation Descriptive Statistics

Figure 7-153 Basic Correlation correlations

Linear Regression
Linear Regression examines the relationship between one dependent variable and one or more independent variables. You can use Linear Regression to predict the dependent variable when the independent variables are known. In this table, we use Gross profit as the Dependent variable, Product cost as the Independent variable, and Product as the Case variable. Linear Regression provides various types of results (see Figure 7-154, Figure 7-155, Figure 7-156, and Figure 7-157).

The key statistic of interest in the coefficients table is the unstandardized regression coefficient. The regression equation is as follows:

dependent variable = slope * independent variable + constant

The slope is how steep the regression line is. The constant is where the regression line strikes the y-axis when the independent variable has a value of 0. In this example, the slope is 0.528 and the constant is 2861822.972. So, the regression equation is: predicted value of Gross profit = 0.528 * Product cost + 2861822.972.

Figure 7-154 Linear Regression Variables Entered/Removed

Figure 7-155 Linear Regression Model Summary

Figure 7-156 Linear Regression ANOVA

Curve Estimation
Curve Estimation examines the relationship between variables based on a scatterplot. The aim of Curve Estimation is to find the best fit for your data, expressed as the correlation coefficient R square. You can try using separate models with your data to help you find the model with the optimum fit.
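The Pearson coefficient and the least-squares slope and constant above come from the same centered sums. A from-scratch sketch (the data pairs are invented; the report's 0.904, 0.528, and 2861822.972 come from the sample data):

```python
import math

# Invented (independent, dependent) pairs, e.g. (cost, profit)
xs = [10.0, 20.0, 30.0, 40.0, 50.0]
ys = [12.0, 19.0, 33.0, 38.0, 52.0]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)

r = sxy / math.sqrt(sxx * syy)  # Pearson correlation coefficient
slope = sxy / sxx               # unstandardized regression coefficient
constant = my - slope * mx      # where the line strikes the y-axis

def predict(x):
    # dependent variable = slope * independent variable + constant
    return slope * x + constant
```

R square, the fit measure that Curve Estimation compares across models, is simply r * r for the simple linear case.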
This table uses Gross profit as the Dependent variable, Product cost as the Independent variable, and Product as the Case variable, with the Linear model. You can see differences between the estimated line and the actual values. Curve Estimation provides these kinds of results (Figure 7-158, Figure 7-159, Figure 7-160, and Figure 7-161).

Figure 7-158 Curve Estimation chart

Figure 7-159 Curve Estimation Model Summary

Figure 7-160 Curve Estimation ANOVA

Figure 7-161 Curve Estimation Coefficients

Control Charts
All processes show variation, but excessive variation can produce undesirable or unpredictable results. You use statistical process control (SPC) to monitor critical manufacturing and other business processes that must be within specified limits. Control Charts plot samples of your process output collected over time to show you whether a process is in control or out of control.

X-Bar
X-Bar charts plot the average of each subgroup. An X-Bar chart is often accompanied by either the R chart or S chart (Figure 7-162, Figure 7-163, and Figure 7-164).

Figure 7-162 X-Bar chart

Figure 7-163 X-Bar Rule Violations

Figure 7-164 X-Bar Process Statistics

R charts
R charts plot range values by subtracting the smallest value in a subgroup from the largest value in the same subgroup. The center line on the chart represents the mean of the ranges of all the subgroups (Figure 7-165).

Figure 7-165 R chart

S charts
S charts plot the standard deviations for each subgroup. The center line on the chart represents the mean of the standard deviations of all the subgroups (Figure 7-166).

Figure 7-166 S chart
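The center lines and control limits behind the X-Bar and R charts are computed from the subgroup means and ranges. A common shorthand uses the tabulated Shewhart constants A2, D3, and D4; the values below are the standard table entries for subgroups of size 5 and are an assumption here, not something quoted from the Cognos documentation:

```python
# Illustrative subgroups of five measurements each; a real chart uses the
# process samples collected over time.
subgroups = [
    [5.1, 4.9, 5.0, 5.2, 4.8],
    [5.0, 5.1, 4.9, 5.0, 5.0],
    [4.8, 5.2, 5.1, 4.9, 5.0],
]

xbar = [sum(g) / len(g) for g in subgroups]   # X-Bar points: subgroup means
rng = [max(g) - min(g) for g in subgroups]    # R points: subgroup ranges
xbarbar = sum(xbar) / len(xbar)               # center line of the X-Bar chart
rbar = sum(rng) / len(rng)                    # center line of the R chart

# Standard SPC table constants for subgroup size 5 (assumed values).
A2, D3, D4 = 0.577, 0.0, 2.114
ucl_x = xbarbar + A2 * rbar   # upper control limit, X-Bar chart
lcl_x = xbarbar - A2 * rbar   # lower control limit, X-Bar chart
ucl_r = D4 * rbar             # upper control limit, R chart
lcl_r = D3 * rbar             # lower control limit, R chart
```

A subgroup mean outside [lcl_x, ucl_x], or a range outside [lcl_r, ucl_r], is the kind of point the Rule Violations tables in Figure 7-163 flag as out of control.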
Moving Range
Moving Range charts plot the difference between each sample value and the preceding sample value. The center line on the chart represents the average change from one sample to another sample (Figure 7-167).

Figure 7-167 Moving Range

Individuals
Individuals charts plot the measured value of each individual sample. Sample sizes do not need to be equal; they can vary between collection periods. The center line on the chart represents the average of all individual samples in the chart (Figure 7-168 and Figure 7-169).

Figure 7-168 Individuals chart

Figure 7-169 Individuals Rule Violations

p chart
The p chart plots the percentage of defective units, such as the percent of automobiles with defects per shift. Sample sizes do not need to be equal; they can vary between collection periods.

np chart
The np chart plots the number of defective units, such as the number of automobiles with defects per shift. Sample sizes must be equal.

c chart
The c chart plots the number of defects, such as the total number of defects per shift. Sample sizes must be equal.

u chart
The u chart plots the number of defects per unit, such as the number of defects per automobile per shift. Sample sizes do not need to be equal; they can vary between collection periods.

7.10.2 IBM Cognos Statistics use case: Create an IBM Cognos Statistics report

Chapter 3, "Business scenario and personas used in this book" on page 21, introduces the Great Outdoors company business scenario. We use IBM Cognos Statistics to answer the following business question for the company: How many units of a product should I buy in each period of the year? The executives want a summarized report of sales performance that shows the relationship between the sales quantity and the inventory of each product.

Ben Hall is the Analyst in this scenario. He wants to create a sales summary report that shows the statistical relationship between the sales quantity and the inventory in the second quarter (2Q) of 2007 with IBM Cognos Statistics. Also, he wants to show which product is a "pain" point, or poor seller.

Create a statistics chart

First, Ben creates a statistics chart. To create the chart:
1. Launch IBM Cognos Report Studio with the Go Data Warehouse (query) package.
2. Create an advanced filter with Year=2007 and Quarter='Q2'.
3. Click the Insert Table icon, and create a 1 x 2 table.
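For the attribute charts just described, the control limits follow from the binomial standard error; for the p chart, the center line is the overall fraction defective p̄ with limits p̄ ± 3·sqrt(p̄(1 − p̄)/n). A sketch with invented defect counts (a constant sample size is assumed here for simplicity, although the p chart itself also allows varying sizes):

```python
import math

# Illustrative: defective units found in samples of 200 per shift.
sample_size = 200
defectives = [12, 9, 15, 11, 8, 14]

pbar = sum(defectives) / (sample_size * len(defectives))  # center line
sigma_p = math.sqrt(pbar * (1 - pbar) / sample_size)      # binomial standard error
ucl = pbar + 3 * sigma_p
lcl = max(0.0, pbar - 3 * sigma_p)  # a proportion cannot go below zero

out_of_control = [d / sample_size for d in defectives
                  if not lcl <= d / sample_size <= ucl]
```

The np chart uses the same limits multiplied by n (which is why it requires equal sample sizes), and the c and u charts use the analogous Poisson limits, count ± 3·sqrt(count).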
Drag a Curve Estimation statistic object from the Insertable Objects pane to the left side of the table. Create a statistics chart First. Click the Insert Table icon. “Business scenario and personas used in this book” on page 21 introduces the Great Outdoors company business scenario. 7. We use IBM Cognos Statistics to answer the following business question for the company: How many units of a product should I buy by each period of the year? The executives wants a summarized report of sales performance that shows the relationship between the sales quantity and inventory of each product. as shown in Figure 7-170. In the Select Statistic dialog box. Create an advanced filter with Year=2007 and Quarter=’Q2’. click Cancel.1 Handbook . expand Correlation and Regression. Click OK. Figure 7-170 Select Statistic dialog box 5. 356 IBM Cognos Business Intelligence V10. In the next window. Insert the following measures and items. and click Curve Estimation.4. as shown in Figure 7-171: – Quantity to Dependent variable – Opening inventory to Independent variable – Product to Cases variable Figure 7-171 Insert measures and items 6. 7. you can use the crosstab to identify them. So. Run a report. Figure 7-172 shows the result. These points are much lower than the estimated line. Figure 7-172 Curve Estimation chart Chapter 7. Self service interface for business users 357 . which means that these items had too much inventory compared to their sales quantities. The red circles show the pain points. there is no direct way to identify the pain points at this point. If you want to identify the item name of the pain point. Drag a Crosstab object from the Insertable Objects pane to the correct table. Insert Quantity and Opening inventory in the Columns area. Click the (Opening inventory * 0.425 Figure 7-173 Coefficients 2.533* Opening inventory + 17297. 3. type slope value(0.Create crosstab to identify pain point Next. c. 
make a note about the following information in the statistical report (see Figure 7-173): – Slope: 0. type Constant value(17297. 358 IBM Cognos Business Intelligence V10.425) in the Number field. d. Create the same advanced filter for the crosstab that you created in the statistic report. e. b. 5. To create a crosstab: 1.533) column. and click OK.533) in the Number field. Insert Product in the Rows area.425 From this information. Click +(addition) in operation. Click *(multiplication) in the operation. Create the estimated column: a.1 Handbook . Figure 7-174 Crosstab 4. and click OK. Ben creates a crosstab. Delete the (Opening inventory * 0.533) column. Click Opening inventory.533 – Constant: 17297. and add a custom calculation. you can recognize the following equation: predicted value of Quantity = 0. and add a custom calculation. as shown in Figure 7-174. Before creating a crosstab. Click the (Regression . Adjust the crosstab location appropriately. Chapter 7.Quantity) to the right edge of the crosstab. Click the ((Opening inventory * 0. and rename it as “Regression” in the property pane (Figure 7-175). Figure 7-176 Crosstab 9. Click the Regression. Figure 7-175 Crosstab 6.533) + 17297. 7. 8.Quantity) cell. Move (Regression . and add the calculation (Regression Quantity). Self service interface for business users 359 .425) cell. and set the order as Descending (Figure 7-176). Quantity cell.f. You can identify the item name of the pain points with Quantity and Opening inventory values. as shown in Figure 7-177.Run the report. the crosstab shows the order of difference between the estimated line and the actual sales Quantity. You can see that these item names are “Glacier Basic” and “Double Edge”.1 Handbook . In this report. Figure 7-177 Statistic report 360 IBM Cognos Business Intelligence V10.10. In this chapter. 2010. Having access to the data that you.8 Chapter 8. 
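The crosstab's estimated column and its descending (Regression - Quantity) ordering follow directly from the fitted equation. The following Python sketch applies the same arithmetic to invented sample rows (the inventory and quantity figures are stand-ins for illustration, not data from the report):

```python
# Rank products by (predicted - actual) quantity, mirroring the
# (Regression - Quantity) column sorted in descending order.
SLOPE = 0.533
CONSTANT = 17297.425

def predicted_quantity(opening_inventory):
    """Apply the fitted linear model from the Curve Estimation output."""
    return SLOPE * opening_inventory + CONSTANT

# Hypothetical (product, opening inventory, actual quantity) rows.
rows = [
    ("Glacier Basic", 90000, 21000),
    ("Double Edge", 80000, 24000),
    ("Trail Star", 30000, 33000),
]

ranked = sorted(
    ((name, predicted_quantity(inv) - qty) for name, inv, qty in rows),
    key=lambda pair: pair[1],
    reverse=True,  # biggest shortfall (pain point) first
)
for name, gap in ranked:
    print(f"{name}: {gap:,.1f}")
```

The products with the largest positive gap between predicted and actual quantity surface first, which is exactly what the descending crosstab ordering achieves.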
Chapter 8. Actionable analytics everywhere

IBM Cognos Business Intelligence (BI) offers various capabilities that allow more people to take advantage of business intelligence in more places. Having access to the data that you need, when you need it, so that you can make the best decisions possible is best described as actionable analytics. In this chapter, we introduce the actionable analytics that are available with IBM Cognos BI.

This chapter includes the following topics:
– Accessibility and internationalization
– Disconnected report interaction
– Interact with IBM Business Analytics using mobile devices
– IBM Cognos Analysis for Microsoft Excel
– Business driven workflow

© Copyright IBM Corp. 2010. All rights reserved.

8.1 Accessibility and internationalization

IBM Cognos BI includes features to help you create reports that are more accessible to people with a physical disability, such as restricted mobility or limited vision. Additionally, IBM Cognos BI expands its language support. In this section, we introduce functions that you can use to enhance accessibility and the supported languages of IBM Cognos BI.

8.1.1 Enabling access for more people

When creating reports using IBM Cognos Report Studio and IBM Cognos Business Insight Advanced, you can do the following activities:
– Add alternative text for non-text objects, such as images and charts
– Add summary text for crosstabs, lists, and tables
– Specify whether table cells are table headers

When using IBM Cognos Business Insight or IBM Cognos for Microsoft Office, report consumers can do the following activities:
– Navigate without a mouse using arrow keys and function keys. For example, you can move to the first item or object with the Ctrl+Home key combination, and you can open and display the contents of a drop-down list with the Alt+Down arrow key combination.
– Use any combination of high contrast system settings and browser minimum font settings to control the display. For example, when you set the operating system to high contrast, graphic icons are replaced with text icons for easier viewing, as shown in Figure 8-1.
– Use the Freedom Scientific JAWS screen reader. JAWS is supported only with the Mozilla Firefox browser for this release of IBM Cognos Business Insight.
– Switch from graphs to tables and control the palette settings to meet specific accessibility needs.

For more information, see the IBM Cognos Business Insight User Guide.

Figure 8-1 Text icons for high contrast

IBM Cognos BI provides the ability to enable accessible output at many levels. Users can enable accessible output by report, or they can choose to enable accessible output for all reports using a user preferences setting. Administrators can control accessible output as a server-wide option, so that all reports for all IBM Cognos BI users have accessibility features enabled. Accessibility settings in the user preferences and report properties overwrite this setting. For more information, see the IBM Cognos Connection User Guide and the IBM Cognos Administration and Security Guide.

IBM Cognos Business Insight is accessibility enabled. The accessibility features, including keyboard shortcuts, the search functionality, and the indexed search functionality, are documented as well. The documentation includes alternate text for all graphics so that screen readers can interpret graphics. For more information, see the IBM Cognos Business Insight User Guide.

8.1.2 Providing internationalization

IBM Cognos BI now expands language support for IBM Group 1 translations. The following Group 1 languages are supported in the product user interface (see Figure 8-2):
– Existing Group 1 language translations: French, German, Japanese, and Simplified Chinese
– New Group 1 language translations: Traditional Chinese, Korean, Brazilian Portuguese, Spanish, and Italian

Figure 8-2 Setting for supported language

8.2 Disconnected report interaction

IBM Cognos BI offers a report function called IBM Cognos Active Report. IBM Cognos Active Report is a report output type that provides a highly interactive and easy-to-use managed report. IBM Cognos Active Report can be consumed by users who are offline, making this an ideal solution for remote users such as the sales force.

8.2.1 IBM Cognos Active Report overview

IBM Cognos Active Report includes reports that are built for business users. IBM Cognos Active Report is a disconnected and interactive report that is developed for employees in the field and the growing ranks of telecommuters and business partners who are not connected to an intranet and need information and analytics to make quick decisions and actions. IBM Cognos Active Report extends the reach of business intelligence and analytics to employees in the field and business partners, allowing them to explore data and derive additional insight.

Report authors build reports that are targeted at users' needs, keeping the user experience simple and engaging. Users continue to benefit from the enterprise value of one version of the truth.

8.2.2 IBM Cognos Active Report features

IBM Cognos Active Report produces reports that are an extension of existing IBM Cognos Report Studio values and the IBM Cognos Platform. You use IBM Cognos Report Studio to create active reports, and new interactive control types in IBM Cognos Report Studio serve as building blocks for creating active reports.

Layout
Users need data organized in a way that is easy to consume and understand. Some users prefer viewing the numbers, and other users prefer viewing a visualization, such as a chart.
To help report authors deliver the content in the most consumable way possible, IBM Cognos Report Studio provides the following layout controls:
– Tab controls for grouping similar report items, as shown in Figure 8-3

Figure 8-3 Tab controls

– Decks of cards for layering report items
You can use decks and data decks to show different objects and different data respectively based on a selection in another control. For example, clicking a radio button in a radio button group control shows a list object, and clicking a different radio button shows a chart object.

Filtering and sorting
Users need to focus on the data in which they are most interested. By allowing report authors to add easy-to-use controls to a report, users can interact with the data and filter it to obtain additional insight. IBM Cognos Report Studio provides several filtering controls:

– List and drop-down list control (as shown in Figure 8-4)
Use list boxes and data list boxes to provide a list of items that users can choose from. In data list boxes, the lists are driven by a data item that you insert in the control. In reports, users can select one or more items in a list box.

Figure 8-4 Drop-down list

– Interactions with charts (as shown in Figure 8-5)
Use list boxes and data list boxes to provide a list of items from which users can choose. In data list boxes, the lists are driven by a data item that you insert in the control. In reports, users can select one or more items in a list box.

Figure 8-5 List box

– Radio buttons (as shown in Figure 8-6)
Use radio button groups and data radio button groups to group a set of buttons that have a common purpose. In data radio button groups, the radio buttons are driven by a data item that you insert in the control. In reports, users can click only one radio button at a time.

Figure 8-6 Radio buttons

– Check boxes (as shown in Figure 8-7)
Use check box groups and data check box groups to group a set of check boxes. In data check box groups, the check boxes are driven by a data item that you insert in the control. In reports, users can select one or more check boxes simultaneously.

Figure 8-7 Check boxes

– Toggle buttons (as shown in Figure 8-8)
Use toggle button bars and data toggle button bars to add a group of buttons that change appearance when pressed. In data toggle button bars, the buttons are driven by a data item that you insert in the control. In reports, users can click one or more buttons simultaneously.

Figure 8-8 Toggle buttons

– Push button controls (as shown in Figure 8-9)
Use button bars and data button bars to add a group of push buttons. In data button bars, the buttons are driven by a data item that you insert in the control. In reports, users can click only one button at a time.

– Control data for hiding or showing list columns
Users can control the data that displays using check boxes. You can show or hide a column in a list, or a column or row in a crosstab, when the report is viewed.

Authors can upgrade existing reports to be interactive by adding the controls that we mentioned previously, providing users with an easy to consume interface. Similar to existing IBM Cognos reports, you can run active reports from IBM Cognos Connection, and you can schedule and burst these reports to users.

8.2.3 IBM Cognos Active Report use case

In Chapter 3, "Business scenario and personas used in this book" on page 21, we introduce the Great Outdoors company business scenario.

Create IBM Cognos Active Report
Lynn Cope, who is in the role of the Professional Report Author in this example, creates a sales summary report that shows the lowest sold products in 2007 3Q. The report is divided by Product Line. The business partner is not connected to the Great Outdoors company intranet or the IBM Cognos BI infrastructure. The generated report needs to be interactive and must show data without requiring a connection to the IBM Cognos Platform.

To allow users to use the Active Report feature:
1. Launch IBM Cognos Report Studio and create a list with Product line, Product type, Product, and Quantity of [GO Data Warehouse(analysis)]-[Sales and Marketing(analysis)]-[Sales]. Set the order to Ascending on Quantity and the grouping to Product line. Add a Context filter with Q3 2007.
2. Convert the report to an active report by clicking File, then Convert to Active Report.
3. Insert a Data Tab Control object from the Insertable Objects pane to the left side of the list table, and insert the Product line to Drop Item here, as shown in Figure 8-10.

Figure 8-10 Insert Data Tab control

4. Create a connection by clicking the Interactive Behavior icon. Then, select Create New Variable and enter Product line Variable 1, as shown in Figure 8-11.

Figure 8-11 Create a connection

5. Save and run the report. Figure 8-12 shows the result. If your result data rows are more than 5,000, change the limit setting within Active Report Properties.

Figure 8-12 Active report

For more information, see the IBM Cognos Report Studio User Guide.

Download IBM Cognos Active Report
Lynn Cope is also an Advanced Business User. She wants to download the Active Report as a local file and send it to a business partner. To download a report:
1. Click the developed Active Report in IBM Cognos Connection, and then select download to local storage (as shown in Figure 8-13). The file has a .mht extension.

Figure 8-13 Downloaded Active Report file

2. Send the .mht file to the business partner by email.

Using the active report
When the business partner receives an email from Lynn Cope, he can use the report that is attached to the email to analyze the data to determine the lowest sold product at each product line in 3Q 2007. To analyze the report:
1. Open the active report. To open the active report file, use Internet Explorer (V7 or higher) or Mozilla Firefox (with UnMHT). If you use Mozilla Firefox, delete the URL filter by selecting Tools, then the IE Tab option, and deleting /^file:\/\/\/.*\.(mht|mhtml)$/.
2. Click each Product Line button to find the lowest sales quantity of each product line, as shown in Figure 8-14.

Figure 8-14 Disconnected active report

IBM Cognos Active Report can provide a robust and interactive experience for disconnected analysis of business information.

8.3 Interact with IBM Business Analytics using mobile devices

IBM Cognos Mobile provides timely and convenient access to IBM Cognos BI information from a mobile device. This function provides decision makers with access to business critical information wherever they are and whenever they need it. Mobile users want to take advantage of personal mobile devices while interacting with business intelligence on the device. In this section, we introduce supported devices and a use case for IBM Cognos Mobile.

8.3.1 Extended device support

IBM Cognos Mobile now supports the mobile devices that we describe in this section.

Improved prompting
IBM Cognos Mobile offers improved prompting in the web application for the Apple iPhone and Apple iPad. Users can run prompted reports intuitively by using prompting mechanisms that suit the mobile device. Prompting uses prompt identifiers and the surrounding text and formatting that desktop users see.

Support for RIM BlackBerry Smartphones
IBM Cognos Mobile now supports the enhanced BlackBerry user interface for BlackBerry OS 4.2 and higher.

Support for Symbian S60 and Microsoft Windows Mobile 6.1 devices
IBM Cognos Mobile continues support for Symbian S60 and Microsoft Windows Mobile 6.1 devices.
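The Firefox URL-filter pattern quoted in the Active Report steps above is an ordinary regular expression. As a quick illustrative check (the sample paths are invented, and this is plain Python rather than anything the product runs), it matches local file:/// URLs that end in .mht or .mhtml:

```python
import re

# The IE Tab URL filter quoted above, written as a Python raw string.
MHT_FILTER = re.compile(r"^file:///.*\.(mht|mhtml)$")

samples = [
    "file:///C:/reports/sales_summary.mht",  # matches
    "file:///home/user/report.mhtml",        # matches
    "http://server/report.mht",              # wrong scheme, no match
    "file:///C:/reports/report.pdf",         # wrong extension, no match
]
for url in samples:
    print(url, "->", bool(MHT_FILTER.match(url)))
```

This is why deleting the filter matters in Firefox: while the pattern is active, those local .mht and .mhtml files are redirected away from the browser's own renderer.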
Support for IBM Cognos Business Insight
In addition to reports and analyses, users can have dashboards that were created in IBM Cognos Business Insight delivered to their devices. In addition, users can create a list of favorites and select one dashboard or report to display automatically on the Welcome window when they start IBM Cognos Mobile.

Support for Apple iPhone, Apple iPad, and Apple iPod Touch
IBM Cognos Mobile now supports Apple iPhone, Apple iPad, and Apple iPod Touch devices. Users can use familiar iPhone actions to access the same business intelligence content and IBM Cognos Mobile features that are available on other devices in previous releases. The new user interface is easier to navigate and provides an improved overall experience when accessing IBM Cognos BI content.

8.3.2 Simplified experience across all devices

You can connect to, interact with, and make decisions with IBM Business Analytics on your mobile device as we describe in this section.

Zero footprint
IBM Cognos Mobile for the Apple iPhone and Apple iPod Touch is an HTML 5 web application. It enables the following functions:
– Rapid mass deployment to enterprise mobile users
– Faster device support to sustain the speed at which new devices enter the marketplace
– Instantaneous software updates that occur at the server
You do not need to update mobile client software, which makes deployment transparent to users.

Drill up and drill down
IBM Cognos Mobile offers drill up and drill down capabilities. Users can see the fields within the IBM Cognos BI content on their devices on which they can drill up or drill down. After drilling up or drilling down on one or more of those fields, users can return to the original report where they began the drilling process.

Browsing improvements
IBM Cognos Mobile offers Report Thumbnails, Panning, and Zooming functions. These features allow users to gain additional insight into the information that they are consuming.

8.3.3 IBM Cognos Mobile use case

Lynn Cope is an Advanced Business User who wants to reference a sales summary report on her Apple iPhone. When you connect to IBM Cognos Mobile on the Apple iPhone, the Welcome window opens, as shown in Figure 8-15.

Figure 8-15 IBM Cognos Mobile welcome window on Apple iPhone

Figure 8-16 shows the Favorites tab.

Figure 8-16 Favorites tab

The third tab shows recently run reports by thumbnails, as shown in Figure 8-17.

Figure 8-17 Recently run reports

The Explorer tab shows IBM Cognos reports (similar to using IBM Cognos Connection), as shown in Figure 8-18.

Figure 8-18 Explorer tab

The Search tab allows you to use the search function. In this example, Lynn searches for Sales reports, as shown in Figure 8-19.

Figure 8-19 Search tab

You can use the text prompt, as shown in Figure 8-20.

Figure 8-20 Text prompt

You can use the zooming function to show report detail, as shown in Figure 8-21.

Figure 8-21 Zooming function

8.4 IBM Cognos Analysis for Microsoft Excel

IBM Cognos Analysis for Microsoft Excel offers enhanced value as a Microsoft Office based authoring and analysis tool that can share resulting work back to a common business intelligence portal and can improve the user experience for financial analysts who work with a variety of data sources. In this chapter, we introduce the features and a use case for IBM Cognos Analysis for Microsoft Excel.

8.4.1 Features of IBM Cognos Analysis for Microsoft Excel

This section introduces the features of IBM Cognos Analysis for Microsoft Excel.

Publish Microsoft Excel reports
You can publish explorations and lists directly to IBM Cognos Connection, which enables Microsoft Excel users to author reports in Microsoft Excel and distribute them as secured web reports without the additional step of using a studio package, such as IBM Cognos Analysis Studio. You can save an exploration as a web report. You can open these Microsoft Excel reports using IBM Cognos Business Insight Advanced, IBM Cognos Report Studio, and IBM Cognos Analysis Studio.

Calculation
Calculations are now supported for explorations and lists. You can add business calculations, such as totals and percentage change between years. You can use the calculation function without converting to formulas. In addition, you can convert an exploration to a formula, which enables you to create and maintain reports using advanced functions in an easy-to-use environment with drop zones. When you convert your exploration, you have the option of converting data on the current worksheet, copying and moving the data to a new worksheet, or specifying the location for the converted data. After items are placed in the cells of a worksheet, you can rename column and row headings, and you can reorder items.
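The year-over-year business calculation described here can be sketched outside the product as well. A hypothetical pandas equivalent (the revenue numbers are invented for illustration, not values from the report):

```python
import pandas as pd

# Hypothetical revenue crosstab (Product rows, Time columns); the
# numbers are invented stand-ins, not figures from the use case.
df = pd.DataFrame(
    {2004: [100, 80], 2005: [110, 85], 2006: [120, 90], 2007: [150, 70]},
    index=["Camping Equipment", "Golf Equipment"],
)

# A "Latest Difference" column (2007 - 2006), then reorder the
# columns so the newest years come first.
df["Latest Difference"] = df[2007] - df[2006]
df = df[["Latest Difference", 2007, 2006, 2005, 2004]]
print(df)
```

The difference column and the column reordering mirror the calculation and the Reorder / Rename step performed in the Excel add-in.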
Cell formatting
Additional custom styles are available for formatting cells. You can gain access to IBM Cognos Analysis Studio styles, such as IBM Cognos - Calculated Row Name or IBM Cognos - Measure Summary, through the Microsoft Excel function by clicking Style from the Format menu. The IBM Cognos styles are listed along with default Microsoft Excel styles. You can modify attributes, such as font and alignment, and then save the changes to a template to use again.

User-defined rows and columns
You can add user-defined rows and columns in the middle of explorations and lists to add calculations. You can create Microsoft Excel calculations for the entire row, column, or block. You can also add blank rows and columns in the middle of explorations or lists to enhance readability.

Comments
You can add and preserve user comments and values. If you refresh the data in a Microsoft Excel sheet, these comments and values are not deleted.

Performance
A streaming data mode for list-oriented queries supports large volumes of data requests with speed. You can change the format of data that is received from the IBM Cognos BI server to CSV. Receiving the data as unformatted data can speed processing time.

8.4.2 IBM Cognos Analysis for Microsoft Excel use case

Lynn Cope is an Advanced Business User who wants to create a sales summary report in Microsoft Excel. To allow users to use the IBM Cognos Analysis for Microsoft Excel feature:
1. Create a sales report. First, create a crosstab report with IBM Cognos Analysis for Microsoft Excel. Then, create an exploration, and insert Product to row, Time to column, and Revenue to data in [GO Data Warehouse(analysis)]-[Sales and Marketing(analysis)]-[Sales], as shown in Figure 8-22.

Figure 8-22 Create an exploration

2. Add a calculation. First, add a column to show the difference of revenue between 2006 and 2007. Select the 2007 cell, press Ctrl while you click the 2006 cell, click the calculation button, and select 2007 - 2006, as shown in Figure 8-23.

Figure 8-23 Add a calculation

Figure 8-24 shows a new column that reports the difference between 2007 revenue and 2006 revenue.

Figure 8-24 Calculated column

3. Change the new column name to Latest Difference. Then, order the columns by right-clicking the year cell (for example, 2004) and clicking IBM Cognos Analysis, then Reorder / Rename (see Figure 8-25).

Figure 8-25 Change name and order

You can change the cell format by clicking the Camping Equipment cell, clicking Format, Style, and Modify, and changing cell attributes, as shown in Figure 8-26. You can reflect the cell attribute to other cells by selecting the appropriate Style name in the Style window.

Figure 8-26 Cell formatting

4. Add rows and columns by clicking the Insert User Row / Column icon, and create formulas, as shown in Figure 8-27.

Figure 8-27 User-defined column

5. Add comments to appropriate cells (see Figure 8-28).

Figure 8-28 Add comment

6. Publish the Microsoft Excel report to IBM Cognos Connection, as shown in Figure 8-29.

Figure 8-29 Publish

Figure 8-30 shows the published Microsoft Excel report in IBM Cognos Connection.

Figure 8-30 Published Microsoft Excel report in IBM Cognos Connection

8.5 Business driven workflow

In this version, IBM Cognos Platform introduces the ability to encompass user actions into business intelligence events and processes in a way that can be managed and audited. In previous versions, event authors could configure agents that detected events within a business intelligence data source and that took action based upon preconfigured criteria. However, user interaction with these events was provided only using email and IBM Cognos portal news items. With email and news item posts, you cannot define how users are expected to respond to the event. In addition, you cannot capture what a user does with information that is provided by the event agent. In this section, we introduce WS-Human Tasks and features and provide a use case of IBM Cognos Event Studio and My Inbox.

8.5.1 Enhanced event management

The IBM Cognos Platform includes a new service to support enhanced event management functionality called the Human Task Service. This service is based upon an open specification called WS-Human Tasks. IBM is leading the working group for this specification.

The enhanced event management features of this version permit event authors to configure tasks to be assigned to individual users or groups of users. Tasks can request approval for actions to be taken based upon an event, or they can request that a user decide how the agent should proceed to execute configured tasks based upon the user's analysis of or reaction to the event condition or task contents. These tasks can include expectations around when work on a task must be started by and completed by.

Features of IBM Cognos Event Studio
You can create the following human tasks in IBM Cognos Event Studio:

Approval request task
You can create an approval request task to an agent when you want an event to occur only after approval. This task sends an approval request related to an event to the task inbox of specified recipients in IBM Cognos BI. You can include content, such as report output.
Notification request task You can create a notification request task to an agent to send a secure notification about an event to the inbox of specified recipients in IBM Cognos BI.1 Handbook . Ad-hoc task You can create an ad-hoc task to send a task to the task inbox of the recipients you specify. My Inbox A task inbox contains the following human tasks: Notification task You can also create a notification request task in My Inbox. perform the following: 1. Approval Request scenario In this example. To submit the report for approval. it must be approved by the Sam Carter.2 Human task service use case In this section. Launch the IBM Cognos Event Studio with Go Data Warehouse (analysis) package and add the Run a report task as shown in Figure 8-31.5. Figure 8-31 Task list Chapter 8. the Administrator. for a quality check.8. Actionable analytics everywhere 389 . Lynn Cope is a Professional Report Author who wants to take a sales report definition and burst it into reports for the following countries: USA Japan Brazil Before the bursted report is sent. we provide two use cases that use the human task service. and change settings as shown in Figure 8-32.2. Figure 8-32 Burst report task 390 IBM Cognos Business Intelligence V10.1 Handbook . Select the burst report. Add Run an approval request. and set Sam Carter as a potential owner.3. Figure 8-33 Approval request task Chapter 8. Actionable analytics everywhere 391 . and attach the bursted reports as shown in Figure 8-33. Enter a subject and body. Priority.4.1 Handbook . Icon. Task Owner Action as shown in Figure 8-34. For more information. Options (Send email or not on each phase). You can select Due Dates. see the IBM Cognos Event Studio User Guide. Figure 8-34 Approval request task options 392 IBM Cognos Business Intelligence V10. and set Lynn Cope as the recipient. Actionable analytics everywhere 393 . Chapter 8.5. Enter a subject and body. and attach the bursted reports as shown in Figure 8-35. 
Add a Send an email task. Save the agent and schedule an appropriate time. Figure 8-35 Email task 6. 1 Handbook . Figure 8-36 Email for approval 8. Sam Carter receives an email with the approval request as shown in Figure 8-36.7. Figure 8-37 Approval request message in My Inbox 394 IBM Cognos Business Intelligence V10. When Sam Carter logs in to IBM Cognos Connection. Approve the request. he can find a message in “My Inbox” as shown in Figure 8-37. as shown in Figure 8-38. Sam Carter can open the message and approve it. also providing a comment.9. Figure 8-38 Approve operation Chapter 8. Actionable analytics everywhere 395 . Enter a subject and body.10. 396 IBM Cognos Business Intelligence V10. and set Lynn Cope as the recipient. Launch IBM Cognos Event Studio.000. and create an event with an expression such as: [Employee name (multiscript)] = [Employee name (multiscript)] and [Year] = '2007' and aggregate ( [Quantity] ) > 300000 2. To create an automatic notification process: 1. Lynn Cope wants to create an automatic notification process to be altered to exceptionally high performances by the sales staff.1 Handbook .Lynn Cope receives an email that the approval is granted. Lynn wants to be notification if a sales person’s sales amount exceed 300. and the report is complete. as shown in Figure 8-39. Add a Run a notification request task. Figure 8-39 Email after approval Notification Request scenario In this scenario. and attach an Event list as shown in Figure 8-40. Actionable analytics everywhere 397 . Save the agent and schedule it execute every day. Chapter 8.Figure 8-40 Notification request task 3. 4. You can receive messages in My Inbox if when a sales person reaches 300.1 Handbook . you can find the Event list is attached and open it to see whom the sales person is. Figure 8-41 Notification message in My Inbox 398 IBM Cognos Business Intelligence V10.000 in sales as shown in Figure 8-41. In this message. Figure 8-42 Notification email Chapter 8. 
5. You can receive an email as shown in Figure 8-42. In this email, you can also see the Event list and discover the highest performing sales person.

Part 4 Enterprise ready platform

© Copyright IBM Corp. 2010. All rights reserved.

Chapter 9. Enterprise ready performance and scalability

Today's companies need flexible decision-making processes that can adapt quickly to changing opportunities and challenges within the marketplace. IBM Cognos BI meets that need with Dynamic Query Mode, an in-memory query generation and caching technology that delivers fast results. This feature enables you to analyze your business faster and allows for quicker decision making based on this analysis, which in turn increases business satisfaction.

This chapter includes the following topics:
Overview of Dynamic Query Mode
Configuring Dynamic Query Mode
Query Service Administration
Analyzing queries

9.1 Overview of Dynamic Query Mode

In this section, we provide a basic overview of the high-performance in-memory query mode known as Dynamic Query Mode. Depending on the data source that you use, Dynamic Query Mode offers different performance optimizations to improve the speed of data analysis.

Running and building reports requires several requests, such as metadata and data requests. As these requests are fulfilled, a cache is built: Dynamic Query Mode caches the results, whether those results are metadata or data, for future re-use. This cache is data source and user specific and can be shared across report processes that are running on the same dispatcher. This caching results in fewer requests sent to the underlying data source and thus provides better query performance. These improvements benefit the query planning and the results that are returned while maintaining the security context.
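The caching behavior just described, with results keyed per data source and per user security profile and shared across report processes on the same dispatcher, can be modeled in a few lines. This is an illustrative sketch of the idea only, not the actual Dynamic Query Mode implementation:

```python
class QueryCache:
    """Toy model of a security-aware result cache: entries are keyed by
    (data source, user security profile, query), so results produced under
    one security profile are never served to a different profile."""
    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def fetch(self, datasource, user_profile, query, run_query):
        key = (datasource, user_profile, query)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = run_query()   # only on a miss does the data source get queried
        self._store[key] = result
        return result

cache = QueryCache()
run = lambda: "42 rows"
cache.fetch("GODB", "sales_role", "q1", run)   # miss -> query sent to source
cache.fetch("GODB", "sales_role", "q1", run)   # hit  -> served from memory
cache.fetch("GODB", "admin_role", "q1", run)   # different profile -> miss
print(cache.hits, cache.misses)  # -> 1 2
```

The key design point the sketch shows is that the security profile is part of the cache key, which is why the cache can be shared across report processes without leaking data between users.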
9.1.1 What is Dynamic Query Mode

Dynamic Query Mode is an enhanced Java-based query execution mode that addresses increasing query complexity and data volumes through key query optimizations and in-memory caching. Table 9-1 lists these optimizations per data source.

Table 9-1 Key optimizations
The optimization columns are: Java Connectivity, Null Suppression optimization, Master-detail optimization, In-memory Cache, and 64-bit connectivity.
IBM Cognos TM1: X X X (three of the optimizations apply)
Oracle Essbase: X X X X (four of the optimizations apply)
SAP BW: X X X X X (all five optimizations apply)

In-memory caching

As users run and build interactive and batch reports on packages that are Dynamic Query Mode-enabled, results are cached and re-used, resulting in only one query sent to the data source for all output formats.

Dynamic Query Mode cache: The Dynamic Query Mode cache is not maintained for packages that are based on IBM Cognos TM1 cubes, because IBM Cognos TM1 implements its own caching.

Enhanced null suppression

Deeply nested reports generate large amounts of cells, and there is a higher probability that null values can occur where the relationship between the nested edges returns no data. IBM Cognos Business Insight Advanced and IBM Cognos Query Studio, among the authoring studios, offer enhanced null suppression for such reports.

Optimized master-detail relationships

A master-detail relationship links information and queries from the following types of data objects within a report: a master data object and a detail data object. Using a master-detail relationship, you can use a single report to display information that normally would take multiple reports. With Dynamic Query Mode, the master query is pushed as a separate edge to the detail query.
The more cells a report contains, the longer it takes for the report to evaluate which rows and columns contain only null values, which can impact performance. These issues can make reports large and difficult to read. IBM Cognos Report Studio includes an enhanced suppression feature that can reduce the amount of cells that need to be evaluated to achieve the desired result.

In Compatible Query Mode, master-detail relationships generate a separate query for every element in the master result set. Thus, the underlying data source has to endure high workloads, affecting both speed and performance. For example, you can create a list report that contains an Order Method and link a crosstab inside this list to display detailed information in the context of this particular Order Method.

Query visualizations

To get the best performance possible from your IBM Cognos investment, it is important that you can troubleshoot unexpected results or slow run times easily. The Dynamic Query Analyzer allows for easy access and analysis of Dynamic Query Mode log files using a graphical user interface. For more information about this tool, refer to 9.4, "Analyzing queries" on page 424.

9.1.2 Why use Dynamic Query Mode

Using Dynamic Query Mode and its features offers some significant benefits to the IBM Cognos administrator. The new query mode can provide the following benefits to both the administrator and users:

Better query performance
Re-use of frequently used results
Easier identification of issues

Enhancements in the inner workings of the query planning process have generated more optimized queries that are specific to the OLAP data source vendor and version, which results in faster executing queries. The query mode is also streamlined to take advantage of the metadata and query plan cache. The same is true for executing reports in different output formats. In addition, running in a 64-bit JVM provides the advantage of an increased size of the address space.
The implementation of various caching containers allows report authors and analysis users to take advantage of another performance optimization. The cache allows report authors to make minor modifications, such as adding a calculation, to a report without this change resulting in another query to the underlying data source. In case the added calculation is based on measures already present in the initial report, the modification can be performed on the measures already in the cache. Changing a report's output format from, for example, CSV to PDF does not trigger a new query to the data source, because all that data is already in the cache.

Analysis users also benefit from caching, but in a slightly different way. The nature of analyzing data requires a lot of metadata requests to present results to the user. These requests can create a high load on the underlying data source. With the introduction of Dynamic Query Mode, more types of metadata results can be cached, which results in faster navigation and exploration of the hierarchy.

9.1.3 Technical overview

To truly understand the nature of the features in Dynamic Query Mode, we need to take a quick look at how the query mode is built. Dynamic Query Mode is a Java-based query mode that addresses the increasing query complexity and data volumes. Implementing the query mode in Java allows it to take advantage of a 64-bit enabled Java Virtual Machine (JVM). For Dynamic Query Mode, the 64-bit JVM is capable of addressing more virtual memory than its 32-bit variant. This increase in addressable memory allows Dynamic Query Mode to maintain more of the metadata and data cache in memory as long as needed.

To improve performance and reduce interference with the dispatcher or content manager, Dynamic Query Mode is spawned in its own JVM instance as a child process of the dispatcher. No Java heap memory space is shared with the content manager or dispatcher, which enables the Dynamic Query Mode cache to use as many resources as possible.

From a software architecture point of view, Dynamic Query Mode consists of the following main components: the transformation layer and the execution layer. The transformation layer provides a runtime environment where, during the planning phase, query transformations can be performed. Every time a query is executed, the results from the planning phase and execution phase are placed into the respective layer's cache container.
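The planning idea, in which a query request becomes a tree that transformation rules rewrite node by node until it is ready to execute, can be sketched in miniature. The Node class and the merge_filters rule below are invented for illustration; the real transformation libraries are far richer and are specific to each OLAP source:

```python
class Node:
    """A minimal plan-tree node: an operation name plus child nodes."""
    def __init__(self, op, children=()):
        self.op = op
        self.children = list(children)

def transform(node, rules):
    """One planning pass: rewrite children bottom-up, then let each rule
    replace this node. A rule returns a new Node, or None if it does not apply."""
    node.children = [transform(c, rules) for c in node.children]
    for rule in rules:
        node = rule(node) or node
    return node

# Invented example rule: collapse two stacked 'filter' nodes into one,
# the kind of node conversion a transformation library might perform.
def merge_filters(node):
    if (node.op == "filter" and len(node.children) == 1
            and node.children[0].op == "filter"):
        return Node("filter", node.children[0].children)
    return None

plan = Node("filter", [Node("filter", [Node("scan")])])
run_tree = transform(plan, [merge_filters])
print(run_tree.op, len(run_tree.children), run_tree.children[0].op)
# -> filter 1 scan
```

In this toy version a single pass suffices; as the text notes, the real planner may iterate several times before the plan tree becomes a run tree.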
When Dynamic Query Mode receives a query request, it converts this request into a tree structure called a plan tree. After this conversion is complete, the tree can be passed on to the transformation layer so that the query planning can begin. The transformation layer then goes through every node on this tree and checks for nodes that qualify for transformation. The transformations that implement the query planning logic are contained within separate transformation libraries and are called by the transformation layer to execute the node conversions. It is these transformations that allow the query mode to generate customized and enhanced MDX that is specific to your OLAP source. This process can take several iterations to finish. When the planning iterations are complete, the plan tree is transformed into a run tree and is now ready for execution by the execution layer.

The execution layer executes and processes the result from the planning phase during the execution phase. After the execution layer completes its process, the results are passed onto the report service to render the report.

Both layers have security-aware, self-learning caching facilities for improved performance. These layers reuse data from the cache only if the data that is needed exists and if the user's security profile matches.

9.2 Configuring Dynamic Query Mode

For a package to use Dynamic Query Mode, you need to make configuration changes. As with Compatible Query Mode, you need to install the appropriate database client software prior to creating a data source connection and before a package can be created.

9.2.1 Creating a connection in IBM Cognos Administration

The first step in enabling Dynamic Query Mode connectivity, after installing the database client software, is to create a data source connection to the OLAP database. The connection can be used later when creating the IBM Cognos Framework Manager model and package. For the purpose of this example, we use the Great Outdoors sample database, IBM Cognos TM1, that comes with the IBM Cognos Business Intelligence (BI) version 10 samples package.

To create a connection:

1. Open IBM Cognos Administration by clicking Launch IBM Cognos Administration as shown in Figure 9-1.

Figure 9-1 Launch IBM Cognos Administration

2. Click the Configuration tab.
3. In the Data Source Connections section, click New Data Source. Optionally, you can also specify a description and tooltip for this entry.
4. Specify the data source name, and click Next.
5. Choose IBM Cognos TM1 as the data source type, and click Next.
6. Specify the IBM Cognos TM1 connection string details, such as Administration host and server name, and complete the required credentials. Click Test the connection to verify the current configuration. Note that the result page now shows a successful state for both Compatible Query Mode and Dynamic Query Mode, as illustrated in Figure 9-2.

Figure 9-2 Connection test

7. Return to the page where you entered the connection string details, and click Finish to end the wizard.

9.2.2 Creating a package in IBM Cognos Framework Manager

After you create the connection to the OLAP database server, you can create an IBM Cognos Framework Manager model and import cubes to publish to IBM Cognos Connection as a package.

Using multiple data sources: With IBM Cognos Framework Manager, you can include and mix multiple data sources in a package.
However, when using Dynamic Query Mode, all data sources that are referenced in the package must be supported by the query mode. Otherwise, the publish will not succeed.

Creating an IBM Cognos Framework Manager project and package to use with Dynamic Query Mode is similar to creating those for Compatible Query Mode. At the end of the publishing wizard, you need to specify to use Dynamic Query Mode. To create a new project and publish the default package using Dynamic Query Mode, follow these steps:

1. Open IBM Cognos Framework Manager, and click the Create a new project link.
2. Give the project a name, and click OK.
3. IBM Cognos Framework Manager presents a list of supported metadata sources from which you can choose. Choose Data Sources, and click Next as shown in Figure 9-3.

Figure 9-3 Metadata source

4. Choose the data source that you created in 9.2.1, "Creating a connection in IBM Cognos Administration" on page 408, and click Next.
5. Choose the plan_Report cube, and click Next to continue.
6. Continue until you reach the Metadata Wizard - Finish window, as shown in Figure 9-4. At this point, IBM Cognos Framework Manager has detected that no additional modeling is required and suggests that a package is created for publishing. Verify that the "Create a default package" option is selected, and click Finish.

Figure 9-4 Create a default package

7. Give the package a name, and click Yes on the subsequent prompt to launch the package Publish wizard.
8. Click Next until you reach the Options page of the Publish wizard. Then, select the "Use Dynamic Query Mode" option, and click Publish as shown in Figure 9-5.

Figure 9-5 Use Dynamic Query Mode

9. When the package is published successfully, the final window of the publishing wizard displays. Click Finish.
10. After the package is published and available in IBM Cognos Connection, click the package's properties icon to verify that the query mode is in use, as shown in Figure 9-6.

Figure 9-6 Package properties

9.2.3 Transitioning to Dynamic Query Mode using IBM Cognos Lifecycle Manager

IBM Cognos Lifecycle Manager helps you succeed in the critical process of verifying upgrades from IBM Cognos ReportNet®, previous versions of IBM Cognos 8, or IBM Cognos BI version 10 to IBM Cognos BI version 10. It provides a proven-practice upgrade process where you execute and compare report results from two different IBM Cognos releases to identify upgrade issues quickly. With the introduction of Dynamic Query Mode, IBM Cognos Lifecycle Manager also provides the possibility to verify and compare reports using this query mode.

If you take these two features into consideration, IBM Cognos Lifecycle Manager is also a great tool to identify any issues with the transition of your current IBM Cognos version 10 packages and reports, based on Compatible Query Mode, onto Dynamic Query Mode. An additional advantage to this approach is that you are not affecting any packages and reports that users currently access. You can enable Dynamic Query Mode in IBM Cognos Lifecycle Manager at either a global project level or on an individual package basis.

To enable the use of Dynamic Query Mode on all packages in an IBM Cognos Lifecycle Manager project, click Settings → Configure, and navigate to the Preferences tab as shown in Figure 9-7.

Figure 9-7 IBM Cognos Lifecycle Manager project settings

In the Dynamic Query Mode Options section, notice two drop-down lists where you can enable Dynamic Query Mode: one for the source environment and one for the target environment.
You can now choose one of the following options:

The Default option instructs IBM Cognos Lifecycle Manager to validate or execute reports and packages using the query mode that is specified on the individual package.

The DQM Disabled option instructs IBM Cognos Lifecycle Manager to validate or execute all reports and packages in this project using Compatible Query Mode.

The DQM Enabled option instructs IBM Cognos Lifecycle Manager to validate or execute all reports and packages in this project using Dynamic Query Mode.

If you select the Default option, you can specify the query mode on the package views individually, as shown in Figure 9-8.

Figure 9-8 Specify query mode

In the Options column, DQM in bold means that the Dynamic Query Mode engine is enabled on this package. Double-clicking DQM disables the use of the new query mode, and the DQM is no longer bold.

Testing note: Attempt testing against Dynamic Query Mode only if all data sources that are included in the package are supported by this query mode. Validating and executing packages that contain unsupported data sources will fail.

9.3 Query Service Administration

A vital part of running and maintaining a successful IBM Cognos implementation is administration. By this we mean that knowing exactly what is going on in your system at any time, and reacting to those events appropriately, is essential for you to get the most out of your IBM Cognos investment. IBM Cognos Administration allows for easy monitoring, configuring, and tuning of the services that are available in the IBM Cognos instance. With the addition of Dynamic Query Mode, exposed as the Query Service in IBM Cognos Administration, new metrics and tuning options are added, as shown in Figure 9-9.
Figure 9-9 IBM Cognos Administration

9.3.1 Query Service metrics

You can monitor the Query Service status and configure its metric thresholds in IBM Cognos Administration by opening the Metrics pane for the respective service. By default, all metrics record performance information, but no thresholds are configured, because acceptable threshold values depend on the IBM Cognos operating environment and need to be set accordingly.

You can define thresholds for the following Query Service metrics:

Last response time: The time taken by the last successful or failed request
Number of failed requests: The number of service requests where a fault was returned
Number of processed requests: The number of processed requests
Number of successful requests: The number of service requests where no fault was returned
Percentage of failed requests: The percentage of processed requests that failed
Percentage of successful requests: The percentage of processed requests that succeeded
Response time high watermark: The maximum length of time taken to process a successful or failed request
Response time low watermark: The minimum length of time taken to process a successful or failed request
Seconds per successful request: The average length of time taken to process a successful request
Service time: The time taken to process all requests
Service time failed requests: The time taken to process failed requests
Service time successful requests: The time taken to process successful requests
Successful requests per minute: The average number of successful requests processed in one minute

You can also create an agent that notifies you when thresholds are exceeded. Sample agents that monitor the audit database for threshold violations and perform common actions when violations are detected are included in the audit samples package.
9.3.2 Manage the cache in IBM Cognos Administration

To increase the performance of recurring reports and to minimize the load on underlying data sources, Dynamic Query Mode includes a self-learning in-memory cache. The query mode can store the data cache in memory as long as needed. One side effect that stems from caching metadata and data requests on an ever-changing data source is the possibility that the data that is contained in the cache has become old and stale. This issue leads to reports that do not display the most recent data. To help overcome this issue, IBM Cognos Administration includes flexible cache maintenance features.

You can locate the first feature by clicking Configuration → Query Service Caching as depicted in Figure 9-10.

Figure 9-10 Query Service Caching

This new entry in the IBM Cognos Administration tool allows the administrator to clear the cache and write cache statistics to file on a Server Group basis in an ad-hoc fashion. After this task runs, all report servers in the selected server group that host a Query Service clear the cache or dump the statistics to a file. If you write the cache statistics to disk, a new file is created in the ../logs/XQE/ folder on all IBM Cognos servers in that particular server group that host an instance of the Query Service. The file name adheres to the following template:

SALDump_<datasource>_<catalog>_<cube>_<timestamp>.xml

Example 9-1 shows an example of this file.

Example 9-1 Server group cache statistics file
SALDUMP_all_all_all_1283781787890.xml

Figure 9-11 shows an example of the content of this file.

Figure 9-11 Cache state

This example illustrates a cache state that contains metrics for a package called DQM_GODB - DAN. This package is based on the GODB cube using the Essbase data source Essbase_DAN.
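Because the dump files follow a fixed naming template, they are easy to inventory with a script. The sketch below (not an IBM-provided tool) parses the SALDump_<datasource>_<catalog>_<cube>_<timestamp>.xml pattern and derives a cache hit ratio from request counters like those reported inside a dump; the counter values are passed in by hand here, because the dump's XML element names are not shown in the text, and the parser assumes the name fields contain no underscores:

```python
import re

def parse_saldump_name(filename):
    """Split a cache-statistics file name of the form
    SALDump_<datasource>_<catalog>_<cube>_<timestamp>.xml.
    Assumes the individual fields contain no underscores."""
    m = re.fullmatch(r"SALDump_(.+)_(.+)_(.+)_(\d+)\.xml",
                     filename, re.IGNORECASE)
    if not m:
        raise ValueError("not a SALDump file: " + filename)
    datasource, catalog, cube, ts = m.groups()
    return {"datasource": datasource, "catalog": catalog,
            "cube": cube, "timestamp": int(ts)}

def hit_ratio(total_requests, cache_misses):
    """E.g. 285 requests with 5 not served from cache -> ~98.2% hit rate."""
    return (total_requests - cache_misses) / total_requests

info = parse_saldump_name("SALDUMP_all_all_all_1283781787890.xml")
print(info["cube"], round(hit_ratio(285, 5) * 100, 1))  # -> all 98.2
```

A real monitoring script would pull the request counters out of the dump's XML body instead of hard-coding them, and could walk the ../logs/XQE/ folder to summarize every server group at once.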
Under the Cache Metrics comment, notice that for this package, 285 query requests were issued, with five requests not being fulfilled by the package cache.

Other than clearing the cache per Server Group, you can now also create or schedule Query Service Administration tasks. You can locate these tasks by clicking Configuration → Content Administration as shown in Figure 9-12.

Figure 9-12 Query Service Administration task

The cache maintenance tasks that you can create are the same as under the Query Service Caching section, with the only difference being the granularity. You can schedule these tasks to clear the cache or to write statistics to file based on data source, catalog, and cube as shown in Figure 9-13.

Figure 9-13 Query Service Administration task options

You can determine the values that you need to enter here by examining Figure 9-11 on page 419. After such a task runs, all report servers that host a Query Service clear the cache or dump the statistics to file. For example, a task that clears the cache will dump a file according to the following template, as shown in Example 9-2:

SALDump_<datasource>_<catalog>_<cube>_<timestamp>.xml

Example 9-2 Individual cache statistics file
SALDUMP_Essbase_DAN_GODB_GODB_1283781787890.xml

9.3.3 Query Service settings

In addition to the cache maintenance dialog boxes, there is another area in IBM Cognos Administration that deals with the administration of the Query Service. This dialog box allows you to modify logging information and connection time-out. You can locate this dialog box by clicking Configuration → Dispatchers and Services → Query Service properties as illustrated in Figure 9-14.

Figure 9-14 Query Service properties
Figure 9-15 shows the Settings tab within this dialog box.

Figure 9-15 Query Service settings

The Settings tab allows you to modify the following settings:

Audit logging level: Controls the amount of audit information that is recorded for this service. The higher you set the logging level, the more it degrades system performance.

Query execution trace: This switch toggles the recording of the run tree, also known as the query execution phase.

Query plan trace: This switch toggles the recording of the plan tree, also known as the query planning phase.

Write model to file: This switch toggles whether the model information is written to file for a given package when a report is run against it. The file will be saved in the ../logs/XQE/model/ folder as <packagename>.txt and is typically requested by Customer Support as an aid in troubleshooting. See Example 9-3.

Example 9-3 Model file
GreatOutdoors.txt

Disable query plan cache: This switch toggles the caching of the plan tree, also known as the query planning phase.

Idle connection timeout: This setting controls how long connections can be idle before being terminated.

9.3.4 Disabling the Query Service

The Query Service is spawned as a child process of the dispatcher JVM process and, as with other processes, it takes up memory and resources. If for some reason you do not use the Query Service, disable this service so that the resources that might be used by the Query Service can be used by other components running on this instance. To disable the Query Service:

1. Open IBM Cognos Configuration.
2. Click the IBM Cognos services entry under the Environment section.
3. Find the Query Service enabled entry, and set it to False.
4. Save and restart the IBM Cognos Service.
Figure 9-16 shows where in IBM Cognos Configuration you can disable the Query Service to give those expensive resources back to the system.

Figure 9-16 Disable the Query Service

9.4 Analyzing queries

This section introduces Dynamic Query Analyzer.

9.4.1 What is Dynamic Query Analyzer

Dynamic Query Analyzer provides a graphical flow representation of the dynamic query run tree, rather than requiring you to look through text-based log files. This feature allows for a better understanding when it comes to important query decisions. Modelers and professional report authors can use this tool to tune and improve the models and reports that they build.
Other than the Remote Log Access, you can also specify IBM Cognos version 10 server and authentication information. Supplying this information enables Dynamic Query Analyzer to browse the content manager and run reports from within the tool. 426 IBM Cognos Business Intelligence V10.1 Handbook By default, the Content Store view does not display. You can add this view by opening the Show View dialog box: 1. Click Window Show View. 2. Expand Navigation (Figure 9-18), and double-click Content Store to add the view. Figure 9-18 Show View dialog box Chapter 9. Enterprise ready performance and scalability 427 A new view is added to the Dynamic Query Analyzer interface. This view is similar to the view shown in Figure 9-19. If you entered the server information in the Preferences dialog box (Figure 9-17 on page 426) correctly, the view displays the packages and reports that are available in the Content Store. Figure 9-19 Content Store view 428 IBM Cognos Business Intelligence V10.1 Handbook Expanding reports entries that have been run with the Query execution trace enabled will display the trace entries found in logs folder as shown in Figure 9-19 on page 428. Double-clicking the Runtree entry opens the trace (Figure 9-20) in the same way as when using File Open log. Figure 9-20 Query run tree Chapter 9. Enterprise ready performance and scalability 429 Apart from the Tree Graph view, which can be exported by clicking File Export Graph, there two more views that can be of great interest when analyzing report queues. The first view is the Graph Navigation view, shown on the left side in Figure 9-21, which displays the same run tree in another graphical tree that is easily expanded or collapsed. Figure 9-21 Graph Navigation and Query views The second view is the Query view, shown on the right side in Figure 9-21, which displays the data source specific query in a conveniently formatted syntax. 430 IBM Cognos Business Intelligence V10.1 Handbook 10 Chapter 10. 
IBM Cognos system administration

In this chapter, we discuss new administration capabilities introduced as part of the IBM Cognos Business Intelligence (BI) version 10.1 release. The focus is broken into the following main topics:

IBM Cognos Administration overview
Moving to IBM Cognos BI version 10.1 from a previous release
Using the administrative features
Managing the environment
Auditing

10.1 IBM Cognos Administration overview

Meeting service commitments to the business and anticipating and responding to changing business requirements, all within tight budget constraints, are fundamental IT tasks. As noted in The Performance Manager: Proven Strategies for Turning Information into Higher Business Performance (ISBN-10: 0973012412), IT also has a strategic role to play, which means going beyond the fundamentals. If IT spends all of its time and resources ensuring service commitments, it is impossible to initiate and support more strategic opportunities. In this way, any means of reducing the time and effort to administer and maintain enterprise applications becomes a strategic gain.

IT publications often compare IT's job of managing enterprise applications to conducting an orchestra. There are multiple moving parts (databases, servers, networks) that must all work together to provide a positive user experience. Working in concert, the application can become a critical part of the business. Out of tune, it can become an immediate distraction, a critical failure, or worse, something deemed untrustworthy by users. If this happens, IT has lost user buy-in and, most likely, a portion of its return on investment.

10.1.1 IBM Cognos Administration capabilities

IBM Cognos Administration provides capabilities for IT professionals to manage their business analytics systems proactively to prevent problems before they occur.
These system administration capabilities (Table 10-1) let IT address the important considerations necessary to exceed commitments to the business while respecting budgetary and other resource constraints.

Table 10-1 System administration capabilities
Category: Knowing the business intelligence system
Capabilities: Understand usage patterns. Understand the IBM Cognos BI system environment. Understand the business expectations.
Category: Resolving and preventing issues
Capabilities: Determine what thresholds to address. Track and evolve over time.

Understand usage patterns

Understanding usage patterns is important both for troubleshooting immediate system issues and for performance-tuning activities over the life cycle across the many components of the business intelligence system. There is no one simple approach to understanding usage patterns. They are unique to the cycle of how the organization gathers information, reviews it, and distributes it. For example, managing a centrally located user community that distributes pre-generated reports over email is not the same as managing a group of users who are always on the road and who want to access all information with a mobile device. Certain patterns are more obvious and better defined. For example, quarter-end and year-end typically generate extra system activity for most departments. Others vary depending on the industry or the culture of how the organization communicates.

Understanding usage patterns means knowing the number of users that access a system at any given period of time, where these users are located, how much time they spend using the solution, and how they use the solution. To gain a better understanding of usage patterns, certain metrics provide a good indication of how well the business intelligence solution is adapting to how people use it. Whereas no one metric provides a complete picture of the system, these metric examples are a starting place for tracking usage patterns:

Number of sessions: This system-level metric provides the number of user sessions actively logged onto the system. Whereas this does not tell you how the business is using the system, it does provide insight into how many people choose to use the system.

Number of processed requests: This metric provides the number of requests received at a specific point in time. Looking at processed requests alone provides a sense of magnitude in terms of system usage. What are the peak periods versus the slower periods? How much are business intelligence applications being used? This is useful for determining optimal times to schedule batch reporting.
Whereas no one metric provides a complete picture of the system, these metric examples are a starting place for tracking usage patterns:

Number of sessions
This system-level metric provides the number of user sessions actively logged onto the system. Whereas this does not tell you how the business is using the system, it does provide insight into how many people choose to use the system.

Number of processed requests
This metric provides the number of requests received at a specific point in time. Looking at processed requests alone provides a sense of magnitude in terms of system usage. What are the peak periods versus the slower periods? How much are business intelligence applications being used? This is useful for determining optimal times to schedule batch reporting.

Number of queue requests
This metric provides the number of requests that have gone through the queue. A high number of queue requests might indicate a high volume of system activity at a particular point in time or an issue that needs to be addressed. With a more consistent and deeper understanding of usage patterns, an IBM Cognos BI system administrator can determine whether this is regular activity or an anomaly that needs explanation and potential action.

Longest time in queue
The time in queue high watermark metric provides the longest time that a request has been in the queue. As it increases, it indicates more system activity and longer waits. This is a useful metric to monitor for changes on a regular basis. If queue times increase over a short period, there might be a change to the usage patterns that requires further investigation.

Understand the IBM Cognos BI system environment
IBM Cognos BI is built on a modern service-oriented architecture (SOA) platform. This flexible platform offers many deployment options based on the preferred IT infrastructure and enterprise architecture strategy.
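Metrics like these are straightforward to track outside the product as well. The sketch below is a minimal, hypothetical tracker for the two queue metrics just described (number of queue requests and the time-in-queue high watermark). The class and method names are invented for illustration and are not part of any IBM Cognos API.

```python
class QueueMetrics:
    """Minimal sketch of the queue metrics described above.

    Hypothetical helper for illustration only; not a Cognos API.
    """

    def __init__(self):
        self.queue_requests = 0   # number of requests that have gone through the queue
        self.longest_wait = 0.0   # time-in-queue high watermark, in seconds

    def record(self, wait_seconds):
        """Record one request leaving the queue after waiting wait_seconds."""
        self.queue_requests += 1
        if wait_seconds > self.longest_wait:
            self.longest_wait = wait_seconds


m = QueueMetrics()
for wait in (0.2, 1.5, 0.7):
    m.record(wait)
print(m.queue_requests)  # 3
print(m.longest_wait)    # 1.5
```

Resetting such a tracker at the start of each monitoring window makes it easy to see which periods produced the longest waits.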
As the organization extends the IBM Cognos BI system, it should monitor the initial deployment strategy to ensure that it still fits the current deployment landscape. For example, you might have initially deployed business intelligence in a centralized server environment with certain affinities in place to handle usage patterns for the initial deployment. You might have decided to dedicate a particular server to running reports that are processing intensive, or you might have allocated a local server for a geographic location without adequate network access. As the solution expands, the assumptions driving the initial architecture infrastructure might change. You might need to revisit them to ensure optimal solution performance.

To better understand the overall health of the business intelligence system environment, consider the following metrics:

Successful requests per minute
This metric shows the average amount of successful requests relative to the amount of processing time it took to execute them. The algorithm for this metric was devised to provide a real measure that is not impacted by periods of inactivity. This metric is useful for determining and tracking server throughput.

Number of processed requests per dispatcher
The number of processed requests per dispatcher is a good indicator of load balance in the business intelligence system. If one dispatcher is handling a heavier load, you need to understand why. Is this a deliberate configuration choice based on the usage patterns, or does it require further review?

Percentage of failed requests
This metric provides the percentage of failed requests based on the total number of requests handled. This metric gives you trending information over longer periods of time to understand how the business intelligence services are performing.
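The exact formula IBM Cognos uses for successful requests per minute is not spelled out here. One reasonable reading of "relative to the amount of processing time" is to divide by busy time rather than wall-clock time, so that idle periods do not dilute the rate; the helpers below sketch that interpretation, plus the failed-request percentage. The function names and inputs are illustrative assumptions, not product code.

```python
def requests_per_minute(durations_seconds, successes):
    """Illustrative take on 'successful requests per minute': divide the
    number of successful requests by total processing time, so periods of
    inactivity do not affect the measure. (Assumption, not the documented
    Cognos formula.)"""
    busy_minutes = sum(durations_seconds) / 60.0
    if busy_minutes == 0:
        return 0.0
    return successes / busy_minutes


def failed_percentage(total, failed):
    """Percentage of failed requests based on the total number handled."""
    return 0.0 if total == 0 else 100.0 * failed / total


# 50 successful requests that took 600 seconds of processing in total:
print(requests_per_minute([600], 50))  # 5.0 requests per busy minute
print(failed_percentage(200, 10))      # 5.0 percent
```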
Understand the business expectations
Whereas formal service level agreements (SLAs) provide a structured approach to communicate and set system expectations, IT needs to keep the communication channels open with business owners and ensure that priorities align with organizational strategy. Business expectations set the agenda for what metrics to track, the thresholds to set, and how to prioritize follow-up actions.

For example, a company's strategy might center on customer service with an objective to improve call center performance. Ensuring that critical call center information is readily available on demand will be at the top of the priority list. IT would want to monitor and ensure system uptime and report response times related to the call centers. By contrast, if an organization is trying to reduce costs through process optimization, then weekly or monthly reports are critical to manage. These reports might not demand faster response times, but monitoring failure rates would be key to ensuring that they are delivered in time to enable a streamlined process.

Determine what thresholds to address
Setting metrics, gathering data on usage patterns and technology environments, and understanding business expectations are all important to effective system management. Identifying thresholds for those metrics simplifies IT's task (and communication with business owners) by giving the context to determine when to take action. Thresholds make IT proactive. IT can flag issues before they affect users, breach SLAs, and lead to support calls.

There can be hundreds of system metrics. Taking time to understand what metrics are vital for system management of your business intelligence solution is essential before setting any thresholds. IT is no different from the users it supports. Too much information makes the right course of action as difficult to determine as having no information at all.
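Once the vital metrics are chosen, the threshold check itself can be very simple. The sketch below returns only the metrics that crossed their agreed-upon limits, which keeps the signal small enough to act on. The metric names and threshold values are made up for illustration.

```python
def check_thresholds(metrics, thresholds):
    """Return only the metrics that crossed their agreed-upon thresholds.

    metrics: current readings, e.g. {"longest_time_in_queue_s": 95}
    thresholds: agreed limits per metric name (illustrative values).
    """
    breaches = {}
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            breaches[name] = (value, limit)
    return breaches


metrics = {"longest_time_in_queue_s": 95, "failed_request_pct": 2.0}
thresholds = {"longest_time_in_queue_s": 60, "failed_request_pct": 5.0}
print(check_thresholds(metrics, thresholds))  # {'longest_time_in_queue_s': (95, 60)}
```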
For example, if the longest time in queue starts increasing, it might require further investigation. If queue and wait times increase, users might wonder whether there is a system issue. Having an agreed-upon threshold on this metric would identify the point when IT needs to take further action to understand what is happening. Users would know the threshold and also know that their IT department is dealing with the issue.

Track and evolve over time
After IT has identified key metrics and set thresholds, it can respond to current situations proactively to avoid business disruption. The next consideration for system management is making system metrics work for IT and the business over the long term. Metrics provide IT and business users with insight into changing usage patterns and technology environments over time. With this information, IT can tune the business intelligence solution and adapt metrics and thresholds to maintain, meet, and improve service standards.

To accomplish this, IT needs business intelligence reporting on its system information. As described in The Performance Manager, IT must use dashboards, scorecards, reports, analysis, and alerts to deliver the correct information to drive improved decisions within the IT department. Better IT decisions can affect everyone across the organization.

Summary: Five steps to effective business intelligence system management
To effectively manage their business intelligence solution, IT managers need to understand:
- Usage patterns
- The business intelligence system environment
- User expectations
It also means being able to resolve problems and take actions quickly to prevent issues from occurring. To do this, IT needs to monitor metrics, set thresholds, and analyze usage patterns and key aspects of the business intelligence system environment.
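Tracking and evolving over time can start with something as simple as comparing recent samples of a metric against its earlier baseline. The function below is an illustrative sketch for flagging a sustained rise in queue times; the window size and factor are arbitrary choices for the example, not product settings.

```python
def queue_time_trending_up(samples, window=3, factor=1.5):
    """Flag a sustained increase in a metric: the average of the most recent
    `window` samples exceeds the average of the preceding samples by `factor`.
    Both parameters are illustrative defaults, not product settings."""
    if len(samples) < 2 * window:
        return False  # not enough history to call it a trend
    recent = samples[-window:]
    earlier = samples[:-window]
    return (sum(recent) / window) > factor * (sum(earlier) / len(earlier))


# Queue times (seconds) sampled over a week; the last three readings jump:
print(queue_time_trending_up([10, 12, 11, 9, 30, 35, 40]))  # True
```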
IT professionals can use this data to initiate conversations with their business partners—bridging technology and information requirements—to stay in touch with user expectations and service commitments. IT has the facility to adapt to changes in usage and new business needs. This becomes the means to drive continuous improvement in the performance of your business intelligence solution, and the roadmap to more effective performance management.

With IBM Cognos BI, business intelligence administrators and IT professionals gain new facilities to manage the health of the business intelligence system. Task-oriented system monitoring gives administrators a new, consolidated view of all system activity, from scheduled and interactive reports to servers and dispatchers. Proactive administration through detailed system metrics and the ability to set thresholds that can be monitored let IT professionals identify and correct anomalies, proactively manage the system, and respond to new requirements while meeting on-time service commitments.

With these system management capabilities, IT can deliver on its fundamental tasks (that is, meeting service-level commitments and responding to changing business requirements within budgetary constraints) and have the capacity to drive strategic objectives. In this way, IT can realize its full potential as leaders and change agents within organizations. IBM Cognos BI, using the IBM Cognos Platform, delivers broad system management for IT to confidently deploy business intelligence.

10.1.2 The IBM Cognos Administration user interface
IBM Cognos BI provides a centralized, web-based administration console that provides administrators with the tools that are necessary to manage the IBM Cognos application. To launch the administration console, use one of the following methods:
- Click the IBM Cognos Administration link from the Launch menu within IBM Cognos Connection (Figure 10-1).
Figure 10-1 Launching the administration console from IBM Cognos Connection
- Select the Administer IBM Cognos content link from the welcome menu (Figure 10-2).
Figure 10-2 Welcome to IBM Cognos software menu

The administration console includes the following tabs (Figure 10-3):
- Status
- Security
- Configuration
- Index Search
Figure 10-3 Administration console tabs
Each tab contains a set of logically grouped functions, called tasks, on the left side of the user interface.

Status tab
The Status tab is designed to provide administrators with visibility and insight into which objects are currently running, both interactively and in the background (scheduled or batch), which have executed in the past, and those that have been slated to execute in the future. In addition to the object execution details, the Status tab provides a health check of the overall IBM Cognos environment (Figure 10-4).
Figure 10-4 Status tab and the associated tasks

Current activities task
The current activities task exposes all the objects in the environment (interactive or background) that are currently running in an executing, pending, waiting, or suspended state. The default view displays what is currently being handled through the background (batch) processes. Looking at the top of the Current Activities interface, a summary graph displays all of the current objects and the state that they are in. To change the display in the right part of the interface:
1. Locate the Filter frame in the lower-left side of the page.
2. Toggle between the background activities and interactive activities by selecting the appropriate radio button.
3. Click Apply.
It is possible to toggle between background and interactive activities, but it is not possible to view both simultaneously.

Focusing first on the background activities, in addition to the summary chart, all of the objects currently in the system display in a paginated list. Looking at an entry, the following items are visible in the default view (Figure 10-5):
- The object name
- The time that it was requested
- Who executed the object
- The dispatcher and process ID handling the request
- The last successful execution time
- The status
- The priority assigned to the object
Figure 10-5 Executing job in the current activities task
Figure 10-5 shows a job in an executing state that was requested by Sam Carter. Keep in mind that the details that are shown might differ from task to task due to the nature of the task and the level of information available to the task. To see more details about the object, click the Show Details icon on the toolbar. This expands the list to include a second line of detail. Conversely, the Hide Details icon removes the additional line of detail.

To help reduce the number of objects displaying or to quickly isolate and focus in on particular objects, a series of filter options are included in the Filter section in the lower-left frame (see Figure 10-6).
Figure 10-6 List of filters available to help reduce the results displayed in the list
The default filter options allow the result set to be filtered by the user running the object, the current state that it is in, the type (job, import, agent, report, and so on), and scope (package or folder in which the object resides). Clicking the Advanced options link allows for filtering by the owner of an object being executed, a dispatcher (which provides administrators with the insight to identify which dispatcher is handling which objects), and the priority that has been assigned to the object. Any single filter option or a combination of options can be selected, and after the options have been determined, click Apply for the result set to be updated.

Moving on to the interactive activities, most of the level of detail that was available in the background activities view is still applicable, with a couple of minor exceptions:
- The objects running interactively have fewer states. Either an object is pending (queued) or it is executing.
- The only default filter option available for the interactive view is the ability to filter by status. The advanced options provide the ability to filter by dispatcher.

Past activities task
As the name implies, this task displays objects that were executed in the past (Figure 10-7).
Figure 10-7 Result list displaying past activities based on filter criteria
The interface groups objects into three execution states:
- Succeeded
- Failed
- Cancelled
The basic filter options permit filtering on predefined time periods or a user-defined custom period. As with the current activities task, the result set can be narrowed down to objects run by an individual user. One important thing to note about this interface is that only the objects that were executed in the background will be recorded. Interactive requests do not record histories that can be displayed through this task. For this type of information, you need to use the audit data.

Upcoming activities task
This task has gone through changes in the IBM Cognos release. An interactive chart was introduced to better represent the estimated pending load on the system. This is accomplished by displaying an hourly breakdown of the amount of objects that have been scheduled to start in that particular hour timeslot (Figure 10-8).
Figure 10-8 Hourly breakdown of scheduled activities in the system
Beneath the chart is a filtered list of the individual activities that are scheduled to execute. In addition to the filter options on the left part of the interface, it is possible to scroll between days by using the Next and Previous buttons beside the date at the top of the chart. Clicking a specific hour bar in the chart automatically filters the results in the list. To provide visual context to the filtered list, the currently applied filters are included above the list (Figure 10-9).
Figure 10-9 List of scheduled objects based on the criteria set in the chart
More information: Using the buttons to scroll the chart between days does not automatically change the filtered list. This enables administrators to select a specific scheduled object and postpone the execution time, which can be at a later date. For more details regarding postponing scheduled executions, see 10.4.2, "Reacting to bottlenecks due to unexpected events" on page 495.

When creating a schedule, privileged users can assign a priority of 1 (high) to 5 (low) for the object. All other users have a default priority of 3 when defining a schedule. Priority indicates in which order the dispatcher processes requests when items are queued. That is, in a busy environment where report executions are submitted when the engine is at capacity, they will be placed into a queue and processed when the engine resources become available. When the engine is ready to handle a new request, it takes the oldest priority 1 request before it takes requests with a priority of 2 or higher.
Priority reports: Priority does not mean that a long-running priority 5 report execution, in the process of executing, is terminated when a priority 1 request is received. It means that the priority 1 request is executed first after the priority 5 request has completed executing.

System task
This task provides administrators with an overall glimpse of how the system is faring through the use of status indicators on a dashboard. The System task interface is divided into three fragments:
- Scorecard: Left frame that displays a summary of the overall health of the components that make up the environment, as well as their corresponding metrics.
- Metrics: Upper-right frame that lists all of the metrics, and their score, that pertain to the object in focus (from the scorecard fragment).
- Settings: Lower-right frame that shows a read-only view of the configuration parameters that pertain to the object in focus.
The default is to display the metrics for the overall system (environment).

Besides the indicator lights that show the overall health of an object based on tolerance thresholds that can be applied to the metrics, the status of the object is also displayed. Industry standard terms employed are:
- Available: Indicates that the object is online
- Partially available: Indicates that one or more of its children is unavailable
- Unavailable: Indicates that the object is offline or is not started
The ability to perform metric comparisons is not be possible though unless you switch focus back and forth between report services. the wottmassons server is online and available. This drop-down menu allows for filtering of all servers. dispatchers. Figure 10-12 Comparative view of services in the environment 446 IBM Cognos Business Intelligence V10. The underlying dispatchers and services can be viewed by drilling down on the server name. Monitoring services side by side in the Scorecard view does not provide the ability to easily compare metrics across similar services in the environment. and services. To access this view. drilling down on the ca093489 server name reveals the dispatchers running on that server (Figure 10-11). while the ca093489 server is partially available. Figure 10-11 Server dispatchers. their overall health. using the Change view button beside an entry allows the results to be filtered to obtain the desired view. however.In the previous example.1 Handbook . The net effect is that the overall system status becomes partially available due to the fact that at least one child is unavailable. each with a different status Besides the ability to drill up and down. The parent dispatcher is visible by hovering over the service icon. Figure 10-15 Detailed list of schedules in the system Chapter 10. Figure 10-13 Tool tip on icon displays the dispatcher name Schedules task The schedules task displays all of the active (enabled or disabled) schedules in the environment and which user scheduled the object. who created the schedule.At first glance it is difficult to determine which dispatcher each of the report services belongs to. which produces a tooltip with the parent’s name (Figure 10-13). IBM Cognos system administration 447 . and the status and the priority assigned to the schedule (Figure 10-15). The summary chart is grouped by status (Figure 10-14). 
Figure 10-14 Summary chart displaying the status of the schedules The result list shows the scheduled objects along with the last modified date. and configuring parameters as they relate to the dispatchers and services that make up the IBM Cognos topology. and roles task This task allows administrators to create and manage application-specific groups and roles in the built-in Cognos namespace.1 Handbook . Content administration task This task is the interface that allows for the creation and management of content import and export definitions. not to be confused with security. 448 IBM Cognos Business Intelligence V10. The IBM Cognos BI release allows for a customizable approach to administration and responsibility so that various areas of responsibility can be delegated to more focused administrators. groups. and managed for the IBM Cognos BI solution. Users. and then either execute or schedule them. This helps to effectively distribute the administrative duties across users and groups. this task also provides the ability to create consistency checks and enhanced search index updates. Configuration tab The Configuration tab is a collection of administrative tasks pertaining to managing the content and the various aspects of the environment. The capabilities task is the tool that allows administrators to set the global capabilities that dictate which features and functionality are available to the users logging in to access the application. to the user community. is a collection of features that can be granted. In addition. Items such as managing printers. created. In addition to the content deployment objects. Data source connections task The data source connections task permits administrators to define and manage connections to data sources and the signon credentials associated with them. Capabilities task Capabilities. or denied. data source connections. 
are located on this tab.Security tab The Security tab contains all of the tasks that are required for managing the namespaces and capabilities. Printers task This task is where printer connections are defined. the external third-party security namespaces can be browsed and the contents of the users’ My Folders area can be viewed. and the management of business intelligence applications. 10. This testing is achieved by Chapter 10. 10.2 Moving to IBM Cognos BI version 10.Styles task This task allows administrators responsible for the IBM Cognos Connection portal to assign privileges to the various styles (skins) available to the user community. Configuration parameters can be set at the highest level and pushed down to the individual dispatchers through acquired properties. is made easy with use of IBM Cognos Lifecycle Manager. Index Search tab The Index Search tab provides administrators with the settings and parameters required to create and manage the enhanced search index. see 10.3. Access can be granted or denied to each style in the list as required. or they can be set individually on each dispatcher independent of the parent settings. Portlets task The portlets task is the mechanism within the product that allows administrators to manage and control the access rights to the portlets that are part of the IBM Cognos BI solution. For more information regarding the Index Search tab.2.1.1 from a previous release Upgrading. This index provides search results to enhanced consumers when executing searches to answer key business questions.1 Using IBM Cognos Lifecycle Manager to test the IBM Cognos environment IBM Cognos Lifecycle Manager is a utility that performs automated testing of IBM Cognos BI report content in multiple environments. Dispatchers and services task This task provides server administrators with a tool to manage the configuration settings of the dispatchers and services in the IBM Cognos topology. 
Before you can start testing, you must deploy content from an IBM Cognos 8 Business Intelligence source environment and upgrade the content to the IBM Cognos target environment following IBM Cognos documented upgrade practices. You can find information about these practices at:
ibm.com/developerworks/data/library/cognos/cognosprovenpractices.html
To successfully complete the report executions and comparisons, the folder structures in both environments must be identical.

Creating an IBM Cognos Lifecycle Manager project
To do this:
1. Start the IBM Cognos Lifecycle Manager process by going to Windows Start > All Programs > IBM Cognos Lifecycle Manager > IBM Cognos Lifecycle Manager Startup.
2. Launch the application by going to Windows Start > All Programs > IBM Cognos Lifecycle Manager > IBM Cognos Lifecycle Manager URI.
3. From the IBM Cognos Lifecycle Manager interface, there are options to open an existing project or to create a new project. Click the new project link (Figure 10-16).
Figure 10-16 IBM Cognos Lifecycle Manager welcome panel
4. On the New Project dialog box, select the Create blank project radio button.
5. Provide a name for the project. It is a good idea for the name to be indicative of the versions or content to be included.
6. Select Validation Project from the drop-down Project Type menu (Figure 10-17).
Figure 10-17 Creating a new IBM Cognos Lifecycle Manager project
7. Click Create.
8. Back in the IBM Cognos Lifecycle Manager interface, click Configure to launch the Configure dialog box, where the configuration parameters for both the target and source environments can be supplied.
9. On the Basic tab, supply the following information for both the target and source environment:
– Name
– Gateway URI
– Dispatcher URI
– Version
– Maximum number of connections that IBM Cognos Lifecycle Manager will make to the individual environments
The more connections specified, the more simultaneous requests will be made to each of the environments. In theory, more connections will result in a shorter time to complete the report executions, but there are many influencing factors (Figure 10-18).
Figure 10-18 Defining the servers to use for the project
Information unknown note: If certain required information is unknown, obtain the parameter values by launching the appropriate versions of IBM Cognos Configuration.
10. On the Security tab, supply the username, password, and namespace ID that will be used to connect to each environment (Figure 10-19). There is an option to save the passwords, which removes the need to supply the passwords every time that the project is opened, but it is advisable that you do not use this option because of the persistence of the password on the file system.
Figure 10-19 Supply the required security credentials and ensure that they test successfully
11. Click Test Connection to test the validity of the credentials.
12. Click Save.
The Preferences tab controls the various types of output and locales that will be generated for comparison. The default PDF option is used for this example. The Advanced tab contains various options that can be selected and customized. For this example, none of the default options are modified.

Generate report content to be validated
For IBM Cognos Lifecycle Manager to validate, execute, and perform report comparisons between the target and source environments, the list of content to include in the project must be specified. To generate report content to be validated:
1. From the IBM Cognos Lifecycle Manager interface, click Generate Report List.
2. Using the Select Search Paths dialog box, select the desired collection of folders and the package to include by clicking the appropriate check boxes (Figure 10-20).
Figure 10-20 Selecting content to include in the validation project
3. Click OK.
or individually select the objects that have prompt values missing. they are imported automatically into the IBM Cognos Lifecycle Manager projects and used when the reports are executed.Automatically generating prompt values One of the advantages of IBM Cognos BI reports is the ability to author a single report to satisfy different requests. This is achieved by including prompts in the reports that can change the results returned based on the selected prompt values. 456 IBM Cognos Business Intelligence V10. Using the drop-down menu in the footer. 3. Select the Target Validate task from the left side of the IBM Cognos Lifecycle Manager interface. as usage of report prompts is common. In certain cases. Figure 10-23 Automatically generate prompt values for missing report prompts 4. select Automatic Prompt Values Generation (Figure 10-23). IBM Cognos Lifecycle Manager attempts to generate a prompt value automatically. It is more than likely that there will be required prompts contained within the reporting environments. Click the check box in the upper-right corner of the header to select all of the content. 2. If there are missing prompt values. you must supply the values for the prompts manually. Click the action for Manual Prompt Capture (Figure 10-24). IBM Cognos system administration 457 . Figure 10-24 Manual prompt generation for a report 5. Chapter 10.If there are prompt values that are still required after automatically generating them. 4. Verify that the status of that report is changed to New. Figure 10-25 Manually enter a prompt value Click OK. 6. Navigate to a particular report with a Prompt Values Missing status. A Prompt dialog box opens. complete these steps: 1. Select the Prompt Values tab. Click Back. Manually enter a prompt value that satisfies the prompt criteria (Figure 10-25). 7. Click the report name to change to the Properties page for that report. 3. 2. click Target Execute. From the main IBM Cognos Lifecycle Manager interface. 2. 
Continue the process until all missing prompt values are defined. Then, to complete the validation, follow these steps:
1. Select the content to be validated by clicking the appropriate check boxes. Typically, folders and packages are validated; content excluded earlier is marked as out of scope for the project.
2. Using the drop-down menu, select Validate models/reports.
3. Click GO to start the validation process (Figure 10-26).

Figure 10-26 Completing the validation process

10.2.3 Executing target and source content

After the validation process has completed successfully, you need to execute reports in both the source and target systems. IBM Cognos Lifecycle Manager does not retrieve output from IBM Cognos Content Store. Instead, it generates all content, based on the format and language options and the supplied prompt values, from both systems, which ensures that the report output that is created for comparison contains the same parameters.

Because report executions are submitted through the SDK and executed on the system (target, source, or both), it is advisable that you execute the reports incrementally in smaller batches, versus submitting all content for execution. Initially, execute only report content that is required immediately to keep the amount of time to generate the output to a minimum. Based on priority, the remaining content can then be executed and compared incrementally.

To execute target and source content:
1. From the main IBM Cognos Lifecycle Manager interface, click the Source Execute task.
2. Select the content to execute by clicking the appropriate check boxes.
3. Using the drop-down menu, select Execute reports.
4. Click GO.
5. Click Target Execute and repeat steps 2 through 4 for the target environment.

Figure 10-27 shows the Source Execute window within IBM Cognos Lifecycle Manager.

Figure 10-27 Performing the Source and Target Execute tasks (source shown)

10.2.4 Compare the output to ensure consistency

After the reports have finished executing, a project contains all of the necessary content, but the content is in different stages of validation. Before comparing output versions, you need to validate and execute a report in both the source and target systems. Given that IBM Cognos Lifecycle Manager saves all data locally, you do not need to be connected to the source and target systems to complete the compare task.

There are two options for comparison. The first option uses Adobe Flash as part of a visual comparison tool, and the other option does a PDF text comparison looking for changes to the contents.

To compare report output versions:
1. Click Output Compare to navigate to the Output Compare window.
2. Click the check boxes next to the reports that you want to compare.
3. Use the drop-down menu to click Compare reports and click Go.

The compare task executes, and the status of each report is updated to show the result of the compare task. Figure 10-28 shows an example of the Output Compare window.

Figure 10-28 Output Compare showing reports that have been compared with no differences

If the report comparison indicates deltas, you need to complete further analysis to determine the nature of the difference and whether the difference is expected or acceptable. IBM Cognos Lifecycle Manager provides content validators with the ability to annotate any content object. In addition to the commentary, content validators can also approve or reject the comparisons based on the findings. For example, if one report in a package produced a delta, and the difference is approved, the approved status causes the status on the package to change to Completed.

10.2.5 Analyzing the project status

A summary of project status can be viewed at any time throughout the validation process by opening IBM Cognos Lifecycle Manager's Task Summary view. Because validation projects can span multiple days or weeks, and there might be multiple people participating, the ability to comment on the status or progress is critical.
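As a rough illustration of the text-based comparison option (not Lifecycle Manager's actual implementation), a line-oriented diff of the extracted text of two output versions is enough to flag whether the contents changed:

```python
# Illustrative sketch of a text comparison between source and target output.
# Assumes the PDF text has already been extracted to plain strings.
import difflib

def compare_outputs(source_text, target_text):
    """Return unified-diff lines; an empty list means No Differences."""
    return list(difflib.unified_diff(
        source_text.splitlines(),
        target_text.splitlines(),
        fromfile="source", tofile="target", lineterm=""))

# Identical output produces an empty diff.
print(len(compare_outputs("Revenue 100", "Revenue 100")))  # 0
```

A non-empty result corresponds to the Differences status, which a content validator would then approve or reject after further analysis.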
As each step in the project completes, the status of each task is updated and rolled up into the summary view. The Task Summary view in IBM Cognos Lifecycle Manager has a print option and is also exportable to an Adobe Flash file, which can then be opened in most web browsers.

Figure 10-29 shows the IBM Cognos Lifecycle Manager Task Summary window.

Figure 10-29 IBM Cognos Lifecycle Manager: Task Summary (status summary for a project)

Clicking around the Task Summary interface provides a filtered view of the statuses of project tasks for each phase of an IBM Cognos Lifecycle Manager project. The page is separated into the following sections:
- Tasks
- Validate
- Run
- Compare
- Total (which is a list of every report in the project along with the report execution times in both the source and target systems)

Each section shows the total number of objects that are in or have passed through each status within each project phase. Every IBM Cognos Lifecycle Manager project moves through five phases and various states within each phase:

Validate source/validate target:
– Valid: Following validation, indicates that the object is valid.
– Invalid: Following validation, indicates that the object is invalid.
– New: Indicates that the validation, execution, or output comparison operation has not been performed on the object.
– Prompts Missing: Indicates that certain reports in the package or folder require prompts, and no prompt values are defined. If a report has required prompts, you must provide a prompt value for each one before you can execute the report.
– Out of Scope: Informs IBM Cognos Lifecycle Manager to ignore this object during the validation, execution, or output comparison operation. When this value is specified, you can apply it to all of the object's children by selecting the Apply to all actions target and source check box.

Execute source/execute target:
– Succeeded: Following execution, indicates that the object executed successfully.
– Fail: Following validation, execution, or output compare, indicates that the object failed the operation.
– New
– Prompts Missing
– Out of Scope
– In Progress: Indicates that the validation, execution, or output comparison operation has not been performed on all children of an object or has failed for one or more children of an object.
– Partial Success: Indicates that the execution or output comparison operation was partially successful. For example, you compare reports in XML and PDF formats, and the XML reports are identical, but there are differences in the PDF output.

Compare:
– No Differences: Following output comparison, indicates that no differences were found.
– Differences: Following output comparison, indicates that differences were found.
– New
– Prompts Missing
– Out of Scope
– In Progress
– Partial Success
– Visual: The output type that the report was executed in requires a visual compare using the compare tool.
– Approve: Following output comparison, indicates that differences found were approved.
– Reject: Following output comparison, indicates that differences detected were rejected.

For quick visual reference, the bottom panel of the IBM Cognos Lifecycle Manager Task Summary window also provides a status icon displaying the status for each phase of the project:
- A green dot on a report icon indicates that a phase completed successfully.
- A red dot on a report icon indicates that a phase failed or was rejected.
- A black dot indicates that the status for that phase is new.
- A missing icon indicates that the phase has not yet been started.

Hovering the mouse pointer over the status icons in the Progress column for a report provides a pop-up containing the status of each phase in text form. Figure 10-30 shows an example of status icons for a single report in IBM Cognos Lifecycle Manager.

Figure 10-30 Task status icons in IBM Cognos Lifecycle Manager

Another element of the Task Summary window within IBM Cognos Lifecycle Manager is the Notes column. IBM Cognos Lifecycle Manager allows a report administrator to provide commentary related to specific reports. When a note exists for a report within a project, an icon displays in the row for that report under the Notes column. Hovering the mouse pointer over the note icon for a report provides a pop-up containing text-based commentary associated with the report. Figure 10-31 shows an example of commentary added to an IBM Cognos Lifecycle Manager entry using the Note column.

Figure 10-31 Notes associated with a rejected report in IBM Cognos Lifecycle Manager
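A minimal sketch of how child statuses might roll up into the parent summary, in the spirit of the In Progress and Partial Success states described above. The exact rules are internal to Lifecycle Manager; this is an assumption-laden illustration only:

```python
# Illustrative roll-up of child statuses into a parent status.
# Assumes each child reports one of the documented states; the ordering
# of the rules below is a guess, not Lifecycle Manager's actual logic.

def rollup_status(child_statuses):
    if any(s == "New" for s in child_statuses):
        # The operation has not been performed on all children yet.
        return "In Progress"
    if all(s == "Succeeded" for s in child_statuses):
        return "Succeeded"
    if any(s == "Succeeded" for s in child_statuses):
        # Some children succeeded and some failed.
        return "Partial Success"
    return "Fail"

print(rollup_status(["Succeeded", "Fail"]))  # Partial Success
```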
10.2.6 One-click comparison

Now that the phases of the IBM Cognos Lifecycle Manager utility are understood, there is a way to streamline the process. Performing this type of comparison executes the following operations without any user interaction:
- The reports are sent to the source for execution and the output is returned to IBM Cognos Lifecycle Manager.
- The reports are sent to the target for execution and the output is returned to IBM Cognos Lifecycle Manager.
- Both versions of the output are compared.

To perform a one-click comparison:
1. In the IBM Cognos Lifecycle Manager interface, click the Output Comparison task.
2. Select the new content to be compared by clicking the appropriate check boxes.
3. Change the drop-down menu to the Compare reports option.
4. Click GO.

Comparison note: This method of comparing reports simultaneously submits requests to both the source system and the target system.

After the comparison finishes, the project is ready for analysis of any reported deltas.

10.3 Using the administrative features

This section highlights administrative features in IBM Cognos BI version 10.1.

10.3.1 Enhanced search

When trying to answer critical business questions, gain insight, or identify new trends, users might potentially execute many reports and spend large amounts of time browsing report output searching for desired results. With IBM Cognos BI, there is a way to streamline the process: licensed enhanced consumer users have the ability to search for keywords located in report names, report output, modelled metadata, values in the reporting databases, and results returned from an external third-party search engine, such as the Google search appliance.

When performing a search through IBM Cognos Connection, enhanced consumers are presented with an interface that contains the following frames (Figure 10-32 on page 466):

Refinement: The left frame that allows users to further refine the search results by filtering by various elements. The types of filters that are available are creation date, owner of the object, object type, and metadata. Selecting one of these refinements filters the results contained in the results frame.

Results: The center frame that displays the search results based on the search criteria and the refinements applied. Based on the search criteria, there can be multiple sections within the search results returned. The standard section contains the search results directly related to the criteria used to perform the search. The Create and Explore section provides the user with a default query that is based on the search criteria on which to start building a report. Users can use the default query and then customize the query to create a report that can be reused or shared with others. This display is an excellent way to provide users with a head start on getting the information that they are after when there are no existing reports that provide the necessary detail. The last section provides users with suggested content based on predefined suggestions that are defined on the Index Search tab. For example, if there is a report template that should be used for financial reports, then a suggested definition can be created that displays the link to the report template any time that users search on words such as sales, profit, or budget.

Related: The related frame brings back results from an external third-party search source, such as the Google search appliance, which is an excellent way to include content that is not IBM Cognos so that users are presented with more inputs to help answer their questions.

Figure 10-32 Enhanced search user interface

Before users can use the new search capabilities with IBM Cognos, an enhanced search index must first be constructed. Administrators can influence the contents of an index by customizing the types of objects that are included as part of the build process. Because creating an index can be a lengthy process, depending on the types of objects included and whether reporting data is being indexed, build an initial index that just includes IBM Cognos content. This type of index results in a shortened time to build the search index and allows users to perform searches for content immediately versus having to wait for a full index to be constructed. After the first index has been created, incremental additions can be made to the index to expand the search results returned to business users.
As the reporting environment evolves, the index will have to be refreshed to reflect the changes. Table 10-2 provides steps for building an effective enhanced search index.

Table 10-2 Steps to building an effective enhanced search index

Step: Define indexable object types.
Index scope: Not applicable
Proven practice: Reduction of content types to index results in faster build times. Typically defined once and only changed when the business requires.

Step: Build and schedule initial index.
Index scope: All entries
Proven practice: Initial index containing only IBM Cognos content to quickly enable search capabilities for enhanced consumer users. Executed once when the initial reporting content is ready to be used.

Step: Refresh the search index.
Index scope: Only entries that have changed
Proven practice: Recurring task that can be run nightly or weekly depending on the frequency of changes to the content.

Step: Expand the index to include reporting data values.
Index scope: Only entries that have changed
Proven practice: Monitor usage and business requirements so that data values for popular reports and packages are incrementally added to the index.
Define indexable object types

Although the default parameter settings for enhanced search allow administrators to build an initial index, it is good practice to verify that all of the content types are required for a particular environment or to meet business requirements. For the initial build, deselecting the output type eliminates the time required to crawl through the report output results stored in the IBM Cognos content store (Figure 10-34).

To modify the types of content included in the search index:
1. On the Index Search tab from within the administration console, select the Index task from the left frame and the General tab in the right frame (Figure 10-33).

Figure 10-33 Index Search General tab

2. In the Indexable Types section, select the object types to be included in the initial search index. The indexable types include: Agent, Agent view, Analysis, Content reference, Dashboard, Data movement, Document, Folder, Job, Metric task, Package, Package configuration, Page, Planning application, Planning task, PowerPlay, PowerPlay report view, PowerPlay 7 cube explorer, PowerPlay 7 report, Query, Report, Report Output, Report template, Report view, Output XML, Port, Shortcut, and URL. Select All and Deselect All controls are also available.

Figure 10-34 Indexable object types

3. After the desired object types have been selected, click Save.

Securing the index and search results

The index update service can retrieve the access control list from IBM Cognos Content Manager during indexing. To secure the index and search results:
1. In IBM Cognos Connection, in the upper-right corner, click Launch, IBM Cognos Administration.
2. On the Index Search tab, click Index, General.
3. Under Security, review the settings listed in Table 10-3.

Table 10-3 Index search security considerations

Setting: Index Access Control List
Description: Specifies whether the access control list for each object is retrieved from Content Manager during indexing. This option consumes additional resources, but is turned on by default because it speeds up searching. When selected, the internal security check is used; when deselected, the Content Manager security check is used. For the internal check to apply, Index Access Control List must also be selected in Search, General and in Storage, General. All three index access control list settings are selected by default. If all three settings do not match, the Content Manager security check is used.

Setting: Update Policies
Description: Specifies whether the index access control list is updated when an incremental index is run. The setting is selected by default.

These are the minimum considerations required to build the initial index. For more information regarding the index search options, how to include third-party search engine content, or how to predefine suggested content for users, consult the "Managing Index Search" chapter in the IBM Cognos Administration and Security Guide. For more information, see the IBM Cognos Installation and Configuration Guide.

Build an initial index

Search results depend on the access permissions of the person who indexes the content and the user who searches the content. Build the search index with an account that has access to all content in the public folders so that all content is available to users.

User permissions note: Regardless of the user permissions that built the enhanced search index, search results only show content to which the user performing the search has access.

To build an initial index:
1. Navigate to the Content Administration task on the Configuration tab within the administration console.
2. Select the New Index Update icon from the toolbar, which launches the New Index Update wizard.
3. On the Specify a name and description wizard page, specify a name of Initial Index for the new index update task. Only the name is required, but, if desired, the other values can be supplied as well. Click Next.
4. On the Select the content wizard page, the content to be included in the search index can be specified. By default, all of the reporting content located in the Public Folders is included. Content can be selected for exclusion if desired, but, for the initial index, index all Public Folder content unless there is absolutely content that should not be included. Click Next.
5. On the Select an action wizard page, because the initial index should just include the reporting content, ensure that only the Properties and metadata option is selected for the content options. Click Save only, and then click Finish.
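Returning to the security settings in Table 10-3, the interaction of the three access control list settings can be sketched as a simple rule. The parameter names below are illustrative, not actual Cognos configuration keys:

```python
# Sketch of the Table 10-3 rule: the faster internal check applies only when
# the Index Access Control List setting is selected consistently in all three
# places (Index General, Search General, and Storage General); otherwise the
# Content Manager security check is used. Illustrative only.

def security_check(index_acl, search_acl, storage_acl):
    settings = (index_acl, search_acl, storage_acl)
    if all(settings):
        return "internal"          # default: all three selected
    return "content manager"       # deselected or mismatched settings

print(security_check(True, True, True))    # internal
print(security_check(True, False, True))   # content manager
```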
This results in an index update definition being created, but the actual build process will not have executed because the Save only option was selected (Figure 10-35).

Figure 10-35 Index update definition created and ready to be executed

Building the initial index for enhanced search can be a lengthy, resource-intensive process, depending on the options selected to index and the topology of the installed IBM Cognos components. To help ensure that building the initial index has limited impact on users, the common practice is to schedule the index build process to not occur during periods of heavy reporting usage:
1. Click the Run with options icon in the Actions list, which launches the Run with Options dialog box.
2. Assuming that the task is to be run at a later time, select Later.
3. Using the drop-down calendar control beside the date field, select a month and day on which the task will execute.
4. Modify the time that the task will run so that it does not coincide with heavy reporting periods.
5. The scope should be set to All entries, as this is the initial creation of an index. If an index had previously been created, the All entries option would replace the older index with the index created as part of this task.
6. Click Run (Figure 10-36).

Figure 10-36 Scheduled initial index update task definition

To verify that the index update task has been scheduled for the desired time, navigate to the Upcoming Activities task on the Status tab and then use the graph Next control to scroll to the appropriate date. If the task was set to run at a much later date, use either the Period drop-down menu in the Filter section to select one of the pre-generated periods, or click the Edit link to define a date range using calendar controls. If a lot of results are expected for the selected date range, click the Advanced options link and select Type Index update. Click Apply. After the date is located, the index update task displays in the results list (Figure 10-37).

Figure 10-37 Upcoming scheduled index update task

Refresh the search index

As the content of the IBM Cognos BI analytics solution evolves, administrators must ensure that the contents of the search index remain as accurate as the business demands. Neglecting to periodically update the search index results in search results returning links to reports that are no longer present, or incomplete results because newer content is not highlighted.

To avoid staleness of the search index, it is important to first determine how frequently the content is changing. If auditing is enabled, the easiest way to check the volume of change to the content for a given period of time is to execute a report against the audit database. Refer to 10.5, "Auditing" on page 503, for more details regarding the audit facility within IBM Cognos BI.

After the frequency of change has been detected, it is essential to understand how long it takes to build the search index. Because the only point of reference at this juncture is the initial index update task, the execution duration for that task must be used. To locate the history detail for the index update task:
1. In the administration console, navigate to Status, Past Activities.
2. Use the Advanced options in the Filter section to define a date range that corresponds to the date and time when the index update task was started, in case the duration spanned more than one day.
3. Click Apply.
4. In the list of results on the right frame of the Past Activities interface, locate the index update in question and select Initial Index, View run history details (Figure 10-38).
Figure 10-38 Viewing the run history details of an index update task

Examining the upper portion of the "View run history details" dialog box reveals the start time, end time, and date. Because future index updates will not likely include all of the content that this task did, administrators are now provided with the maximum amount of time that should be required to update the index incrementally (Figure 10-39).

Figure 10-39 Verifying time taken to execute the initial index update task

Expand index to include reporting data values

With an incremental index update strategy defined and in place, you might expect there will no longer be any additional search index updates required. However, there is still a factor missing from the equation. The previous sections covered creating the search index and populating it with both modeled metadata and object properties, but the last dimension to providing users with a complete enhanced search index is the inclusion of data values from the underlying reporting databases.

As shown in Figure 10-36 on page 472, only the object metadata and properties were included as part of the initial index. To expand this data to include actual data values that are contained within the report and the reporting database, modify the index update task to include one of the following types of data:

Referenced data: Specifies that only data referenced by the expressions encountered in reports, queries, and analyses that are included in the scope of the indexing task are indexed.

All data: Specifies that all data encountered in the models that are within the scope of the indexing task are indexed, regardless of whether the metadata has been used in reports, queries, or analyses.

After a strategy is devised for which content will have data included in the index, create a new index update task with the appropriate settings and ensure that the scope is set to include only entries that have changed so that this new content is added to the index incrementally.

10.3.2 Restricting the scheduling options

Administrators for IBM Cognos environments are able to control user access to objects such as reports, packages, and data sources, as well as access to functionality within the application. Users can access current activities, past activities, upcoming activities, and schedules on the Status tab in IBM Cognos Administration to monitor the server activities and manage schedules (as long as they have been granted access). To grant access to the scheduling functionality independently from the monitoring functionality, use the scheduling capability. The scheduling capability controls access to the scheduling functionality for items that can be run, such as reports.

The following secured features are associated with this function:
- Schedule by minute: Users can schedule entries by the minute.
- Schedule by hour: Users can schedule entries by the hour.
- Schedule by day: Users can schedule entries daily.
- Schedule by week: Users can schedule entries weekly.
- Schedule by month: Users can schedule entries monthly.
- Schedule by year: Users can schedule entries yearly.
- Schedule by trigger: Users can schedule entries based on a trigger.
- Scheduling Priority: Users can set up and change the processing priority of scheduled entries.

User denied access to schedule by minute: If a user is denied access to the schedule by minute capability, by minute scheduling is also denied for other capabilities that allow by minute scheduling, for example, the schedule by month capability.

Sam Carter is the Administrator for the IBM Cognos environment at the Great Outdoors company. Sam is busy and decides to delegate several of his more simple tasks so that he can pay more attention to other work. Sam decides to permit certain users at the Great Outdoors company to access the IBM Cognos scheduling features so that they can create and manage their own schedules. Because Lynn Cope is an Advanced Business User and Report Author at the Great Outdoors company, Sam decides to grant her access to IBM Cognos scheduling capabilities. To simplify the addition of this capability for more users in the future, Sam creates a new role in the Cognos namespace called self-serve scheduling and adds Lynn Cope to that role. Sam now grants scheduling capabilities to the new self-serve scheduling role.

To grant access to scheduling capabilities within IBM Cognos BI:
1. Log on to IBM Cognos BI using credentials for a user with access to administrative features and open IBM Cognos Administration.
2. Click the Security tab.
3. Click Capabilities.
4. Click Scheduling. Scheduling capabilities are controlled through a capability called Scheduling. Note that the scheduling capability is highlighted as a live link, which indicates that there are additional features associated with this capability that can be secured individually. Figure 10-40 shows the Capabilities panel.

Figure 10-40 The Capabilities panel within IBM Cognos Administration

5. If necessary, display the scheduling capability by clicking the Next icon (the default display settings for IBM Cognos BI only display 15 items per window).
6. Click the down arrow next to Schedule by hour, and then click Set Properties. Access to secured functions can be configured through the capability's properties. Figure 10-41 shows the features that are available within the scheduling function.

Figure 10-41 Setting properties to grant access to schedule by hour capabilities

7. When the Set Properties window opens, click Permissions.
8. Click Add and browse to the Cognos namespace by clicking Cognos.
9. Select the role by selecting Self-serve Scheduling, click the add arrow, and then click OK. Figure 10-42 shows the process of selecting the role that will be granted access to the schedule by hour capability.

Figure 10-42 Selecting the custom self-serve scheduling role

10. Grant self-serve scheduling execute and traverse rights for the schedule by hour capability by selecting Self-serve Scheduling and then selecting Execute and Traverse under the Grant column. Figure 10-43 shows an example of granting access to a capability for an IBM Cognos role.

Figure 10-43 Granting access rights

11. Click OK.

Members of the self-serve scheduling custom-created role can now access scheduling functionality and schedule their own activities without administrator involvement. You can also use the process that we described previously to restrict access to scheduling capabilities. Administrators can permit or deny users the right to schedule their own activities based on schedule frequency or by trigger. For example, one group of users can be granted full access to scheduling functionality, whereas another can be limited to only scheduling activities on a weekly basis.
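The duration reasoning above amounts to simple date arithmetic: subtracting the run history start time from the end time gives the upper bound for future incremental updates. The timestamps below are invented for illustration:

```python
# Illustrative calculation of the maximum incremental update window using
# the start and end times read from the "View run history details" dialog.
from datetime import datetime

start = datetime(2010, 11, 1, 22, 0)   # example start time from run history
end = datetime(2010, 11, 2, 1, 30)     # example end time (spans midnight)

max_incremental = end - start          # upper bound for incremental updates
print(max_incremental)                  # 3:30:00
```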
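The denial rule in the note earlier in this section can be expressed as a simple check: by-minute scheduling is possible only if the schedule by minute capability itself is granted, regardless of which other scheduling capabilities the user holds. This is an illustrative sketch, not how Cognos evaluates capabilities internally:

```python
# Sketch of the "denied schedule by minute" rule: denying that one capability
# removes by-minute scheduling everywhere, even from capabilities (such as
# schedule by month) that would otherwise allow it. Illustrative names only.

def can_schedule_by_minute(granted_capabilities):
    return "schedule by minute" in granted_capabilities

print(can_schedule_by_minute({"schedule by minute", "schedule by month"}))  # True
print(can_schedule_by_minute({"schedule by month"}))                        # False
```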
Figure 10-44 shows an example schedule frequency setting that uses the intra-day scheduling feature. This is essential when certain reports require a personalized view of the data instead of a generic result set. it can become challenging to manage user signons for large volumes of users across multiple data sources.m. on Mondays 10. and 8:00 a. a job can be scheduled to run every 5 minutes between the hours of 6:00 a.4 Allowing users to persist personal database signons As an administrator. 1 Handbook . Launch the administration console by selecting either the Administer IBM Cognos content link from the Welcome menu or by clicking the IBM Cognos Administration link from the Launch menu within IBM Cognos Connection.Empowering users to manage signons Administrators can provide users with the ability to manage signons by granting them access from within the administration console: 1. Otherwise. there are numerous namespaces. ensure that the Override the access permissions acquired from the parent entry check box is selected. Using the Actions drop-down menu for the manage own data source signons capability. 7. 5. b. group. 482 IBM Cognos Business Intelligence V10. only groups and roles display. group. 2. a. or role by browsing the namespaces from the “Available entries” dialog box. select the newly added entry by clicking the appropriate check box. On the Permissions tab. Then. or the location of the desired entry is unknown. On the Security tab. select it by clicking the corresponding check box. select the Capabilities task from the left frame. or role. click the green arrow button to the right of the dialog box to add the object to the “Selected entries” dialog box. 6. groups. 8. select Set Properties (Figure 10-45). Figure 10-45 Setting the properties on the manage own data source capability 4. administrators can search for specific accounts by selecting the Search link in the upper-right corner of the page. or roles to the capability access control list. 
From the access control list on the left. Remember to select Show users in the list if browsing for a user account. Click the Add link to add users. If the namespace is large. 3. After you locate the appropriate user. Click OK. Navigate to a specific user. in most cases. when standard defined signons are required to ensure data consistency for all users executing reports. Select the Grant check boxes for both of these actions. he is prompted to select one to use. Figure 10-46 Assigning execute and traverse actions Removing the IT burden of managing signons There are a few scenarios in which the management of data source signons should be managed by IT. it is a good idea to empower users to manage their own signons. However. providing that the users are aware of the necessary credentials. Created at the request of the user. Personal Chapter 10. Behavior If a user has access to multiple signons for the same data source. managed by IT. and they cannot have any access to an existing defined signon.9. To enable this capability. Click OK. There cannot be more than one signon per data source connection. Stored as part of the user profile and accessible through the My Preferences interface. IBM Cognos system administration 483 . Table 10-4 Order of preference for data source signons Signon type Defined and managed by IT Storage location Defined and stored under the data source connection name in the administration console (data source connections task). for example. For users to manage their signons. There is an order of operation in effect when the query layer is determining which signon to use when the query is executed (Table 10-4). they must be granted the ability execute and traverse on the manage own data source signons capability. and the policy in the left frame updates automatically (Figure 10-46). the execute and traverse actions must be granted. granting the user community access to at least one signon forces the usage of the defined signons. 
In an IBM Cognos environment that has been deployed with IT managing the data source signons. IT always retains control of the data source authentication strategy.1 Handbook . Behavior When the prompt is presented. If corporate policy changes and IT needs to assume responsibility once again. a prompt is presented to execute the report. 484 IBM Cognos Business Intelligence V10. while ignoring the existence of any saved personal credentials. either deleting the signons or denying everyone access to the existing signons (the most advisable approach). Figure 10-47 Privileged users can optionally persist data source credentials The key element to this feature is that although it empowers the user to manage his own data source credentials.Signon type Prompt Storage location When a user does not have access to a defined signon or a personal signon in his profile. quickly converts the authentication strategy and shifts the responsibility to the user. the user is optionally able to persist that signon in their profile if he has access to the correct capability (Figure 10-47). For example. Chapter 10. after the threshold is exceeded (one day. thresholds are eventually reached after a period of time. With count metrics for failed and successful requests (NumberOfFailedRequests and NumberOfSuccessfulRequests). one month. one week. IBM Cognos system administration 485 . FailedRequestPercent and SuccessfulRequestPercent This metric displays the percentage of failed/successful requests that have occurred..10. and so on). The system metrics are broken into three dimensions: Individual metric A metric group The service to which these pertain Individual metrics Key individual metrics that are monitored as part of a solid system management methodology are: AverageTimeInQueue The average time in queue is calculated based on the total amount of time that all requests have spent in the queue divided by the total number of requests that have been in the queue. 
and never return to a green status. This averaged value maps to the latency metric in the administration console. The reason for this is that tolerance thresholds built on the percentage metrics are always relevant regardless of when the last reset occurred. if the threshold score is set to turn red after 50 failed requests. One of the primary features is the metrics that are a part of the system task that pertain to the various components that make up the IBM Cognos BI environment: Services Dispatchers Servers Server groups These metrics provide administrators with better insight into the status and overall health of the various components that make up the business analytics solution. ResponseTimeHighWaterMark The value for this metric shows the longest period of time spent processing a request. the value is 100% and more than likely results in a threshold score of red. thus moving the red score to yellow and then eventually green. The percentage metrics change over time. NumberOfFailedRequests Number of failed requests. if the failed requests hit 50 after the first 50 requests. NumberOfProcessedRequests This specifies the amount of received requests that have been processed by the dispatcher. From that point forward. NumberOfSessions This indicates the amount of user sessions that are currently active in the environment. Using the previous example. not to be confused with the successful request percentage metric. the metric value decreases. This value maps to the number of queue requests in the administration console. NumberOfSuccessfulRequests Number of successful requests. if every request is successful. is a cumulative count of the number of failed requests that have occurred since the last reset.the threshold score is always red until the service is restarted or the metrics are reset. if only the percentage metrics are monitored through thresholds. 486 IBM Cognos Business Intelligence V10. 
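To make the relationship between the raw counters and the derived values concrete, the arithmetic can be sketched as follows. This is a sketch based only on the metric descriptions above; the function names are ours and the formulas are inferred, not taken from the IBM Cognos implementation.

```python
# Sketch: deriving percentage and latency values from the raw counters
# described above. Formulas are inferred from the metric descriptions
# in this section, not from IBM Cognos source code.

def failed_request_percent(failed: int, successful: int) -> float:
    """FailedRequestPercent: share of all requests that failed."""
    total = failed + successful
    return 0.0 if total == 0 else 100.0 * failed / total

def average_time_in_queue(total_queue_seconds: float, queued_requests: int) -> float:
    """AverageTimeInQueue: total queue time divided by queued requests."""
    return 0.0 if queued_requests == 0 else total_queue_seconds / queued_requests

# The worked example from the text: 50 failed requests out of the first
# 50 requests gives 100%; later successes pull the percentage down.
print(failed_request_percent(50, 0))    # 100.0
print(failed_request_percent(50, 150))  # 25.0
```

Note how the percentage recovers on its own as successful requests accumulate, which is why the text says thresholds built on the percentage metrics never need an explicit reset.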
QueueLengthHighWaterMark
The value for this metric is an indication of what the highest amount of requests in the queue has been since the metric was last reset.

NumberOfRequests
This specifies the amount of requests, either successful or failed, that have passed through the queue since the last time that the metrics were reset. This value maps to the number of queue requests in the administration console.

NumberOfSessionsHighWaterMark
This indicates the maximum amount of user sessions that were active in the environment at one time.

MillisecondsPerSuccessfulRequest
This is the average amount of time spent processing a successful request.

ServiceTimeAllRequests
The service time metrics show the amount of time that was spent processing requests. This particular metric value is the total amount of processing time that was used for all requests, including both failed and successful.

ServiceTimeFailedRequests
This metric value is the total amount of processing time that was used for all failed requests.

SuccessfulRequestsPerMinute
The definition of this metric slightly differs from the traditional definition, or perception, of successful requests per minute. This value does not indicate an ongoing average from minute to minute, but rather it is an indication of how many requests have been processed during the amount of time that the system has spent processing them. This is done to provide a real value that is not impacted by periods of inactivity. The formula is:

(Number of successful requests * 60 seconds) / service time for successful requests

For example, if in the first minute 10 requests are executed successfully and the server has spent 30 seconds executing them, the value of this metric is 20. When looking at the metric after a minute, the traditional definition would indicate the average is 10 requests per minute; after the second minute, with no new requests, the traditional value would be 5. The actual value of this metric in IBM Cognos BI would be 20 after one minute and would still be 20 after 2 minutes. This algorithm shows that the average of successful requests is based on the amount of processing time that it took to execute them and not the actual elapsed time. This metric is a great way to track server throughput.

TimeInQueue
This cumulative metric shows the total amount of time that has been spent by all objects in the queue. For example, if 30 requests have been in the queue at some point, each with a queue time of 1.5 seconds, the value for the metric is 45, as the total time spent (30 * 1.5) is 45 seconds.

TimeInQueueHighWaterMark
This displays the longest amount of time that one object has spent in the queue.

There are a few metrics located outside of the three main metric groups (JVM uptime and heap size information, for example), but the majority of the individual metrics are a part of the three metric groups.

Metric groups

The individual metrics are divided into three main metric groups:

Request: These metrics pertain to the specific requests that are handled by each component in the environment. Notable metrics that are included in this group are:
– Amount of processed requests
– The percentages of successful versus failed requests
– The amount of processing time for these requests

Queue: These metrics provide insight into the amount of requests that are not handled immediately and therefore are placed into a queue to be processed when the resources become available. Several metrics in this group are the amount of requests that have been in the queue, the length of the queue, and how much time requests have spent in the queue.

Process: These metrics display information regarding the amount of processes required by the product to function. Metrics such as the number of current processes and the maximum number of processes that were spawned are available.

Services

The final dimension to the system metrics is how the individual metrics and metric groupings relate to the service to which they are associated. Understanding what actions are performed by each of the services provides greater insight into the values that are being reported. For more information about the services that make up the IBM Cognos BI solution, see 2.2, "IBM Cognos BI services" on page 16.

The metrics are collected at the service level, which is the lowest level in the topology. From the service level, metrics are then consolidated through the rest of the topology: services to dispatcher, dispatchers to the server, and then the servers to the system level. Therefore, requests to the individual services affect the metric values at the higher server and system levels. This is an important fact when working with environments that are made up of multiple servers or dispatchers. The system task displays metrics at all of the levels of the topology.
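The SuccessfulRequestsPerMinute calculation described earlier can be restated as a small function. This is a sketch under the assumption that the metric divides successful requests by their combined processing time and scales to a minute, which is what the worked example (10 requests, 30 seconds, value 20) implies; the function name is ours, not part of the product.

```python
def successful_requests_per_minute(successful: int, service_time_seconds: float) -> float:
    """Throughput based on processing time, not elapsed wall-clock time.

    Matches the worked example in the text: 10 successful requests that
    took 30 seconds of combined processing time yield a value of 20,
    regardless of how much idle time has passed since.
    """
    if service_time_seconds == 0:
        return 0.0
    return successful * 60.0 / service_time_seconds

print(successful_requests_per_minute(10, 30.0))  # 20.0
```

Because the denominator is processing time rather than elapsed time, the value stays at 20 whether it is read after one minute or after two, exactly as the text describes.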
When viewing the metrics scorecard that is available as part of the system task in the administration console, the green, yellow, and red traffic light indicators reflect the most severe indication value in the hierarchy. That is, if a higher level is in context, such as a server or the system, the poorest indication rises to the top. This was done to provide administrators with the ability to visually see, at a glance, whether any key metrics are not performing as well as expected without having to drill down to lower levels of the hierarchy.

The metric values are updated in real time and reside in an MBean within the dispatcher Java process. Because the metrics are live and reside as part of the dispatcher, they are volatile: they are reset every time that the service or process is restarted or when an explicit reset request, either manual or programmatic, is executed.

In certain situations it might be desired to have the metrics reset without restarting the entire application. This is possible by using the Reset button beside the metric grouping name in the Metrics dialog box. When Reset is clicked, all of the metrics that belong to the service in context are reset. If a higher level is in context, such as a server or the system, all of the metrics that pertain to that object are also reset (Figure 10-48).

Figure 10-48 Reset the request metrics as they pertain to the Content Manager service

An important note regarding the metrics, and the administration console in general, is that there is no auto-refresh feature. It was a design decision, based on general feedback from administrators, that an auto-refresh limited the ability to thoroughly analyze a series of metrics if the values changed on a regular basis. That said, there are a few manual refresh options available within the administration console:

Fragment refresh: This button, located in the upper-right corner of the fragment, refreshes the values of the contents in the frame. For example, in the system task, refreshing the metrics frame updates the metric values, but does not change the contextual object or refresh any of the values in any of the other windows (Figure 10-49).

Figure 10-49 Refresh button will retrieve the current values for the metrics

Page refresh: This button is available on the main IBM Cognos BI toolbar located at the top of the browser page. When clicked, the entire page being viewed is refreshed. For example, clicking this button when viewing the system task refreshes the scorecard, metrics, and settings frames without losing context (Figure 10-50).

Figure 10-50 Button to refresh the entire page

Browser refresh: Using the browser refresh button to update the administration console causes all pages to be refreshed. This action causes all context to be lost and, after the refresh has occurred, the default view allowed by your administrative capabilities displays.

To provide a reference as to the timeliness of the information being viewed, there is a summary bar at the bottom of each frame that displays the last time that a refresh occurred (Figure 10-51).

Figure 10-51 Bottom of frame indicates when the values were generated and displayed

10.4.1 Metric tolerance thresholds

Whereas the ability to have real-time metrics displayed in the administration console provides valuable information when monitoring the environment, the value all but disappears when not actively in the console watching the metrics. With this in mind, it is possible to manually set a tolerance threshold on the individual metrics, which can provide the basis for automated alerts through IBM Cognos Event Studio or by creating self-service personal alerts in IBM Cognos Viewer.

These thresholds allow administrators to set ranges that provide them with a quick overall view into the system health. The current metric values are displayed in the system task as green, yellow, and red traffic light indicators, based on the range in which the value resides. When a series of thresholds is assigned to key indicators that pertain to the specific environment, an overall scorecard is possible (Figure 10-52).

Figure 10-52 System Scorecard showing all dispatchers

A quick glance at the scorecard indicates that there is a dispatcher that has a yellow indicator, which means that certain underlying service metrics have values that are nearing the acceptable norm and might warrant further investigation.

Creating a threshold

Before an overall scorecard can be established, tolerance thresholds must first be defined on the key metrics:

1. Launch the IBM Cognos administration console.
2. Select the System task on the Status tab.
3. In the scorecard frame, drill down on an available server.
4. Drill down again on an available dispatcher to reveal the underlying services.
5. Click BatchReportService.

Clicking the BatchReportService object changes the focus of the metrics frame on the upper-right side so that all of the relevant metrics for the batch report service are displayed (Figure 10-53).

Figure 10-53 Request metrics for the batch report service

The metrics frame for the batch report service consists of two metric groupings:
 Process, which provides metrics about the amount of configured and running batch report processes
 Request, which displays metrics about the amount and duration of requests handled by the batch report service

Take the following steps:

1. Expand the Request metric grouping by clicking the plus sign (+).
2. Locate the percentage of failed requests metric and click the pencil icon. The "Set thresholds for metric - Percentage of failed requests" dialog box opens. This dialog box is divided into two sections.
3. The first section is the Performance pattern, which specifies whether high, middle, or low values are good (a green traffic light indicator). Because failed requests are undesirable, select Low values are good.
4. The second section is where the actual threshold values or ranges that will drive the type of indicator are defined. Enter a value in both of the boxes to define the range. Figure 10-54 shows that the indicator light is green until the value for the particular metric hits 3%, which means that at 3% the indicator changes from green to yellow. If the metric value continues to increase and a value of 5% is obtained, the indicator light changes to red.

Figure 10-54 Defining a metric threshold

Light indicator: The yellow indicator light value is 3.0%, meaning that at 3.0% the indicator turns yellow. Clicking the down arrow beside 3.0% moves the box down to the green threshold, which results in 3.0% remaining green, but anything higher changing the status to yellow.

5. Click OK to save the threshold.

Returning to the metrics frame, the status indicator for the newly created metric threshold is displayed (Figure 10-55).

Figure 10-55 Metrics frame with newly created metric performance pattern

After you define metric thresholds, if auditing is enabled for the IBM Cognos BI version 10.1 environment, any threshold exceptions (changes to the indicator light color) are written to the COGIPF_THRESHOLD_VIOLATIONS table. This table allows administrators to proactively create IBM Cognos Event Studio agents that monitor the audit table for exceptions and that notify the required administrators. The audit database entries also provide the information that is required to report on the volume and severity of the exceptions over time. This reporting is key in indicating peak periods of usage and keeping track of unexpected changes to usage patterns.

Programmatically setting thresholds

One of the most common questions that is asked is why there are no default thresholds. This is because there are many factors that influence metric values, such as the size of the user base, number of concurrent users, volume of reports being executed, server hardware, available memory, and so on. Not only could setting metric thresholds be a timely exercise based on the volume of metrics available, but it is impossible to provide defaults that would have any relevance in all environments. But what values should be used that make sense for the environment in question? Are the values being used based on the current settings?
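The traffic-light evaluation described above can be sketched as a small function. This is an illustration only: the 3% and 5% boundaries mirror the example around Figure 10-54, the "low values are good" performance pattern is assumed, and the function is ours rather than part of the product.

```python
def indicator(value: float, yellow_at: float = 3.0, red_at: float = 5.0) -> str:
    """Map a metric value to a traffic-light status.

    Assumes the "low values are good" performance pattern, as chosen
    for the percentage of failed requests metric in the example above:
    green below the yellow boundary, yellow from there up to the red
    boundary, red at or above it.
    """
    if value < yellow_at:
        return "green"
    if value < red_at:
        return "yellow"
    return "red"

print(indicator(2.9))  # green
print(indicator(3.0))  # yellow: at 3% the indicator changes from green to yellow
print(indicator(5.0))  # red
```

A scorecard then simply rolls the worst of these statuses up the topology, which is why a single struggling service can turn a whole dispatcher yellow.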
If so, are these values indicative of a typical day? What about during peak periods? Won't all of the metric thresholds turn red during peak period usage? There is help for this. Using the metric export capability, programmatically through the SDK, it is possible to gather metric exports for a period of time, typically spanning a high reporting period when available, and then, using the metric exports, set the threshold ranges based on the real-world usage statistics for a given environment.

For more information about setting the metric thresholds, see the System Management Methodology located at:
www.ibm.com/developerworks/data/library/cognos/page258.html

10.4.2 Reacting to bottlenecks due to unexpected events

System administrators are occasionally faced with unexpected environmental events or unforeseen changes to usage patterns. Reacting to these occurrences is critical to maintaining service level agreements and ensuring that the data in reports is accurate and not out of date.

Using a Great Outdoors company scenario, Sam Carter, the IBM Cognos Administrator, receives a message indicating that there will be a 60-minute outage of the reporting database from 1 p.m. to 2 p.m. due to scheduled routine maintenance. To ensure that any reports scheduled during this time do not fail, the reports can be shifted to 3 p.m. to 4 p.m., to compensate for any additional minor delays. Sam must react to the situation by taking the following steps:

1. Launch the IBM Cognos administration console.
2. Select the Upcoming Activities task on the Status tab.
3. Select the 1 p.m. time slot on the chart by clicking the bar above 13 along the x-axis, which changes the list display at the bottom of the page.
4. Select all of the entries by clicking the check box in the upper-left corner of the list, or, if only certain entries will be impacted by the database outage, manually select the objects or use the Advanced options on the left to filter the results to display only the impacted objects and select all.
5. After you select the objects, use the Actions drop-down menu beside one of the selected items to choose the Suspend option, or use the Suspend button from the toolbar menu.
6. In the Suspend Activities dialog box, select Until, and change the calendar control to reflect a period when the reporting database is back online.
7. Click OK. The Upcoming Activities chart updates to reflect the changes.
8. Click the bar above 15 along the x-axis to change the filtered list to the objects scheduled to run from 3 p.m. to 4 p.m.

The scenario describes reacting to a planned database outage, but the ability to suspend reports for a predefined period of time can also be used to help spread an unexpected scheduling load across other less active periods. The ability to allocate scheduled objects throughout the day can help ensure system throughput, so it is a good practice for IBM Cognos administrators to proactively examine the upcoming load for the day and make changes where necessary.

Building on the previous scenario, Sam Carter receives word that there are complications with the database maintenance and that the outage will last longer than the previously anticipated 60 minutes. Unfortunately, there is no new estimate as to when the database will be back online and available for reporting. Sam must now react differently to this unforeseen hurdle because there is no estimated time for the resolution. He must take the following steps:

1. Launch the IBM Cognos administration console.
2. Select the Upcoming Activities task on the Status tab.
3. Select the objects that had been previously rescheduled.
4. Using the Actions drop-down menu beside one of the selected items, or using the toolbar menu, Sam chooses to Suspend the selected items.
5. In the Suspend Activities dialog box, select Indefinitely.
6. Click OK.

Contrary to the first scenario, when the schedules were postponed and a new time was defined, the rescheduled items do not appear in any specific time slot in the chart. Because they were suspended indefinitely, there is no defined time for their execution, so there is no way to represent them on the chart. To identify that there are items that are suspended indefinitely, there is an entry in the chart legend called Suspended that indicates the number of suspended objects (Figure 10-56).

Figure 10-56 Upcoming activities chart with legend

10.4.3 System trending

Through the combination of viewing the metrics available in the administration console and proactively monitoring metric thresholds using IBM Cognos Event Studio alerts, it is possible to quickly respond to unexpected changes in usage patterns. But how is the system doing today in comparison to last week or last month? Is the system handling more or fewer requests than last month? Are the response times and queue lengths increasing due to higher system usage? Because the metrics in the console are essentially a current snapshot of the environment, it becomes almost impossible to answer these questions without a mechanism to record the metric values.
There are a couple of ways to accomplish the recording of the metrics. The first mechanism uses product functionality to export the metrics to a text file and then uses an extract, transform, and load (ETL) tool to load them into a reporting database. This is a continual process of exporting and then loading the metrics so that the reports are always current. The other mechanism is to use a tool that is capable of both connecting to the Java environment directly and writing the values to the reporting database. One such tool is IBM Tivoli Directory Integrator, which connects to the IBM Cognos JVM, reads the metrics from a customizable list of services, and then writes out the values to a relational database. This process is automated and can be scheduled so that there is no manual intervention required, and the metric values can be written to the database at almost any interval. After they are loaded into the database, reports can be created so that analysis of key metrics can be tracked.

For more information regarding system trending, see the System Management Methodology located at:
www.ibm.com/developerworks/data/library/cognos/page258.html

10.4.4 Consuming system metrics from external tools

This section discusses Java Management Extensions and JConsole.

Java Management Extensions

Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring applications and service-orientated networks. These resources are represented by objects called MBeans. MBean, which stands for Managed Bean, represents a resource running in the Java Virtual Machine (JVM). How this translates to the IBM Cognos topology is that the dispatcher component stores the raw metrics in an MBean within the JVM running the IBM Cognos BI application. Thus, the metrics are accessible externally by using the industry-standard JMX. Besides the administration console, for tools such as IBM Tivoli Monitoring to connect to the IBM Cognos metrics, a JMX agent must be created to interface with the IBM Cognos MBean.

Connecting to metrics using JConsole

This section describes the steps required to view the metrics externally using the JConsole application, which is available as part of the Java V1.5 JDK package. Before the metrics can be exposed to external sources, the Java MBean must first be made available for external access. To accomplish this, a parameter must be added to one of the files within the application server.
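To illustrate the recording mechanism in the simplest possible terms, the following sketch appends metric snapshots to a relational store so that values can later be compared across days or weeks. The table and column names are invented for this example; they are not the schema used by the product or by IBM Tivoli Directory Integrator.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical snapshot store: one row per metric reading, so that trend
# reports can compare today's values against last week or last month.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE metric_snapshot ("
    " captured_at TEXT, service TEXT, metric TEXT, value REAL)"
)

def record(service: str, metrics: dict) -> None:
    """Append one timestamped snapshot of the given metric values."""
    ts = datetime.now(timezone.utc).isoformat()
    conn.executemany(
        "INSERT INTO metric_snapshot VALUES (?, ?, ?, ?)",
        [(ts, service, name, value) for name, value in metrics.items()],
    )

# Each scheduled run appends the current readings.
record("reportService", {"NumberOfSuccessfulRequests": 120, "TimeInQueue": 45.0})
record("reportService", {"NumberOfSuccessfulRequests": 180, "TimeInQueue": 61.5})

rows = conn.execute(
    "SELECT COUNT(*) FROM metric_snapshot WHERE metric = 'TimeInQueue'"
).fetchone()
print(rows[0])  # 2
```

Scheduling such a job at a regular interval is the essence of both mechanisms described above; only the transport (file export plus ETL, versus a direct JMX connection) differs.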
For a default Tomcat installation, complete the following steps:

1. Navigate to the <install_dir>\webapps\p2pd\WEB-INF directory and open the p2pd_deploy_defaults.properties file in a text editor.
2. Uncomment the existing rmiregistryport line by removing the # symbol (and modify the port number if required) (Figure 10-57).

Figure 10-57 Enabling the rmi registry port for JMX support

Port number note: The number specified in the added string pertains to a port number. Ensure that the port number specified is available for use and is not being occupied by another application.

3. Save the file. Because this is a setting that is read when the application is started, a restart is required if the IBM Cognos BI application is already running.

It is important to note that there is no security associated with the JMX implementation. That is, product access to the metrics can be locked down through the security policies in IBM Cognos Connection and the administration console, but these policies do not apply when connecting externally: anybody can connect to the MBean if the proper connection string is known.

To enforce user name and password external access:

1. Open IBM Cognos Configuration.
2. In the explorer frame, select Environment.
3. Locate the External JMX Port property in the Environment - Group Properties dialog box and type 9999 (to match the port number used for the rmiregistryport entry from a previous step).
4. Click the value field of the External JMX credential property, and then click the pencil icon.
5. On the "Value - External JMX credential" dialog box, specify the user ID and password that will be used to secure the IBM Cognos MBeans (see Figure 10-58).

Figure 10-58 Securing the IBM Cognos JMX interface

6. Click OK.
7. Save the new configuration parameters.

After you start or restart the application:

1. Locate the jconsole.exe executable in the bin directory of the Java JDK and launch it.

JMX implementation note: The JMX implementation does not allow for spaces in the install path when using a JRE other than the default IBM JRE provided with the IBM Cognos installation. If there are spaces in the installation path and the non-default IBM JRE is being used, JConsole must be executed using the following command line:
Jconsole -J-Djava.rmi.server.useCodebaseOnly=true

2. When presented with the JConsole: Connect to Agent dialog box, switch to the Advanced tab (see Figure 10-59).
3. Connect to the following JMX URL:
service:jmx:rmi://machine_name/jndi/rmi://machine_name:9999/proxyserver

Figure 10-59 Connecting to IBM Cognos using JMX

Connection note: The machine_name entry must be the server name and cannot be localhost.

4. Supply the proper credentials if the External JMX credential value was supplied in IBM Cognos Configuration.
5. Click Connect to connect to the system metrics using JMX.
6. Ensure that the MBeans tab is selected (if not selected by default).
7. Expand the com.cognos section of the tree. If more than one dispatcher is present in the environment, select one of them and expand the dispatcher name entry in quotation marks.
8. Click the Metrics option beneath the dispatcher. This is the location of all of the metrics that reside in the administration console.
9. View the metrics for a service, for example, by expanding the reportService entry to expose the objects beneath.

To compare the metrics displayed in the IBM Cognos administration console and JConsole:

1. Open a web browser session, launch IBM Cognos BI, and navigate to the System task within the administration console.
2. Drill into the same dispatcher as used in the JConsole interface.
3. Keep drilling down until the ReportService entry is located.
4. Click ReportService to filter the metrics in the upper-right frame.

Providing that no additional report service requests were made, the values displayed in the Metrics - ReportService frame are identical to the values displayed in JConsole, although there might be slight formatting differences (Figure 10-60 and Figure 10-61).

Figure 10-60 Report service metrics in IBM Cognos administration console
Figure 10-61 Identical report service metrics in JConsole
viewing or reporting on the data is complicated and difficult to manage. right-click Logging. In fact. errors. reporting then becomes possible. expand Environment.10. file rollover parameter in IBM Cognos Configuration). type a name (we used Audit in our example) and click Database as the type. Use the data contained in the default log files primarily for troubleshooting and not for tracking usage. a sample audit model is supplied that includes several sample reports to assist in providing immediate benefit from the audit data. the information is volatile because of the versioning mechanism (that is. 5. 8. database login credentials. These steps define the JDBC connection that will be used to populate the audit database. 9. Save the configuration by clicking Save on the IBM Cognos Configuration toolbar. such as database host name and port number.1 Handbook . During the start phase. which prompts the application to create the necessary tables within the configured database. the configuration change is identified. Test the audit database connectivity either by right-clicking the newly created database name in the Explorer pane and then clicking Test or by clicking the new database name and then clicking the Test icon from the IBM Cognos Configuration toolbar. click the newly created database name and type the necessary parameters. 7. into the fields in the Properties pane. In the Explorer pane. and the database name. 504 IBM Cognos Business Intelligence V10. Changes do not take effect until after the IBM Cognos BI services have been restarted. Start (or restart if already running) the IBM Cognos BI services by clicking the Start icon (or the Restart icon) from the toolbar.6. The COGIPF_SYSPROPS table contains a single record that indicates logging version detail. and the COGIPF_THRESHOLD_VIOLATIONS records metric threshold exception details that are derived from the IBM Cognos BI system metrics (Figure 10-62). IBM Cognos system administration 505 .10. 
10.5.2 Audit table definitions

After an audit database has been added to the configuration parameters in IBM Cognos Configuration, the audit database schema is added to the database the next time that the application is started. When the application starts, 18 tables are added to the audit database, but only 11 are used for auditing usage. The COGIPF_MIGRATION table is reserved for an upcoming migration application and should only be used for troubleshooting purposes under the guidance of customer support.

Figure 10-62 IBM Cognos Audit database tables

Table 10-5 shows the audit database table definitions.

Table 10-5 Audit database table definitions

COGIPF_ACTION: Stores information about operations performed on objects
COGIPF_AGENTBUILD: Stores information about agent mail delivery
COGIPF_AGENTRUN: Stores information about agent activity, including tasks and delivery
COGIPF_ANNOTATIONSERVICE: Stores audit information about annotation service operations
COGIPF_EDITQUERY: Stores information about query runs
COGIPF_HUMANTASKSERVICE: Stores audit information about human task service operations (tasks and corresponding task states)
COGIPF_HUMANTASKSERVICE_DETAIL: Stores additional details about human task service operations (not necessarily required for every audit entry, for example, notification details and human role details)
COGIPF_NATIVEQUERY: Stores information about queries that IBM Cognos software makes to other components
COGIPF_PARAMETER: Stores parameter information logged by a component
COGIPF_RUNJOB: Stores information about job runs
COGIPF_RUNJOBSTEP: Stores information about job step runs
COGIPF_RUNREPORT: Stores information about report runs
COGIPF_THRESHOLD_VIOLATIONS: Stores information about threshold violations for system metrics
COGIPF_USERLOGON: Stores user logon and logoff information
COGIPF_VIEWREPORT: Stores information about report view requests

10.5.3 Audit levels

The auditing facility within IBM Cognos BI provides five levels of detail. The levels start at minimal, which for the purposes of auditing means disabled, and at the end of the spectrum is full. Request provides essentially the same level of detail as basic. Only use trace, which is similar to the full level, under the guidance of IBM Customer Support. For collecting audit detail, the only choices are minimal (disabled) and basic (enabled).

Table 10-6 shows IBM Cognos audit and logging levels, listing each system activity type and the levels at which it is recorded.

Table 10-6 IBM Cognos audit and logging levels

System and service startup and shutdown, runtime errors: Minimal, Basic, Request, Trace, Full
User account management and runtime usage: Basic, Request, Trace, Full
User requests: Basic, Request, Trace, Full
Service requests and responses: Request, Trace, Full
All requests to all components with their parameter values: Trace, Full
Other queries to IBM Cognos components (native query): Full

10.5.4 Audit and logging for IBM Cognos BI services

The IBM Cognos BI architecture comprises various services, and each of these services has a configurable audit level. Understanding what functions are performed by each of the services provides greater insight into the audit detail that they record. For more information about the IBM Cognos services, see "IBM Cognos BI services" on page 16.

Assigning the correct level of audit detail to the appropriate service provides a customized view of usage data. For example, if only login information is desired, the only service that needs to have auditing enabled is the Content Manager Service.
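As a quick sanity check after the audit schema has been created, a query along the following lines (a sketch, not part of the original handbook) shows how much detail each of the most commonly queried audit tables is accumulating. It uses only the table names from Table 10-5 and standard SQL, but verify it against your database target:

```sql
-- Hypothetical sketch: row counts for the main COGIPF audit tables,
-- assuming the tables were created by the service restart described above.
SELECT 'COGIPF_USERLOGON'  AS audit_table, COUNT(*) AS row_count FROM COGIPF_USERLOGON
UNION ALL
SELECT 'COGIPF_RUNREPORT',  COUNT(*) FROM COGIPF_RUNREPORT
UNION ALL
SELECT 'COGIPF_VIEWREPORT', COUNT(*) FROM COGIPF_VIEWREPORT
UNION ALL
SELECT 'COGIPF_RUNJOB',     COUNT(*) FROM COGIPF_RUNJOB
```

If the counts stay at zero after activity has occurred, the audit level for the corresponding service is probably still set to minimal.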
10.5.5 Setting audit levels

Setting the audit levels is done through the dispatchers and services task in the administration console in IBM Cognos Connection:
1. From within IBM Cognos Connection, click Launch IBM Cognos Administration to launch the IBM Cognos administration console.
2. Click the Configuration tab, and then click Dispatchers and Services.
3. On the Configuration pane of the Dispatchers and Services window, click the Set properties - Configuration icon on the main toolbar (Figure 10-63).

Figure 10-63 Selecting the Configuration Properties to set audit levels

4. When presented with the Set Properties dialog box, click the Settings tab.
5. Filter the displayed settings to show only the settings related to logging by clicking the Category drop-down menu and then clicking Logging.
6. Using the drop-down menus or check boxes, set the auditing level for each of the services that make up the IBM Cognos BI environment.
7. After the levels have been specified for the desired services, click OK to save the new parameter values.

Because of the inheritance model, audit settings made at the top level will be pushed down to all dispatchers and services that make up the environment. This is the suggested way to set the audit settings, as typically all dispatchers should be recording the same level of detail. However, other unique situations might require that settings differ from dispatcher to dispatcher, and in those cases, audit settings must be configured for each dispatcher individually.

To configure customized settings on a dispatcher:
1. Select the dispatcher by clicking the link with the dispatcher's name. This reveals the list of services that make up the dispatcher.
2. Repeat the steps above to set the individual audit settings on the dispatcher.

Overriding the settings on a dispatcher breaks the inheritance model. After you configure the customized settings for an individual dispatcher, examining the properties shows that the value changed from Yes to No, and parameters with an acquired value of No will no longer be acquired from the parent configuration.

To re-synchronize the parameters so that the values can be acquired:
1. Select the dispatcher that you want to configure by clicking the dispatcher's link in the Configuration pane.
2. Click the Set properties - dispatcher icon.
3. On the "Set properties - Dispatcher" dialog box, click the Settings tab.
4. Click the Reset to parent value link at the bottom-right corner of the window, which resets the parameter values to the parent configuration and resets the acquired values to Yes.
5. Click OK to save the changes.

Alternatively, the configuration settings can be reset from the top level and pushed down to the dispatchers in the environment:
1. Click the Set properties - configuration icon on the main page of the Dispatchers and Services panel.
2. Click the Settings tab.
3. Change the Category drop-down menu to Logging, which filters the parameter list to only display the audit-related entries.
4. Click the check box in the upper-left corner, which selects all of the logging configuration settings.
5. At the bottom of the dialog box, there is an option to delete the configuration settings of the children. Select this option, and then click OK to save the changes (see Figure 10-64). This does not really delete them; it simply overwrites them with the current configuration settings from the global configuration.

Figure 10-64 Option to reset all dispatcher configuration settings to the parent values

10.5.6 Maintaining audit detail while troubleshooting

Among applications, there is a periodic need for troubleshooting. One of the troubleshooting mechanisms within the IBM Cognos BI application is the IPF component, through traces called IPF traces.

IPF trace note: Implement IPF trace only with the guidance of IBM Support.
Enabling an IPF trace does not require a restart of the application; tracing is enabled when a file called ipfclientconfig.xml is found in the <install_location>/configuration directory. By default, the master template file, called ipfclientconfig.xml.sample, is found in the <install_location>/configuration directory. Within that file, various levels of detail can be output to different sources. Each IBM Cognos software component provides a sample IPF Client Trace file, for example, ipfRSVPclientconfig.xml, to be used when instructed by IBM Support.

Take care when editing the ipfclientconfig.xml file: Improper editing of the ipfclientconfig.xml file when troubleshooting might cause regular audit information to stop being recorded to the audit database. By default, the ipfclientconfig.xml template is configured to continue recording regular audit detail to the audit database, providing that the server logging configuration settings are correctly entered into the ipfclientconfig.xml file. Example 10-1 highlights the fields that are important to ensuring that audit logging is not interrupted during IPF trace activities: the TCP connection parameters and the parameter for the audit level.

Example 10-1 Important fields to ensure uninterrupted audit logging

<appender name="clientTCP" class="com.cognos.indications.LogTCPSocketAppender">
  <param name="remoteHost" value="127.0.0.1"/>
  <param name="Port" value="9362"/>
  <param name="LocationInfo" value="false"/>
  <param name="ReconnectionDelay" value="30000"/>
</appender>
<appender name="clientRemote" class="com.cognos.indications.LogLocalUDPAppender">
  <param name="Port" value="9362"/>
</appender>
<category name="Audit" class="com.cognos.indications.LogTypedLogger">
  <level value="warn"/>
  <appender-ref ... />
</category>

Change the remoteHost value and the Port value to match the log server host and port number in IBM Cognos Configuration. As long as the TCP connectivity parameters are correct and the audit level is set to warn, the IPF client trace functions and audit records continue to be logged to the audit database.

To verify whether clientRemote or clientTCP needs to be used as the appender-ref value, the parameters within IBM Cognos Configuration need to be examined. Selecting the Logging entry beneath the Environment section displays the logging parameters in the right frame. If the Enable TCP? parameter is set to False, then clientRemote must be used. If the value is set to True, clientTCP will be the required entry in the IPF file (Figure 10-65).

Figure 10-65 Logging server port and TCP settings in IBM Cognos Configuration

10.5.7 Audit scenarios

The database used to record audit information for IBM Cognos BI can also be used as a reporting data source for system administrators. IBM provides sample reports to be used for various auditing scenarios. Given that the audit information for IBM Cognos BI is stored in a relational database, administrators can also use SQL queries to get a detailed view of system activities.

To demonstrate possible uses for the information stored within the IBM Cognos audit database, we apply audit functionality and walk through common scenarios to provide an overview of how IBM Cognos audit data can be used to provide a complete view of system activity. In discussing audit scenarios, we explore the table structures of the audit database and provide example queries to satisfy common audit requirements. More detailed examples of using audit information gathered by the IBM Cognos Platform can be found by referring to the IBM Cognos Administration and Security Guide or to the proven practices found within the IBM Cognos System Management Methodology located on the IBM developerWorks® website:

ibm.com/developerworks/data/library/cognos/cognosprovenpractices.html

Tables within the IBM Cognos audit database can be joined to provide further information about user sessions, security events, reporting activity, jobs, and so forth. For example:
- Details about user sessions, logons, and so on can be obtained by query interactions with the COGIPF_USERLOGON table using COGIPF_SESSIONID.
- Details about parameters used by IBM Cognos components can be obtained by query interactions with the COGIPF_PARAMETER table using COGIPF_REQUESTID.
- Detailed information about jobs and job steps can be obtained from the COGIPF_RUNJOB and COGIPF_RUNJOBSTEP tables using COGIPF_REQUESTID.

Authentication

Authentication is handled through the IBM Cognos Content Manager Service.
Details about parameters used by IBM Cognos components can be obtained by query interactions with the COGIPF_PARAMETER table using COGIPF_REQUESTID. logons. and so forth. COGIPF_LOCALTIMESTAMP. jobs. security events.1 Handbook . user name and authenticating namespace) is contained in the COGIPF_USERLOGON table. COGIPF_LOGON_OPERATION. In the following scenario. Detailed information about jobs and job steps can be obtained from the COGIPF_RUNJOB and COGIPF_RUNJOBSTEP tables using COGIPF_REQUESTID. recording authentication-related detail requires auditing to be enabled for the IBM Cognos Content Manager Service. John Walker (jwalker) logs into IBM Cognos Connection. auditing is set to minimal for all services except the IBM Cognos Content Manager Service. and secondary information such as group membership is recorded in the COGIPF_ACTION table. 512 IBM Cognos Business Intelligence V10. Logging into IBM Cognos Connection causes audit data to be written into two tables: COGIPF_USERLOGON COGIPF_ACTION The primary information related to the user logon (that is.html Tables within the IBM Cognos audit database can be joined to provide further information about user sessions. and so on can be obtained by query interactions with the COGIPF_USERLOGON table using COGIPF_SESSIONID. COGIPF_NAMESPACE. Details about user sessions. reporting activity. we see a single entry for the login process (Example 10-2). For example. Therefore. COGIPF_USERNAME. Example 10-2 Sample SQL query to retrieve user logon information SELECT COGIPF_HOST_IPADDR. Upon examining the COGIPF_USERLOGON table.ibm.com/developerworks/data/library/cognos/cognosproven practices. To identify corresponding entries. IBM Cognos system administration 513 . COGIPF_LOGON_OPERATION. COGIPF_STATUS FROM COGIPF_USERLOGON WHERE (COGIPF_SESSIONID LIKE 'F37A9BB97C040B4FF7CE95FDEEE51314F55B83B1') Chapter 10. the records are not consecutive and therefore are hard to correlate. 
so it becomes possible to identify users from different business units if they are not part of the same security namespace. The same login operation also records two audit entries in the COGIPF_ACTION table. COGIPF_USERID. Example 10-3 SQL query using COGIPF_SESSIONID to select record for one session SELECT COGIPF_HOST_IPADDR. which shows logon operations and sessions expiring due to inactivity. Figure 10-66 Results of a query showing the audit table row created for user John Walker’s logon As shown in Figure 10-66. the records must be matched on COGIPF_SESSIONID (Example 10-3). User session expirations (the default passport timeout is 60 minutes of inactivity) are also indicated in the COGIPF_USERLOGON table. (The complete entry is truncated in Example 10-3 due to the length of the field. COGIPF_NAMESPACE.COGIPF_STATUS FROM COGIPF_USERLOGON Figure 10-66 shows the results of the query. COGIPF_USERNAME.) When users log out of the application. The namespace that the user belonged to is also included. In a busy environment where many users are logging in. a single record is written to both the COGIPF_USERLOGON and COGIPF_ACTION tables. The only record that is important from an audit standpoint is the record that queries the security namespace for the group membership of the user. the user logging in and the time of the operation are recorded. The default inactivity timeout is 60 minutes. Records are logged to the audit database. COGIPF_LOCALTIMESTAMP. </messageString></item> <item xsi:<nestingLevel xsi:2</nestingLevel> <messageString xsi:CAM-AAA-0125 The user 'baduser' does not exist in this namespace. Figure 10-67 An audit entry for a failed login attempt Examining the COGIPF_ERRORDETAILS column reveals the true source of the failure. the ability to track unsuccessful login attempts is critical for identifying unauthorized user access.1 Handbook .both failed and successful Example 10-4 shows the value for the COGIPF_ERRORDETAILS column for an invalid logon attempt. 
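Building on the session-matching idea above, the logon and logoff rows of each session can be paired in a single query. The following is a sketch only (not from the handbook); the literal operation values used in the predicates are assumptions, so check the values that actually appear in your COGIPF_LOGON_OPERATION column before relying on it:

```sql
-- Hypothetical sketch: pair logon and logoff records per session to
-- estimate session duration. 'Logon' and 'Logoff' are assumed values.
SELECT logon.COGIPF_USERNAME,
       logon.COGIPF_LOCALTIMESTAMP  AS logon_time,
       logoff.COGIPF_LOCALTIMESTAMP AS logoff_time
FROM COGIPF_USERLOGON AS logon
JOIN COGIPF_USERLOGON AS logoff
  ON logon.COGIPF_SESSIONID = logoff.COGIPF_SESSIONID
WHERE logon.COGIPF_LOGON_OPERATION  = 'Logon'
  AND logoff.COGIPF_LOGON_OPERATION = 'Logoff'
```

Sessions that expired rather than logged off cleanly will not appear in this result, which is one reason the sample Active user sessions report filters to the current day.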
Whereas tracking user authentication is crucial for identifying usage patterns and license management, the ability to track unsuccessful login attempts is critical for identifying unauthorized user access. Whenever an unsuccessful login attempt occurs, a record is written to the COGIPF_USERLOGON table (Figure 10-67).

Figure 10-67 An audit entry for a failed login attempt

Figure 10-68 shows records from COGIPF_USERLOGON, which show logon operations and sessions expiring due to inactivity.

Figure 10-68 An audit entry for user logon attempts - both failed and successful

Examining the COGIPF_ERRORDETAILS column reveals the true source of the failure. Example 10-4 shows the value of the COGIPF_ERRORDETAILS column for an invalid logon attempt.

Example 10-4 COGIPF_ERRORDETAILS column for an invalid logon attempt

<messages>
  <item>
    <messageString>CAM-AAA-0055 User input is required.</messageString>
  </item>
  <item>
    <nestingLevel>1</nestingLevel>
    <messageString>CAM-AAA-0036 Unable to authenticate because the credentials are invalid.</messageString>
  </item>
  <item>
    <nestingLevel>2</nestingLevel>
    <messageString>CAM-AAA-0125 The user 'baduser' does not exist in this namespace.</messageString>
  </item>
</messages>
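A summary of unsuccessful attempts can be produced directly from COGIPF_USERLOGON. The following is a sketch only (not from the handbook); the predicate on COGIPF_STATUS is an assumption, so substitute the status value that actually marks failures in your audit database:

```sql
-- Hypothetical sketch: count unsuccessful logon attempts per source host,
-- assuming non-success rows carry a status value other than 'Success'.
SELECT COGIPF_HOST_IPADDR,
       COUNT(*) AS failed_attempts
FROM COGIPF_USERLOGON
WHERE COGIPF_STATUS <> 'Success'
GROUP BY COGIPF_HOST_IPADDR
ORDER BY failed_attempts DESC
```

A spike of failures from a single host is the kind of pattern the security risk mitigation sample report is designed to surface.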
Because the audit record only indicates a success or failure status, paying attention to the error details is important when trying to isolate unauthorized access to the application versus users incorrectly typing their passwords. In the case of incorrect passwords, the records are identical except for the error details (Example 10-5).

Example 10-5 Error message

<messages>
  <item>
    <messageString>CAM-AAA-0055 User input is required.</messageString>
  </item>
  <item>
    <nestingLevel>1</nestingLevel>
    <messageString>CAM-AAA-0036 Unable to authenticate because the credentials are invalid.</messageString>
  </item>
</messages>

A sample report called security risk mitigation is available as part of the IBM Cognos System Management Methodology. This report was created to help identify unauthorized access to the IBM Cognos environment.

Viewing a saved report

Serving up saved content is the responsibility of the presentation service. Therefore, to track saved report access, auditing for the presentation service must be enabled. In the following scenario, auditing is set to minimal for all services except the presentation service. Example 10-6 shows the SQL query.

Example 10-6 SQL query

SELECT COGIPF_LOCALTIMESTAMP, COGIPF_REQUESTID, COGIPF_STATUS,
       COGIPF_TARGET_TYPE, COGIPF_PACKAGE, COGIPF_REPORTPATH,
       COGIPF_REPORTFORMAT
FROM COGIPF_VIEWREPORT

By joining the COGIPF_VIEWREPORT and COGIPF_PARAMETER tables on COGIPF_REQUESTID, additional information can be obtained, such as the package used and the format in which the report was viewed. You can also determine the internal storeID of the report that was viewed, and the operation type of VIEW indicates that saved content has been viewed, versus a report actually being executed. All of this information is also available as part of a single record in the COGIPF_VIEWREPORT table.

The information that is missing is the ability to identify which user viewed the saved report output. By enabling auditing for the Content Manager service, it becomes possible to obtain that level of detail. Logging audit data for the Content Manager service is done by setting the logging level for the Content Manager service to basic.

Running the same report view query again with the additional auditing enabled reveals an entry in the COGIPF_USERLOGON table. If the COGIPF_USERLOGON table is joined with the COGIPF_VIEWREPORT table on COGIPF_SESSIONID, it becomes possible to tie the report view to the correct user (Example 10-7).

Example 10-7 Linking the report view with the correct user

SELECT a.COGIPF_REPORTPATH, b.COGIPF_USERNAME
FROM COGIPF_VIEWREPORT AS a
CROSS JOIN COGIPF_USERLOGON AS b
WHERE (a.COGIPF_SESSIONID LIKE b.COGIPF_SESSIONID)

Viewing different types of report output produces different entries in the COGIPF_REPORTFORMAT column. Listed in the following section is a sample of all of the types of report output formats.

Executing a simple report interactively

There are various items of detail that might be required when tracking report executions. The most basic requirement is tracking the fact that a report was executed. Because interactive reports are handled by the report service, enabling auditing on the report service at a basic level records individual report executions (Example 10-8).

Example 10-8 SQL query

SELECT COGIPF_PROC_ID, COGIPF_LOCALTIMESTAMP, COGIPF_REQUESTID,
       COGIPF_STATUS, COGIPF_TARGET_TYPE, COGIPF_REPORTPATH,
       COGIPF_RUNTIME, COGIPF_PACKAGE
FROM COGIPF_RUNREPORT
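The columns returned by Example 10-8 also support simple performance analysis. The following aggregate is a sketch (not part of the handbook) that ranks reports by elapsed time, using only columns already introduced above; COGIPF_RUNTIME is recorded in milliseconds:

```sql
-- Hypothetical sketch: rank report executions by average elapsed time.
SELECT COGIPF_REPORTPATH,
       COGIPF_PACKAGE,
       AVG(COGIPF_RUNTIME) AS avg_runtime_ms,
       MAX(COGIPF_RUNTIME) AS max_runtime_ms,
       COUNT(*)            AS executions
FROM COGIPF_RUNREPORT
GROUP BY COGIPF_REPORTPATH, COGIPF_PACKAGE
ORDER BY avg_runtime_ms DESC
```

Sorting by executions instead of avg_runtime_ms turns the same query into a usage-frequency report, which supports the capacity planning requirement mentioned at the start of 10.5.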
The SQL statement in Example 10-8 provides the necessary information to obtain details such as the report name and where the report was executed from (COGIPF_REPORTPATH), the package that the report was executed against (COGIPF_PACKAGE), the amount of time it took to execute in milliseconds (COGIPF_RUNTIME), and the time of the execution (COGIPF_LOCALTIMESTAMP). The COGIPF_TARGET_TYPE column contains various pieces of information that are all part of the same record: the service that handled the request (report or batch report) and the type of object executed (report or analysis). The sample audit package has all of this information contained in the same query item, but the package that was modified as part of the IBM System Management Methodology isolates each piece of information in its own query item.

As part of the report execution audit detail, the BIBUS process ID (PID) can be obtained. By examining the COGIPF_PROC_ID column, it also becomes possible to correlate the report execution to the BIBUS process that handled the execution.

In addition to these details, four records are written to the COGIPF_PARAMETER table, and certain records indicate whether the action was from a series of steps involved in a prompted report. From an information standpoint, there are no useful parameter details for a report execution unless the internal object storeID is required.

What is missing from this level of detail is the ability to determine who executed the report. For this to occur, auditing must be enabled for the Content Manager Service (Example 10-9 and Example 10-10).

Example 10-9 SQL query

SELECT COGIPF_HOST_IPADDR, COGIPF_USERNAME, COGIPF_USERID,
       COGIPF_NAMESPACE, COGIPF_LOCALTIMESTAMP,
       COGIPF_LOGON_OPERATION, COGIPF_STATUS
FROM COGIPF_USERLOGON

Example 10-10 SQL query

SELECT COGIPF_HOST_IPADDR, COGIPF_USERNAME, COGIPF_USERID,
       COGIPF_NAMESPACE, COGIPF_LOCALTIMESTAMP,
       COGIPF_LOGON_OPERATION, COGIPF_STATUS
FROM COGIPF_USERLOGON

Joining the COGIPF_USERLOGON and COGIPF_RUNREPORT tables on COGIPF_SESSIONID provides a list of reports executed and the users who executed them (Example 10-11).

Example 10-11 List of reports executed and the users who executed them

SELECT a.COGIPF_REPORTPATH, b.COGIPF_USERID, b.COGIPF_USERNAME
FROM COGIPF_RUNREPORT AS a
CROSS JOIN COGIPF_USERLOGON AS b
WHERE (a.COGIPF_SESSIONID LIKE b.COGIPF_SESSIONID)

One of the major differences from viewing saved report output is that running a report interactively executes queries at the database layer. By enabling an auditing parameter called audit the native query for report service, it becomes possible to isolate the queries being sent to the reporting database. Examining a sample from COGIPF_REQUESTSTRING shows that the SQL statement being used in the query is contained within the record detail (Example 10-12).

Example 10-12 Sample from COGIPF_REQUESTSTRING

<thirdparty><![CDATA[select "COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_LOCALTIMESTAMP",
"COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_METRIC_NAME",
case when substring("COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_RESOURCE_TYPE", 1, 10) = 'com.cognos'
  then substring("COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_RESOURCE_TYPE",
    charindex('service=', "COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_RESOURCE_TYPE") + 8,
    LEN("COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_RESOURCE_TYPE") -
    (charindex('service=', "COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_RESOURCE_TYPE") + 8 - 1))
  else "COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_RESOURCE_TYPE" end
from "dbo"."COGIPF_THRESHOLD_VIOLATIONS" "COGIPF_THRESHOLD_VIOLATIONS"
where "COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_LOCALTIMESTAMP" >=
  convert(datetime, convert(char(8), current_timestamp, 112), 112)
and "COGIPF_THRESHOLD_VIOLATIONS"."COGIPF_METRIC_HEALTH" in ('poor', 'average')]]></thirdparty>
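The CROSS JOIN with a WHERE ... LIKE predicate used in the examples above can also be written as an explicit inner join with an equality comparison. This sketch (not from the handbook) is logically equivalent for exact session IDs, and it avoids the risk of accidental wildcard matching that LIKE introduces:

```sql
-- Sketch: Example 10-11 expressed as an explicit inner join on the
-- session ID, which most query optimizers treat identically.
SELECT a.COGIPF_REPORTPATH, b.COGIPF_USERID, b.COGIPF_USERNAME
FROM COGIPF_RUNREPORT AS a
INNER JOIN COGIPF_USERLOGON AS b
        ON a.COGIPF_SESSIONID = b.COGIPF_SESSIONID
```

The same rewrite applies to the COGIPF_VIEWREPORT and COGIPF_NATIVEQUERY joins shown elsewhere in this section.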
To determine which user executed the SQL statement, join the COGIPF_NATIVEQUERY and COGIPF_USERLOGON tables on COGIPF_SESSIONID (Example 10-13).

Example 10-13 Determining which user executed the SQL statement

SELECT a.COGIPF_REQUESTSTRING, b.COGIPF_USERNAME
FROM COGIPF_NATIVEQUERY AS a
CROSS JOIN COGIPF_USERLOGON AS b
WHERE (a.COGIPF_SESSIONID LIKE b.COGIPF_SESSIONID)

Executing reports through a job

Depending on the steps contained within the job, various tables can be written to as a result of a job execution. This scenario details the audit entries that are recorded when a job is executed that contains two report steps. Setting the audit level to basic for the job service provides a sufficient amount of audit data pertaining to the job execution to complete this task.

Executing the job produces a lone record in the COGIPF_RUNJOB table (Example 10-14).

Example 10-14 SQL query

SELECT COGIPF_LOCALTIMESTAMP, COGIPF_SESSIONID, COGIPF_REQUESTID,
       COGIPF_JOBPATH, COGIPF_STATUS, COGIPF_RUNTIME
FROM COGIPF_RUNJOB

Examining the audit entry provides information such as the path to the executed job, when it was executed, and how long it took to run in milliseconds (COGIPF_RUNTIME) (Figure 10-69).

Figure 10-69 Query results showing records related to running jobs

What is not provided are the details regarding the contents of the executed job. The job service is responsible for executing jobs; it is not the component responsible for executing the report steps. Therefore, it does not record specifics regarding the report steps. To capture that additional level of detail, logging for the batch report service must be set to basic.

Executing the same job again results in a single entry in the COGIPF_RUNJOB table and two entries in the COGIPF_RUNREPORT table. There are two entries because there were two report steps contained within the job (Example 10-15).

Example 10-15 SQL query

SELECT COGIPF_LOCALTIMESTAMP, COGIPF_REQUESTID, COGIPF_REPORTPATH,
       COGIPF_STATUS, COGIPF_RUNTIME, COGIPF_TARGET_TYPE, COGIPF_PACKAGE
FROM COGIPF_RUNREPORT

With auditing enabled at both the job service and batch report service levels, the information is there to determine that one job and two reports were executed. But further examination of the audit records indicates that there is no association between the job and the reports: the request IDs are different, and joining the tables on this field does not provide the necessary information (Figure 10-70).

Figure 10-70 Query results showing the two batch report service entries for the steps in our job

The constant that ties these records together is the user that executed the job and reports. Even if the job was scheduled to run during off hours, the schedule object uses trusted credentials to authenticate the user. By enabling auditing at a basic level for the IBM Cognos Content Manager Service, the user authentication action is recorded (Example 10-16).

Example 10-16 SQL query

SELECT COGIPF_SESSIONID, COGIPF_USERNAME, COGIPF_USERID,
       COGIPF_NAMESPACE, COGIPF_LOCALTIMESTAMP, COGIPF_LOGON_OPERATION
FROM COGIPF_USERLOGON

The identifier of a user's session is the session ID. Therefore, joining the three tables together on that column provides the necessary detail to see who executed the job and which reports were associated with that job. Figure 10-71 is based on the job execution with contents - by user report that is available as part of the System Management Methodology content package.

Figure 10-71 Query results showing logon and logoff for job step execution

10.5.8 Sample audit package

This section introduces the IBM Cognos Framework Manager model and the DS Servlet.

IBM Cognos Framework Manager model

The IBM Cognos Framework Manager model is based on the audit detail contained within the audit database. Although the metadata is designed to provide a head start to interpret and analyze the usage detail, the IBM Cognos Framework Manager model can be modified to suit any particular need. Keep in mind that such changes might cause the provided reports in the audit content package to fail when executed.
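The three-way join described above can be sketched as follows (not from the handbook): the job record, its report steps, and the authenticated user are tied together on the session ID.

```sql
-- Sketch: who executed the job, and which report steps it ran.
SELECT u.COGIPF_USERNAME,
       j.COGIPF_JOBPATH,
       r.COGIPF_REPORTPATH,
       r.COGIPF_STATUS
FROM COGIPF_RUNJOB    AS j
JOIN COGIPF_RUNREPORT AS r ON r.COGIPF_SESSIONID = j.COGIPF_SESSIONID
JOIN COGIPF_USERLOGON AS u ON u.COGIPF_SESSIONID = j.COGIPF_SESSIONID
```

With the two-step job from this scenario, the result contains two rows, one per report step, both carrying the same user and job path.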
DS Servlet

Audit detail is captured as actions occur within the application. This allows activities to be traced: for example, when a user logs in and runs a report, the login event is recorded along with the report execution details. But what about the need to trace something that does not happen and therefore is not recorded? Specifically, there might be a desire to discover content that is not being used. The fact that a report is never accessed means that it will never appear in the audit files.

To obtain this information, an SDK application is provided with IBM Cognos BI that queries the IBM Cognos Content Manager component and provides a list of content. The information is returned in an XML format that can be consumed as a data source and can therefore be reported on. For more information about the DS Servlet, see IBM developerWorks:

ibm.com/developerworks/data/library/cognos/page258.html

Sample audit reports

The audit package provided as part of the IBM Cognos BI samples contains various reports that are intended as a head start to begin the analysis of the usage data contained within the audit database. Additional information regarding the configuration and deployment of the audit reports, such as how to create data sources and how to import content packages, can be found as part of the core product documentation.

Before any deployments are executed, perform an entire backup of the content store. Consult the IBM Cognos Administration and Security Guide documentation regarding the steps required to perform a content store backup.

10.5.9 Audit content package

This section provides a summary of the default sample audit package that is provided with IBM Cognos BI, as well as the additional content that is part of the IBM Cognos System Management Methodology. Table 10-7 shows the reports available with the System Management Methodology.
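Once the DS Servlet's content listing is available, unused content can be identified by an anti-join against the audit tables. The following is a sketch only, with a loudly hypothetical assumption: it presumes the XML content listing has been loaded into a staging table named CONTENT_INVENTORY with a REPORT_PATH column, which is not part of the product schema.

```sql
-- Hypothetical sketch: reports that appear in the content inventory but
-- never in the execution or view audit tables are candidates for
-- unused content. CONTENT_INVENTORY(REPORT_PATH) is an assumed staging
-- table loaded from the DS Servlet output, not a COGIPF table.
SELECT c.REPORT_PATH
FROM CONTENT_INVENTORY AS c
LEFT JOIN COGIPF_RUNREPORT  AS r ON r.COGIPF_REPORTPATH = c.REPORT_PATH
LEFT JOIN COGIPF_VIEWREPORT AS v ON v.COGIPF_REPORTPATH = c.REPORT_PATH
WHERE r.COGIPF_REPORTPATH IS NULL
  AND v.COGIPF_REPORTPATH IS NULL
```

The result is only as reliable as the audit retention window: a report absent from the audit tables may simply predate the earliest retained audit records.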
Displays all of the reports that are executed as part of a job and the last that time the job was executed. The report contents that are displayed are the reports that made up the last job execution. Prompted report (date range) that tracks the failed login attempts. Useful for identifying potential unauthorized access attempts. Daily job execution failures Daily report failures Data source signon usage - by package Failed interactive report executions - by package Process IDs for active reports - by IP address Report content by job Security risk mitigation Chapter 10. IBM Cognos system administration 523 SSM report name Successful interactive report executions - by package Description The previous activities task in the administration console only tracks histories of objects run in the background. This report lists all of the successful interactive report executions. The prompted list report provides a date and package filter and is sectioned by package. The multiple chart report provides insight into threshold exceptions: frequency, severity, and the services to which they pertain. The charts are filtered with a date range prompt. Threshold exceptions In addition to the reports that are based on the sample audit package, reports have been created based on the Audit Extension SDK application (Table 10-8). The reports are located in the folder within the System Management Methodology folder. For more information about the Audit Extension application, extract the files from the file located in the \SMM - Version 2\SDK Applications directory. Table 10-8 Audit extension reports Additional report name Object security policies Description Provides the list of all permissions for each user, group, or role that is explicitly part of an object’s security policy. The report shows a graphical representation of whether the action is granted or denied. Provides the list of all permissions for the EVERYONE role that is explicity part of an object’s security policy. 
The report shows a graphical representation of whether the action is granted or denied. Security Policy - EVERYONE 524 IBM Cognos Business Intelligence V10.1 Handbook 10.5.10 Audit extension The standard auditing features with IBM Cognos cover many aspects of operation. However, certain areas, such as the auditing of users and capability assignments, are not included. The aim of the audit extension application is to provide additional auditing for these areas. It currently covers the following areas: Account audit An audit of all the user accounts that are found in all configured namespaces and certain properties of those accounts (basic details, portal pages, created and modified dates, and so on). This allows reporting on the IBM Cognos user base. Content audit An audit of all the objects that exist in the main content store. This audit processes through the content store tree and logs all the objects (folders, reports, queries, and so on) that it finds. It logs the basic information (such as name, search path, object permissions, created and modified date) and certain details more specific to the item types (such as the specification XML of reports and queries, any saved parameter values applied to saved reports, and the details of report output versions). Status audit An audit of the current state of a server and related dispatchers. For each dispatcher registered in the target system, the configuration and activity is logged, saving information such as time taken to connect, number of active processes, and request duration. Usage The application is managed through a web front end that allows the configuration of server and namespace information and can be used to turn on or off individual audit types for a given server. 
Audits can be initiated in three ways:
- Using the management web interface
- Using a web services call (that is, from Event Studio)
- Using a simple URL/web form call

The results of each audit are logged to a database, and an IBM Cognos Framework Manager model is provided to help report the data. For more information and to download the application, see:

Part 5. Complete IBM Business Analytics solution

Chapter 11. Integrating IBM Cognos BI with IBM Cognos Business Analytics solutions

This chapter provides an overview of the integration of IBM Cognos Business Intelligence (BI) and solutions from the remainder of the IBM Cognos portfolio:
- IBM Cognos TM1
- IBM Cognos Enterprise Planning
- IBM Cognos Controller

Integrated solutions allow you to have a complete corporate performance management solution as a single product with integrated data and functionality that connects forecast, metrics, actual data, and reports seamlessly. In this chapter, we discuss the following topics:
- Overview of IBM Cognos Business Analytics solutions
- Business scenarios and roles to take advantage of IBM Business Analytics
- Integrating IBM Cognos TM1 with IBM Cognos BI
- Integrating IBM Cognos Planning Contributor with IBM Cognos BI
- Integrating IBM Cognos Controller with IBM Cognos BI

11.1 Overview of IBM Cognos Business Analytics solutions

This section provides an introduction to the IBM Cognos Financial Performance Management portfolio of products. It gives an overview of the basic functionality and features of IBM Cognos TM1, IBM Cognos Planning, and IBM Cognos Controller.

11.1.1 IBM Cognos TM1

Complex planning, analytics, and real-time reporting to high levels of detail with millions of items require the power of IBM Cognos TM1.
The 64-bit OLAP technology of IBM Cognos TM1 meets even the most complex, multi-dimensional analytics needs of large-scale operations, so you can query data when you need to, no matter how vast the data set might be. In addition, you can view instant updates from streamed data and drill through to transaction systems for added context and, thus, greater accuracy in decision making. IBM Cognos TM1 addresses all interrelated planning, analysis, and reporting needs with the following capabilities:
- Exceptionally fast analytics
- Data and user scalability
- Data integrity
- A multi-dimensional database and data tools
- Workflow
- A choice of interfaces, including Microsoft Excel, the web, and the IBM Cognos TM1 Contributor for managed participation

11.1.2 IBM Cognos Planning

Planning, budgeting, and forecasting are critical financial management processes in most organizations. These processes are critical because they enable organizations to define strategic goals, to create tactical plans, and to track the progress on achieving those plans and goals. IBM Cognos Planning provides the capabilities to create long-range strategic plans, intermediate-range budgets, and short-term or continuous forecasting. These functions exist within interconnected models that are fed from as many planning participants as an organization needs to include in the planning process.
IBM Cognos Planning allows you to maximize the accuracy and efficiency of the planning, budgeting, and forecasting processes by providing the following capabilities:
- Aggregation and consolidation of planning data in a centralized location
- Scalability for large numbers of plan contributors, large and complex plan models, and large amounts of plan data
- Increased plan accountability through visual workflow status indicators and full audit tracking capabilities
- Powerful user features, such as Breakback (goal allocation), external data linking, data validations, commentary, versioning, and dimensional pivoting and nesting for analysis
- Separate environments for development and production that allow continuous server uptime even when incorporating structural changes to the planning model during a planning cycle
- Automated administration capabilities to reduce overhead and maintenance
- Integration with IBM Cognos BI solutions for full reporting, analysis, and scorecarding capabilities

IBM Cognos Planning has the following major components:
- IBM Cognos Planning Analyst
- IBM Cognos Planning Contributor

IBM Cognos Planning Analyst
IBM Cognos Planning Analyst is a powerful business modeling tool that allows financial specialists to create models for planning, budgeting, and forecasting. These models include the drivers and content that are required for planning, budgeting, and forecasting. The models can then be distributed to managers using the web-based architecture of IBM Cognos Planning Contributor.

IBM Cognos Planning Contributor
IBM Cognos Planning Contributor streamlines data collection and workflow management. It eliminates the problems of errors, version control, and timeliness that are characteristic of a planning system based solely on spreadsheets. Users have the option to submit information simultaneously through a simple web or Microsoft Excel interface.
Using an intranet or secure Internet connection, users review only what they need to review and add data where they are authorized.

11.1.3 IBM Cognos Controller

The ability of an organization to close its books, consolidate its accounts from all operations and partnerships, and prepare accurate and auditable financial statements is critical to maintaining credibility with existing and potential investors and financial markets. Adding to that challenge is that an organization often has disparate financial information systems within various operating divisions and geographies. To meet these requirements and to handle new governance and financial reporting standards, organizations can rely on IBM Cognos Controller. A key component of the IBM Cognos performance management platform, IBM Cognos Controller is a comprehensive, web-based solution that offers power and flexibility for streamlined, best-practice financial reporting and consolidation—all in one solution. With IBM Cognos Controller, finance organizations can prepare, analyze, investigate, and understand financial information in a centralized, controlled, and compliant environment.
IBM Cognos Controller includes the following features:
- Web-based, fully scalable for any size organization
- Flexible processing of modifications to corporate and account structures and group histories
- Integrated scenario manager for simulation and modeling
- Real-time reconciliation of internal balances in data input
- Allocations that are automatically included in consolidation with status
- Extensive process monitoring and control
- Practical, automatic report book generation and distribution
- Support for IAS, IFRS, US GAAP, local GAAPs, and other regulatory requirements
- Standard reporting that provides information about financial performance for business stakeholders and managers
- Financial and management measures and metrics for scorecards, dashboards, and analytics

11.2 Business scenarios and roles to take advantage of IBM Business Analytics

In this section, we present a business scenario that integrates the IBM Cognos TM1, IBM Cognos Planning, and IBM Cognos Controller applications with IBM Cognos BI. This scenario shows how to provide a reporting source for IBM Cognos BI professional authors and analysts. After reporting sources for each of the IBM Cognos Financial Performance Management applications are provided, you can use the reporting techniques that we discuss in this book to create IBM Cognos BI content. In addition, this scenario shows how to integrate the IBM Cognos TM1 applications in the dashboard to allow business users a single place to complete the following activities:
- Managed planning through an intuitive workflow process
- What-if analysis
- Reporting and analysis on planned and actual data

Each of the IBM Cognos Financial Performance Management applications holds critical business information for planning, budgeting, forecasting, financial consolidations, product costing, workforce information, and other information, depending on the models that are defined within the applications.
This information is valuable and must be presented to decision-makers on a timely basis, along with the capability for the decision-maker to perform analysis and drill down to details. IBM Cognos BI and the integration between the IBM Cognos Financial Performance Management applications provide these capabilities.

We use the following roles in these sections:
- Modeler: John Walker (skills include IBM Cognos TM1 and IBM Cognos Business Intelligence Framework Manager)
- Professional Report Author: Lynn Cope (skills include report and dashboard authoring)

11.3 Integrating IBM Cognos TM1 with IBM Cognos BI

In this section, we show how to create a dashboard in IBM Cognos Business Insight for the users who are involved in the planning process. We use the Great Outdoors company use case to show how to create a dashboard for the sales department. Figure 11-1 shows the IBM Cognos TM1 items in the Content tab. The dashboard can include the following objects:
- IBM Cognos TM1 Contributor, which gives users access to a secure workflow that guides users through the planning process (see the IBM Cognos TM1 Planning Contributor item in Figure 11-1).
- IBM Cognos TM1 Websheets or Cube Views, which enable users to write back to the underlying TM1 database: change drivers such as Cost or Price, or even add metadata (see the Applications and Views folder under GreatOutdoors_SalesPlan in Figure 11-1).
- Reports created in IBM Cognos BI (for example, IBM Cognos Report Studio or IBM Cognos Business Insight Advanced), which you can view with live IBM Cognos TM1 data with zero latency. You can also drill down to the IBM Cognos TM1 database in real time (see the reports in the SalesPlan Reports folder in Figure 11-1).

Lynn Cope, the Advanced Business User, uses IBM Cognos TM1 data to create reports, adds these reports to the dashboard, and adds IBM Cognos TM1 Websheets or a link to the application for managed distribution.

In this section, we do not discuss IBM Cognos TM1 modeling or how to create an IBM Cognos TM1 Contributor application for a managed contribution; we assume all this work has been done previously. We also assume that IBM Cognos TM1 is configured to work with IBM Cognos BI. For details about configuring IBM Cognos TM1 to work with IBM Cognos BI and to use the same security, refer to the IBM Cognos TM1 Installation Guide.

11.3.1 Creating a data source and package

For IBM Cognos TM1 data to be available for professional report authors to create reports, the modeler, John Walker, must create an IBM Cognos TM1 data source and publish a package to the IBM Cognos Connection portal. We give an overview of the steps for that process in this section. For details, refer to the IBM Cognos BI Administration and Security Guide.

Prerequisite: You must have the IBM Cognos TM1 client installed on the same computer as the IBM Cognos Business Intelligence installation.

To create a data source and package:
1. Open IBM Cognos Administration, and add an IBM Cognos TM1 cube as a data source. You need the following connection parameters to create a data source:
   - Administration Host: The name of a system where the IBM Cognos TM1 server resides that can be identified by the network
   - Server Name: The IBM Cognos TM1 server name as configured in the Tm1s.cfg file
2. Open IBM Cognos Framework Manager and create a package for IBM Cognos TM1 data using the data source created in step 1. Select the cube you want to import, and click the "Create a default package" option. After the import is complete, give a name to the package (that name will be visible to the business authors in the IBM Cognos Connection portal and studios) and publish the package.

The new package displays in IBM Cognos Connection, as shown in Figure 11-2 (IBM Cognos TM1 package added to IBM Cognos Connection portal).

11.3.2 Objects used in the dashboard

Now that the IBM Cognos TM1 package is available for the professional report authors, they can use IBM Cognos Report Studio to create reports. Figure 11-3 shows the objects in IBM Cognos Connection that we used when creating this dashboard:
- Reports (SalesPlan Reports) are part of the samples that come with the IBM Cognos Business Intelligence installation. They are created in IBM Cognos Report Studio and use the IBM Cognos TM1 greatoutdoors server as a data source. The reports use live IBM Cognos TM1 data so that users can see the change in data immediately after a value is entered into the IBM Cognos TM1 cube by a planning contributor. Users can monitor the planning process or change drivers to see how that influences the entire performance. The deployment archive that contains these reports is located in the following directory: <c10_location>\webcontent\samples\content\TM1
- The TM1 Sales Plan package uses the greatoutdoors IBM Cognos TM1 server. The server data is located in the following directory: <c10_location>\webcontent\samples\datasources\cubes\TM1
- TM1 Contributor Applications is created in IBM Cognos TM1 Contributor; we will use a link to the application and add that to a dashboard.

11.3.3 Configuration steps

To have IBM Cognos TM1 server objects (cube views and websheets) available in IBM Cognos Business Insight (as shown in Figure 11-4), you need to complete the configuration steps that we discuss in this section.
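One of the configuration steps in this section is editing the Tm1s.cfg file of the IBM Cognos TM1 server. As a rough, hypothetical sketch of what that edit might look like: only the three property names (ServerCAMURI, ClientCAMURI, and CAMPortalVariableFile) come from this chapter; the host names, port, and gateway path below are placeholders that you must replace with the values from your own environment.

```
# Hypothetical Tm1s.cfg fragment -- placeholder values, not from the book.
# Verify the correct URIs for your IBM Cognos BI dispatcher and gateway.
ServerCAMURI=http://bi-server:9300/p2pd/servlet/dispatch
ClientCAMURI=http://bi-server/ibmcognos/cgi-bin/cognos.cgi
CAMPortalVariableFile=portal/variables_TM1.xml
```

After changing Tm1s.cfg, the IBM Cognos TM1 server typically must be restarted for the new properties to take effect.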
Figure 11-4 Contents of IBM Cognos TM1 Server in IBM Cognos Business Insight

This section provides only an overview of the steps. For details, refer to the IBM Cognos TM1 Installation Guide. Complete the following steps:
1. Configure the variables_TM1.xml file: specify the URL that is used for web access and the server host and server name so that IBM Cognos Business Insight can connect to the IBM Cognos TM1 servers. A sample file is located in the following directory: <c10_location>\templates\ps\portal\variables_TM1.xml.sample
2. Configure the contribution file, which includes properties for an IBM Cognos TM1 server that uses CAM authentication and properties for a server that does not use CAM authentication. A sample contribution file is provided with IBM Cognos Business Insight and is located in the following directory: <c10_location>\configuration\icd\contributions\contrib directory\tm1_contribution.atom
3. Configure the Tm1s.cfg file of the IBM Cognos TM1 Server to add the ServerCAMURI and ClientCAMURI properties that point to IBM Cognos BI, and to add the CAMPortalVariableFile=portal/variables_TM1.xml property.

Additional configuration steps: When you use IBM Cognos authentication, you need to perform additional configuration steps. For details regarding the IBM Cognos authentication security configuration, refer to the IBM Cognos TM1 Operations Guide.

11.3.4 Business case

After professional report authors create and save the reports on the IBM Cognos Connection portal, Lynn can use them on the dashboard that she wants to create. Apart from that, she uses the IBM Cognos TM1 Websheets and IBM Cognos TM1 Contributor application to create a dashboard with all the components that are necessary for the users who are involved in the planning process for the Great Outdoors company. First, add an existing report. In this scenario, the existing report was created in IBM Cognos Report Studio, and we continue with a report from the IBM Cognos samples. If you do not have an existing report, you can use IBM Cognos Business Insight Advanced and create one from scratch. To complete this example, follow these steps:
1. Launch IBM Cognos Business Insight, and select Create New.
2. Navigate to Public Folders, SalesPlan Reports, Channel Pricing Comparison, Crosstab1, and add the report part to the dashboard. The report widget looks as shown in Figure 11-5 (IBM Cognos TM1 report widget). The report shows the Unit Sale Price for Channels and Product by different versions of budget, including the difference and the percentage of difference. This data is the data that was entered into a planning application by the contributors in the Sales department during the planning process.
3. Now, add a link to the IBM Cognos TM1 Contributor planning application so that users can open it directly from this dashboard. Navigate to Public Folders, TM1 Contributor Applications, TM1 Planning Contributor, and drag it to the dashboard. When added, it looks as shown in Figure 11-6 (IBM Cognos TM1 contributor application).
4. Click TM1_SalesPlan_application, and the Contributor workflow opens with all the tasks that belong to the user who is logged in (see Figure 11-7, IBM Cognos TM1 Contributor workflow).
5. Lynn wants to add the widget that contains an IBM Cognos TM1 Cube View that allows a user to write directly to the IBM Cognos TM1 cube. Navigate to GreatOutdoors_SalesPlan, Views, Sales Plan, SalesPlan_View. (This view was created in IBM Cognos TM1 Architect and is looking at a Sales Plan cube.) Drag the view to a dashboard. When added, it looks as shown in Figure 11-8 (IBM Cognos TM1 Cube View).
6. In the same view, you can add items from GreatOutdoors_SalesPlan, Applications, Great Outdoors, where the IBM Cognos TM1 Websheets are stored. IBM Cognos TM1 Websheets also allow writing directly to the IBM Cognos TM1 cube.
7. The dashboard looks as shown in Figure 11-9 (Dashboard that contains various IBM Cognos TM1 widgets). It combines objects that were created in IBM Cognos TM1 and IBM Cognos BI, but all objects pull the data from an underlying IBM Cognos TM1 Server.

11.4 Integrating IBM Cognos Planning Contributor with IBM Cognos BI

IBM Cognos Planning Contributor allows you to provide users access to the system at all times, without interruption, for efficient global planning, budgeting, and forecasting processes using a transactional data store that uses the efficiency of XML-formatted data. This data store design is the most efficient for a transactional system of this nature; however, it is not designed for efficient and scalable enterprise reporting needs. To enable organizations to design and distribute reports that use the IBM Cognos Planning Contributor planning data, a process of publishing that data into a star schema data store is required. After the data is initially published to the star schema data store, it can be refreshed incrementally to achieve near real-time reporting from the transactional planning data store.
You can use the star schema data store as a source of data for IBM Cognos BI. To use the data in IBM Cognos studios, you must create the IBM Cognos Planning Contributor package. You can create an IBM Cognos Planning Contributor package in one of the following ways:
- Using the IBM Cognos Planning Contributor administration console. By default, you get one cube in each package. However, opening just one cube in each package can result in a large number of packages in IBM Cognos Connection, which sometimes can be difficult to manage.
- Using IBM Cognos Framework Manager, you can create a package that contains all the cubes in the IBM Cognos Planning Contributor application. When you open the package in IBM Cognos studios, you are presented with metadata for all the cubes in the application and can choose from multiple cubes to create reports. You can determine how many cubes to expose in a package.

For details about how to publish an IBM Cognos Planning Contributor application to a star schema data store, refer to the IBM Cognos 8 Planning Contributor Administration Guide. As with IBM Cognos TM1 data, you can use IBM Cognos BI to report on and analyze real-time IBM Cognos Planning Contributor data.

11.5 Integrating IBM Cognos Controller with IBM Cognos BI

IBM Cognos Controller is delivered with an integration component, Financial Analytics Publisher, that automates the process of extracting data in near real-time from IBM Cognos Controller into IBM Cognos TM1. The Financial Analytics Publisher component is added on top of IBM Cognos Controller and uses a temporary storage area before populating an IBM Cognos TM1 cube. When configured, the IBM Cognos TM1 cube is updated continuously: the IBM Cognos Controller data in IBM Cognos TM1 is refreshed on a near real-time basis through an incremental publishing process from the Controller transactional database, and you can define how often the service settings run. After the data is in IBM Cognos TM1, it can be accessed as a data source for IBM Cognos BI for enterprise reporting purposes; from the IBM Cognos TM1 cube, the IBM Cognos Controller data can be accessed by a number of reporting tools, including IBM Cognos studios. For more information about using IBM Cognos Controller Financial Analytics
Publisher, see the IBM Cognos Controller Financial Analytics Publisher User Guide. Figure 11-10 shows the data flow from IBM Cognos Controller to IBM Cognos TM1 for enterprise reporting using IBM Cognos BI.

Figure 11-10 IBM Cognos Controller data flow (the Controller database feeds a temporary storage area through publish and trickle updates, which populates the Controller TM1 cube; users update through TM1 viewers, for example, C8BI and Perspectives; FM and standard reports consume the cube; the existing Report Generator will remain)

Note: A key success factor for FAP is the right skills. Both BI and Controller data model skills are required.

Part 6. Appendixes

© Copyright IBM Corp. 2010. All rights reserved.

Appendix A. Additional material

This book refers to additional material that you can download from the Internet as we describe in this appendix.

Locating the web material
The web material that is associated with this book is available in softcopy on the Internet from the IBM Redbooks web server. Point your web browser at:
redbooks.ibm.com/redbooks/SG247912
Alternatively, you can go to the IBM Redbooks website at:
ibm.com/redbooks
Select the Additional materials and open the directory that corresponds with the IBM Redbooks form number, SG247912.

How to use the web material
Create a subdirectory (folder) on your workstation, and extract the contents of the web material .zip file into this folder.

Abbreviations and acronyms
BI: business intelligence
BW: Business Warehouse
CSV: comma-separated value
DLL: Dynamic Link Library
DMR: Dimensionally Modeled Relational
ETL: extract, transform, and load
GO: Great Outdoors
IBM: International Business Machines Corporation
ITSO: International Technical Support Organization
JMX: Java Management Extensions
JNI: Java Native Interface
JVM: Java Virtual Machine
OLAP: Online Analytical Processing
PDF: Portable Document Format
PID: process identification
RDBMS: Relational Database Management System
RSS: Really Simple Syndication
SLA: Service Level Agreement
SOA: Service-Oriented Architecture
SPC: Statistical Process Control
SSO: Single Sign-On
UI: User Interface

Back cover

IBM Cognos Business Intelligence V10.1 Handbook

This book uses a fictional business scenario to demonstrate the power of IBM Cognos BI. The book is primarily focused on the roles of Advanced Business User, Professional Report Author, Modeler, Administrator, and IT Architect.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION: BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM,
Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

You can use this book to:
- Understand core features of IBM Cognos BI V10.1
- Realize the full potential of IBM Cognos BI
- Learn by example with practical scenarios

IBM Cognos Business Intelligence (BI) helps organizations meet strategic objectives and provides real value for the business by delivering the information everyone needs while also reducing the burden on IT.

For more information: ibm.com/redbooks

SG24-7912-00
ISBN 0738434817
Loadable Kernel Module Exploits

Many useful computer security tool ideas have a common genesis: the cracker world. Tools, like port scanners and password crackers, originally designed to aid black-hats in their attempts to compromise systems, have been profitably applied by systems administrators to audit the security of their own servers and user accounts. This article presents a cracker idea—the kernel module exploit—and shows how you can improve your system's security by using some of the same ideas and techniques. First, I will discuss the origin of my idea and how it works, then I will attempt to demystify the art of kernel module programming with a few short examples. Finally, we will walk through a substantial, useful example that will help prevent a class of attacks from compromising your system.

Before we get started, I need to mention the standard disclaimer. Be aware that a bug in kernel space is liable to crash your machine, and an endless loop in kernel space will hang your machine. Do not develop and test new modules on a production machine, and test modules thoroughly to ensure they do not destabilize your system or corrupt your data. To minimize data loss due to system crashes in the debugging cycle, I recommend that you either use a virtual machine or emulator (like bochs, plex86, the User-Mode Linux port or VMware) for testing, or install a journaling filesystem (like SGI's xfs) on your development workstation. Furthermore, none of the code examples in this article have been tested on an SMP machine, and most of the code is likely not multiprocessor safe.

Now that we have that out of the way, let's talk about modules. A few months ago, I was developing a system call audit trail generator for Linux. For every process on a system, I wanted to keep track of all system calls and their arguments. To this end, I experimented with several approaches, but none was as successful as I would have liked.
Wrapping the libc function for write(), for example, only enabled me to log write() invocations that originated from C programs, and dynamic binary instrumentation was limited by the sorts of executables the instrumentation library could parse (C, C++ and Fortran). Being limited to auditing executables produced by one of a few languages was only a small practical limitation, since virtually every program on a GNU/Linux system is written in C, C++ or some language that has a C- or C++-based runtime library, like Perl or Python. However, the incompleteness of these solutions really bothered me on a theoretical level. I knew how straightforward it would be to bypass this system by invoking a system call from a little-known language that didn't rely on C or C++, or even by handcrafting a system call in assembly language. It was clear that it would be impossible to write an insubversible user-space auditing tool, and it would be tough to write a really useful tool without hacking into the kernel. Since I didn't want to maintain a patch or deal with a lengthy recompile-reboot-debug cycle, I didn't think doing this in kernel space was feasible.

No sooner had I put these concerns on the back burner and started work on this project than I saw a message to my local LUG's mailing list that gave me an idea. This message was a forwarded advisory about a kernel module exploit. This particular module was a nasty one: it modified the behavior of certain system calls to hide itself from the lsmod command and to hide the presence of scanners, crackers, sniffer logs and other such files. I almost screamed "Eureka!" in my office. I didn't have to deal with maintaining a patch, recompiling or rebooting; I could develop my tool as a loadable module.
I recognized that the general technique behind module exploits could be adapted to add many types of useful behavior to system calls, including a different security policy, finer-grained security than the UNIX model allows and, of course, my audit trail generator. I will discuss some of the fun things you can do by altering and wrapping system calls a little later, but let us first get our hands dirty with an example kernel module. This is a simple example, akin to everyone's favorite first program, but it demonstrates the most basic parts of a loadable kernel module, the init_module and cleanup_module functions:

#include <linux/kernel.h>
#include <linux/module.h>

int init_module()
{
        printk("<1> Hello, kernel!\n");
        return 0;
}

void cleanup_module()
{
        printk("<1>I'm not offended that you "
               "unloaded me. Have a pleasant day!\n");
}

You may have to use #define for the symbol MODVERSIONS and #include for the file linux/modversions.h from the Linux source tree, depending on how your system is set up. Call this short module hello.c and compile it with:

gcc -c -DMODULE -D__KERNEL__ hello.c

You should now have a file called hello.o in your current directory. If you're currently in X, switch over to a virtual console and (as root) type insmod hello.o. You should see “Hello, kernel!” on your screen. If you would like to check that your module is loaded, use the lsmod command; it should show that your hello module is loaded and taking up memory. You can now rmmod this module; it will politely inform you that you have unloaded it.

The linux/kernel.h and linux/module.h header files are the two most basic for any module development, and you are likely to need them for any module you write. It is best if these headers (unlike modversions.h) come from /usr/include/linux rather than a Linux source tree. (If your distribution vendor has made /usr/include/linux a link to the Linux source tree, complain—that practice is liable to cause major breakage and headaches for you.)
You will use quite a few more of the kernel headers for any substantial module, and you will find that grepping through /usr/include/linux for the symbols you need is a good friend while developing modules.

Think of init_module as an “object constructor” for your module. init_module should allocate storage, initialize data and alter the kernel state so that your module can do its work. In this case, init_module is merely announcing its presence and returning 0 to signify success, as in many C functions. Therefore, our initialization for the hello module consists solely of calling the printk function, a particularly handy function to have at your disposal. Essentially, it functions like the standard C printf function, but for two differences. First, and most obviously, printk allows you to specify a priority for a given message (the “1” in angle brackets). Second, printk sends its output to a circular buffer that is consumed by the kernel logger and (possibly) sent to syslogd. Since the output of syslog is flushed frequently, calling printk with judiciously placed, high-priority messages can greatly aid debugging—especially since any bug in kernel-space code is liable to crash your machine or at least cause a “kernel oops”.

Why not just use printf, you ask? Simple: to do so would be impossible. The Linux kernel is not linked to the C library, so old friends like printf are unavailable in kernel-space code. However, there are many useful routines in the kernel that give you functionality similar to library routines, including workalikes for most of the str family of functions from the C library. To use these in your modules, merely include linux/string.h (be careful not to include the C library version).

If init_module is a constructor, cleanup_module is the destructor. Be sure to tidy up after your module as carefully as possible; if you don't free some memory or restore a data structure, you'll have to reboot to return your system to its former state.
http://www.linuxjournal.com/article/4829?quicktabs_1=0
Star Fox: Assault Arwing/Wolfen Riding FAQ by TheUnruly1 Version: 1.3 | Updated: 08/29/07 | Search Guide | Bookmark Guide | | / _| | | | ___| |_ __ _ _ __| |_ _____ ____ _ ___ ___ __ _ _ _| | |_ / __| __/ _` | '__| _/ _ \ \/ / _` / __/ __|/ _` | | | | | __| \__ \ || (_| | | | || (_) > < (_| \__ \__ \ (_| | |_| | | |_ |___/\__\__,_|_| |_| \___/_/\_\__,_|___/___/\__,_|\__,_|_|\__| ____ _ ___ _ _ _ ____ ____ ____ ____ Star Fox Assault |__/ | | \ | |\ | | __ |___ |__| | | Vehicle Riding FAQ | \ | |__/ | | \| |__] | | | |_\| Copyright 2007 TheUnruly1. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | | \ | | | |_\| TABLE OF CONTENTS =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= 1. INTRODUCTION & OTHERS (inttuon) 2. HOW TO RIDE (htrdmns) 3. WEAPONS (wepsess) 4. RIDER COMBAT (rdvsveh) 5. HOW TO BEAR A RIDER (htbaruo) 6. CHARACTERS TO USE (chartus) 7. CONCLUSION Best viewed in Courier New 10 point. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 1. INTRODUCTION (inttuon) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= To quote HAXage: =-_-==-_-==-_-==-_-==-_-==-_-==-_-= "That's right, riding. Mission 4, mission 7. Only this time, you don't have the Plasma Cannon, you fall off, and your friend doesn't go as slow as Wolf/Falco. And there's always that thought that they might flip you off accidentally, or your opponent might actually do some DAMAGE to you, unlike those dullard Aparoids that fly at you. So! Who wants to try?" =-_-==-_-==-_-==-_-==-_-==-_-==-_-= And that sums it up very well indeed. It's actually considered by some as a "cheat strategy", but if every little secret in a game was a cheat strategy, well, then competitive SSBM wouldn't be too much fun, would it? =D Of course, that is but one example. 
__________________________________ / \ --------- ABOUT ME --------- --------- --------- \__________________________________/ Alrighty then, I'll start. My name's Will, and I go by TheUnruly1. I make no claim of being a good player, although HAXage (whom I taught, and true to Star Wars spirit far outgrew my tactical training wheels), in a character profile referred to me in his FAQ as a "Krystal vet". As a matter of fact, I doubt this very statement at the cost of my pride. I am an average player, and I forever will be, in this game. The only thing true of that statement is that my main character is Krystal. (I like Barriers.) I also think of new, fun ways to attack my opponent when the chips are down. As a matter of fact, this manner of thinking helped me instigate a set of common rules about Riding, which is just one of those dangerous things. Anyways, I'm excited, so I'm moving on. __________________________________ / \ --------- WHY RIDE? --------- --------- --------- \__________________________________/ The reason one would ride is quite simple - to use a form of SFA combat that actually lets you and your teammate work with synergy. Such a team has the advantage of double cover - that is, the rider covers the machine he rides on, and the Arwing pilot keeps the rider safe. Strafes low to the ground from off an Arwing's wing can be easily fatal to pilots on the ground. A fly-by Gatling Gunning of a Landmaster can prove very satisfying indeed in killing something that usually counters an Arwing. Unless you're loaded with Homing Launchers, it takes a great aim and reflex(es) to be an effective rider, and that's discounting the fact that your bearer has to have these as well. 
__________________________________ / \ --------- SOLO RIDES --------- --------- --------- \__________________________________/ Riding on your own is a weird concept, but the way it works is you get out of your Arwing briefly by pressing Z so you can ride, firing your weapon, and then getting back in. The downside, as compared to team riding, is that while you are outside your Arwing your vehicle will just move in a straight line. However, it can be a good tool for quickly dispatching enemy aircraft if you have the right weaponry. Great sniping tool, too. The OTHER way to use a solo ride is...bailing out - if in trouble, press Z and then press Y. Nothing more. You will fall out through the neon lilac jetstream of your previous vehicle and hope your opponent doesn't notice you did. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 2. HOW TO RIDE (htrdmns) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= First off, I've only mentioned in passing how to actually GET RIDING. So, I will now take the time to detail all the major points of riding. __________________________________ -------- THE PREPARATIONS --------- --------__________________________________--------- You should always make sure you have a clear route up and a clear six, because you don't want people ripping you apart as you scramble for the plane. First and foremost, you need the designated rider (no one's drunk here, don't fear) on the wing of choice. Now how much health do your wings have? 59. This makes riding on the fuselage of the plane viable too... on a Wolfen, it's advised, and for an Arwing it's silly (considering you have those two massive wings). Getting up on the wing is tough for Falco, Krystal and sometimes Fox, which makes many people name the higher jumpers in the game the best riders. If you have problems, take a run at it, and try, try, again. 
(until you are finally silenced after the 10 seconds it takes for the enemy to seek and destroy you.) After that, just get the designated bearer into the plane, and-HEY, COME BACK, YOU DROPPED ME! __________________________________ -------- THE TAKEOFF --------- --------__________________________________--------- The takeoff, as we all agree, is the time when most riders fall off, because the bearer rushes the ascent (but this is about HOW TO RIDE; HOW TO BEAR will be covered later). Anyways, now is when you begin your R-button-holding spree - there's no point in moving much, unless your wing's about to die and you need to hop over to the fuselage. Just hang on, cover the plane's six, and save the real combat for after you have gained a lot of altitude. __________________________________ -------- STAYING ON --------- --------__________________________________--------- During a fight, DON'T PANIC and do not jump off. You act as much-needed support for your friend's aircraft. KEEP R HELD DOWN if you're about to fall. If your wing is about to go, you can do one of two things. The first, the simpler but riskier one, is to hug the hull of the Arwing and move to the very edge of the wing (not on the tip, you dorfus, but on the edge where it connects to the plane). When a wing blows, everything but this little tip is removed. You will then be staying on rather dangerously, and could easily be shot off as the unbalanced aircraft teeters and totters. The upside of this is you still get the same features - less chance of hitting yourself - as you would on a full wing (kinda). A reader, whose GameFAQs name is nivlac91, sent me a diagram of the Arwing's wing break area. The other way, the safer but harder one, is to get your bearer to stop and jump from the wing to the fuselage/hull. This will then allow you to ride the fuselage just like solo riding, but with a movable aircraft.
Wolfens, bad luck: your wings get almost completely blown off, and riding on your fuselage is the best option anyways, because your wings are so stubby. __________________________________ -------- RIDING A WOLFEN --------- --------__________________________________--------- Wolfens are a whole new game - in riding, you could say Wolfens are high risk, low return (something you don't want a mode of transport to be, normally). If you are merely sitting on one of the thin Wolfen wings that are menaces to board, you might just walk off without even knowing you let go of R. The fuselage is the only way to go, because as aforementioned, those wings are hard as a week-old kaiser bun to get on and stay on. There is, however, a "safe spot" that is near the back: right where either of the upper wings joins with the hull, kind of wedged in between the wing and the upper brake on the back of the plane. You're good there, but you can't shoot at many different angles from there; you can only shoot out as if you were a fixed gun on the side of the Wolfen. After this two-paragraph explanation, a simple point is made - the wings, the back, and even the safe spot are worse options than just the fuselage. Also, Wolfens can't stop, and their wings are short, tiny, and pointy, making them very hard to Speedboard (will be explained later) or jump from wing to fuselage, if you got on them already. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 3. WEAPONS (wepsess) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= __________________________________ / \ --------- GOOD WEAPONS TO USE --------- --------- --------- \__________________________________/ Well, what weapons does one use when riding? There are two characteristics that need attention: **The weapon won't harm you. This is rather obvious, but we don't want to use Sensor Bombs when riding, do we?
**Good control over the weapon is possible, or is offset by a large usefulness. The last bit was added in for the Gatling Gun. Therefore, I have a list of weapons that are useful for riding. The stats showing how many hits the weapons take to break a wing are to help you know when you're going to lose your footing (or when you'll snatch it from your opponent). And by the way, Planes take 1/16 of their HP in damage from hitting something (tested), and so do wings (guess). __________________________________ -------- HOMING LAUNCHER --------- --------__________________________________--------- Ammo per Item: 10 Max: 99 Lock-on Blow Off a Wing? 2 Hits Well, this is probably the best riding weapon there is. It locks on, comes in bulk, and does a crapload of damage to Landmasters. Not only that, but it's insured that it blows up away from where you are, and it's excellent at killing other aircraft/riders. It's almost like giving the Arwing a second Charge Shot, which is obviously good at dispatching Pilots too. __________________________________ -------- GATLING GUN --------- --------__________________________________--------- Ammo Per Item: 100 Max: 999 Blow Off a Wing? A mere 4 bullets GATLING GUN? WHAT? It has horrible range and sporadic firing! Why would you ride with one? Well, the reason we use the Gatling Gun, is of course to do tons of damage, kill vehicles in paltry numbers of seconds and whatnot. This makes it a godsend against Landmasters that you can use to swoop and destroy on the annoying tanks. The GGun also serves as your primary pilot killer, although the Homing Launcher can also take this job if you need range. It's good at tearing the wings off other rider-bearing planes as well if you can get close. __________________________________ -------- MISSILE LAUNCHER --------- --------__________________________________--------- Ammo Per Item: 3 Max: 5 Blow Off a Wing? 
2 Hits Okay, okay, the Homing Launcher's the best pick for going after aircraft, but it's just too cool launching a guided missile off a plane and owning a Landmaster with it. Missile Launchers are mostly luxury items, so don't expect them to be your main weapon. __________________________________ -------- SNIPER & DEMON SNIPER --------- --------__________________________________--------- Ammo Per Item: 10 for Sniper Rifle, 5 for Demon Sniper Max: 99 for both Blow Off a Wing? Both 1 hit These weapons are complete BEASTS when unleashed in a riding setting. The Sniper, on one hand, is awesome against Pilots if you aim (extremely) well, and will take an Arwing down in 2 hits (Landmaster with 3). It is your oyster, and open to whatever purpose you want. The Demon, on the other hand, does only one thing: break stuff. Aim at selected target. Insure you will hit, as ammo is precious. Induce fear into your opponent as you ready the shot. Up in the air like this, you couldn't miss. You fire, annihilating the vehicle and leaving a hapless Pilot to be finished off with the weapon of your choice. That's the Demon Sniper, and that's how it usually ends up. __________________________________ -------- GRENADE --------- --------__________________________________--------- Ammo Per Item: 5 Max: 99 Blow Off a Wing? 2 Grenades, if you are actually dumb enough to let that happen Explodes after 5 1/2 seconds Yeah, there are oodles of grenade pros out there, saying "they're excellent and do tons of damage!" They are right, but for the average FAQ reader and myself, Grenades are a tool that is difficult to use. Sure, it's very hard to hit an Arwing with 'em effectively, so why in the hell am I recommending them? They can be used in bulk, and drop the hammer on Landmasters. That's about it though. 
__________________________________ / \ --------- WHAT NOT TO USE --------- --------- --------- \__________________________________/ Wear whatever you like, fellas, but consider ignoring these weapons when choosing your main kicks before taking off. __________________________________ -------- MACHINE GUN --------- --------__________________________________--------- Ammo Per Item: 200 Max: 999 Blow Off a Wing? 12 bullets "But it's a lot more accurate!" That's very true. However, when we consider the purpose of using it or the GGun while riding (gunning down vehicles or anything at close range) we can easily see that the Gatling Gun does its job 3 times better (4 times if we're talking Pilots) because it does much more damage per bullet. Do, however, get a Machine Gun ready if you get shot down, so you can last longer on foot. __________________________________ -------- BLASTER --------- --------__________________________________--------- Ammo Per Item: Start with weapon Max: N/A (infinite ammo) Blow Off a Wing? 2 full Charge Shots The Blaster violates the very first rule. Y'know, the one we said was obvious? **The weapon won't harm you. This is rather obvious, but we don't want to use Sensor Bombs when riding, do we? The Blaster will hit yourself with the large chargeup often enough (less so when firing from the wings, however) to cause damage. This is most particularly noticed when you are firing down at something. However, the fully Charged Blaster does quite a significant amount of damage, and it's definitely possible to have a great riding run using your Blaster to own Landmasters - I just don't recommend it because of the charge time that wastes your valuable airtime - yet another reason why high jumpers make the best riders. __________________________________ -------- SENSOR BOMB --------- --------__________________________________--------- Ammo Per Item: 5 Bombs Max: 99 Blow Off A Wing? 
2 Hits (or maybe 1) The only practical use of these while riding is doing a plane-to-plane transfer (will be explained), dropping them and transferring back, and who the hell is good enough to regularly do that? =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 4. RIDER COMBAT (rdvsveh) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= __________________________________ / \ --------- UNITY AND ALL THAT --------- --------- --------- \__________________________________/ When you're riding, there's one thing that should be the first one in your mind: YOUR UNITY between you and your plane CANNOT BE BROKEN. Why? If you get shot off, lose a wing, get mauled but miraculously survive, whatever, it's a hassle to get back together. You are then left not with a unified vehicle but with an aircraft and a pilot, separate. The sturdier you stand, the greater chance of survival your unified vehicle has. The longer you last, the more you can whittle your opponent down. This means no going off into your own little world, but instead shooting down whatever threatens your plane. Your opponent, likely thinking that if your plane goes down, you go down, will break the bond by wrecking the plane. Thus, you protect your plane - the bonding force - over saving your own life. If the plane goes down, you effectively both go down. __________________________________ / \ --------- AGAINST THE LANDMASTER --------- --------- --------- \__________________________________/ Landmasters pose huge problems for any aircraft. With an extra set of guns, the threat is rather diminished, because you can use stuff like Grenades, Homing Launchers, and most of all Gatling Guns. Using a swoop attack that involves the Arwing pointing down and firing, you can simultaneously charge Landmasters with Arwing fire and Rider fire. 
This will likely destroy the Master before it can even hit you with a second Charge Shot, allowing you to go heal yourself of pain. MAKE SURE you are close before you start GGunning - you don't want to waste precious ammo and have to go get some more. Alternatively, you can annoy them by gaining lots of altitude, briefly swooping down into the range of the Homing Launcher, firing, and boosting away. Doing this means that in about 3 hits, you'll wreck the Master. But you get the basic idea - since the tank is grounded, use your altitude advantage to keep yourself safe, and attack in bursts. __________________________________ / \ --------- AGAINST AN AIRCRAFT --------- --------- --------- \__________________________________/ Well, what do Pilots normally use against Arwings and stuff? Homing Launchers. This same principle applies here too, because several of your other weapons (GGun, Grenades) are unusable. Homing Launchers will always be the anti-air weapon of choice, but since you're flying at their level and all, Snipers (and definitely Demon ones) become a very viable weapon. The reason for this phenomenon is that being up in the air with a plane bearing you eliminates the flaw of trying to Snipe an Arwing from the ground - it can't evade you as well anymore. Height no longer will work as an escape, and neither will Boosting off. Previously, it was like this...

-----------------------------------------------------
|--------|
| ARWING | <--------------<   *MISS!*
|--------|   B O O S T
    S
    ^
    H
    ^
    O
    ^
    T
    ^
|--------|
| SNIPER |
|--------|
-------------------------------------------------------

Now, it's like this...

-------------------------------------------------------
|--------|   B O O S T
| ARWING | <--------------<
|--------|
|--------|<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<| RIDER |
 *BOOM*        SNIPER SHOT              |--------|
--------------------------------------------------------

That explain it? Snipers are one of the weapons of choice for riding against aircraft.
Missile Launchers aren't practical against Arwings, because not only are they luxury items that you will rarely find, they do almost twice as much damage to Landmasters. Still, if there are no Landmasters around, go ahead. Another thing about aircraft to use for your advantage: planes take a while to turn around, so if you get on their six you can start hammering them. If the aircraft tries to mix it up by doing a Loop, put on the brakes and try either of two approaches. For one, you could probably get off 2 free Homers. Or, if you're looking for the more complete ownage type attack, you could try and Snipe it as it comes out of the Loop...hard to pull off, but deadly. __________________________________ / \ --------- AGAINST PILOTS --------- --------- --------- \__________________________________/ Pilots are kinda hard to fight, because the height advantage actually HINDERS you. Not only that, but they're small as a pinhead, so Sniping is hard. What's the solution? Go really low. This too is dangerous because it leaves you open to GGuns and the like, which are the same kind of weapons that you should be using on them. Thus, it's basically whoever goes down first. Or it would be, but I'm forgetting that the Arwing has attacks too. Arwing swoop attacks are fatal to Pilots, let alone with an extra set of guns on your back. Use it. This does NOT mean that you should do an almost vertical Dive-Bomb; that's a nice way to lose your rider. Instead, you need to boost in and start from a long way away. After, pull out quickly (to avoid GGun damage), circle around in a large arc and repeat. So, let's recap, the main idea with Pilots is to use your noodle and your vehicle's mobility to outclass the munchkins. If Homing Launchers come your way, which they undoubtedly will, seek to destroy the guy (next time you dive) in one round of battle because you don't have the luxury of time (you'll get Homered). 
This is plenty easy to accomplish if you can get your Arwing bearer some upgraded lasers. Then the one Homer that they get on you won't pose a long term problem and can be shrugged off with a ring. Well, if you wanted to be cheap, you could use Homing Launchers every now and then while staying high so he couldn't hit you. But let's have fun, right? =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 5. HOW TO BEAR A RIDER (htbaruo) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= "I'm awake, I'm awake!" Okay, ladies and gents, this is the section teaching you how to be a big, predatory furry animal commonly seen as dangerous to wandering hikers. So, this whole time we've been discussing the Rider, but never the Bearer, right? Bearers are still half the job, and must not be ignored. Besides helping with combat, Bearers do plenty of things. __________________________________ / \ --------- MOVES AND STRATEGY --------- --------- --------- \__________________________________/ Your powerful ally, riding on your wing, is your main asset and is what gives you an edge. This person is also a weapon to be deployed - upon mutual agreement, there are several tactics to be employed letting you divide and conquer, or even just conquer. __________________________________ -------- SPEEDBOARDING --------- --------__________________________________--------- Speedboarding can be defined as two things. One is slower, the other one truly is SPEED-boarding. Anyways, they both entail a rider boarding an aircraft without it having to land. The first method involves your ARWING (not Wolfen, it won't work) braking to a stop and letting the rider jump on, just like landing (but not actually landing). This is hard except for Peppy/Slippy, really, and will usually end up making you take some damage. This is handy if you don't have the time to land, but daredevils have another option. 
The rider jumps, you fly towards the rider, and pick them up. Sounds insane, right? It's wicked fast if you do it, and you won't be wasting a moment in getting the rider on your wing. Again, good for high jumpers. __________________________________ -------- THE CATAPULT --------- --------__________________________________--------- WHEEEE! This is always fun to do and never gets old. It's a perfect way to either drop your little spy via airplane into enemy territory or to send him flying to new heights. Hell, you can even make it up Katina Tower or the Spire in Titania without booster packs. Get your rider on a wing and simply barrel roll pointing the control stick to the wing opposite the one they're on. WHEEE! You just sent your rider flying as the wing rolled upwards. Often done accidentally by reflex; TheUnruly1 is not responsible for injuries caused by mad riders after doing this by accident. When done right, though, it's pretty cool. __________________________________ -------- MIXING IT UP --------- --------__________________________________--------- Be sure to switch positions once in a while, it surprises your opponent - especially if you have very different sets of weaponry. Get one person to pack heavy weapons and missiles, the other one packs GGun and Grenades. The switch can be performed simply by the plane pilot pressing Z and then the rider pressing Z: AT LEVEL FLIGHT. (Although, sometimes it is fun having two riders on one plane at once. Can you say...battle on the planetop?) For fun, you could even drop your rider onto an enemy plane for a little sabotage. You may think that packing Motion Sensors is a good idea, but they will only hurt yourself. (The Sensor Bombs often fly off the plane anyway.) For this kind of battle, the best weapon is the GGun, because you're confined to the space of the plane. Just point down, hold A and watch the plane die. 
__________________________________ -------- TRANSFERS --------- --------__________________________________--------- Here is, everyone, the ultimate in gosu-skill indulgence; the plane-to-plane transfer. This is, in fact, the way to pull off the above paragraph. A plane flies high above another, which is slightly behind it. Jump while holding the control stick forward, and you just jumped from plane to plane. Can be perfected for use in almost any environment. You can also do something I call a catapult transfer by basic term, and a more real-sounding thing for specific term. When done with a Landmaster I call it a Defense Turret. In this, you are flying above a Master, braking to a halt and Catapulting your rider down to the vehicle who gets inside posthaste. I named it such because this is exactly what you're doing. You are quickly and efficiently deploying an Arwing counter to destroy someone who's following you. So, I call it a catapult transfer/Turret because I don't like the ring of "Quick Deployment of Rider to Arwing Countering Vehicle by Barrel Roll". A catapult transfer can also be done with an Arwing, but this is basically just doing a fast P2P transfer. __________________________________ -------- PLAN Bs --------- --------__________________________________--------- Sometimes, for the rider's sake, ya gotta know when to quit. If your plane is smokin' purple, it's good to not go down together. Get the rider off you and go suicidal. Although you probably already knew that, there are signs you can read as to WHEN to start thinking of a Plan B. 1. Your wings are gone. Not very rider-friendly, and your plane will teeter and totter a little. If it's just the wings, chances are you can keep going, but it's likely your body will be shot up too. Get a new plane for either of you. If it's just wings, then go find a laser upgrade...they heal your wings, strangely enough. Then you're good to go. 
Not one of the real problems that can stab you in the back, but it happens a lot. 2. Your rider has no Barriers and little health. Barriers are lifesavers when that Arwing flies at you shooting Rapid-fire lasers at your wing. If you get on without a Barrier to begin with, it's a little troublesome. But you can get over it. When the rider's health is low and there's no Barrier for backup, they're as good as gone from the enemy's next attack. The solution? Bail him out onto the ground by a vehicle. This way he can have another chance at life. 3. Your rider begins to run out of weapons. DO NOT LET THIS SHUT YOU DOWN. The enemy will love you for that. Say your rider runs out of GGun. Does he still have ANYTHING to use against the enemy? Grenades, say? If not, then he always has the Blaster, right? Although if your only option is the Blaster, there's a far better solution: pack weapons yourself and let him fly the jalopy. If you wanna read about that, look above in "Mixing It Up". __________________________________ -------- MACHINE GUN FIRE --------- --------__________________________________--------- The infamous MG (and GGun) pose problems for the Riding system possibly more than anything else, except maybe the Homing Launcher. You can't Barrel Roll, Loop or U-Turn to avoid it, so how do you safeguard your rider from MG or GG fire? Let's fly away. Well, that only works if the person you're fighting is on the ground. Your Arwing attacks have longer range than an MG, and even more so for a GG. Strafing runs beginning from far away work well, and Homers will save your butt by killing them from long range. Of course, you could always outgun them, but you'll be licking your wounds after that. __________________________________ -------- THIS IS MY BOOMSTICK! --------- --------__________________________________--------- Haha...a category about Bombs. As you may already know, one of these badasses will wreck both you and your cargo in about a second.
The way to go against this is to switch your positions, since you have more health from being covered in the plane. This might be the time where you...

__________________________________
-------- GET ANOTHER PLANE ---------
--------__________________________________---------

It happens! No one lives forever in SFA/life! Should the purple smoke arise, bail and find a new prize, be it plane or Landmaster. Get both people out of the plane immediately. The way to do it: The Pilot flies above a vehicle, brakes, and Catapults the rider down to it, who gets in immediately. (Also known as a Defense Turret if done with Landmasters. HA! MEMORY TEST!) The guy still left in then seeks out another fresh vehicle OR a Ring/Platinum Star to refresh the one he's in, if you've got the time.

__________________________________
-------- LANDMASTER RIDING ---------
--------__________________________________---------

WHAT? Yes, you can do it: have someone standing atop your Landmaster being an extra set of guns. It may not be as useful as Arwing riding, and it may be hazardous to the rider's health now and then, but it can be done. It saves both your butts if your plane's going down and you would otherwise both need to Defense Turret or do something like that. It's definitely easier to jump on top of one. The key of Master riding is to stay away from 1: the cannon, and 2: the wheels. The cannon is for obvious reasons, and the wheels are because touching them hurts you as if you'd been run over. The perfect spot is on top of one of the two protrusions on the back of the Landmaster, on either side. You're behind the cannon and above the wheels. Weapons? Since you're slow and low, MGs or GGuns are the weapons of choice. If not, Snipers work well (just not as well as from a plane), Grenades are of course more viable and the blaster could also work. The awesome Homing Launcher, as you know, works anywhere.

----------------------------

Now you see the Bearer's huge role?
Maybe this FAQ should be called the Bearing FAQ.

=-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-=
 6. CHARACTERS TO USE (chartus)
=-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-=

An OPINION BASED list on who is the best rider in the game, who is 2nd best, who is...

-------------
--> Slippy Toad and Peppy Hare #1
-------------

WHAT THE HELL? People think this toad is the worst player in the game, but alas, he is one of the two best riders, and I'm gonna prove it. Is Peppy better? No. Slip's jump is but one less than Peppy's, and this still lets him do boardings with ease compared to others. His running is a little better, too. Slippy is small, and can't be seen easily, although the reticles take this advantage away often. Your enemy might not realize you were there if you position yourself behind something on the plane, though. The speedy chargeup time for the blaster is again not as spectacular as Peppy's but still gets the job done if you ever have to use it. There are two more trump cards I haven't mentioned yet. The first? Slippy OWNS at Defense Turrets with his five stars in Landmaster. The last, everyone, is the health - just look, bigger than anyone's but the cheapo that is Wolf. That'll help you survive a lot, since Slippy can survive two Grenades/three Homers/ten GGun bullets/two Bombs. An awesome asset for a Rider. If skilled, Slippy surpasses all else on top of a plane, with his only disadvantage being the low crosshair size. Health bar sizes might be overrated statistics that don't matter to skilled pros, but even discounting the fact that most of you aren't, it sure helps in riding to have lots of it. Then again, so are crosshair sizes... =D Peppy seems to have no disadvantage at this, since he makes it all seem so damn easy.
A five star jump, the fastest blaster charge in the game and the largest of any Pilot crosshair size means even someone with horrible reflexes can play well as Peppy as a rider. You can board with ease by tapping Y and soaring sky-high. Defense Turrets and other maneuvers like that can be done well, but not as well as with Slippy. And there's the two best.

-------------
--> Wolf O'Donnell #2
-------------

Wolf is Falco with a crapload of health, an even faster run and a slightly better jump. Thus, 'tis general consensus he is really cheap, and people don't like him. I guess it makes sense why he's #2, then, right? Wolf can even live out a Sniper shot if someone hits you while you're riding. (which means in a very stupid way Wolf will last as long as the freakin' PLANE ITSELF when it comes to Snipers.)

-------------
--> Fox McCloud #3
-------------

A team of two Foxes can pull off any maneuver they like, because all across the screen their stats are even. A Defense Turret could work as well as an abrupt position switch, since they have high score in Arwing, Pilot and Master alike. Not much more to say here than that, this position is earned for being able to do anything with a four-star crosshair size.

-------------
--> Krystal and Falco Lombardi #4
-------------

Now that I've finished my Slippy rant, I don't have too much more to say. Maybe I'm biased here, but Krystal is my main character above all else. Her stats suck, but that's alright, 'cause she's got Barriers to spare, an invaluable resource during an all out assault (heh...Assault is the name of the game, after all). She thus has the best chance of survival if shot down. But tell me this, is it really bias putting your favorite character in FOURTH place and not first? Falco does not have the health for riding...and also not the jump. Neither does Krystal, but this is more blatantly extreme with Falco.
He may be one of the better characters (and hands down the best bearer character) in the game, but not under these circumstances. He just gets killed a little too easy. Mind you, this is if you actually manage to board with his (and Krystal's too) weak jump.

=-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-=
 6. CONCLUSION
=-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-=

Question answering time: HOW THE HELL DID I DISCOVER THIS? Well, it started with a friend of mine who introduced me to the game. On the second ever time I played it, you could guess I wasn't that great...I tried to bail out of my plane in mid-flight, to see what would happen. I joyfully found I was still astride my plane, with a machine gun in hand. I then said "Look! Here comes something new!" and shot up his plane with my MG. He turned to my screen, saw me standing atop my Arwing with a machine gun finishing him off and asked "How did you do THAT?" I explained. Something like that. I then looked around online and saw I wasn't the only one doing this. Now, I wrote a guide to it. That's all the questions, look me up at the-unruly-1@hotmail.com for more questions. Include something about Star Fox Assault or Riding in the title, or through the spam filter you'll go on my direct order. See you at my next guide! TUn1

=-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-=
 7. COMMENTS
=-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-=

Sender: nivlac91.
http://www.gamefaqs.com/gamecube/561297-star-fox-assault/faqs/48975
I’m putting the finishing touches on a side project at work that requires opening a file as an argument at the command line, or through a file open dialog box. Here’s a snippet that demonstrates how I implemented it.

import sys
import os

def choose_file():
    try:
        import Tkinter, tkFileDialog
    except ImportError:
        print "Tkinter not installed."
        sys.exit(1)
    # Suppress the Tkinter root window
    tkroot = Tkinter.Tk()
    tkroot.withdraw()
    return str(tkFileDialog.askopenfilename())

if __name__ == "__main__":
    # If no file is passed at the command line, or if the file
    # passed can not be found, open a file chooser window.
    if len(sys.argv) < 2:
        filename = os.path.abspath(choose_file())
    else:
        filename = os.path.abspath(sys.argv[1])
        if not os.path.isfile(filename):
            filename = choose_file()
    # Now you have a valid file in filename

It’s pretty straightforward. If no file is passed at the command line, or if the file passed at the command line isn’t a legitimate file, a file chooser dialog box pops up. If Tkinter isn’t installed, it bails out with an error message. The bit about suppressing the Tk root window prevents a small box from appearing alongside the file chooser dialog box.
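For reference, the same idea works in Python 3, where the module is renamed to `tkinter` and the dialog lives in `tkinter.filedialog`. The sketch below is my own adaptation (the `resolve_filename` helper is my naming, not part of the original post); the fallback logic is factored into a function that takes the chooser as a parameter, so it can be exercised without a display:

```python
import os
import sys

def choose_file():
    # Requires a display; suppress the Tk root window as in the post above.
    import tkinter
    from tkinter import filedialog
    root = tkinter.Tk()
    root.withdraw()
    return filedialog.askopenfilename()

def resolve_filename(argv, chooser=choose_file):
    """Return the file named on the command line, or fall back to a chooser."""
    if len(argv) < 2:
        return os.path.abspath(chooser())
    filename = os.path.abspath(argv[1])
    if not os.path.isfile(filename):
        filename = chooser()
    return filename

if __name__ == "__main__":
    if len(sys.argv) > 1:
        print(resolve_filename(sys.argv))
```

Passing the chooser as an argument also makes the "bad path falls back to the dialog" behavior easy to unit-test with a stub.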
https://chrisheydrick.com/2014/09/03/file-open-dialog-box-in-python/
I was hoping that someone could show me how to use the equals method to change numbers that were entered. Three numbers are entered and the user is asked if they need to change a number. This is followed by a yes or no. It says if the answer is yes to change the number, to use the equals method. This is done three times by once changing the number, adding a number and removing a number. Please help! Thanks Ytula

Reply by server_crash:

Your question is somewhat unclear, but maybe I got it right. I posted code for what I thought you were wanting; if it's not, let me know...

import java.io.*;

class NumbersandInput
{
    public static void main(String[] args) throws IOException
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String inData;
        int i;
        int i2;

        System.out.println("Please enter an integer");
        i = Integer.parseInt(br.readLine());
        i2 = i;

        System.out.println("\n Would you like to change the number?");
        inData = br.readLine();

        if (inData.equals("Yes"))
        {
            try
            {
                System.out.println("\n What would you like to change it to?");
                i2 = Integer.parseInt(br.readLine());
            }
            catch (NumberFormatException e) {} // keep the old value on bad input
        }
        else
        {
            System.out.println("No change was made");
        }

        System.out.println("\n\n\n The starting value of the integer was: " + i);
        System.out.println(" The ending value of the integer was: " + i2);
    }
}
https://www.daniweb.com/programming/software-development/threads/12696/cannot-incorporate-equals-method-for-a-yes-or-no-answer
Towards a Grid File System Based on a Large-Scale BLOB Management Service

2 The Gfarm/BlobSeer file system design

Description of the interactions between Gfarm and BlobSeer

Figure 2 describes the interactions inside the Gfarm/BlobSeer system, both for remote access mode (left) and BlobSeer direct access mode (right). When opening a Gfarm file, the global path name is sent from the client to the metadata server. If no error occurs, the metadata server returns to the client a network file descriptor as an identifier of the requested Gfarm file. The client then initializes the file handle. On a write or read request, the client must first initialize the access node (if not done yet), after having authenticated itself with the gfsd daemon. Details are given below.

Fig. 2 The internal interactions inside the Gfarm/BlobSeer system: remote access (left) vs BlobSeer direct access mode (right).

Remote access mode. In this access mode, the internal interactions of Gfarm with BlobSeer only happen through the gfsd daemon. After receiving the network file descriptor from the client, the gfsd daemon inquires the metadata server about the corresponding Gfarm global ID and maps it to a BLOB id. After opening the BLOB for reading and/or writing, all subsequent read and write requests received by the gfsd daemon are mapped to BlobSeer's data access API.

BlobSeer direct access mode. In order for the client to directly access the BLOB in the BlobSeer direct access mode, there must be a way to send the ID of the desired BLOB from the gfsd daemon to the client. With this information, the client is further able to directly access BlobSeer without any help from the gfsd.
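The open/read path described above can be summarized with a toy model (my own illustration; none of the class or method names below belong to the real Gfarm or BlobSeer code bases). It only shows who talks to whom in the two access modes: the metadata server hands out descriptors and global IDs, the gfsd daemon owns the global-ID-to-BLOB mapping, and direct access mode simply ships that BLOB id to the client once:

```python
# Toy model of the Gfarm/BlobSeer open path -- an illustration only,
# NOT the actual API of either system.

class MetadataServer:
    """Maps global path names to global IDs and hands out descriptors."""
    def __init__(self):
        self.paths = {}        # global path -> Gfarm global ID
        self.next_fd = 0

    def open(self, path):
        gid = self.paths.setdefault(path, "gid-%d" % len(self.paths))
        self.next_fd += 1
        return self.next_fd, gid   # network file descriptor + global ID

class Gfsd:
    """Per-site daemon: maps Gfarm global IDs to BLOB ids."""
    def __init__(self, metadata):
        self.metadata = metadata
        self.blobs = {}        # global ID -> BLOB id

    def blob_id_for(self, gid):
        return self.blobs.setdefault(gid, "blob-%d" % len(self.blobs))

    def read(self, gid, offset, size):
        # Remote access mode: every request goes through the daemon,
        # which forwards it to BlobSeer's data access API (stubbed here).
        return ("read", self.blob_id_for(gid), offset, size)

def open_direct(metadata, gfsd, path):
    # Direct access mode: the client obtains the BLOB id once and can
    # then talk to BlobSeer without further help from the daemon.
    fd, gid = metadata.open(path)
    return gfsd.blob_id_for(gid)
```

The point of the contrast is visible in the stubs: in remote mode every `read` crosses the daemon, while `open_direct` pays the metadata cost once and leaves the data path daemon-free.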
Viet-Trung Tran, Gabriel Antoniu, Bogdan Nicolae, Luc Bougé, Osamu Tatebe

4 Experimental evaluation

To evaluate our Gfarm/BlobSeer prototype, we first compared its performance for read/write operations to that of the original Gfarm version. Then, as our main goal was to enhance Gfarm's data access performance under heavy concurrency, we evaluated the read and write throughput for Gfarm/BlobSeer in a setting where multiple clients concurrently access the same Gfarm file. Experiments have been performed on the Grid'5000 [2] testbed, an experimental grid infrastructure distributed on 9 sites around France. In each experiment, we used at most 157 nodes of the Rennes site of Grid'5000. Nodes are outfitted with 8 GB of RAM, Intel Xeon 5148 LV CPUs running at 2.3 GHz and interconnected by a Gigabit Ethernet network. Intra-cluster measured bandwidth is 117.5 MB/s for TCP sockets with MTU set at 1500 B.

Access throughput with no concurrency

First, we mounted our object-based file system on a node and used Gfarm's own benchmarks to measure file I/O bandwidth for sequential reading and writing. Basically, the Gfarm benchmark is configured to access a single file that contains 1 GB of data. The block size for each READ (respectively WRITE) operation varies from 512 bytes to 1,048,576 bytes. We used the following setting: for Gfarm, a metadata server and a single file system node. For BlobSeer, we used 10 nodes: a version manager, a metadata provider and a provider manager were deployed on a single node, and the 9 other nodes hosted data providers. We used a page size of 8 MB. We measured the read (respectively write) throughput for both access modes of Gfarm/BlobSeer: remote access mode and BlobSeer direct access mode. For comparison, we ran the same benchmark on a pure Gfarm file system, using the same setting for Gfarm alone. As shown on Figure 3, the average read throughput and write throughput for Gfarm alone are 65 MB/s and 20 MB/s respectively in our configuration.
The I/O throughput for Gfarm/BlobSeer in remote access mode was better than the pure Gfarm's throughput for the write operation, as in Gfarm/BlobSeer data is written in a remote RAM and then, asynchronously, on the corresponding local file system, whereas in the pure Gfarm the gfsd synchronously writes data on the local disk. As expected, the read throughput is worse than for the pure Gfarm, as going through the gfsd daemon induces an overhead. On the other hand, when using the BlobSeer direct access mode, Gfarm/BlobSeer clearly shows a significantly better performance, due to parallel accesses to the striped file: 75 MB/s for writing (i.e. 3.75 times faster than the measured Gfarm throughput) and 80 MB/s for reading.

Fig. 3 Sequential write (left) and read (right).

Access throughput under concurrency

In a second scenario, we progressively increase the number of concurrent clients which access disjoint parts (1 GB for each) of a file totaling 10 GB, from 1 to 8 clients. The same configuration is used for Gfarm/BlobSeer, except for the number of data providers in BlobSeer, set to 24. Figure 4(a) indicates that the performance of the pure Gfarm file system decreases significantly for concurrent accesses: the I/O throughput for each client drops down twice each time the number of concurrent clients is doubled. This is due to a bottleneck created at the level of the gfsd daemon, as its local file system basically serializes all accesses. In contrast, a high bandwidth is maintained when Gfarm relies on BlobSeer, even when the number of concurrent clients increases, as Gfarm leverages BlobSeer's design optimized for heavy concurrency. Finally, as a scalability test, we realized a third experiment. We ran our Gfarm/BlobSeer prototype using a 154 node configuration for BlobSeer, including 64 data providers, 24 metadata servers and up to 64 clients.
In the first phase, a single client appends data to the BLOB until the BLOB grows to 64 GB. Then, we increase the number of concurrent clients to 8, 16, 32, and 64. Each client writes 1 GB to that file at a disjoint part. The average throughput obtained (Figure 4(b)) slightly drops (as expected), but is still sustained at an acceptable level. Note that, in this experiment, the write throughput is slightly higher than in the previous experiments, since we directly used Gfarm's library API, avoiding the overhead due to the use of Gfarm's FUSE interface.

Fig. 4 Access concurrency: (a) Gfarm alone vs Gfarm/BlobSeer; (b) Gfarm/BlobSeer under heavy access concurrency.

5 Conclusion

In this paper we address the problem of managing large data volumes at a very large scale, with a specific focus on applications which manipulate huge data, physically distributed, but logically shared and accessed at a fine grain under heavy concurrency. Using a grid file system seems the most appropriate solution for this context, as it provides transparent access through a globally shared namespace. This greatly simplifies data management by applications, which no longer need to explicitly locate and transfer data across various sites. In this context, we explore how a grid file system could be built in order to address the specific requirements mentioned above: huge data, highly distributed, shared and accessed under heavy concurrency. Our approach relies on establishing a cooperation between the Gfarm grid file system and BlobSeer, a distributed object management system specifically designed for huge data management under heavy concurrency. We define and implement an integrated architecture, and we evaluate it through a series of preliminary experiments conducted on the Grid'5000 testbed.
The resulting BLOB-based grid file system exhibits scalable file access performance in scenarios where huge files are subject to massive, concurrent, fine-grain accesses. We are currently working on introducing versioning support into our integrated, object-based grid file system. Enabling such a feature in a global file system can help applications not only to tolerate failures by providing support for roll-back, but will also allow them to access different versions of the same file, while new versions are being created. To this purpose, we are currently defining an extension of Gfarm's API, in order to allow the users to access a specific file version. We are also defining a set of appropriate ioctl commands: accessing a desired file version will then be completely done via the POSIX file system API. In the near future, we also plan to extend our experiments to more complex, multi-cluster grid configurations. Additional directions will concern data persistence and consistency semantics. Finally, we intend to perform experiments to compare our prototype to other object-based file systems with respect to performance, scalability and usability.

References

1. The Grid Security Infrastructure Working Group. security/gsi/.
2. The Grid'5000 Project.
3. Bill Allcock, Joe Bester, John Bresnahan, Ann L. Chervenak, Ian Foster, Carl Kesselman, Sam Meder, Veronika Nefedova, Darcy Quesnel, and Steven Tuecke. Data management and transfer in high-performance computational grid environments. Parallel Comput., 28(5):749–771, 2002.
4. Alessandro Bassi, Micah Beck, Graham Fagg, Terry Moore, James S. Plank, Martin Swany, and Rich Wolski. The Internet Backplane Protocol: A study in resource sharing. In Proc. 2nd IEEE/ACM Intl. Symp. on Cluster Computing and the Grid (CCGRID '02), page 194, Washington, DC, USA, 2002. IEEE Computer Society.
5. Philip H. Carns, Walter B. Ligon, Robert B. Ross, and Rajeev Thakur. PVFS: A parallel file system for Linux clusters. In Proceedings of the 4th Annual Linux Showcase and Conference, pages 317–327, Atlanta, GA, 2000. USENIX Association.
6. Ananth Devulapalli, Dennis Dalessandro, Pete Wyckoff, Nawab Ali, and P. Sadayappan. Integrating parallel file systems with object-based storage devices. In SC '07: Proceedings of the 2007 ACM/IEEE Conference on Supercomputing, pages 1–10, New York, NY, USA, 2007. ACM.
7. M. Factor, K. Meth, D. Naor, O. Rodeh, and J. Satran. Object storage: the future building block for storage systems. In Local to Global Data Interoperability - Challenges and Technologies, 2005, pages 119–123, 2005.
8. Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. The Google file system. In SOSP '03: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, pages 29–43, New York, NY, USA, 2003. ACM Press.
9. HDFS. The Hadoop Distributed File System. common/docs/r0.20.1/hdfs_design.html.
10. Bogdan Nicolae, Gabriel Antoniu, and Luc Bougé. Distributed management of massive data: an efficient fine grain data access scheme. In International Workshop on High-Performance Data Management in Grid Environment (HPDGrid 2008), Toulouse, 2008. Held in conjunction with VECPAR'08. Electronic proceedings.
11. Bogdan Nicolae, Gabriel Antoniu, and Luc Bougé. BlobSeer: How to enable efficient versioning for large object storage under heavy access concurrency. In EDBT '09: 2nd International Workshop on Data Management in P2P Systems (DaMaP '09), St Petersburg, Russia, 2009.
12. Bogdan Nicolae, Gabriel Antoniu, and Luc Bougé. Enabling high data throughput in desktop grids through decentralized data and metadata management: The BlobSeer approach. In Proceedings of the 15th Euro-Par Conference on Parallel Processing (Euro-Par '09), Lect. Notes in Comp. Science, Delft, The Netherlands, 2009. Springer-Verlag. To appear.
13. P. Schwan. Lustre: Building a file system for 1000-node clusters.
In Proceedings of the Linux Symposium, 2003.
14. Osamu Tatebe and Satoshi Sekiguchi. Gfarm v2: A grid file system that supports high-performance distributed and parallel data computing. In Proceedings of the 2004 Computing in High Energy and Nuclear Physics, 2004.
15. Sage A. Weil, Scott A. Brandt, Ethan L. Miller, Darrell D. E. Long, and Carlos Maltzahn. Ceph: a scalable, high-performance distributed file system. In OSDI '06: Proceedings of the 7th Symposium on Operating Systems Design and Implementation, pages 307–320, Berkeley, CA, USA, 2006. USENIX Association.
16. Brian S. White, Michael Walker, Marty Humphrey, and Andrew S. Grimshaw. LegionFS: a secure and scalable file system supporting cross-domain high-performance applications. In Proc. 2001 ACM/IEEE Conf. on Supercomputing (SC '01), pages 59–59, New York, NY, USA, 2001. ACM Press.
17. FUSE.

Improving the Dependability of Grids via Short-Term Failure Predictions

Artur Andrzejak, Demetrios Zeinalipour-Yazti and Marios D. Dikaiakos

Abstract. Computational Grids like EGEE offer sufficient capacity for even the most challenging large-scale computational experiments, thus becoming an indispensable tool for researchers in various fields. However, the utility of these infrastructures is severely hampered by their notoriously low reliability: a recent nine-month study found that only 48% of jobs submitted in South-Eastern Europe completed successfully. We attack this problem by means of proactive failure detection. Specifically, we predict site failures on a short-term time scale by deploying machine learning algorithms to discover relationships between site performance variables and subsequent failures. Such predictions can be used by Resource Brokers for deciding where to submit new jobs, and help operators to take preventive measures.
Our experimental evaluation on a 30-day trace from 197 EGEE queues shows that the accuracy of results is highly dependent on the selected queue, the type of failure, the preprocessing and the choice of input variables.

Artur Andrzejak, Zuse Institute Berlin (ZIB), Takustraße 7, 14195 Berlin, Germany, e-mail: andrzejak@zib.de
Demetrios Zeinalipour-Yazti and Marios D. Dikaiakos, Department of Computer Science, University of Cyprus, CY-1678, Nicosia, Cyprus, e-mail: {dzeina,mdd}@cs.ucy.ac.cy

1 Introduction

Detecting and managing failures is an important step towards the goal of a dependable and reliable Grid. Currently, this is an extremely complex task that relies on over-provisioning of resources, ad-hoc monitoring and user intervention. Adapting ideas from other contexts such as cluster computing [11], Internet services [9, 10] and software systems [12] is intrinsically difficult due to the unique characteristics of Grid environments. Firstly, a Grid system is not administered centrally; thus it is hard to access the remote sites in order to monitor failures. Moreover, failure feedback mechanisms cannot be encapsulated in the application logic of each individual Grid software, as the Grid is an amalgam of pre-existing software libraries, services and components with no centralized control. Secondly, these systems are extremely large; thus, it is difficult to acquire and analyze failure feedback at a fine granularity. Lastly, identifying the overall state of the system and excluding the sites with the highest potential for causing failures from the job scheduling process can be much more efficient than identifying many individual failures.

In this work, we define the concept of Grid Tomography¹ in order to discover relationships between Grid site performance variables and subsequent failures. In particular, assuming a set of monitoring sources (system statistics, representative low-level measurements, results of availability tests, etc.) that characterize Grid sites, we predict with high accuracy site failures on a short-term time scale by deploying various off-the-shelf machine learning algorithms. Such predictions can be used for deciding where to submit new jobs and help operators to take preventive measures.

Through this study we manage to answer several questions that have, to our knowledge, not been addressed before. Particularly, we address questions such as: "How many monitoring sources are necessary to yield a high accuracy?"; "Which of them provide the highest predictive information?"; and "How accurately can we predict the failure of a given Grid site X minutes ahead of time?" Our findings support the argument that Grid tomography data is indeed an indispensable resource for failure prediction and management. Our experimental evaluation on a 30-day trace from 197 EGEE queues shows that the accuracy of results is highly dependent on the selected queue, the type of failure, the preprocessing and the choice of input variables.

This paper builds upon previous work in [20], in which we presented the preliminary design of the FailRank architecture. In FailRank, monitoring data is continuously coalesced into a representative array of numeric vectors, the FailShot Matrix (FSM). FSM is then continuously ranked in order to identify the K sites with the highest potential to feature some failure. This allows a Resource Broker to automatically exclude the respective sites from the job scheduling process. FailRank is an architecture for on-line failure ranking using linear models, while this work investigates the problem of predicting failures by deploying more sophisticated, in general non-linear, classification algorithms from the domain of machine learning.

In summary, this paper makes the following contributions:

• We propose techniques to predict site failures on a short-term time scale by deploying machine learning algorithms to discover relationships between site performance variables and subsequent failures;
• We analyze which sources of monitoring data have the highest predictive information and determine the influence of preprocessing and prediction parameters on the accuracy of results;
• We experimentally validate the efficiency of our propositions with an extensive experimental study that utilizes a 30-day trace of Grid tomography data that we acquired from the EGEE infrastructure.

¹ Grid Tomography refers in our context to the process of capturing the state of a grid system by sections, i.e., individual state attributes (tomos is the Greek word for section).

The remainder of the paper is organized as follows: Section 2 formalizes our discussion by introducing the terminology. It also describes the data utilized in this paper, its preprocessing, and the prediction algorithms. Section 3 presents an extensive experimental evaluation of our findings obtained by using machine learning techniques. Finally, Section 4 concludes the paper.

2 Analyzing Grid Tomography Data

This section starts out by overviewing the anatomy of the EGEE Grid infrastructure and introducing our notation and terminology. We then discuss the tomography data utilized in our study, and continue with the discussion of pre-processing and modeling steps used in the prediction process.

2.1 The Anatomy of a Grid

A Grid interconnects a number of remote clusters, or sites. Each site features heterogeneous resources (hardware and software) and the sites are interconnected over an open network such as the Internet. They contribute different capabilities and capacities to the Grid infrastructure. In particular, each site features one or more Worker Nodes, which are usually rack-mounted PCs. The Computing Element runs various services responsible for authenticating users, accepting jobs, performing resource management and job scheduling. Additionally, each site might feature a Local Storage site, on which temporary computation results can reside, and local software libraries, that can be utilized by executing processes. For instance, a computation site supporting mathematical operations might feature locally the Linear Algebra PACKage (LAPACK). The Grid middleware is the component that glues together local resources and services and exposes high-level programming and communication functionalities to application programmers and end-users. EGEE uses the gLite middleware [6], while NSF's TeraGrid is based on the Globus Toolkit [5].

2.2 The FailBase repository

Our study uses data from our FailBase Repository, which characterizes the EGEE Grid in respect to failures between 16/3/2007 and 17/4/2007 [14]. FailBase paves the way for the community to systematically uncover new, previously unknown patterns and rules between the multitudes of parameters that can contribute to failures in a Grid environment. This database maintains information for 2,565 Computing Element (CE) queues, which are essentially sites accepting computing jobs. For our study we use only a subset of queues for which we had the largest number of available types of monitoring data. For each of them the data can be thought of as a timeseries, i.e., a sequence of pairs (timestamp, value-vector). Each value-vector consists of 40 values called attributes, which correspond to various sensors and functional tests. That comprises the FailShot Matrix, which encapsulates the Grid failure values for each Grid site for a particular timestamp.

2.3 Types of monitoring data

The attributes are subdivided into four groups A, B, C and D depending on their source, as follows [13]:

A. Information Index Queries (BDII): These 11 attributes have been derived from LDAP queries on the Information Index hosted on bdii101.grid.ucy.ac.cy. This yielded metrics such as the number of free CPUs and the maximum number of running and waiting jobs for each respective CE-queue.

B. Grid Statistics (GStat): The raw basis for this group is data downloaded from the monitoring web site of Academia Sinica [7]. The obtained 13 attributes contain information such as the geographical region of a Resource Center, the available storage space on the Storage Element used by a particular CE, and results from various tests concerning BDII hosts.

C. Network Statistics (SmokePing): The two attributes in this group have been derived from a snapshot of the gPing database from ICS-FORTH (Greece). The database contains network monitoring data for all the EGEE sites. From this collection we measured the average round-trip-time (RTT) and the packet loss rate relevant to each South East Europe CE.

D. Service Availability Monitoring (SAM): These 14 attributes contain information such as the version number of the middleware running on the CE, results of various replica manager tests and results from test job submissions. They have been obtained by downloading raw html from the CE sites and processing them with scripts [4].

The above attributes have different significance when indicating a site failure. As group D contains functional and job submission tests, attributes in this group are particularly useful in this respect. Following the results in Section 3.2.1 we regard two of these sam attributes, namely sam-js and sam-rgma, as failure indicators. In other words, in this work we regard certain values of these two attributes as queue failures, and focus on predicting their values.

2.4 Preprocessing

The preprocessing of the above data involves several initial steps such as masking missing values, (time-based) resampling, discretization, and others (these steps are not a part of this study, see [13, 14]). It is worth mentioning that data in each group has been collected with different frequencies (A, C: once a minute, B: every 10 minutes, D: every 30-60 minutes) and resampled to obtain a homogeneous 1-minute sampling period. For the purpose of this study we have further simplified the data as follows: all missing or outdated values have been set to −1, and we did not differentiate the severity of errors. Consequently, in our attribute data we use −1 for "invalid" values, 0 to indicate normal state, and 1 to indicate a faulty state. We call such a modified vector of (raw and derived) values a sample.

In the last step of the preprocessing, a sample corresponding to time T is assigned a (true) label indicating a future failure as follows. Having decided which of the sam attributes S represents a failure indicator, we set this label to 1 if any of the values of S in the interval [T + 1, T + p] is 1; otherwise the label of the sample is set to 0. The parameter p is called the lead time. In other words, the label indicates a future failure if the sam attribute S takes a fault-indicating value at any time during the subsequent p minutes.

2.5 Modeling methodology

Our prediction methods are model-based. A model in this sense is a function mapping a set of raw and/or preprocessed sensor values to an output, in our case a binary value indicating whether the queue is expected to be healthy (0) or not (1) in a specified future time interval. While such models can take the form of a custom formula or an algorithm created by an expert, we use in this work a measurement-based model [17]. In this approach, models are extrapolated automatically from historical relationships between sensor values and the simulated model output (computed from offline data). One of the most popular and powerful classes of measurement-based models are those based on classification algorithms, or classifiers [19, 3]. They are usually most appropriate if outputs are discrete [17]. Moreover, they allow the incorporation of multiple inputs or even functions of data suitable to expose its information content in a better way than the raw data.

A classifier is a function which maps a d-dimensional vector of real or discrete values called attributes (or features) to a discrete value called the class label. In the context of this paper each such vector is a sample and a class label corresponds to the true label as defined in Section 2.4. Note that for an error-free classifier the values of class labels and true labels would be identical for each sample. Prior to its usage as a predictive model, a classifier is trained on a set of pairs (sample, true label). In our case samples have consecutive timestamps. We call these pairs the training data and denote by D the maximum amount of samples used to this purpose.

Fig. 1 Recall and Precision of each sam attribute (averaged recall/precision per attribute; attribute names shown without the prefix "sam-").

A trained classifier is used as a predictive model by letting it compute the class label values for a sequence of samples following the training data. We call these samples test data. By comparing the values of the computed class labels against the corresponding true labels we can estimate the accuracy of the classifier. We also perform model updates after all samples from the test data have been tested.
Improving the Dependability of Grids via Short-Term Failure Predictions 25

2.4 Preprocessing

The preprocessing of the above data involves several initial steps such as masking missing values, (time-based) resampling, discretization, and others (these steps are not a part of this study; see [13, 14]). It is worth mentioning that data in each group has been collected at different frequencies (A, C: once a minute; B: every 10 minutes; D: every 30-60 minutes) and resampled to obtain a homogeneous 1-minute sampling period. For the purpose of this study we have further simplified the data as follows: all missing or outdated values have been set to −1, and we did not differentiate between severities of errors. Consequently, in our attribute data we use −1 for "invalid" values, 0 to indicate a normal state, and 1 to indicate a faulty state. We call such a modified vector of (raw and derived) values a sample.

In the last step of the preprocessing, a sample corresponding to time T is assigned a (true) label indicating a future failure as follows. Having decided which of the sam attributes S represents a failure indicator, we set this label to 1 if any of the values of S in the interval [T + 1, T + p] is 1; otherwise the label of the sample is set to 0. The parameter p is called the lead time. In other words, the label indicates a future failure if the sam attribute S takes a fault-indicating value at any time during the subsequent p minutes.

2.5 Modeling methodology

Our prediction methods are model-based. A model in this sense is a function mapping a set of raw and/or preprocessed sensor values to an output, in our case a binary value indicating whether the queue is expected to be healthy (0) or not (1) in a specified future time interval. While such models can take the form of a custom formula or an algorithm created by an expert, in this work we use a measurement-based model [17].
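The labeling rule of Section 2.4 is compact enough to state directly in code. The following is a sketch; the attribute series is invented for illustration.

```python
def label_samples(sam_values, p):
    """Assign each sample at time T the label 1 if the sam attribute
    takes the value 1 at any time in [T + 1, T + p] (lead time p),
    and the label 0 otherwise."""
    labels = []
    for t in range(len(sam_values)):
        window = sam_values[t + 1 : t + 1 + p]
        labels.append(1 if 1 in window else 0)
    return labels

# sam attribute over 8 minutes: a fault occurs at minute 5
series = [0, 0, 0, 0, 0, 1, 0, 0]
print(label_samples(series, p=3))  # [0, 0, 1, 1, 1, 0, 0, 0]
```

With a lead time of p = 3, exactly the samples at minutes 2, 3 and 4 are labeled 1, because the fault at minute 5 lies within their 3-minute prediction window.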
In the measurement-based approach, models are extrapolated automatically from historical relationships between sensor values and the simulated model output (computed from offline data). One of the most popular and powerful classes of measurement-based models is based on classification algorithms, or classifiers [19, 3]. They are usually most appropriate when the outputs are discrete [17]. Moreover, they allow the incorporation of multiple inputs, or even functions of the data that expose its information content better than the raw data does. Both conditions apply in our setting.

A classifier is a function which maps a d-dimensional vector of real or discrete values called attributes (or features) to a discrete value called a class label. In the context of this paper each such vector is a sample, and a class label corresponds to the true label as defined in Section 2.4. Note that for an error-free classifier the values of the class labels and the true labels would be identical for each sample. Prior to its usage as a predictive model, a classifier is trained on a set of pairs (sample, true label). In our case samples have consecutive timestamps. We call these pairs the training data and denote by D the maximum number of samples used for this purpose.

Fig. 1 Recall and Precision of each sam attribute (x-axis: the sam attributes js, bi, ca, cr, cp, del, rep, gfal, csh, rgma, rgmasc, ver, swdir, votag, shown without the prefix "sam-")

A trained classifier is used as a predictive model by letting it compute the class label values for a sequence of samples following the training data. We call these samples test data. By comparing the values of the computed class labels against the corresponding true labels we can estimate the accuracy of the classifier. We also perform model updates after a fixed number of samples from the test data have been tested.
This number, expressed in minutes or in a number of samples, is called the update time. In this work we have tested several alternative classifiers such as C4.5, LS, Stumps, AdaBoost and Naive Bayes. The interested reader is referred to [3, 16] for a full description of these algorithms.

3 Experimental Results

Each prediction run (also called an experiment) has a controlled set of preprocessing parameters. If not stated otherwise, the following default values of these parameters are used. The size of the training data D is set to 15 days, or 21600 samples, while the model update time is fixed to 10 days (14400 samples). We use a lead time of 15 minutes. The input data groups are A and D, i.e., each sample consists of 11 + 14 attributes from both groups. On this data we performed attribute selection via the backward branch-and-bound algorithm [16] to find the 3 best attributes used as the classifier input. As the classification algorithm we deployed the C4.5 decision tree algorithm from [15] with the default parameter values.
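The train/predict/update cycle described in Sections 2.5 and 3 can be sketched schematically. In this sketch a toy majority-label "classifier" stands in for C4.5, and all numbers are invented; it only illustrates the sliding train-then-test structure of an experiment.

```python
def evaluate(labels, train_size, update_time):
    """Walk a label sequence: 'train' on the last train_size samples,
    predict the next update_time samples, then retrain (a model update)."""
    correct = total = 0
    start = train_size
    while start < len(labels):
        window = labels[start - train_size : start]
        # toy stand-in for C4.5: predict the majority label of the window
        prediction = 1 if sum(window) * 2 > len(window) else 0
        for true_label in labels[start : start + update_time]:
            correct += prediction == true_label
            total += 1
        start += update_time
    return correct / total

# 30 healthy minutes followed by 10 faulty ones
accuracy = evaluate([0] * 30 + [1] * 10, train_size=10, update_time=5)
print(round(accuracy, 2))  # the toy model misses the regime change: 0.67
```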
Wraps a QQuickWidget to display QML code. More...

#include <qgsqmlwidgetwrapper.h>

Wraps a QQuickWidget to display QML code. Definition at line 29 of file qgsqmlwidgetwrapper.h.

Create a qml widget wrapper. Definition at line 26 of file qgsqml 37 of file qgsqmlwidgetwrapper.cpp.

This method should initialize the editor widget with runtime data. Fill your comboboxes here. Reimplemented from QgsWidgetWrapper. Definition at line 42 of file qgsqmlwidgetwrapper.cpp.

Clears the content and makes new initialization. Definition at line 62 of file qgsqmlwidgetwrapper.cpp.

Definition at line 102 of file qgsqmlwidgetwrapper.cpp.

Writes the qmlCode into a temporary file. Definition at line 72 of file qgsqml 32 of file qgsqmlwidgetwrapper.cpp.
Build a form validation engine using custom React Hooks, from scratch, without having to learn a single form library. Read on to learn how!

In part one, Simplify Forms using Custom React Hooks, we abstracted away all of the form event handler logic into a custom React Hook. As a result, the code in our form components was reduced by a significant amount. After publishing last week's tutorial, I had a number of readers ask how I'd use React Hooks to solve two common problems related to forms:

- Initializing the form values
- Validation

Therefore, I'll be answering these questions in this tutorial. So, let's begin learning how to initialize form values and handle form validation using React Hooks!

What We're Building

We'll be using the same project from part one. If you haven't yet, go through the first tutorial on how to Simplify Forms with Custom React Hooks. Or, you can grab the full code and continue with this tutorial.

There are plenty of form libraries available for React. They do a great job of simplifying your code. However, by using a library, you're adding to the (already long) list of dependencies your project relies on. Libraries are also opinionated. You have to learn how that library works, as well as its limitations. The goal of this tutorial is to walk you through writing your own custom React Hook that handles form validation for you. We're going to start with initialization.

Initializing the Form Values

Actually, handling form initialization doesn't require our custom React Hook, useForm, but it's still an important part of the validation process. Start by opening up the original project in your text editor, open Form.js, and take a look at the HTML that's being returned towards the bottom of the component, specifically the email input field:

<input
  className="input"
  type="email"
  name="email"
  onChange={handleChange}
  value={values.email}
  required
/>

Let's take a closer look at the value attribute.
We pass in the email key returned from the values object that's stored in the useForm custom Hook. In the React world, because we're handling the input's value ourselves, this means our email input field is a controlled input.

Well, not exactly. The email input does become a controlled input, eventually, when we pass a real value to it. When the Form component first renders, it initializes the useForm custom Hook:

const [values, setValues] = useState({});

We're initializing the values state to an empty object. As a result, when our Form component gets values.email, it doesn't find it inside values, and it is therefore undefined. So, our email input field starts off with a value of undefined, but when we type a value inside of the input, useForm finally sets the value of email inside of its state to be a non-undefined value. That's not great. What we're doing is switching from an uncontrolled input to a controlled input. We get a big 👎 from React for doing that. Bad developer, bad!

|| to the Rescue

We've all seen, and perhaps even used, the operator above, '||', inside of a conditional statement. That's right, it's OR. Therefore, we're going to use the OR operator to set the default value of the email input, like so:

<input ... value={values.email || ''} ... />

I sometimes find it helpful to explain code like this in plain English. In other words, we initialize the default value of the input to an empty string. I want to add that we're not limited to using an empty string. If it's a number input, we'd use 0, for example.

Setting Up Form Validation Using React Hooks

Now that we've tackled initializing the form values, let's move on to extending our custom React Hook to handle form validation. We need to do several things in order to validate a form:

- Define validation rules for the form
- Store any errors in a state variable
- Prevent the form from submitting if any errors exist

Defining the Validation Rules

Start by creating a new file for us to define rules for our email and password fields.
Each form will have a list of rules that are specific to its input fields, so name the new file something specific, like LoginFormValidationRules.js. Add a single function called validate which takes one parameter, values. Export it as the default export, and initialize a new object inside of the validate function called errors. We'll return the errors object at the end of the function so we can enumerate over the errors inside of the useForm custom Hook.

export default function validate(values) {
  let errors = {};
  return errors;
};

Let's add a validation rule for the email input field. The first rule, which is likely going to apply to every required field in your form, will be to check that the value actually exists.

export default function validate(values) {
  let errors = {};
  if (!values.email) {
    errors.email = 'Email address is required';
  }
  return errors;
};

Because we're building an object of errors, we actually check if the email value does not exist, and if so, we add a new key to the errors object called email. For an email to be correct, however, it has to be written in a specific way, usually something@something.com. How can we easily check that the email address is typed in the correct format? I'm going to say a phrase that makes even the most hardened developer shudder with dread, but please, hear me out. RegEx. 😱 If you're like me, you won't ever learn how to write a regular expression, and will instead search for one online like a normal developer. A great site is RegExLib.com, which has thousands of useful examples. After the end of the first if clause, add an else if clause that tests the value of email against a regular expression.

export default function validate(values) {
  let errors = {};
  if (!values.email) {
    errors.email = 'Email address is required';
  } else if (!/\S+@\S+\.\S+/.test(values.email)) {
    errors.email = 'Email address is invalid';
  }
  return errors;
};

Great!
We've now defined a list of form validation rules that can be plugged into any number of React Hooks, so let's test them out.

Using Form Validation Rules inside of React Hooks

Jump over to the Form component, inside Form.js. We initialize the useForm custom React Hook at the top of the component body. Let's pass our validate function to the useForm Hook as the second parameter:

...
import validate from './LoginFormValidationRules';

const Form = () => {
  const {
    values,
    handleChange,
    handleSubmit,
  } = useForm(login, validate);
...

Next, head over to our custom React Hook, at useForm.js. Add the new validate parameter inside of the useForm function's parentheses:

...
const useForm = (callback, validate) => {
...

Remember, validate takes an object, values, and returns another object, errors. For the Form component to display a list of errors, our useForm custom React Hook needs to store them in its state. Therefore, let's declare a new useState Hook under values, called errors:

const [errors, setErrors] = useState({});

Finally, when a user submits the form, we first want to check that there are no issues with any of their data before submitting it. Change the handleSubmit function to call validate instead of callback, passing in the values stored in the Hook's state.

const handleSubmit = (event) => {
  if (event) event.preventDefault();
  setErrors(validate(values));
};

Detecting Change in Errors State

We're setting the errors state to the result of the validate function, but we still need a way to react when that state changes. Enter the useEffect Hook. useEffect replaces the componentDidMount and componentDidUpdate lifecycle methods in React Class components.
Furthermore, by passing an array with a value inside as the second parameter to useEffect, we can tell that specific effect to run only when the value in the array changes. Let's add a useEffect Hook that listens to any changes to errors, checks the length of the object, and calls the callback function if the errors object is empty:

useEffect(() => {
  if (Object.keys(errors).length === 0) {
    callback();
  }
}, [errors]);

It took me a while to wrap my head around the naming of the useEffect Hook, but if you think about it like: "as a result (side effect) of [value] changing, do this", it makes much more sense.

Preventing the Form from Submitting on Render

Before we move on to the final section, hooking up the form HTML to the errors, there's a problem with the login function inside our Form component. It's being called when the page loads. This is because our useEffect Hook above is actually being run once when the component renders, because the value of errors is initialized to an empty object. Let's fix this by adding one more state variable inside of our custom React Hook, called isSubmitting:

...
const [values, setValues] = useState({});
const [errors, setErrors] = useState({});
const [isSubmitting, setIsSubmitting] = useState(false);
...

Then call setIsSubmitting(true) inside handleSubmit.
const handleSubmit = (event) => {
  if (event) event.preventDefault();
  setIsSubmitting(true);
  setErrors(validate(values));
};

Check that isSubmitting is true before calling the callback inside the useEffect Hook:

useEffect(() => {
  if (Object.keys(errors).length === 0 && isSubmitting) {
    callback();
  }
}, [errors]);

Finally, return the errors object at the bottom of the Hook:

return {
  handleChange,
  handleSubmit,
  values,
  errors,
}

Your finished useForm Hook should now look like this:

import { useState, useEffect } from 'react';

const useForm = (callback, validate) => {
  const [values, setValues] = useState({});
  const [errors, setErrors] = useState({});
  const [isSubmitting, setIsSubmitting] = useState(false);

  useEffect(() => {
    if (Object.keys(errors).length === 0 && isSubmitting) {
      callback();
    }
  }, [errors]);

  const handleSubmit = (event) => {
    if (event) event.preventDefault();
    setErrors(validate(values));
    setIsSubmitting(true);
  };

  const handleChange = (event) => {
    event.persist();
    setValues(values => ({ ...values, [event.target.name]: event.target.value }));
  };

  return {
    handleChange,
    handleSubmit,
    values,
    errors,
  }
};

export default useForm;

Displaying Errors in the Form Component

Now that our custom React Hook is saving a list of errors, let's display them for our users to see. This is the final step to adding proper form validation inside of any custom React Hooks. First, make sure to add errors to the list of variables and functions we're getting from useForm:

const {
  ...
  errors,
  ...
} = useForm(login, validate);

Bulma (the CSS framework we're using) has some excellent form input classes that highlight inputs red.
Let's make use of that class by checking if the errors object has a key that matches the input name, and if so, adding the is-danger class to the input's className:

<div className="control">
  <input
    className={`input ${errors.email && 'is-danger'}`}
    type="email"
    name="email"
    onChange={handleChange}
    value={values.email || ''}
    required
  />
</div>

Finally, display the actual error message by adding an inline conditional below the input element that again checks if the errors object has a key matching this input, and if so, displays the error message in red:

<div className="control">
  <input
    className={`input ${errors.email && 'is-danger'}`}
    type="email"
    name="email"
    onChange={handleChange}
    value={values.email || ''}
    required
  />
  {errors.email && (
    <p className="help is-danger">{errors.email}</p>
  )}
</div>

Save everything, jump on over to your app running in your browser (npm start in the project if you didn't do so already) and take your new form for a test run!

Adding Additional Validation Rules

🤔 "Where's the password validation?", you might be thinking. I'm going to leave that part for you to add. Homework, as it were. If you want the solution, you can check out the entire code base for this tutorial.

Wrapping Up

So there you have it, form validation and initialization using custom React Hooks. As always, if you enjoyed the tutorial, please leave a message in the discussion below. If you have any issues or questions, leave a comment below or hit me up on Twitter. It'd be my pleasure to help. Peace!

Additional Reading

💻 More React Tutorials

Thanks for this great post!

Hey James, thanks for this article! I'm wondering how you'd tackle this use case – I have a form that I use for both creating and editing. When it first loads it's in "create mode", but if I select an item elsewhere on the page, I want to reinitialise the fields with that item's values ("edit mode"). Assume that the new initial values are being passed in as props to the form component. How would you go about reinitialising in this way using hooks?

Thank you very much sir, you really helped me!
Thank you for this post, I love this approach 😉 What I don't like, though, is initialising the values from the value attribute by using the syntax you suggested: value={values.email || ''}. I just want to have value={values.email}. So I wanted to have some initialValues there, and I forked it and changed it so I now pass initialValues as the first parameter into useForm. I also made the validate function optional. See the fork here: What do you think?

nice article 🙂 check out my hook for form validation.

I made useForm initialize using values from a prop. One thing to note is to have a useEffect that updates the state of values each time the prop changes, else it uses the old value and introduces a bug.

useEffect(() => {
  setValues(initialValues)
}, [initialValues])

How is this different from useReducer()? The only thing I found is that validation should be induced separately if we are using the useReducer hook. @JamesKing, any suggestions?

Good example! I have one question. By this structure you are assuming that your form is invalid initially, and that's ok. On the other side, you are triggering validation on submit only, right? Would it be better to use, instead or additionally, the input value change or onBlur event for the validation check, so that the user could immediately see that he had entered a correct value and fixed the invalid input issue?

There's a slight bug within useForm.js – you're not setting the isSubmitting state back to false after calling the callback function. This leads to callback spillover in situations where your callback function is some more advanced logic to be performed, such as a login call to a GraphQL API. Had I not fixed the state setting, my form would keep calling the API with every single change in my inputs (I also use blur events).

I followed a different approach to handle the controlled-to-uncontrolled issue: I send an initialFormFields object to useForm. As far as the form will initialize empty (as in a login), it's exactly the same.
But, whether I solve the controlled-to-uncontrolled problem with my strategy or yours, if you want to initialize the form with previous values depending on a component's props, it will fail.

Component consuming useForm:

const ComponentConsumingUseForm = ({ initialFormValuesAsProps }) => {
  const initialFormFields = {
    name: initialFormValuesAsProps.name,
    lastname: initialFormValuesAsProps.lastname
  };
  const { values, errors, handleChange, handleSubmit } = useForm(
    () => {
      // Stuff
    },
    initialFormFields,
    validate
  );

useForm:

const useForm = (callback, initialFormFields = {}, validate) => {
  const [values, setValues] = useState(initialFormFields);
  ...
}

values.name and values.lastname will be set as undefined, then to their real values, but the UI won't catch up, rendering an empty input. Is there a solution for that? Thank you so much in advance.

How can I make it based on JSON data?

Great two-part article, very well explained!

Nice post! I recommend using the classnames or clsx libraries for dynamic className settings.

Thanks James! I have extended the hook to support onChange validation. Instead of passing the validate function, I am passing a list of validators in the form of {key: fn}. That way in onChange, we can see if there is a validator for a particular field and run validation then. The onSubmit simply runs all validators.
const useForm = (callback, validators) => {
  ...
  const handleChange = event => {
    const { id, value } = event.target
    event.persist()
    const validator = validators[id]
    if (validator) {
      setErrors(errors => ({ ...errors, [id]: validator(value) }))
    }
    setValues(values => ({ ...values, [id]: value }))
  }

  const validate = values => {
    let errors = {}
    Object.keys(validators).forEach(id => {
      const validator = validators[id]
      if (validator) {
        const error = validator(values[id])
        if (error) {
          errors[id] = error
        }
      }
    })
    return errors
  }

useEffect(() => {
  if (isSubmitting && Object.keys(err).length === 0) {
    callback();
  }
}, [err]);

In the above code, the callback() function is called at first load, though isSubmitting is false at first.

Thanks, James, for the effort you put in to bring awesome content each time. Many are remote learners like me. God bless you.

This is an elegantly simple solution and a great article as well. Thank you for sharing this with us!

Great article! However, I have found one issue. If you use the react-hooks plugin for eslint (recommended by the react team as best practice), you'll find that your useEffect hook requires isSubmitting and callback to be included in your dependency array. This causes the form to be submitted twice for me (which makes sense: once when isSubmitting is changed to true, and again when setErrors comes back with an empty object after validation). Without further changes to your approach, this can only be remedied by disabling eslint for the useEffect, something I don't want to do. Any thoughts on how you'd go about fixing this?

Thank you, sir. I have learned a lot from your post, but I got this warning: "React Hook useEffect has missing dependencies: 'callback' and 'isSubmitting'. Either include them or remove the dependency array. If 'callback' changes too often, find the parent component that defines it and wrap that definition in useCallback".
I saw your comment about adding a conditional inside of useEffect(), but I still got the same error.

Hi there, many thanks for the nice article! I have added two more lines to your useForm as I wanted to setErrors when the input is changing 😉

```
import { useState, useEffect } from 'react';

export const useForm = (callback, validate) => {
  const [values, setValues] = useState({});
  const [errors, setErrors] = useState({});
  const [isSent, setIsSent] = useState(false);
  const [isSubmitting, setIsSubmitting] = useState(false);

  useEffect(() => {
    if (Object.keys(errors).length === 0 && isSubmitting && isSent) {
      callback();
    }
  }, [errors]);

  const handleSubmit = (event) => {
    if (event) event.preventDefault();
    setErrors(validate(values));
    setIsSubmitting(true);
    setIsSent(true);
  };

  const handleChange = (event) => {
    event.persist && event.persist();
    setIsSubmitting(false);
    const newValues = { ...values, [event.target.name]: event.target.value };
    isSent && setErrors(validate(newValues));
    setValues(values => (newValues));
  };

  return {
    handleChange,
    handleSubmit,
    values,
    errors,
  }
};
```

Wonderful post! Explained succinctly and clearly.

Thank you so much, very useful.

Hiya, this is a very nice tutorial. Very interesting article, well written too! I am new to React, and apologies if you consider this off topic, but I find that the use case where a user enters an invalid input in a form (e.g. an email address) – invalid as in one which, although syntactically correct, is not a valid user record in the database – is almost never covered or explained in tutorials such as yours. Would you mind adding a bit to your tutorial showing how you would handle such validation/redirect, and clear the forms, please? Cheers. PS Any reader who has suggestions on this, please feel free to explain; I have been busy chasing my tail for a while on this simple issue 😉

Hi James, thanks a lot for this article. Can you post another article regarding testing of these hooks using @testing-library?
Jacob Murphy (Full Stack JavaScript Techdegree Graduate, 31,711 Points)

Items not appearing

I'm having some trouble while following along in making the items show up on the builder page. Here's my code:

{%for category, chices in options.items()%}
{%if category !='colors'%}
<div class="grid-100 row">
<div class="grid-20">
<p class="category-title">{{ category.title() }}</p>
</div>
<div class="grid-80">
<input type = "radio" id = "no_{{category}}_icon" name = "{{category}}" value='' {%if not saves.get(category)%}checked{%endif%}>
<label for = "no_{{category}}_icon">
<img src="/static/img/no-selection.svg">
</label>
{%for choice in choices%}
<input type="radio" id="{{category}}-{{choice}}_icon" name="{{category}}" value="{{choice}}" {% if saves.get(category) == choice %}checked{% endif %}>
<label for = "{{category}}-{{choice}}_icon">
<img src="/static/img/{{ category }}-{{ choice }}.svg">
</label>
{%endfor%}
</div>
</div>
{%endif%}
{%endfor%}

Thanks in advance for any help!

Jacob Murphy (Full Stack JavaScript Techdegree Graduate, 31,711 Points)

Hey Myers, thanks for the tip! To answer your question: I wrote the code to make the different items for the course's web-app bear builder appear. What it did was nothing. Everything works as it's supposed to up until the {%for choice in choices%} code block; then it doesn't do anything.

2 Answers

Iain Simmons (Treehouse Moderator, 32,241 Points)

You currently have the first line of what you pasted there as:

{%for category, chices in options.items()%}

There's a typo in 'choices', so try changing the line to the following:

{%for category, choices in options.items()%}

That's why the category part worked: it got passed to the Flask/Jinja template code as expected. But when it got to the loop of {%for choice in choices%}, it couldn't find choices, so it just ignored the whole block.

Be careful with your HTML formatting too; attributes shouldn't have a space between the attribute name, the equals sign, and the quotes around the value. It should be, for example:

<input type="radio">

Jacob Murphy (Full Stack JavaScript Techdegree Graduate, 31,711 Points)

That typo seemed to be it! Thanks a bunch for pointing that out.

man odell (1,905 Points)

Post your question in the HTML section; this here is the Python language!

Jacob Murphy (Full Stack JavaScript Techdegree Graduate, 31,711 Points)

Hey there! We're using the Flask framework in this course, so in the HTML, anything inside either double curly braces {{}} or curly braces and percentages {%%} is Python code or objects.

Myers Carpenter (Treehouse Staff)

It's unclear what's going wrong without building a Flask app and pasting in your code. You might have more luck getting answers if you answer these questions each time. I understand the "what did you do?" part; you have a good code sample. But I don't understand what you thought it would do and what it actually did.
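The silent failure in this thread is worth contrasting with plain Python. In Python, referencing the misspelled name raises a NameError on the spot; Jinja's default Undefined, by contrast, prints as an empty string and iterates as an empty sequence, which is why the whole block simply vanished. A small illustration (the dictionary contents here are invented):

```python
options = {"body": ["brown", "white"]}

error_message = None
for category, chices in options.items():  # note the typo: 'chices'
    try:
        for choice in choices:  # plain Python raises NameError here...
            print(choice)
    except NameError as exc:
        error_message = str(exc)

print(error_message)
# ...whereas Jinja's default Undefined would silently render nothing.
```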
Failure To Import ICE Client Module

I have a Python application into which I am attempting to incorporate Ice for RPC transport functionality. But when I import my client, I get an error.

Ice for C++ (on which Ice for Python is based) does not require use of the main thread. Note, however, that destructors for static C++ objects are invoked when the main thread terminates, which is why we state that communicators need to be destroyed prior to the termination of the main thread. It would be easier for us to help if we could see a small, self-contained example that reproduces the issue you are experiencing. If you don't want to post it here, you can also send it to us by email. Please tell us your platform, compiler, and Ice version.

Regards, Mark

I wish I could. MATLAB calls Python; the Python snippet is as follows:

import Ice
Ice.loadSlice('xxx.ice')
....

...and at the point where the Python script is called, it results in this error: "Slice processing failed for 'xxx.ice'." ...end of parade.

Regarding versions: Ice v3.3.1, ActiveState Python v2.5.2.2. I have even hardcoded the entire path in the loadSlice statement to ensure that the "xxx.ice" file is being found. Any suggestions or known limitations?

Thanks, Andy

You didn't mention which platform you were using, but I'll assume it's some flavor of Windows since you are using ActiveState Python. Please correct me if I'm wrong. Note that the Ice for Python extensions that we include in the Ice 3.3.1 binary distributions for Windows are compiled against the Python libraries available from python.org. We don't test with ActiveState Python, so we can't guarantee compatibility. Having said that, I just downloaded ActiveState Python 2.5.5.7 and I was able to use it to run our Ice for Python test suite successfully. (The test suite also makes extensive use of the loadSlice function.) It's important that you use the correct Ice distribution for Windows; in this case, I used the Ice 3.3.1 distribution for Visual Studio 2005 SP1. Which one are you using? Also, have you tried statically compiling your Slice files using slice2py instead of using loadSlice?

Regards, Mark

I'm using PC/XP Pro SP2, Ice 3.3.1 w/ VC90.msi, and no, but I will give it a try!

Thanks, Andy

Using slice2py helped in one regard, but I am still getting the following error, which leads me to believe that Ice is configured to only run in the main thread:

Traceback...
  ...c:\Ice-3.3.1\python\Ice.py", line 744, in main
    Application._ctrlCHandler = CtrlCHandler()
  File "C:\Ice-3.3.1\python\Ice.py", line 600, in __init__
    signal.signal(signal.SIGBREAK, CtrlCHandler, signalHandler)
ValueError: signal only works in main thread

Can you provide some guidelines regarding how Ice may be implemented in a multithreaded Python application?

Thanks, Andy

It's not Ice that has the limitation, but Python's signal function (this is where the ValueError originates). Python does not let you configure signals unless you're in the main thread. The Ice.Application class assumes you are calling its main method from the main thread because it needs to configure signals. You shouldn't call main from other threads. Note that Ice.Application is simply a convenience class that you are not required to use.

Regards, Mark
Socket-based abstraction for messaging patterns

aiomsg

Pure-Python smart sockets (like ZMQ) for simple microservices architecture

Warning ⚠️ Don't use this! Use ZeroMQ instead. aiomsg is currently a hobby project, whereas ZeroMQ is a mature messaging library that has been battle-tested for well over a decade!

Warning ⚠️ Right now this is in ALPHA. I'm changing stuff all the time. Don't depend on this library unless you can handle API breakage between releases. Your semver has no power here, this is calver country. When I'm happy with the API I'll remove this warning.

Table of Contents

- aiomsg
- Demo
- Inspiration
- Introduction
- Cookbook
  - Publish from either the bind or connect end
  - Distribute messages to a dynamically-scaled service (multiple instances)
  - Distribute messages from a 2-instance service to a dynamically-scaled one
  - Distribute messages from one dynamically-scaled service to another
  - Two dynamically-scaled services, with a scaled fan-in, fan-out proxy
  - Secure connections with mutual TLS
- Developer setup

Demo

Let's make two microservices; one will send the current time to the other. Here's the end that binds to a port (a.k.a. the "server"):

    import asyncio, time
    from aiomsg import Søcket

    async def main():
        async with Søcket() as sock:
            await sock.bind('127.0.0.1', 25000)
            while True:
                await sock.send(time.ctime().encode())

    asyncio.run(main())

Running as a different process, here is the end that does the connecting (a.k.a. the "client"):

    import asyncio
    from aiomsg import Søcket

    async def main():
        async with Søcket() as sock:
            await sock.connect('127.0.0.1', 25000)
            async for msg in sock.messages():
                print(msg.decode())

    asyncio.run(main())

Note that these are both complete, runnable programs, not fragments. Looks a lot like conventional socket programming, except that these sockets have a few extra tricks. These are described in more detail further down in the rest of this document.

Inspiration

Looks a lot like ZeroMQ, yes? No?
Well if you don’t know anything about ZeroMQ, that’s fine too. The rest of this document will assume that you don’t know anything about ZeroMQ. aiomsg is heavily influenced by ZeroMQ. There are some differences; hopefully they make things simpler than zmq. For one thing, aiomsg is pure-python so no compilation step is required, and relies only on the Python standard library (and that won’t change). Also, we don’t have special kinds of socket pairs like ZeroMQ has. There is only the one Søcket class. The only role distinction you need to make between different socket instances is this: some sockets will bind and others will connect. This is the leaky part of the API that comes from the underlying BSD socket API. A bind socket will bind to a local interface and port. A connect socket must connect to a bind socket, which can be on the same machine or a remote machine. This is the only complicated bit. You must decide, in a distributed microservices architecture, which sockets must bind and which must connect. A useful heuristic is that the service which is more likely to require horizontal scaling should have the connect sockets. This is because the hostnames to which they will connect (these will be the bind sockets) will be long-lived. Introduction What you see above in the demo is pretty much a typical usage of network sockets. So what’s special about aiomsg? These are the high-level features: Messages, not streams: Send and receive are message-based, not stream based. Much easier! This does mean that if you want to transmit large amounts of data, you’re going to have have to break them up yourself, send the pieces, and put them back together on the other side. Automatic reconnection These sockets automatically reconnect. You don’t have to write special code for it. If the bind end (a.k.a “server”) is restarted, the connecting end will automatically reconnect. This works in either direction. Try it! run the demo code and kill one of the processes. 
And then start it up again. The connection will get re-established.

Many connections on a single "socket"

The bind end can receive multiple connections, but you do all your .send() and .recv() calls on a single object. (No callback handlers or protocol objects.) More impressive is that the connecting end is exactly the same; it can make outgoing connect() calls to multiple peers (bind sockets), and you make all your send() and recv() calls on a single object. This will be described in more detail further on in this document.

Message distribution patterns

Receiving messages is pretty simple: new messages just show up (remember that messages from all connected peers come through the same call):

    async with Søcket() as sock:
        await sock.bind()
        async for msg in sock.messages():
            print(f"Received: {msg}")

However, when sending messages you have choices. The choices affect which peers get the message. The options are:

- Publish: every connected peer is sent a copy of the message.
- Round-robin: each connected peer is sent a unique message; the messages are distributed to each connection in a circular pattern.
- By peer identity: you can also send to a specific peer by using its identity directly.

The choice between pub-sub and round-robin must be made when creating the Søcket():

    from aiomsg import Søcket, SendMode

    async with Søcket(send_mode=SendMode.PUBLISH) as sock:
        await sock.bind()
        async for msg in sock.messages():
            await sock.send(msg)

This example receives a message from any connected peer, and sends that same message to every connected peer (including the original sender). By changing PUBLISH to ROUNDROBIN, the message distribution pattern changes so that each "sent" message goes to only one connected peer. The next "sent" message will go to a different connected peer, and so on.
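The distribution logic behind those two modes can be illustrated in plain Python (this is just a sketch of the pattern, not aiomsg's internals):

```python
import itertools

peers = ["conn-a", "conn-b", "conn-c"]  # stand-ins for connected peers

# PUBLISH: every connected peer gets a copy of each message.
def publish(msg):
    return [(p, msg) for p in peers]

assert publish(b"news") == [
    ("conn-a", b"news"), ("conn-b", b"news"), ("conn-c", b"news")
]

# ROUNDROBIN: each message goes to exactly one peer, cycling through
# the connections in order.
rr = itertools.cycle(peers)
sent = [(next(rr), m) for m in [b"m1", b"m2", b"m3", b"m4"]]
assert sent == [
    ("conn-a", b"m1"), ("conn-b", b"m2"),
    ("conn-c", b"m3"), ("conn-a", b"m4"),  # wraps around after conn-c
]
```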
For identity-based message sending, that's available any time, regardless of what you choose for the send_mode parameter; for example:

    import asyncio
    from aiomsg import Søcket, SendMode

    async def main():
        async with Søcket() as sock1, Søcket(send_mode=SendMode.PUBLISH) as sock2:
            await sock1.bind(port=25000)
            await sock2.bind(port=25001)
            while True:
                peer_id, msg = await sock1.recv_identity()
                msg_id, _, data = msg.partition(b"\x00")
                await sock2.send(data)
                await sock1.send(msg_id + b"\x00ok", identity=peer_id)

    asyncio.run(main())

This example shows how you can receive messages on one socket (sock1, which could have thousands of connected peers), and relay those messages to thousands of other peers connected on a different socket (sock2). For this example, the send_mode of sock1 doesn't matter, because if identity is specified in the send() call, it'll ignore send_mode completely. Oh, and the example above is a complete, runnable program, which is pretty amazing!

Built-in heartbeating

Because ain't nobody got time to mess around with TCP keepalive settings. The heartbeating is internal and opaque to your application code. You won't even know it's happening, unless you enable debug logs. Heartbeats are sent only during periods of inactivity, so they won't interfere with your application messages.

In theory, you really shouldn't need heartbeating because TCP is a very robust protocol; but in practice, various intermediate servers and routers sometimes do silly things to your connection if they think a connection has been idle for too long. So, automatic heartbeating is baked in to let all intermediate hops know you want the connection to stay up, and if the connection goes down, you will know much sooner than the standard TCP keepalive timeout duration (which can be very long!). If either a heartbeat or a message isn't received within a specific timeframe, that connection is destroyed.
Whichever peer is making the connect() call will then automatically try to reconnect, as discussed earlier.

Built-in reliability choices

Ah, so what do "reliability choices" mean exactly…? It turns out that it's quite hard to send messages in a reliable way. Or, stated another way, it's quite hard to avoid dropping messages: one side sends and the other side never gets the message.

aiomsg already buffers messages when being sent. Consider the following example:

    from aiomsg import Søcket, SendMode

    async with Søcket(send_mode=SendMode.PUBLISH) as sock:
        await sock.bind()
        while True:
            await sock.send(b'123')
            await asyncio.sleep(1.0)

This server above will send the bytes b"123" to all connected peers; but what happens if there are no connected peers? In this case the message will be buffered internally until there is at least one connected peer, and when that happens, all buffered messages will immediately be sent. To be clear, you don't have to do anything extra. This is just the normal behaviour, and it works the same with the ROUNDROBIN send mode. Message buffering happens whenever there are no connected peers available to receive a message.

Sounds great, right? Unfortunately, this is not quite enough to prevent messages from getting lost. It is still easy to have your process killed immediately after sending data into a kernel socket buffer, but right before the bytes actually get transmitted. In other words, your code thinks the message got sent, but it didn't actually get sent. The only real solution for adding robustness is to have peers reply to you saying that they received the message. Then, if you never receive this notification, you should assume that the message might not have been received, and send it again. aiomsg will do this for you (so again there is no work on your part), but you do have to turn it on. This option is called the DeliveryGuarantee.
The default option, which is just basic message buffering in the absence of any connected peers, is called DeliveryGuarantee.AT_MOST_ONCE. It means, literally, that any "sent" message will be received by a connected peer no more than once (of course, it may also be zero times, as described above). The alternative is to set DeliveryGuarantee.AT_LEAST_ONCE, which enables the internal "retry" feature. It will be possible, under certain conditions, that any given message could be received more than once, depending on timing and situation. This is how the code looks if you enable it:

    from aiomsg import Søcket, SendMode, DeliveryGuarantee

    async with Søcket(
        send_mode=SendMode.ROUNDROBIN,
        delivery_guarantee=DeliveryGuarantee.AT_LEAST_ONCE
    ) as sock:
        await sock.bind()
        while True:
            await sock.send(b'123')
            await asyncio.sleep(1.0)

It's pretty much exactly the same as before, but we added the AT_LEAST_ONCE option. Note that AT_LEAST_ONCE does not work for the PUBLISH sending mode. (Would it make sense to enable?)

As a minor point, you should note that when AT_LEAST_ONCE is enabled, it does not mean that every send waits for acknowledgement before the next send. That would incur too much latency. Instead, there is a "reply checker" that runs on a timer, and if a reply hasn't been received for a particular message in a certain timeframe (5.0 seconds by default), that message will be sent again. The connection may have gone down and back up within those 5 seconds, and there may be new messages buffered for sending before the retry send happens. In this case, the retry message will arrive after those buffered messages. This is a long way of saying that the way that message reliability has been implemented can result in messages being received in a different order to what they were sent. In exchange for this, you get a lower overall latency, because sending new messages is not waiting on previous messages getting acknowledged.
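The reply-checker idea can be sketched in a few lines of plain Python (a toy illustration of the bookkeeping, not aiomsg's actual implementation; the 5-second timeout matches the default mentioned above):

```python
RETRY_AFTER = 5.0  # seconds without an ack before a message is resent

pending = {}  # msg_id -> (payload, time it was sent)

def record_send(msg_id, payload, now):
    # Every AT_LEAST_ONCE send is remembered until it is acknowledged.
    pending[msg_id] = (payload, now)

def on_ack(msg_id):
    # An ack from the peer retires the message.
    pending.pop(msg_id, None)

def due_for_retry(now):
    # The timer-driven checker: anything unacked for too long is resent.
    return [mid for mid, (_, sent_at) in pending.items()
            if now - sent_at >= RETRY_AFTER]

record_send(1, b"job", now=0.0)
record_send(2, b"job2", now=1.0)
on_ack(1)                              # message 1 was acknowledged
assert due_for_retry(now=6.0) == [2]   # only message 2 needs a resend
```

Note how a resent message naturally lands after anything sent in the meantime, which is exactly the reordering trade-off described above.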
Pure Python, doesn't require a compiler

Depends only on the Python standard library

Cookbook

The message distribution patterns are what make aiomsg powerful. It is the way you connect up a whole bunch of microservices that brings the greatest leverage. We'll go through the different scenarios using a cookbook format. In the code snippets that follow, you should assume that each snippet is a complete working program, except that some boilerplate is omitted. This is the basic template:

    import asyncio
    from aiomsg import Søcket, SendMode, DeliveryGuarantee

    <main() function>

    asyncio.run(main())

Just substitute in the main() function from the snippets below to make the complete programs.

Publish from either the bind or connect end

The choice of "which peer should bind" is unaffected by the sending mode of the socket. Compare

    # Publisher that binds
    async def main():
        async with Søcket(send_mode=SendMode.PUBLISH) as sock:
            await sock.bind()
            while True:
                await sock.send(b'News!')
                await asyncio.sleep(1)

versus

    # Publisher that connects
    async def main():
        async with Søcket(send_mode=SendMode.PUBLISH) as sock:
            await sock.connect()
            while True:
                await sock.send(b'News!')
                await asyncio.sleep(1)

The same is true for the round-robin sending mode. You will usually choose the bind peer based on which service is least likely to require dynamic scaling. This means that the mental conception of socket peers as either a server or a client is not that useful.

Distribute messages to a dynamically-scaled service (multiple instances)

In this recipe, one service needs to send messages to another service that is horizontally scaled. The trick here is that we don't want to use bind sockets on horizontally-scaled services, because other peers that need to make a connect call will need to know what hostname to use. Each instance in a horizontally-scaled service has a different IP address, and it becomes difficult to keep the "connect" side up-to-date about which peers are available.
This can also change as the horizontally-scaled service increases or decreases the number of instances. (In ZeroMQ documentation, this is described as the Dynamic Discovery Problem.) aiomsg handles this very easily: just make sure that the dynamically-scaled service is making the connect calls.

This is the manually-scaled service (has a specific domain name):

    # jobcreator.py -> DNS for "jobcreator.com" should point to this machine.
    async def main():
        async with Søcket(send_mode=SendMode.ROUNDROBIN) as sock:
            await sock.bind(hostname="0.0.0.0", port=25001)
            while True:
                await sock.send(b"job")
                await asyncio.sleep(1)

These are the downstream workers (don't need a domain name):

    # worker.py -> can be on any number of machines
    async def main():
        async with Søcket() as sock:
            await sock.connect(hostname='jobcreator.com', port=25001)
            while True:
                work = await sock.recv()
                <do work>

With this code, after you start up jobcreator.py on the machine to which DNS resolves the domain name "jobcreator.com", you can start up multiple instances of worker.py on other machines, and work will get distributed among them. You can even change the number of worker instances dynamically, and everything will "just work", with the main instance distributing work out to all the connected workers in a circular pattern. This core recipe provides a foundation on which many of the other recipes are built.

Distribute messages from a 2-instance service to a dynamically-scaled one

In this scenario, there are actually two instances of the job-creating service, not one. This would typically be done for reliability, and each instance would be placed in a different availability zone. Each instance will have a different domain name. It turns out that the required setup follows directly from the previous one: you just add another connect call in the workers.
The manually-scaled service is as before, but you start one instance of jobcreator.py on machine "a.jobcreator.com", and start another on machine "b.jobcreator.com". Obviously, it is DNS that is configured to point to the correct IP addresses of those machines (or you could use IP addresses directly, if these are internal services).

    # jobcreator.py -> Configure DNS to point to these instances
    async def main():
        async with Søcket(send_mode=SendMode.ROUNDROBIN) as sock:
            await sock.bind(hostname="0.0.0.0", port=25001)
            while True:
                await sock.send(b"job")
                await asyncio.sleep(1)

As before, the downstream workers, but this time each worker makes multiple connect() calls; one to each job creator's domain name:

    # worker.py -> can be on any number of machines
    async def main():
        async with Søcket() as sock:
            await sock.connect(hostname='a.jobcreator.com', port=25001)
            await sock.connect(hostname='b.jobcreator.com', port=25001)
            while True:
                work = await sock.recv()
                <do work>

aiomsg will return work from the sock.recv() call above as it comes in from either job creation service. And as before, the number of worker instances can be dynamically scaled, up or down, and all the connection and reconnection logic will be handled internally.

Distribute messages from one dynamically-scaled service to another

If both services need to be dynamically scaled, and can have varying numbers of instances at any time, we can no longer rely on having one end do the socket bind on a dedicated domain name. We really would like each to make connect() calls, as we've seen in previous examples. How to solve it? The answer is to create an intermediate proxy service that has two bind sockets, with long-lived domain names. This is what will allow the other two dynamically-scaled services to have a dynamic number of instances.
Here is the new job creator, whose name we change to dynamiccreator.py to reflect that it is now dynamically scalable:

    # dynamiccreator.py -> can be on any number of machines
    async def main():
        async with Søcket(send_mode=SendMode.ROUNDROBIN) as sock:
            await sock.connect(hostname="proxy.jobcreator.com", port=25001)
            while True:
                await sock.send(b"job")
                await asyncio.sleep(1)

Note that our job creator above is now making a connect() call to proxy.jobcreator.com:25001 rather than binding to a local port. Let's see what it's connecting to. Here is the intermediate proxy service, which needs a dedicated domain name, and two ports allocated for each of the bind sockets:

    # proxy.py -> Set up DNS to point "proxy.jobcreator.com" to this instance
    async def main():
        async with Søcket() as sock1, Søcket(send_mode=SendMode.ROUNDROBIN) as sock2:
            await sock1.bind(hostname="0.0.0.0", port=25001)
            await sock2.bind(hostname="0.0.0.0", port=25002)
            async for msg in sock1.messages():
                await sock2.send(msg)

Note that sock1 is bound to port 25001; this is what our job creator is connecting to. The other socket, sock2, is bound to port 25002, and this is the one that our workers will be making their connect() calls to. Hopefully it's clear in the code that work is being received from sock1 and being sent on to sock2. This is pretty much a feature-complete proxy service, and with only minor additions for error-handling can be used for real work.

For completeness, here are the downstream workers:

    # worker.py -> can be on any number of machines
    async def main():
        async with Søcket() as sock:
            await sock.connect(hostname='proxy.jobcreator.com', port=25002)
            while True:
                work = await sock.recv()
                <do work>

Note that the workers are connecting to port 25002, as expected.

You might be wondering: isn't this just moving our performance problem to a different place? If the proxy service is not scalable, then surely that becomes the "weakest link" in our system architecture? This is a pretty typical reaction, but there are a couple of reasons why it might not be as bad as you think:

- The proxy service is doing very, very little work.
  Thus, we expect it to suffer from performance problems only at a much higher scale compared to our other two services, which are likely to be doing more CPU-bound work (in real code, not my simple examples above).
- We could compile only the proxy service into faster low-level code using any number of tools such as Cython, C, C++, Rust, D and so on, in order to improve its performance, if necessary (this would require implementing the aiomsg protocols in that other language, though). This allows us to retain the benefits of using a dynamic language like Python in the dynamically-scaled services where much greater business logic is captured (these can then be horizontally scaled quite easily to handle performance issues if necessary).
- Performance is not the only reason services are dynamically scaled. It is always a good idea, even in low-throughput services, to have multiple instances of a service running in different availability zones. Outages do happen, yes, even in your favourite cloud provider's systems.
- A separate proxy service as shown above isolates a really complex problem and removes it from your business logic code. It might not be easy to appreciate how significant that is. As your dev team is rapidly iterating on business features, and redeploying new versions several times a day, the proxy service is unchanging, and doesn't require redeployment. In this sense, it plays a similar role to more traditional messaging systems like RabbitMQ and ActiveMQ.
- We can still run multiple instances of our proxy service using an earlier technique, as we'll see in the next recipe.

Two dynamically-scaled services, with a scaled fan-in, fan-out proxy

This scenario is exactly like the previous one, except that we're nervous about having only a single proxy service, since it is a single point of failure. Instead, we're going to have 3 instances of the proxy service running in parallel. Let's jump straight into code.
The proxy code itself is actually unchanged from before. We just need to run more copies of it on different machines. Each machine will have a different domain name.

    # proxy.py -> unchanged from the previous recipe

For the other two dynamically-scaled services, we need to tell them all the domain names to connect to. We could set that up in an environment variable:

    $ export PROXY_HOSTNAMES="px1.jobcreator.com;px2.jobcreator.com;px3.jobcreator.com"

Then, it's really easy to modify our services to make use of that. First, the dynamically-scaled job creator:

    # dynamiccreator.py -> can be on any number of machines
    async def main():
        async with Søcket(send_mode=SendMode.ROUNDROBIN) as sock:
            for proxy in os.environ['PROXY_HOSTNAMES'].split(";"):
                await sock.connect(hostname=proxy, port=25001)
            while True:
                await sock.send(b"job")
                await asyncio.sleep(1)

And the change for the worker code is identical (making sure the correct port is being used, 25002):

    # worker.py -> can be on any number of machines
    async def main():
        async with Søcket() as sock:
            for proxy in os.environ['PROXY_HOSTNAMES'].split(";"):
                await sock.connect(hostname=proxy, port=25002)
            while True:
                work = await sock.recv()
                <do work>

Three proxies, each running in a different availability zone, should be adequate for most common scenarios.

TODO: more scenarios involving identity (like ROUTER-DEALER)

Secure connections with mutual TLS

Secure connectivity is extremely important, even in an internal microservices infrastructure. From a design perspective, the single biggest positive impact that can be made on security is to make it easy for users to do the "right thing". For this reason, aiomsg does nothing new at all. It uses the existing support for secure connectivity in the Python standard library, and uses the same APIs exactly as-is. All you have to do is create an SSLContext object, exactly as you normally would for conventional Python sockets, and pass that in.
Mutual TLS authentication (mTLS) is where the client verifies the server and the server verifies the client. In aiomsg, names like "client" and "server" are less useful, so let's rather say that the connect socket verifies the target bind socket, and the bind socket also verifies the incoming connecting socket. It sounds complicated, but at a high level you just need to supply an SSLContext instance to the bind socket, and a different SSLContext instance to the connect socket (usually on a different computer). The details are all stored in the SSLContext objects. Let's first look at how that looks for a typical bind socket and connect socket:

    # bind end
    import ssl
    import asyncio, time
    from aiomsg import Søcket

    async def main():
        ctx = ssl.SSLContext(...)  # <--------- NEW!
        async with Søcket() as sock:
            await sock.bind('127.0.0.1', 25000, ssl_context=ctx)
            while True:
                await sock.send(time.ctime().encode())

    asyncio.run(main())

    # connect end
    import ssl
    import asyncio
    from aiomsg import Søcket

    async def main():
        ctx = ssl.SSLContext(...)  # <--------- NEW!
        async with Søcket() as sock:
            await sock.connect('127.0.0.1', 25000, ssl_context=ctx)
            async for msg in sock.messages():
                print(msg.decode())

    asyncio.run(main())

If you compare these two code snippets to what was shown in the Demo section, you'll see it's almost exactly the same, except that we're passing a new ctx parameter into the respective bind() and connect() calls, which is an instance of SSLContext. So if you already know how to work with Python's built-in SSLContext object, you can already create secure connections with aiomsg, and there's nothing more you need to learn.

Crash course on setting up an SSLContext

You might not know how to set up the SSLContext object. Here, I'll give a crash course, but please remember that I am not a security expert, so make sure to ask an actual security expert to review your work if you're working on a production system.
The best way to create an SSLContext object is not with its constructor, but rather with a helper function called create_default_context(), which sets a lot of sensible defaults that you would otherwise have to do manually. So that's how you get the context instance. You do have to specify whether the purpose of the context object is to verify a client or a server. Let's have a look at that:

    # bind socket, or "server"
    ctx: SSLContext = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)

So here, above, we're creating a context object for a bind socket. The purpose of the context is going to be to verify incoming client connections; that's why the CLIENT_AUTH purpose was given. As you might imagine, on the other end, i.e., the connect socket (or "client"), the purpose is going to be to verify the server:

    # connect socket, or "client"
    ctx: SSLContext = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

Once you've created the context, the remaining parameters have the same meaning for both client and server. The way TLS works (the artist formerly known as SSL) is that each end of a connection has two pieces of information:

- A certificate (may be shared publicly)
- A key (MUST NOT BE SHARED! SECRET!)

When the two sockets establish a connection, they trade certificates, but do not trade keys. Anyway, let's look at what you need to actually set in the code. We'll start with the connect socket (client):

    # connect socket, or "client"
    ctx: SSLContext = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    ctx.load_verify_locations(<something that can verify the server cert>)

The above will let the client verify that the server it is connecting to is the correct one. When the socket connects, the server socket will send back a certificate, and the client checks that against one of those mysterious "verify locations". For mutual TLS, the server also wants to check the client. What does it check?
Well, the client must also provide a certificate back to the server. So that requires an additional line in the code block above:

    # connect socket, or "client"
    ctx: SSLContext = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    ctx.load_verify_locations(<something that can verify the server cert>)
    # Client needs a pair of "cert" and "key"
    ctx.load_cert_chain(certfile="client.cert", keyfile="client.key")

So that completes everything we need to do for the SSL context on the client side. On the server side, everything is almost exactly the same:

    # bind socket, or "server"
    ctx: SSLContext = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.load_verify_locations(<something that can verify the client cert>)
    # Server needs a pair of "cert" and "key"
    ctx.load_cert_chain(certfile="server.cert", keyfile="server.key")

That describes everything you need to do to set up mutual TLS using SSLContext instances. There are a few loose ends to tie up, though. Where do you get the certfile and keyfile from? And what is this mysterious "verify location"?

The first question is easier. The cert and key can be generated using the OpenSSL command-line application:

    $ openssl req -newkey rsa:2048 -nodes -keyout server.key \
        -x509 -days 365 -out server.cert \
        -subj '/C=GB/ST=Blah/L=Blah/O=Blah/OU=Blah/CN=example.com'

Running the above command will create two new files, server.cert and server.key; these are the ones you specify in the earlier code snippets. Generating these files for the client is exactly the same, but you use different names. You could also use Let's Encrypt to generate the cert and key, in which case you don't have to run the above commands. If you use Let's Encrypt, you've also solved the other problem of supplying a "verify location", and in fact you won't need to call load_verify_locations() in the client code at all.
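As a quick sanity check of those defaults — and of why the server side sets verify_mode explicitly while the client does not — here is a short sketch that needs no certificate files at all:

```python
import ssl

# Client side: create_default_context() with SERVER_AUTH already turns
# on hostname checking and certificate verification.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
assert client_ctx.check_hostname is True
assert client_ctx.verify_mode == ssl.CERT_REQUIRED

# Server side: with CLIENT_AUTH, client-certificate verification is OFF
# by default, which is why the mTLS snippets above set it themselves.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
assert server_ctx.verify_mode == ssl.CERT_NONE
server_ctx.verify_mode = ssl.CERT_REQUIRED  # opt in to mutual TLS
assert server_ctx.verify_mode == ssl.CERT_REQUIRED
```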
This is because there are a bunch of root certificate authorities that are provided with most operating systems, and Let's Encrypt is one of those. However, for the sake of argument, let's say you want to make your own certificates and you don't want to rely on system-provided root certificates at all; how to do the verification? Well, it turns out that a very simple solution is to just use the target certificate itself as the "verify location". For example, here is the client context again:

    # connect socket, or "client"
    ctx: SSLContext = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    ctx.load_verify_locations("server.cert")  # <--- Same one as the server
    # Client needs a pair of "cert" and "key"
    ctx.load_cert_chain(certfile="client.cert", keyfile="client.key")

and then in the server's context, you could also use the client's cert as the "verify location":

    # bind socket, or "server"
    ctx: SSLContext = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.load_verify_locations("client.cert")  # <--- Same as on client
    # Server needs a pair of "cert" and "key"
    ctx.load_cert_chain(certfile="server.cert", keyfile="server.key")

Obviously, the client code and the server code are running on different computers, and you need to make sure that the right files are on the right computers in the right places. There are a lot of ways to make this more sophisticated, but it's probably a good idea to get the simple case working, as described above, before looking at the more complicated cases. A cool option is to make your own root certificate authority, which can be a standard "verify location" in all your microservices; then, when you make certs and keys for each microservice, you just have to "sign" them with the root key. This process is described in "Be your own certificate authority" by Moshe Zadka.

Hope that helps!

Why do you spell Søcket like that?
The slashed O is used in homage to ØMQ, a truly wonderful library that changed my thinking around what socket programming could be like.

I want to talk to the aiomsg Søcket with a different programming language

WARNING: This section is extremely provisional. I haven't fully nailed down the protocol yet.

To make a clone of the Søcket in another language is probably a lot of work, but it's actually not necessary to implement everything. You can talk to aiomsg sockets quite easily by implementing the simple protocol described below. It would be just like regular socket programming in your programming language. You just have to follow a few simple rules for the communication protocol. These are the rules:

1. Every payload in either direction shall be length-prefixed:

       message = [4-bytes big endian int32][payload]

2. Immediately after successfully opening a TCP connection, before doing anything else with your socket, you shall:

   - Send your identity, as a 16-byte unique identifier (a 16-byte UUID4 is perfect). Note that Rule 1 still applies, so this would look like

         identity_message = b'\x00\x00\x00\x10' + [16 bytes]

     (because the payload length, 16, is 0x10 in hex)

   - Receive the other peer's identity (16 bytes). Remember Rule 1 still applies, so you'll actually receive 20 bytes, and the first four will be the length of the payload, which will be 16 bytes for this message.

3. You shall periodically send a heartbeat message b"aiomsg-heartbeat". Every 5 seconds is good. If you receive such messages, you can ignore them. If you don't receive one (or an actual data message) within 15 seconds of the previous receipt, the connection is probably dead and you should kill it and/or reconnect.
Note that Rule 1 still applies, and because the length of this message is also 16 bytes, the framing is coincidentally identical to the identity message:

```
heartbeat_message = b'\x00\x00\x00\x10' + b'aiomsg-heartbeat'
```

After you've satisfied these rules, every message sent or received from that point on is a Rule 1 message, i.e., length-prefixed with 4 bytes giving the size of the payload that follows. If you want to run a bind socket and receive multiple connections from different aiomsg sockets, the above rules apply to each separate connection. That's it!

TODO: Discuss the protocol for AT_LEAST_ONCE mode, which is a bit messy at the moment.

Developer setup

Setup:

```
$ git clone
$ python -m venv venv
$ source venv/bin/activate  (or venv/Scripts/activate.bat on Windows)
$ pip install -e .[all]
```

Run the tests:

```
$ pytest
```

Create a new release:

```
$ bumpymcbumpface --push-git --push-pypi
```

The easiest way to obtain the bumpymcbumpface tool is to install it with pipx. Once it is installed and on your $PATH, the command above should work.

NOTE: twine must be correctly configured to upload to PyPI. If you don't have rights to push to PyPI, but you do have rights to push to GitHub, just omit the --push-pypi option in the command above. The command will automatically create the next git tag and push it.
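The wire rules above are easy to exercise with plain blocking sockets. The sketch below is not aiomsg itself, just an illustration of the framing and handshake; the helper names `send_msg` / `recv_msg` are made up for this example:

```python
import socket
import struct
import uuid

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Rule 1: every payload is prefixed with a 4-byte big-endian length.
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    # Keep reading until we have exactly n bytes (TCP may deliver less per recv).
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed the connection')
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (size,) = struct.unpack('>I', recv_exactly(sock, 4))
    return recv_exactly(sock, size)

# Demonstrate the handshake on a local socket pair.
a, b = socket.socketpair()
my_id = uuid.uuid4().bytes        # Rule 2: a 16-byte identity
peer_id = uuid.uuid4().bytes

send_msg(a, my_id)                # each side sends its identity first...
send_msg(b, peer_id)
assert recv_msg(b) == my_id       # ...and receives the other's (20 raw bytes on the wire)
assert recv_msg(a) == peer_id

send_msg(a, b'aiomsg-heartbeat')  # Rule 3: heartbeats use the same framing
assert recv_msg(b) == b'aiomsg-heartbeat'

a.close()
b.close()
```

After the two identity messages and with heartbeats ticking over, everything else is just Rule 1 messages in both directions.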
https://pypi.org/project/aiomsg/
Hi, I have a JScrollPane with two columns. In the first column I have a JTable with an image pane, and in the second a list with the names of sections. I am trying to divide this second column in two: one (the second column) to display the names of the sections (each row contains one name), and the other (the third) to show some values for each section in the respective row. But instead of displaying the desired values in the third column, I get the same section names as in the second column. Here is a part of the code I have:

```java
private Vector<Section> daten = new Vector<Section>(0); // These are the values for the first column in the JScrollPane
private String[] header = {"Section", "calcGYR"};       // Headers for the second and third columns

public TrafficObserveModel(Vector<Section> daten) {
    setData(daten);
}

public Object getValueAt(int row, int col) {
    // col is ignored here, so every column renders the same Section object
    return ((Section) daten.elementAt(row));
}

public void setData(Vector<Section> daten) {
    this.daten = daten;
    fireTableDataChanged();
}

public Vector<Section> getData() {
    return daten;
}
```

But I don't know how to modify the methods in order to render the desired integer values in the third column. Can anybody help me, please? Thank you very much in advance.
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/35614-problem-columns-jtable-jscrollpane-equal-values-two-columns-every-row-jtable-printingthethread.html
Scrapy Beginners Series Part 1: How To Build Your First Production Scraper

Whether you are a developer, data scientist or marketer, being able to develop web scrapers is a hugely valuable skill to have. And there is no better web scraping framework than Python Scrapy.

There are lots of articles online showing you how to make your first basic Scrapy spider. However, very few walk you through the full process of building a production-ready Scrapy spider.

To address this, we are doing a 5-part Scrapy beginners guide series, where we're going to build a Scrapy project end-to-end, from building the scrapers to deploying them on a server and running them every day.

Part 1: Basic Scrapy Spider - We will go over the basics of Scrapy, and build our first Scrapy spider. (This Tutorial)

For this beginner series, we're going to use one of the simplest scraping architectures: a single spider, given a start URL, which crawls the site, parses and cleans the data from the HTML responses, and stores the data, all in the same process. This architecture is suitable for the majority of hobby and small scraping projects; however, if you are scraping business-critical data at larger scales then you would use different scraping architectures. We will cover these in other Scrapy series.

The code for this project is available on GitHub here!

If you prefer video tutorials, then check out the video version of this article.

Need help scraping the web? Then check out ScrapeOps, the complete toolkit for web scraping.

Part 1: Basic Scrapy Spider

In this tutorial, Part 1: Basic Scrapy Spider, we're going to cover:

- What is Scrapy?
- How to Setup Our Python Environment
- How to Setup Our Scrapy Project
- Creating Our Scrapy Spider
- Using Scrapy Shell To Find Our CSS Selectors
- How to Run Our Scrapy Spider, Plus Output Data in CSV or JSON
- How to Navigate Through Pages

For this series, we will be scraping the products from Chocolate.co.uk, as it is a good example of how to approach scraping an e-commerce store. Plus, who doesn't like chocolate!

What Is Scrapy?

Developed by the co-founders of Zyte, Pablo Hoffman and Shane Evans, Scrapy is a Python framework specifically designed for web scraping. Using Scrapy you can easily build highly scalable scrapers that will retrieve a page's HTML, parse and process the data, and store it in the file format and location of your choice.

Why & When Should You Use Scrapy?

There are other Python libraries also used for web scraping:

- Python Requests/BeautifulSoup: Good for small-scale web scraping where the data is returned in the HTML response. You would need to build your own spider management functionality to manage concurrency, retries, data cleaning and data storage.
- Python requests-html: Combining Python requests with a parsing library, requests-html is a middle ground between the Requests/BeautifulSoup combo and Scrapy.
- Python Selenium: Use this if the site you are scraping only returns the target data after the JavaScript has rendered, or if you need to interact with page elements to get the data.

Python Scrapy has lots more functionality and is great for large-scale scraping right out of the box:

- CSS Selector & XPath Expression Parsing
- Data Formatting (CSV, JSON, XML) and Storage (FTP, S3, local filesystem)
- Robust Encoding Support
- Concurrency Management
- Automatic Retries
- Cookies and Session Handling
- Crawl Spiders & In-Built Pagination Support

You just need to customise it in your settings file or add in one of the many Scrapy extensions and middlewares that developers have open-sourced.
The learning curve is initially steeper than with the Requests/BeautifulSoup combo; however, it will save you a lot of time in the long run when deploying production scrapers and scraping at scale.

Beginners Scrapy Tutorial

With the intro out of the way, let's start developing our spider. First things first, we need to set up our Python environment.

Step 1 - Setup your Python Environment

To avoid version conflicts down the road, it is best practice to create a separate virtual environment for each of your Python projects. This means that any packages you install for a project are kept separate from other projects, so you don't inadvertently end up breaking other projects.

Depending on your machine's operating system, these commands will be slightly different.

MacOS or Linux

Set up a virtual environment on MacOS or any Linux distro.

First, we want to make sure we have the latest versions of our packages installed.

```
$ sudo apt-get update
$ apt install tree
```

Then install python3-venv if you haven't done so already:

```
$ sudo apt install -y python3-venv
```

Next, we will create our Python virtual environment.

```
$ cd /scrapy_tutorials
$ python3 -m venv venv
$ source venv/bin/activate
```

Finally, we will install Scrapy in our virtual environment.

```
$ apt-get install python3-pip
$ sudo pip3 install scrapy
```

Windows

Set up a virtual environment on Windows.

Install virtualenv in your Windows command shell, PowerShell, or other terminal you are using.

```
pip install virtualenv
```

Navigate to the folder where you want to create the virtual environment, and start virtualenv.

```
cd /scrapy_tutorials
virtualenv venv
```

Activate the virtual environment.

```
source venv\Scripts\activate
```

Finally, we will install Scrapy in our virtual environment.
```
pip install scrapy
```

Test Scrapy Is Installed

To make sure everything is working, type the command scrapy into your command line; you should get an output like this:

```
$ scrapy

Usage:
  scrapy <command> [options] [args]

Available commands:
  bench         Run quick benchmark test
  check         Check spider contracts
  commands
  crawl         Run a spider
  edit          Edit spider
  fetch         Fetch a URL using the Scrapy downloader
  genspider     Generate new spider using pre-defined templates
  list          List available spiders
  parse         Parse URL (using its spider) and print the results
  runspider     Run a self-contained spider
```

Step 2 - Setup Our Scrapy Project

Now that we have our environment set up, we can get onto the fun stuff: building our first Scrapy spider!

Creating Our Scrapy Project

The first thing we need to do is create our Scrapy project. This project will hold all the code for our scrapers. The command line syntax to do this is:

```
scrapy startproject <project_name>
```

In this case, as we're going to be scraping a chocolate website, we will call our project chocolatescraper. But you can use any project name you would like.

```
scrapy startproject chocolatescraper
```

Understanding Scrapy Project Structure

To help us understand what we've just done, and how Scrapy structures its projects, we're going to pause for a second.

First, let's see what the scrapy startproject chocolatescraper command just did. Enter the following commands into your command line:

```
$ cd /chocolatescraper
$ tree
```

You should see something like this:

```
├── scrapy.cfg
└── chocolatescraper
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders
        └── __init__.py
```

When we ran the scrapy startproject chocolatescraper command, Scrapy automatically generated a template project for us to use.
We won't be using most of these files in this beginners project, but we will give a quick explanation of each, as each one has a special purpose:

- settings.py is where all your project settings are contained, like activating pipelines, middlewares, etc. Here you can change the delays, concurrency, and lots more.
- items.py is a model for the extracted data. You can define a custom model (like a ProductItem) that will inherit the Scrapy Item class and contain your scraped data.
- pipelines.py is where the items yielded by the spider get passed; it's mostly used to clean the text and connect to file outputs or databases (CSV, JSON, SQL, etc.).
- middlewares.py is useful when you want to modify how requests are made and how Scrapy handles the responses.
- scrapy.cfg is a configuration file used to change some deployment settings, etc.

Step 3 - Creating Our Spider

Okay, we've created the general project structure. Now, we're going to create the spider that will do the scraping.

Scrapy provides a number of different spider types; however, in this tutorial we will cover the most common one, the generic Spider. Here are some of the most common ones:

- Spider - Takes a list of start_urls and scrapes each one with a parse method.
- CrawlSpider - Designed to crawl a full website by following any links it finds.

To create a new generic spider, simply run the genspider command:

```
# syntax is --> scrapy genspider <name_of_spider> <website>
$ scrapy genspider chocolatespider chocolate.co.uk
```

A new spider will now have been added to your spiders folder, and it should look like this:

```python
import scrapy

class ChocolatespiderSpider(scrapy.Spider):
    name = 'chocolatespider'
    allowed_domains = ['chocolate.co.uk']
    start_urls = ['']

    def parse(self, response):
        pass
```

Here we see that the genspider command has created a template spider for us to use, in the form of a Spider class. This spider class contains:

- name - a class attribute that gives a name to the spider.
We will use this when running our spider later: scrapy crawl <spider_name>.

- allowed_domains - a class attribute that tells Scrapy it should only ever scrape pages of the chocolate.co.uk domain. This prevents the spider from going rogue and scraping lots of other websites. This is optional.
- start_urls - a class attribute that tells Scrapy the first URL it should scrape. We will be changing this in a bit.
- parse - the parse function is called after a response has been received from the target website.

To start using this spider we will have to do two things:

- Change the start_urls to the URL we want to scrape.
- Insert our parsing code into the parse function.

Step 4 - Update Start Urls

This is pretty easy; we just need to replace the URL in the start_urls array:

```python
import scrapy

class ChocolatespiderSpider(scrapy.Spider):
    name = 'chocolatespider'
    allowed_domains = ['chocolate.co.uk']
    start_urls = ['']

    def parse(self, response):
        pass
```

Next, we need to create the CSS selectors that will parse the data we want from the page. To do this, we will use Scrapy Shell.

Step 5 - Scrapy Shell: Finding Our CSS Selectors

To extract data from an HTML page, we need to use XPath or CSS selectors to tell Scrapy where in the page the data is. XPath and CSS selectors are like little maps that Scrapy uses to navigate the DOM tree and find the location of the data we require. In this guide, we're going to use CSS selectors to parse the data from the page, and to help us create them we will use Scrapy Shell.

One of the great features of Scrapy is that it comes with a built-in shell that allows you to quickly test and debug your XPath and CSS selectors. Instead of having to run your full scraper to see if your XPath or CSS selectors are correct, you can enter them directly into your terminal and see the result.
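If you want a feel for how a selector addresses nodes in a parsed tree without installing anything, Python's standard library can evaluate a limited XPath subset. This is only an illustration of the idea (the HTML snippet below is invented, and Scrapy's own selectors are far more capable):

```python
import xml.etree.ElementTree as ET

# A made-up, well-formed snippet standing in for a product listing page.
html = """
<div>
  <p class="product"><a href="/products/dark">Dark Chocolate</a></p>
  <p class="product"><a href="/products/milk">Milk Chocolate</a></p>
</div>
"""

root = ET.fromstring(html)

# './/p/a' is an XPath expression: every <a> directly under a <p>, anywhere in the tree.
links = root.findall('.//p/a')
names = [a.text for a in links]
urls = [a.get('href') for a in links]

print(names)  # ['Dark Chocolate', 'Milk Chocolate']
print(urls)   # ['/products/dark', '/products/milk']
```

Scrapy's selectors work the same way conceptually: you describe a path through the tree, and the selector returns the matching nodes.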
To open Scrapy shell use this command:

```
scrapy shell
```

Note: If you would like to use IPython as your Scrapy shell (much more powerful, with smart auto-completion and colorized output), then make sure you have IPython installed:

```
pip3 install ipython
```

And then edit your scrapy.cfg file like so:

```
## scrapy.cfg
[settings]
default = chocolatescraper.settings
shell = ipython
```

With our Scrapy shell open, you should see something like this:

```
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x0000025111C47948>
[s]   item       {}
[s]   settings   <scrapy.settings.Settings object at 0x0000025111D17408>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser

In [1]:
```

Fetch The Page

To create our CSS selectors we will be testing them on the following page:

The first thing we want to do is fetch the main products page of the chocolate site in our Scrapy shell.

```
fetch('')
```

We should see a response like this:

```
In [1]: fetch('')
2021-12-22 13:28:56 [scrapy.core.engine] INFO: Spider opened
2021-12-22 13:28:57 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None)
2021-12-22 13:28:57 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None)
```

As we can see, we successfully retrieve the page from chocolate.co.uk, and Scrapy shell automatically saves the HTML response in the response variable.

```
In [2]: response
Out[2]: <200>
```

Find Product CSS Selectors

To find the correct CSS selectors to parse the product details, we will first open the page in our browser's DevTools. Open the website, then open the developer tools console (right click on the page and click Inspect). Using the inspect element, hover over an item and look at the ids and classes on the individual products.
In this case we can see that each box of chocolates has its own special component called product-item. We can just use this to reference our products (see above image).

Now, using our Scrapy shell, we can see if we can extract the product information using this class.

```
response.css('product-item')
```

We can see that it has found all the elements that match this selector.

```
In [3]: response.css('product-item')
Out[3]: [<Selector xpath='descendant-or-self::product-item' data='<product-item class="product-item pro...'>,
...
```

Get First Product

To get just the first product, we append .get() to the end of the command.

```
response.css('product-item').get()
```

This returns all the HTML in this node of the DOM tree.

```
In [4]: response.css('product-item').get()
Out[4]: '<product-item<div class="product-item__label-list label-list"><span class="label label--custom">New</span><span class="label label--subdued">Sold out</span></div><a href="/products/100-dark-hot-chocolate-flakes" class="product-item__aspect-ratio aspect-ratio " style="padding-bottom: 100.0%; --aspect-ratio: 1.0">\n ...
```

Get All Products

Now that we have found the DOM node that contains the product items, we will get all of them, save them into a variable, and loop through the items to extract the data we need. We can do this with the following command.

```
products = response.css('product-item')
```

The products variable is now a list of all the products on the page. To check how many products there are, we can take the length of the products variable.

```
len(products)
```

Here is the output:

```
In [6]: len(products)
Out[6]: 24
```

Extract Product Details

Now let's extract the name, price and URL of each product from the list of products.

The products variable is a list of products. When we update our spider code, we will loop through this list; however, to find the correct selectors we will test the CSS selectors on the first element of the list, products[0].

Single Product - Get a single product.
```
product = products[0]
```

Name - The product name can be found with:

```
product.css('a.product-item-meta__title::text').get()
```

```
In [5]: product.css('a.product-item-meta__title::text').get()
Out[5]: '100% Dark Hot Chocolate Flakes'
```

Price - The product price can be found with:

```
product.css('span.price').get()
```

You can see that the data returned for the price has lots of extra HTML. We'll get rid of this in the next step.

```
In [6]: product.css('span.price').get()
Out[6]: '<span class="price">\n <span class="visually-hidden">Sale price</span>£8.50</span>'
```

To remove the extra span tags from our price we can use the .replace() method. The replace method can be useful when we need to clean up data. Here we're going to replace the <span> sections with empty quotes '':

```
product.css('span.price').get().replace('<span class="price">\n <span class="visually-hidden">Sale price</span>','').replace('</span>','')
```

```
In [7]: product.css('span.price').get().replace('<span class="price">\n <span class="visually-hidden">Sale price</span>','').replace('</span>','')
Out[7]: '8.50'
```

Product URL - Next, let's see how we can extract the product URL for each individual product. To do that we can use the attrib attribute on the end of product.css('div.product-item-meta a').

```
product.css('div.product-item-meta a').attrib['href']
```

```
In [8]: product.css('div.product-item-meta a').attrib['href']
Out[8]: '/products/100-dark-hot-chocolate-flakes'
```

Updated Spider

Now that we've found the correct CSS selectors, let's update our spider. Exit Scrapy shell with the exit() command.

Our updated spider code should look like this:

```python
import scrapy

class ChocolatespiderSpider(scrapy.Spider):
    # the name of the spider
    name = 'chocolatespider'

    # the url of the first page that we will start scraping
    start_urls = ['']

    def parse(self, response):
        # here we are looping through the products and extracting the name, price & url
        products = response.css('product-item')
        for product in products:
            yield {
                'name': product.css('a.product-item-meta__title::text').get(),
                'price': product.css('span.price').get().replace('<span class="price">\n <span class="visually-hidden">Sale price</span>','').replace('</span>',''),
                'url': product.css('div.product-item-meta a').attrib['href'],
            }
```

Here, our spider does the following steps:

- Makes a request to ''.
- When it gets a response, it extracts all the products from the page using products = response.css('product-item').
- Loops through each product, and extracts the name, price and URL using the CSS selectors we created.
- Yields these items so they can be stored in a CSV, JSON, DB, etc.

Step 6 - Running Our Spider

Now that we have a spider, we can run it by going to the top level of our Scrapy project and running the following command.

```
scrapy crawl chocolatespider
```

It will run, and you should see the logs on your screen. Here are the final stats:

```
2021-12-22 14:43:54 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 707,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 64657,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'elapsed_time_seconds': 0.794875,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2021, 12, 22, 13, 43, 54, 937791),
 'httpcompression/response_bytes': 268118,
 'httpcompression/response_count': 2,
 'item_scraped_count': 24,
 'log_count/DEBUG': 26,
 'log_count/INFO': ...,
 ...
 'start_time': datetime.datetime(2021, 12, 22, 13, 43, 54, 142916)}
2021-12-22 14:43:54 [scrapy.core.engine] INFO: Spider closed (finished)
```

We can see from the above stats that our spider scraped 24 items: 'item_scraped_count': 24.

If we want to save the data to a JSON file, we can use the -O option, followed by the name of the file.

```
scrapy crawl chocolatespider -O myscrapeddata.json
```

If we want to save the data to a CSV file, we can do so too.

```
scrapy crawl chocolatespider -O myscrapeddata.csv
```

Step 7 - Navigating to the "Next Page"

So far the code is working great, but we're only getting the products from the first page of the site, the URL we have listed in the start_urls variable. So the next logical step is to go to the next page, if there is one, and scrape the item data from that too! Here's how we do that.
First, let's open our Scrapy shell again, fetch the page, and find the correct selector to get the next page button.

```
scrapy shell
```

Then fetch the page again.

```
fetch('')
```

And then get the href attribute that contains the URL to the next page.

```
response.css('[rel="next"] ::attr(href)').get()
```

```
In [2]: response.css('[rel="next"] ::attr(href)').get()
Out[2]: '/collections/all?page=2'
```

Now, we just need to update our spider to request this page after it has parsed all the items from a page.

```python
import scrapy

class ChocolateSpider(scrapy.Spider):
    # the name of the spider
    name = 'chocolatespider'

    # these are the urls that we will start scraping
    start_urls = ['']

    def parse(self, response):
        products = response.css('product-item')
        for product in products:
            yield {
                'name': product.css('a.product-item-meta__title::text').get(),
                'price': product.css('span.price').get().replace('<span class="price">\n <span class="visually-hidden">Sale price</span>','').replace('</span>',''),
                'url': product.css('div.product-item-meta a').attrib['href'],
            }

        next_page = response.css('[rel="next"] ::attr(href)').get()
        if next_page is not None:
            next_page_url = '' + next_page
            yield response.follow(next_page_url, callback=self.parse)
```

Here we see that our spider now finds the URL of the next page and, if it isn't None, appends it to the base URL and makes another request.

Now in our Scrapy stats we see that we have scraped 5 pages and extracted 73 items:

```
2021-12-22 15:10:45 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 2497,
 'downloader/request_count': 5,
 'downloader/request_method_count/GET': 5,
 'downloader/response_bytes': 245935,
 'downloader/response_count': 5,
 'downloader/response_status_count/200': 5,
 'elapsed_time_seconds': 2.441196,
 'feedexport/success_count/FileFeedStorage': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2021, 12, 22, 14, 10, 45, 62280),
 'httpcompression/response_bytes': 986800,
 'httpcompression/response_count': 5,
 'item_scraped_count': 73,
 'log_count/DEBUG': 78,
 'log_count/INFO': 11,
 'request_depth_max': 3,
 'response_received_count': 5,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 4,
 'scheduler/dequeued/memory': 4,
 'scheduler/enqueued': 4,
 'scheduler/enqueued/memory': 4,
 'start_time': datetime.datetime(2021, 12, 22, 14, 10, 42, 621084)}
2021-12-22 15:10:45 [scrapy.core.engine] INFO: Spider closed (finished)
```

Next Steps

We hope you have enough of the basics to get up and running scraping a simple e-commerce site with the above tutorial. If you would like the code from this example, please check it out on GitHub here!

In Part 2 of the series we will work on Cleaning Dirty Data & Dealing With Edge Cases. Web data can be messy, unstructured, and have lots of edge cases, so we will make our spider robust to these edge cases using Items, Itemloaders and Item Pipelines.

Need a Free Proxy? Then check out our Proxy Comparison Tool, which allows you to compare the pricing, features and limits of every proxy provider on the market so you can find the one that best suits your needs. Including the best free plans.

- Part 1: Basic Scrapy Spider
- What Is Scrapy?
- Beginners Scrapy Tutorial
- Next Steps
https://scrapeops.io/python-scrapy-playbook/scrapy-beginners-guide/
Read my original post in my blog here: How to Open A File in Python Like A Pro

It's a fundamental question. Usually people learn it when they get started with Python. And the solution is rather simple.

1st Attempt: use open()

```python
f = open('./i-am-a-file', 'rb')
for line in f.readlines():
    print(line)
f.close()
```

But is that a good solution? What if the file does not exist? The call throws an exception and the program ends. So it's better to always check the existence of the file before reading / writing it.

2nd Attempt: File Existence Check

```python
import os

file_path = './i-am-a-very-large-file'

if os.path.isfile(file_path):
    f = open(file_path, 'rb')
    for line in f.readlines():
        print(line)
    f.close()
```

Is this solution good enough? What if we have some other complex logic for every line, and it throws an exception? In that situation, f.close() will not be called, so the file is not closed before the interpreter exits. That is bad practice, as it might cause unexpected issues (for example, if a non-stop Python program reads a temp file without closing it explicitly, and the OS (such as Windows) protects the temp file while it is being read, the temp file cannot be deleted until the program ends).

In this case, a better choice is to use with to wrap the file operation, so that the file is automatically closed no matter whether the operation succeeds or fails.

3rd Attempt: use with

```python
import os

file_path = './i-am-a-very-large-file'

if os.path.isfile(file_path):
    with open(file_path, 'rb') as f:
        for line in f.readlines():
            print(line)
```

Is this the perfect solution? In most cases, yes, it is enough to handle the file operation. But what if you need to read a very, very large file, such as a 4 GB file? If so, the Python program will need to read the whole file into memory before it starts to perform your operation.
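The difference is visible even at a tiny scale. This throwaway sketch (the file name is arbitrary) contrasts materialising every line at once with lazy iteration:

```python
import os
import tempfile

# Write a small throwaway file, a stand-in for the 4 GB one.
path = os.path.join(tempfile.gettempdir(), 'demo-lines.txt')
with open(path, 'w') as f:
    for i in range(1000):
        f.write(f'line {i}\n')

# readlines() materialises every line in memory as a list...
with open(path) as f:
    all_lines = f.readlines()
print(type(all_lines), len(all_lines))  # <class 'list'> 1000

# ...whereas iterating over the file object yields one line at a time.
with open(path) as f:
    first = next(iter(f))
print(first.rstrip())  # line 0

os.remove(path)
```

With a 1 KB file the list is harmless; with a 4 GB file it is the whole problem.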
If this is an API in your server, and several requests come in to read multiple large files, how much memory do you need (16 GB, 32 GB, 64 GB?) just for a simple file operation?

We can do a very simple experiment in a Windows environment. First, let's create a 4 GB file with the following script.

```python
import os

size = 1024 * 1024 * 1024 * 4  # 4GB
with open('i-am-a-very-large-file', "wb") as f:
    f.write(os.urandom(size))
```

Now you have a 4 GB large file; let's record the current memory statistics using the Windows task manager.

The screenshot shows the Python process using 5,154,136 KB of memory, which is about 5.19 GB, just for reading this file! You can clearly see the steep increasing line in the memory diagram. (FYI, I have a total of 24 GB of memory.)

Hence, to make our solution better, we have to think of a way to optimise it. If only we could read each line at the moment we actually want to use it! Here comes the concept of generators, and we have the following solution.

4th Attempt: use yield

```python
import os

def read_file(f_path):
    BLOCK_SIZE = 1024
    if os.path.isfile(f_path):
        with open(f_path, 'rb') as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if block:
                    yield block
                else:
                    return

file_path = './i-am-a-very-large-file'
for line in read_file(file_path):
    print(line)
```

And let's run it and monitor the memory change.

Yay! While the console crazily prints the meaningless text, the memory usage is extremely low compared to the previous version: only 2,928 KB in total! And it's an absolutely flat line in the memory diagram!

Why is it so amazingly fast and memory-safe? The secret is that we use the yield keyword in our solution. To understand how yield works, we need to know the concept of generators. For a very clear and concise explanation, check out "What does the 'yield' keyword do?" on Stack Overflow.

As a quick summary, yield simply makes the read_file function a generator function.
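To see that laziness in isolation, here is a toy generator (the names are invented for this example) whose print calls reveal exactly when the body runs:

```python
def read_blocks():
    # The body of a generator function does not run at call time.
    print('reading block 1')
    yield b'block-1'
    print('reading block 2')
    yield b'block-2'

gen = read_blocks()   # nothing printed yet: we only created a generator object
first = next(gen)     # runs the body up to the first yield, then pauses
second = next(gen)    # resumes where it paused, up to the second yield

print(first)   # b'block-1'
print(second)  # b'block-2'
```

Only the block you ask for is ever produced, which is exactly why the 4th attempt keeps its memory footprint flat.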
When read_file gets called, it runs until yield block, returns the first block of the file, and pauses until the generator is iterated again. So only one block of the file is read per iteration. To read the whole file, the loop (for line in read_file(file_path)) keeps requesting the next block, and each step only consumes enough memory to hold one block.

So that's how to open a file in Python like a pro, with consideration for the extreme cases (actually quite common if your service is performance-critical). Hope you enjoy this blog post, and share your ideas here!

You can grab the demo source code on GitHub here: ZhiyueYi/how-to-open-a-file-in-python-like-a-pro-demo

Updates on 7 July

Thanks to @Vedran Čačić's comments, I have learnt about better solutions. If we have a very large text file, we can simply use:

5th Attempt
Cheers~ You can grab the demo source code on GitHub here: ZhiyueYi/how-to-open-a-file-in-python-like-a-pro-demo Discussion (10) Or simply Don't reinvent the wheel. And why do you think swallowing the exception of a non-existent file is a good idea? Read that Zen again. Errors should never pass silently. (Not to mention a race condition with your code.) It would be more useful if you mentioned local file encoding, and utf8 as a new sensible default. That would really be "like a pro". Thanks for sharing nice suggestions here about encoding, exception handling! Definitely worth for me to explore more! I’m still quite new to Python and there are still a lot to learn. As for reading large file examples, probably this is the case: imagine you have a server which has only one API to process a file and thousands QPS pressure. Though each time only some MB size files are processed, with thousands of requests coming in, it accumulates to a greater consumption of memories. Not to mention those servers with more functionalities. I hope I could have a real-life example for you but currently I don’t :( It's not reason for ":(", it's for ":)". Because it means you can write normal-looking easy code and it will work. Your example, even though fictional, has nothing to do with my comment: if you have to process files as a whole, then you do, and no amount of black magic will help you. If you don't, then the question is whether it's a text or a binary file, as I said. And then you should use line or block buffering as needed. If you're really strapped for memory, the first optimization I'd suggest is not using Python. Python is so dynamic that common data structures easily take up many times more memory than in "normal" languages with value semantics. I think I got what you mean here. 
is fast and low memory consumed (Just learnt it from you and tried by myself, thanks) And I agree with you that we should write the code as simple as possible in most cases, because having black magic here makes code less readable. But I still think this technique is worth to mention and good to know, in case somebody needs it for some extreme cases. Like what? Like "I have a binary file 4GiB in size and I'm just gonna spit it to stdout 1KiB at a time"? Not to mention that you don't do any decoding at all, so bytes objects are written to your screen raw, which isn't what you want, no matter the usecase. And not to mention "if it doesn't exist in the moment I check, I won't do anything, even though it might exist later when I'd actually try to read from it"? Sorry, I know you're trying to salvage your post, but "like a pro" doesn't mean that. A pro should know the terrain of possibilities they might encounter, and this is something you won't encounter. Ever. If you do, I'll eat my hat. :-P Now, if you actually need to process a binary file in chunks (not "pretend it's text and write it on the screen"), that's why block buffering is for. Learn to use it. docs.python.org/3/library/io.html#... You're in fact implementing another buffer on top of a builtin one, which really doesn't help your memory nor your speed. You are right. Thanks a lot for these helpful comments. It’s definitely a good lesson learnt Let me just tell you one more thing. I do this all the time around the Net (explainxkcd.com/wiki/index.php/SIW...). Usually people stick to their guns and refuse to admit they are wrong. DEV is the only community where people thank me for correcting them. Kudos for that! B-) Nobody is perfect. We are all learning to be better :D Though the post itself is not good enough, at least we had a meaningful conversation here and I get a better solution. Your code is not perfect either, what if a single line is a few GB? 
You just made the assumption that each line will be small in size, which might not always be the case. No code is perfect, especially in Python. :-) But if we are more explicit about what exactly we are doing, we can produce code that is robust and good enough. For a start, is your file textual or binary? They are not the same, although on various UNIXes you can often pretend they are. [Text files are sequences of characters, binary files are sequences of bytes. Bytes aren't characters, characters aren't bytes.] From the context, I realized you're probably talking about text files, although the "gimme a bunch of random bits" approach is just wrong there. And for a good reason: in 33 years of working with computers in all forms, I never had to read a text file whose line didn't fit in memory. Have you? It's an honest question.
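The encoding point raised earlier in the discussion ("local file encoding, and utf8 as a new sensible default") is easy to demonstrate. A small sketch, with a temporary file and made-up contents of my own, showing why an explicit encoding beats relying on the platform default:

```python
import tempfile
import os

# Write text with an explicit encoding, then observe that decoding the
# same bytes with the wrong codec silently yields a different string.
text = 'naïve café'

path = os.path.join(tempfile.mkdtemp(), 'demo-utf8.txt')
with open(path, 'w', encoding='utf-8') as f:
    f.write(text)

# Reading back with the same explicit encoding round-trips correctly...
with open(path, encoding='utf-8') as f:
    assert f.read() == text

# ...while pretending the file is latin-1 produces mojibake, not an error.
with open(path, encoding='latin-1') as f:
    assert f.read() != text
```

On platforms where the default locale encoding is not UTF-8 (e.g. older Windows setups), omitting `encoding=` makes the second failure mode happen by accident rather than by choice.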
https://practicaldev-herokuapp-com.global.ssl.fastly.net/zhiyueyi/how-to-open-a-file-in-python-like-a-pro-3oe0
Parse::FSM - Deterministic top-down parser based on a Finite State Machine

  use Parse::FSM;

  $fsm = Parse::FSM->new;
  $fsm->prolog($text);
  $fsm->epilog($text);
  $fsm->add_rule($name, @elems, $action);
  $fsm->start_rule($name);
  $fsm->parse_grammar($text);

  $fsm->write_module($module);
  $fsm->write_module($module, $file);

  $parser = $fsm->parser;   # isa Parse::FSM::Driver
  $parser->input(\&lexer);
  $result = $parser->parse;

  # script
  perl -MParse::FSM - Grammar.yp Parser::Module
  perl -MParse::FSM - Grammar.yp Parser::Module lib\Parser\Module.pm

This module compiles the Finite State Machine used by the Parse::FSM::Driver parser module. It can be built by a sequence of add_rule calls, or by parsing a yacc-like grammar in one go with parse_grammar. It can also be used as a script to generate a module from a grammar file. The result of compiling the parser can be used immediately by retrieving the parser object, or a pre-compiled module can be written to disk by write_module; this module can then be used by the client code of the parser. As usual in top-down parsers, left recursion is not supported and generates an infinite loop. This parser is deterministic and does not implement backtracking.

new: creates a new object.

start_rule: name of the grammar start rule. It defaults to the first rule added by add_rule.

prolog, epilog: Perl code to include in the generated module, near the start and near the end of it respectively.

add_rule: adds one rule to the parser.

  $fsm->add_rule($name, @elems, $action);

$name is the name of the rule, i.e. the syntactic object recognized by the rule. @elems is the list of elements needed, in sequence, to recognize this rule. Each element can be one of the following. The empty string is used to match the end of input and should be present in the grammar to force the parser to accept all the input. The accepted subrule forms are:

  [term]     - recurse to the term rule;
  [term]?
- term is optional;
  [term]*    - accept zero or more terms;
  [term]+    - accept one or more terms;
  [term]<+,> - accept one or more terms separated by commas; any token type can be used instead of the comma.

$action is the Perl text of the action executed when the rule is recognized, i.e. when all elements were found in sequence. It has to be enclosed in braces {}, and can use the following lexical variables, which are declared by the generated code: $self, the object pointer; and @item, the values of all the tokens or rules identified in this rule. A subrule call with repetitions returns an array reference containing all the items found in the subrule.

parse_grammar: parses the given grammar text and adds it to the parser. An example grammar follows:

  { # prolog
    use MyLibrary;
  }

  main   : (number | name)+ <eof> ;
  number : 'NUMBER' { $item[0][1] } ;   # comment
  name   : 'NAME'   { $item[0][1] } ;   # comment
  expr   : <list: number '+' number > ;

  <start: main >

  { # epilog
    sub util_method {...}
  }

If the text contains a code block surrounded by braces before the first rule definition, the text is copied without the external braces to the prolog of the generated module. If the text contains a code block surrounded by braces after the last rule definition, the text is copied without the external braces to the epilog of the generated module. Statements are either rule definitions or directives, and end with a semi-colon; comments run from a # sign to the end of the line. A rule defines one sentence to match in the grammar. The first rule defined is the default start rule, i.e. the rule parsed by default on the input. A rule name must start with a letter and contain only letters, digits and the underscore character. The rule definition follows after a colon and is composed of a sequence of tokens (quoted strings) and sub-rules, to match in sequence. The rule matches when all the tokens and sub-rules in the definition match in sequence. The top-level rule should end with <eof> to make sure all input is parsed.
The rule can define several alternative definitions separated by '|'. The rule definition finishes with a semi-colon ';'. A rule can call an anonymous sub-rule enclosed in parentheses. The last item in the rule definition is a text delimited by {} with the code to execute when the rule is matched. The code can use $self to refer to the parser object, and @item to refer to the values of each of the tokens and sub-rules matched. The return value from the code defines the value of the rule, passed to the upper-level rule, or returned as the parse result. If no action is supplied, a default action returns an array reference with the results of all tokens and sub-rules of the matched sentence. Every token or sub-rule can be followed by a repetition specification: '?' (zero or one), '*' (zero or more), '+' (one or more), or '<+,>' (comma-separated list; the comma can be replaced by any token). Directives are written with angle brackets. <eof> can be used in a rule instead of the empty string to represent the end of input. <list: ...> is a shortcut for creating lists of operators separated by tokens, and returns the list of rule and token values. <start: ...> defines the start rule of the grammar. By default the first defined rule is the start rule; use <start:> to override that. parser: computes the Finite State Machine to execute the parser and returns a Parse::FSM::Driver object that implements the parser. Useful to build the parser and execute it in the same program, but with the run-time penalty of the time needed to set up the state tables. write_module: receives as input the module name and the output file name and writes the parser module. The file name is optional; if not supplied it is computed from the module name by replacing :: by / and appending .pm, e.g. Parse/Module.pm. The generated code includes parse_XXX functions for every rule XXX found in the grammar, as a short-cut for calling parse('XXX'). The setup of the parsing tables and the creation of the parsing module may take up considerable time.
Therefore it is useful to separate the parser generation phase from the parsing phase. A parser module can be created from a yacc-like grammar file by the following command. The generated file (last parameter) is optional; if not supplied it is computed from the module name by replacing :: by / and appending .pm, e.g. Parse/Module.pm:

  perl -MParse::FSM - Grammar.yp Parser::Module
  perl -MParse::FSM - Grammar.yp Parser::Module lib\Parser\Module.pm

This is equivalent to the following Perl program:

  #!perl
  use Parse::FSM;
  Parse::FSM->precompile(@ARGV);

The class method precompile receives as arguments the grammar file, the generated module name and an optional file name, and creates the parsing module.

Paulo Custodio, <pscust at cpan.org>. Calling the pre-compiler on import was borrowed from Parse::RecDescent.
http://search.cpan.org/~pscust/Parse-FSM-1.08/lib/Parse/FSM.pm
when hosting WCF code in processes you don't control. Sidebar Gadgets are mini applications which live in the Sidebar, a UI element on the Windows Vista desktop. They are extremely handy for keeping an eye on information you are often interested in; they are also very good at providing a quick-reach UI for tasks you perform often. As you know I wear the server guy hat, so I'm not really the best person for explaining the advantages of Gadgets: I would suggest visiting Michael's and Jaime's blogs if you want more details on the subject. When I thought of how the gadget model could be useful for me, I realized that much of the information I'd like to keep an eye on happens to be confidential (like being notified if I received a wire transfer, or getting the access statistics from my website); the actions I want to take when I react to changes in those data also require high security levels (like accessing a portion of my home banking for giving approval for a certain utility bill to be paid). So, wouldn't it be great if we could use CardSpace for authenticating the services accessed by a Gadget? I thought for a few nights about the issue, devised a strategy and wrote a proof of concept. Yesterday night I finally got it working: below I walk you through the process. In the attachment you'll find the source code: it's not very polished, so I am not posting it to the community website just yet.

Sidebar Gadget: a glance at the architecture

I am going to cover a minimal portion of the architecture, for the purpose of understanding how to plug CardSpace into it. Gadgets are so much more than the few things I throw down here; please take the time to take a look at the whole story. That said, let's dive into it. A sidebar gadget is, in extreme synthesis, an HTML file (Gadget.htm, for example) that will be rendered inside the Windows Vista Sidebar.
As such, it can do pretty much everything that DHTML offers you PLUS it takes advantage of some APIs which handle some resource on the local system (some document folder like the pictures, storing and retrieving gadget-specific settings, etc). The gadget application is described by a manifest file, Gadget.xml, which enumerates various metadata such as application name, version, author info, location of the source file, location of images used as icons, platform requirements, permissions required... a classical manifest.A gadget can also have other HTML files that are shown on specific moments: it is possible to provide a custom UI for changing the gadget settings (typically settings.htm), and a custom UI that pops up from the gadget (typically flyout.htm) and gives more real estate outside of the boundaries of the sidebar. In our walkthrough we don't use those two further UIs, but in a real gadget application you will likely leverage them. All the .HTM files in a gadget can make use of the classical web UI entourage, such as *.JS and *.CSS files; and images, naturally. All those files (and subfolders, if any) should live in the same folder as Gadget.htm and gadget.xml.Deploying a gadget is super easy: you zip the folder containing the above, you change the .ZIP extension to .GADGET and you obtain a file that can be easily installed. Now, there's a rougher way of deploying a gadget. You take the same folder above, but instead of zipping it you just copy it under %userprofile%\AppData\Local\Microsoft\Windows Sidebar\Gadgets. Then you just click on the plus sign on top of the sidebar (what? You're not running it already? Just type sidebar in the search field in the start menu, the shortcut to it is usually the first result) and pick your gadget for having it instantiated (you can have multiple copies).How does the typical gadget work? Like a web page from where you cannot leave. 
All UI updates are made using AJAXy tricks, refreshing only the parts of the UI that need updating. Sometimes you can also "cheat" and embed in the page an ActiveX (any flavour you want), or an XBAP. It's that easy.

Injecting CardSpace in a gadget

After I gave the above description, a couple of times I've heard reactions such as "Ah, brain-dead easy! If this is just HTML, let's shove in it the CardSpace HTML Object tag and we're done!". Not so fast, pal. The HTML in a gadget is hosted and executed on YOUR client machine: this means that you are the subject AND the relying party, which does not really make sense in this situation (in other situations, it may). Besides, the sidebar renders the gadget's HTML without opening any HTTPS session. The Object tag makes sense if it is rendered in the context of an HTTPS session associated to the SSL certificate of the website you want to authenticate with. So no, we can't use the Object tag directly. What else? Let's think about it for a moment. When a browser invokes CardSpace, it is explicitly passing to it the settings contained in the Object tag found on the page (that is to say, the token policy of the RP); the browser also passes implicitly details about the RP, such as the certificate being used. In your gadget you don't get the same information automatically from the context: hence, the solution is force-feeding CardSpace with that information in some other way. Since we don't have a way to do that via pure scripting, we have to do it via "real" code (read: compiled): that doesn't worry us, because there's the ActiveX avenue. Seems promising, let's flesh it out. Let's try not to reinvent the wheel (...too much :P): there is already a way to use CardSpace by supplying all the policy and cryptographic information via configuration, and that's via WCF. Fine!
We can create an assembly containing the logic for calling a WCF service, and expose the relevant logic via ActiveX interface; then we instantiate the object via Javascript in Gadget.htm, again via javascript we invoke the service and always via javascript we use the results for updating the UI. (Note that it is possible to use CardSpace in this scenario w/o using WCF, and I am working on it, but it's considerably more complex. You'll see a post about it, similar to this one in term of depth, pretty soon).There is still a reality check that we need to deal with before implementing our vision. Our Gadget.htm file is instantiated by the process Sidebar.exe, which is already up&running when our gadget starts. This means that our assembly will be instantiated in an already existing AppDomain, which already read its own configuration file. Unfortunately the WCF portion of our code needs a lot of configuration info, and actually we want to keep it on a file that can be modified withour recompiling anything. This is a typical problem when you use WCF in processes you don't own. We cannot put all the relevant configuration in Sidebar.exe.config and still call ourselves architects, so another solution is in order. Actually, fixing this is relatively simple: from our ActiveX we spawn a new AppDomain, then we create & execute our WCF proxy from it. I have made few quick searches on MSDN and articles to see if somebody already did that, and to my surprise I could not find any. Well, now there will be one :-) this was the last "architectural" issue, from now on, just implementation bumps. Let's review the solution we devised: Grocery list style: Now that we got a good feeling of how it should work, let's walk through the code of the various parts of the solution. The sample gadget I am going to describe you a very, very simple gadget: it will turn out pretty ugly from the visual standpoint, but I'm OK with it. 
The extra code required for the prettification would ruin the signal/noise ratio, and I want to make my best for making you understand the fine points about cardspace and WCF here. You can add cosmetics later :-)Our gadget will display a list of roads: by pressing a button, we will get traffic info from a web service and we will change the color of every road name according to the traffic levels (green=good, red=bad, etc). The service that returns the traffic info will be secured via Windows CardSpace. If you're a loyal reader, that may sound familiar to you: this is exactly one of the services featured in my WPF sample. In fact, in the spirit of frugality of this post, the gadget we are creating here will call exactly that service. This means that for testing the gadget you'll need to download the sample from here (detailed blog post here) and you'll need to have TrafficService running when you use the gadget (if you don't, you'll still see cardspace popping up: but afer you select a card the call will fail). You will not need anything else from the WPF sample: also, I won't use any of the token caching technology I introduced at the time even if it actually makes sense.Now that we know what the gadget is supposed to do and what we need on our "backend", let's finally dive into the details of the implementation. Let's start with the basic Gadget-ry. What do we need to deploy in %userprofile%\AppData\Local\Microsoft\Windows Sidebar\Gadgets? 
The following: The manifest file is very, very simple: <?xml version="1.0" encoding="utf-8" ?> <gadget> <name>SidebarWCFCardSpacePoC</name> <namespace>Vittorio.SidebarWCFCardSpacePoC</namespace> <version>1.0.0.0</version> <author name="Vibro.NET"> <info url="" text="More Gadget Info"/> <logo src="images/logoIcon.png"/> </author> <description>A proof of concept: invoking a CardSpace-protected WCF method from a sidebar gadget</description> <icons> <icon height="100" width="125" src="images/gadgetIcon.png"/> </icons> <hosts> <host name="sidebar"> <base type="HTML" apiVersion="1.0.0" src="gadget.htm"/> <permissions>Full</permissions> <platform minPlatformVersion="0.3"/> <defaultImage src="images/dragIcon.png"/> </host> </hosts> </gadget> The only part that really interests us is the one highlighted in yellow, where we specify that the source file is "gadget.htm" and that we need permissions Full. For details on the gadget manifest format, please refer to the gadget development documentation. Also note: all the .PNG files referred in the manifests are what constitutes the content of the folder images. The file Gadget.htm is much more interesting: DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <html xmlns=""> <head> <title>CardSpace Traffic</title> <style> body { margin: 0; ... } </style> <script type="text/javascript" > function GetTraffic() { var myControl = new ActiveXObject("WCFCardSpaceActiveX.WCFCardSpaceActiveXClass"); myControl.SetApplicationBase("C:\\Users\\myaccount\\AppData\\Local\\Microsoft\\Windows Sidebar\\Gadgets\\CardSpaceWCFGadgetPoC.Gadget\\"); myControl.SetConfigurationFile("C:\\Users\\myaccount\\AppData\\Local\\Microsoft\\Windows Sidebar\\Gadgets\\CardSpaceWCFGadgetPoC.Gadget\\WCFCardSpaceActiveX.dll.config"); var myList = myControl.GetTrafficConditions(); var colorArray = myList.split(","); document.getElementById("Road1").style.backgroundColor = colorArray[0]; // ... 
document.getElementById("Road7").style.backgroundColor = colorArray[6]; } </script> </head> <body> <div id="gadgetMainFrame"> <div id="gadgetContentFrameDocked"> <input type="button" onclick="GetTraffic()" value="Get Traffic Info" /> <div id="contentdiv"> <div id="Road1">Road 1</div> ... <div id="Road7">Road 7</div> </div> </div> </div> </body> </html> All the parts not highlighted, like the CSS definitions, are of no interest for us in this context and will be ignored.Starting from the body: The part highlighted in light blue shows a simple HTML button, wired with the Javascript function.The text highlighted in pink represents the UI elements that will have to change color according to the traffic info: simple DIVs with RoadN as IDs. Let's take a close look at the definiton of the GetTraffic function. The text highlighted in yellow indicated code that deal with the ActiveX.The first line instantiate the ActiveX by its ProgID, WCFCardSpaceActiveX.WCFCardSpaceActiveXClass. No indication that this is a .NET assembly.The next two lines feed the ActiveX instance with the path of the WCF configuration file and the path of the assembly on your file system. Chances are that you will have to simply substitute "myaccount" with your user alias.The fourth line invokes the main method of the ActiveX, GetTrafficConditions(), and saves the return value on a local variable. The green code is just UI update; we go thorugh the list and we change the color of the DIVs accordingly. That's it! From the Gadget developer perspective, it's so easy it's not even funny. If we take a look to the activex code, though, we'll find that the music changes. Below you can see the project structure for the custom activex assembly: The output of the project is a class library, that should be copied in the same folder as gadget.htm, gadget.xml and so on.THe resulting assembly must be registered as COM object. 
If you are using Visual Studio, you can avoid using regasm for registering the assembly under COM. If you go to the project properties -> Build tab, you will find a "Register for COM interop" checkbox; check it, and the next time you build the project the assembly will be registered. If you want to do everything by hand, you can use regasm from the command line. As you will read in the <Digression/> block below, for this example you also have to install the assembly in the GAC. That's easily done via the command-line tool gacutil. I am looking at eliminating the GAC requirement. Let's take a look at some notable files in the solution. The TrafficServiceContract.cs file is taken verbatim from the WPF sample mentioned above, the one containing the WCF service our gadget will invoke. It just contains the contract definitions for the service we want to invoke. I won't paste it here. The App.Config is identical to WCFCardSpaceActiveX.dll.config. It is substantially a simplified version of the WPF client config in the WPF sample mentioned above: I have just got rid of all the config for the other service (MeteoService), not used here, and I eliminated all the token-caching configuration. Even if it's not very interesting nor new, I will paste it here in case you want to take a look.
<configuration> <system.serviceModel> <client> <endpoint name="TrafficClient" address="" contract="TrafficService.ITraffic" binding="wsHttpBinding" bindingConfiguration="myBinding" behaviorConfiguration="TraditionalClientBehavior"> <identity> <certificateReference findValue='' storeLocation='LocalMachine' storeName='My' x509FindType='FindBySubjectName' /> </identity> </endpoint> </client> <bindings> <wsHttpBinding> <binding name="myBinding"> <security mode="Message"> <message clientCredentialType="IssuedToken"/> </security> </binding> </wsHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="TraditionalClientBehavior"> <clientCredentials> <serviceCertificate> <authentication trustedStoreLocation='LocalMachine' revocationMode='NoCheck'/> <defaultCertificate findValue='' storeLocation='LocalMachine' storeName='My' x509FindType='FindBySubjectName' /> </serviceCertificate> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> </configuration> If your service listens on a different port, or if it is remote, the highlighted value must be changed accordingly. We finally got to the real meat of the post, the code behind WCFCardSpaceActivexClass. Since it is longer than the others and somewhat denser, I am going to break down the file in more sections. Let's start with the main ActiveX class declaration: namespace WCFCardSpaceActiveX { // Our ActiveX class [ProgId("WCFCardSpaceActiveX.WCFCardSpaceActiveXClass")] [ClassInterface(ClassInterfaceType.AutoDual)] public partial class WCFCardSpaceActiveXClass : UserControl string _applicationBase; string _configurationFile; public WCFCardSpaceActiveXClass() InitializeComponent(); public void SetApplicationBase(string applicationBase) _applicationBase = applicationBase; public void SetConfigurationFile(string configurationFile) _configurationFile = configurationFile; The yellow code shows the attributes needed for exposing the class as ActiveX. 
We have seen that ProgId used from the Gadget.htm code for instantiating this class.The AutoDual interface is useful for being able to use the current class via scripting.The green and light blue code handle the acquisition of the applicationbase and configuration file paths, respectively. Again, we have seen those methods invoked from Gadget.htm. The method that Gadget.htm invoked for getting the results was GetTrafficConditions: here there's how it is implemented. public string GetTrafficConditions() string temp = ""; try { //get our assembly among all the ones in the current AppDomain Assembly [] alist = AppDomain.CurrentDomain.GetAssemblies(); string myAssemblyName = String.Empty; foreach(Assembly a in alist) if (a.FullName.Contains("WCFCardSpaceActiveX")) { myAssemblyName = a.FullName; break; } //set up the AppDomain for the WCF proxy wrapper class TrafficServiceCaller AppDomainSetup ads = new AppDomainSetup(); ads.ApplicationBase = _applicationBase; ads.DisallowBindingRedirects = false; ads.DisallowCodeDownload = true; ads.ConfigurationFile = _configurationFile; // Create the AppDomain for the WCF proxy wrapper class TrafficServiceCaller AppDomain ad2 = AppDomain.CreateDomain("AD #2", null, ads); // Instantiates the TrafficServiceCaller wrapper class in the new Create an instance of MarshalbyRefType in the second AppDomain. // Gives back a transparent proxy to our TrafficServiceCaller instance, so that we can call it from the main AppDomain TrafficServiceCaller mbrt = (TrafficServiceCaller)ad2.CreateInstanceAndUnwrap(myAssemblyName, typeof(TrafficServiceCaller).FullName ); // Calls the method of the wrapper that will invoke the web service (& summon the CardSpace UI) temp += mbrt.GetTrafficConditions(); // Unloads the second AppDomain and destroys its content AppDomain.Unload(ad2); return (temp); } catch(Exception ee) temp+=ee.ToString(); } } This is the place where we perform the AppDomain trick. 
The light blue code retrieves the full name of the current assembly, as loaded in the sidebar root AppDomain: we will need it later, since the class that performs the actual WCF call lives there. I could have hardcoded the assembly full name, but that would not have been fun (and it would break easily every time you change something). The code highlighted in yellow creates a new AppDomain, assigning to it the config file we specified and forcing the execution path to the folder where our assembly lives. The first line of green code creates an instance of our WCF caller class, TrafficServiceCaller, in the new AppDomain. This ensures that the WCF code in it will see the configuration file we specified, as opposed to the sidebar configuration settings. <Digression>This line was especially nasty to deal with, because I kept getting an error "cannot cast to transparent proxy". If I hosted the assembly from a console test application, however, everything worked. I know there are some subtleties about assembly resolution that can be fixed by overriding some system function, but I wanted a quick solution. Hence, I just installed my assembly in the GAC and everything magically worked. This is not ideal, since GAC installation requires admin privileges: however we have to make some installation script or MSI anyway, since we need to register an ActiveX; furthermore, if the gadget is used for high-value transactions it probably makes sense to give the user a feeling that what they're installing is important stuff. That's what Michael told me today, and I faithfully report it :) </Digression>Once we obtained the instance, we can call the method that will actually invoke the WCF service. After that, it's just cleanup. The last thing we need to examine is the TrafficServiceCaller class, where we will finally see some WCF code.
[Serializable]
public class TrafficServiceCaller : MarshalByRefObject
{
    public string GetTrafficConditions()
    {
        try
        {
            // ... create the WCF proxy from the configuration and invoke
            // the service, obtaining the list 'ti' of traffic info items ...
            string temp = MapColor(ti[0].color);
            for (int i = 1; i < ti.Count; i++)
            {
                temp += "," + MapColor(ti[i].color);
            }
            return temp;
        }
        catch (Exception ee)
        {
            return ("Something went wrong: \n\n" + ee.ToString());
        }
    }

    private string MapColor(int i)
    {
        switch (i)
        {
            case 0: return ("green");
            case 1: return ("yellow");
            default: return ("red");
        }
    }
}

The yellow code enables the create-it-in-one-appdomain-but-call-it-from-another trick. The green code just creates the WCF channel according to the arbitrary configuration, then invokes the service. The light blue code transforms the format of the results (which was devised for the WPF client of the token-caching sample) into something easier to handle from Javascript.

Summary

That's it. We have seen how we can call a WCF service from a sidebar gadget, and use CardSpace for protecting the operation. The procedure is not necessarily straightforward, but it is not that hard and, especially, it does not require any hack. If you are not afraid of some handwork, you can try the above by playing with the code I am attaching to this post. Remember, this is pretty rough and requires some work on your side (downloading the CardSpace+WCF+WPF sample and launching the TrafficService, copying under your %userprofile%\AppData\Local\Microsoft\Windows Sidebar\Gadgets the folder CardSpaceWCFGadgetPoC.Gadget that you will find in the attached ZIP, launching regasm /codebase and gacutil -i on the WCFCardSpaceActiveX.dll file, adding the gadget from the gallery via the "+" button on the sidebar) but it's really not hard, and it actually helps to understand how the various moving parts interact. As usual, if enough people ask for it I will package it in a full-fledged sample. Have fun!

Last night the guys from Mindscape unleashed their powershell gadget onto the world.

Really cool article.
If I may, I'd suggest using Jonathan's code for registering .NET assemblies as COM assemblies. In this post I am going to show you an example of CardSpace and an Office application working together.
http://blogs.msdn.com/b/vbertocci/archive/2007/04/06/securing-a-sidebar-gadget-with-windows-cardspace-and-wcf.aspx
I was looking for a way to solve this small Boolean pointer reference problem but could not come to a solution. I know C++ becomes complex when it comes to the usage of pointers and references. The code fragment below passes a bool* along a chain of pointer assignments and then tries to store true through it:

#include <iostream>

int main() {
    bool* temp = nullptr;
    bool* temp2;
    bool* temp3;
    temp2 = temp;
    temp3 = temp2;
    bool temp5 = true;
    *temp3 = temp5;              // dereferences a null pointer
    std::cout << *temp << std::endl;
    return 0;
}

You are writing through a pointer that is still nullptr: temp, temp2 and temp3 all hold the same null value, so *temp3 = temp5 dereferences a null pointer. That is what is causing the segmentation fault.

Just replace the line

    bool* temp = nullptr;

with this

    bool* temp = new bool;

It will work now. And don't forget to release the memory with the delete operator.
https://codedump.io/share/BZaHX8KjYLwX/1/correct-way-to-reference-boolean-pointers-each-other-in-c
isprintable() method in Python

Hello programmers. In this post, we will learn about the use of the isprintable() method in Python. In Python, isprintable() is an inbuilt function for handling strings. It checks whether the passed string is printable or not. If the string is printable it returns "true", otherwise "false". It also returns "true" in the case of an empty string. The availability of inbuilt functions makes Python easier to use and one of the most liked programming languages in comparison to others. So let's begin our tutorial with decent examples and explanations.

Also read: File Truncate() Method In Python

Understanding the Python isprintable() method

The isprintable() method in Python checks whether the string passed to it contains printable characters or not. Now you must have a question: what are printable characters? Characters like digits, uppercase and lowercase letters, special characters, and space. The only whitespace which is printable is space. Besides space, all whitespaces like "\t", "\n", etc. are not printable in Python. For better understanding let's see these examples.

def fun(str):
    res = str.isprintable()
    print(res)

str = "Codespeedy Technology Pvt LTD"
fun(str)

Output:

True

All characters in "Codespeedy Technology Pvt LTD" are printable, so the function returns true. What will happen if we only pass a space (" ") to the function? Will it return "True"? See the example.

str = " "
fun(str)

Output:

True

It proves that space is a printable character. Let's see with other whitespaces.

# \n between two words
str = "Codespeed \n Technology PVT LTD"
fun(str)

str = "\t "
fun(str)

str = "\b"
fun(str)

Output:

False
False
False

All three whitespace characters ("\n", "\t", "\b") are non-printable characters; that's why the function returns "False". That's enough for this tutorial. I hope you understood it well. If you want to give any suggestions related to this post please comment below. For a tutorial on other Python topics comment below the topic name. Thank You.

Also read: Call an external command in Python
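As a small hedged sketch building on the tutorial, the rules above can be verified in one place, and isprintable() also gives an easy way to strip non-printable characters from a string (the helper name filter_printable is my own, not from the post):

```python
# str.isprintable() is True only when every character is printable;
# an empty string also counts as printable.
examples = {
    "Codespeedy Technology Pvt LTD": True,
    " ": True,       # plain space is the one printable whitespace
    "a\nb": False,   # newline is not printable
    "\t": False,
    "": True,        # empty string returns True
}

for text, expected in examples.items():
    assert text.isprintable() == expected

# Hypothetical helper: keep only the printable characters of a string
def filter_printable(s):
    return "".join(ch for ch in s if ch.isprintable())

print(filter_printable("Codespeed \n Technology"))  # newline is dropped
```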
https://www.codespeedy.com/isprintable-method-in-python/
I'm trying to catch SIGINT (or keyboard interrupt) in a Python 2.7 program. This is how my Python test script test looks:

#!/usr/bin/python
import time

try:
    time.sleep(100)
except KeyboardInterrupt:
    pass
except:
    print "error"

And this is test.sh, which launches it:

./test &
pid=$!
sleep 1
kill -s 2 $pid

When I run bash test.sh, the KeyboardInterrupt handler in test is never triggered, although it works when I interrupt the script from a terminal.

Python installs a small number of signal handlers by default: SIGPIPE ... and SIGINT is translated into a KeyboardInterrupt exception.

There is one case in which the default SIGINT handler is not installed at startup, and that is when the signal mask contains SIG_IGN for SIGINT at program startup. The code responsible for this can be found here. The signal mask for ignored signals is inherited from the parent process, while handled signals are reset to SIG_DFL. So in case SIGINT was ignored, the condition

if (Handlers[SIGINT].func == DefaultHandler)

in the source won't trigger and the default handler is not installed; Python doesn't override the settings made by the parent process in this case. So let's try to show the used signal handler in different situations:

# invocation from interactive shell
$ python -c "import signal; print(signal.getsignal(signal.SIGINT))"
<built-in function default_int_handler>

# background job in interactive shell
$ python -c "import signal; print(signal.getsignal(signal.SIGINT))" &
<built-in function default_int_handler>

# invocation in non-interactive shell
$ sh -c 'python -c "import signal; print(signal.getsignal(signal.SIGINT))"'
<built-in function default_int_handler>

# background job in non-interactive shell
$ sh -c 'python -c "import signal; print(signal.getsignal(signal.SIGINT))" &'
1

So in the last example, SIGINT is set to 1 (SIG_IGN). This is the same as when you start a background job in a shell script, as those are non-interactive by default (unless you use the -i option in the shebang). So this is caused by the shell ignoring the signal when launching a background job in a non-interactive shell session, not by Python directly.
At least bash and dash behave this way; I've not tried other shells. There are two options to deal with this situation:

Manually install the default signal handler:

import signal
signal.signal(signal.SIGINT, signal.default_int_handler)

Or add the -i option to the shebang of the shell script, e.g.:

#!/bin/sh -i

edit: this behaviour is documented in the bash manual:

SIGNALS
...
When job control is not in effect, asynchronous commands ignore SIGINT and SIGQUIT in addition to these inherited handlers.

which applies to non-interactive shells as they have job control disabled by default, and is actually specified in POSIX: Shell Command Language.
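To make the first workaround concrete, here is a minimal sketch (mine, not from the answer) that simulates the ignored-SIGINT startup state, re-installs the default handler as suggested, and then verifies that a delivered SIGINT raises KeyboardInterrupt (POSIX only):

```python
import os
import signal

# Simulate the non-interactive-shell situation: SIGINT is ignored
signal.signal(signal.SIGINT, signal.SIG_IGN)

# Workaround 1 from the answer: manually install the default handler,
# which translates SIGINT into a KeyboardInterrupt exception
signal.signal(signal.SIGINT, signal.default_int_handler)

caught = False
try:
    os.kill(os.getpid(), signal.SIGINT)  # deliver SIGINT to ourselves
except KeyboardInterrupt:
    caught = True

print("KeyboardInterrupt caught: %s" % caught)
```

Without the second signal.signal() call, the os.kill() line would simply be ignored and no exception would be raised.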
https://codedump.io/share/TrL4FXugUjuo/1/capturing-sigint-using-keyboardinterrupt-exception-works-in-terminal-not-in-script
Servo LED Button Blink

Introduction: Servo LED Button Blink

Hey there, I hope you like my cool instructable using servo motors, LEDs, and buttons with Arduino. Feel free to comment!

Step 1: What You'll Need

Arduino Uno
Bread board(s)
Jumper wires
Power source
LEDs
Servo motor
Buttons
Your AWESOMENESS!!

Step 2: Setup

You have all your materials, so let's get started!

1. Use 2 of your jumper wires. One in a number pin (D0 - D13) and one in ground
2. Put the other end of the wires into the bread board in the columns you want
3. The long leg of the LED you have will go into the ground (GND) column and the short one into the number column
4. The button: the button has 4 legs, 2 will go on one side of the bread board and two on the other.
5. Use 3 of your jumpers: one in GND, one in 5V (5 volts), and one in an "A number pin" (A0 - A5)
6. Put the other end of the wires into the column of each of the legs of the button (besides the fourth)
7. Servo motor: The servo motor has 3 wires attached to it: a brown, a yellow, and a red, all connected by pin slots at the end. The ground wire is brown, the red wire goes to "V IN" which is located next to the "A number pins" on the same side of the board, and the yellow wire goes to a "D number pin" (but specifically to pins 4, 5, or 6)
8. Note that the servo motor needs an external power source that can be plugged into the Arduino (a picture of it is shown on the last step)
9.
In this project I've done the above three times (excluding the motor). Feel free to do it however many times you want.

Step 3: Coding

#include <Servo.h>

int x = 8;
int y = 9;
int z = 10;
int led1 = 3;
int led2 = 4;
int led3 = 6;
int led4 = 13;
int led5 = 12;
int led6 = 11;

Servo myservo;
int pos = 0;

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  pinMode(z, INPUT);
  pinMode(x, INPUT);
  pinMode(y, INPUT);
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);
  pinMode(led3, OUTPUT);
  pinMode(led4, OUTPUT);
  pinMode(led5, OUTPUT);
  pinMode(led6, OUTPUT);
  myservo.attach(5);
}

void loop() {
  // put your main code here, to run repeatedly:
  if (digitalRead(x) == 1) {
    digitalWrite(led1, HIGH);
  } else {
    digitalWrite(led1, LOW);
  }
  if (digitalRead(y) == 1) {
    digitalWrite(led2, HIGH);
  } else {
    digitalWrite(led2, LOW);
  }
  if (digitalRead(z) == 1) {
    digitalWrite(led3, HIGH);
  } else {
    digitalWrite(led3, LOW);
  }
  if (digitalRead(z) == 1 && digitalRead(x) == 1) {
    digitalWrite(led4, HIGH);
  } else if (digitalRead(z) == 1 && digitalRead(y) == 1) {
    digitalWrite(led5, HIGH);
  } else if (digitalRead(x) == 1 && digitalRead(y) == 1) {
    digitalWrite(led6, HIGH);
  } else {
    digitalWrite(led4, LOW);
    digitalWrite(led5, LOW);
    digitalWrite(led6, LOW);
  }
  if (digitalRead(x) == 1 && digitalRead(z) == 1 && digitalRead(y) == 1) {
    for (pos = 0; pos <= 180; pos += 1) {
      myservo.write(pos);
      delay(15);
    }
    for (pos = 180; pos >= 0; pos -= 1) {
      myservo.write(pos);
      delay(15);
    }
  } else {
    Serial.println("Servo is not spinning");
  }
}

Step 4: Test and Finish

Hooray, you've made it to the final step! All that's left to do is upload the code to the Arduino and watch the magic happen.

Thanks for sharing :)
http://www.instructables.com/id/Servo-Button-Blink/
Bug #6825 forking and pthread_cond_timedwait: Invalid argument (EINVAL) on OS X / 1.9.3-p194 Description here is the gist with required setup to reproduce bug. Also crash log and stdout. It seems that forking is essential for this setup to crash. Also, if you use database connection in some way prior to forking, it might not crash (however, with more complex code it still does). ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-darwin10.8.0] OS X 10.6.8 hostinfo output: Mach kernel version: Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 Kernel configured for up to 4 processors. 2 processors are physically available. 4 processors are logically available. Processor type: i486 (Intel 80486) Processors active: 0 1 2 3 Primary memory available: 8.00 gigabytes Default processor set: 88 tasks, 627 threads, 4 processors Load average: 0.55, Mach factor: 3.43 compiled with gcc version: i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3) Associated revisions History #1 Updated by Mark A over 1 year ago Also notably ruby 1.8.7 does not crash. #2 Updated by Mark A over 1 year ago Ubuntu 11.04 doesn't crash with 1.9 ruby either. #3 Updated by Mark A over 1 year ago Ruby 1.9.2 doesn't crash either. #4 Updated by Mark A over 1 year ago ruby 2.0.0dev (2012-08-02 trunk 36596) [x86_64-darwin10.8.0] also does not crash. #5 Updated by Mark A over 1 year ago OS X 10.8 seems to be unaffected. #6 Updated by Mark A over 1 year ago Confirmed with another 10.6 / 1.9.3-p194. #7 Updated by Mark A over 1 year ago After some more fiddling it looks like it's mysql2 problem, not ruby's. Will duplicate issue there. #8 Updated by Eric Hodel over 1 year ago - Status changed from Open to Feedback #9 Updated by Mark A over 1 year ago Update: I greatly simplified my test-case. If I remove sleeping threads on line 31 or require 'active_record' on line 1, bug stops reproducing. 
Returning back to ruby-lang, as there is no mysql2 there anymore. Crash is still with the same error. Ideas, suggestions?

#10 Updated by Mark A over 1 year ago

Oh, it also reproduced on ruby 2.0.0.dev from current git, so I guess it is still not fixed.

#11 Updated by Mark A over 1 year ago

Another update: I opened up a file active_record.rb inside the installed activerecord gem and completely commented it out (so that even ActiveRecord is not defined after require 'active_record'). Still crashes. I guess that takes care of gems and everything, so the problem should be between ruby, rubygems and the standard library. Line require 'active_record' is still required for the whole setup to crash for some reason. Hope that gives you some idea.

#12 Updated by Eric Hodel over 1 year ago

=begin
I can't reproduce on OS X 10.8 ruby 2.0.0dev (2012-08-03 trunk 36602) [x86_64-darwin12.0.0]

I modified your script to remove require 'active_record' and altered the main thread to sleep forever. This ensures that mysql and other C extensions are not loaded. It ran for over two minutes without problems.

Can you reproduce this with require 'mysql' and not active_record?

Can you show the console output with your modified active_record.rb (the loaded features section is of particular interest).

Here is what I used:

require 'net/http'

Thread.abort_on_exception = true

class Worker
  def initialize
    @tasks = []
    work
  end

  def work
    Thread.new do
      loop do
        task = nil
        task = @tasks.shift if @tasks.length > 0
        task.call if task
        sleep(0.25)
      end
    end
  end

  def schedule(&block)
    @tasks << block
  end
end

pid = fork do
  class TestLoop
    def initialize
      @worker = Worker.new
      (1..10).map { Thread.new { loop { sleep(0.5) } } }
    end

    def run
      loop do
        @worker.schedule { puts Net::HTTP.get("github.com", "/").length }
        sleep(0.25)
      end
    end
  end

  TestLoop.new.run
end

sleep

(I don't have mysql installed to check.)
=end #13 Updated by Mark A over 1 year ago I can't reproduce on OS X 10.8 ruby 2.0.0dev (2012-08-03 trunk 36602) [x86_64-darwin12.0.0] That seems right, 10.8 seems to be unaffected. Can you show the console output with your modified active_record.rb (the loaded features section is of particular interest). 1.9.3p194 :002 > require 'active_record' => true 1.9.3p194 :003 > $LOADED_FEATURES.each(&method(:puts)) enumerator.so /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/x86_64-darwin10.8.0/enc/encdb.bundle /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/x86_64-darwin10.8.0/enc/trans/transdb.bundle /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/defaults.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/x86_64-darwin10.8.0/rbconfig.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/deprecate.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/exceptions.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/e2mmap.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/init.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/workspace.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/inspector.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/context.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/extend-command.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/output-method.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/notifier.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/slex.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/ruby-token.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/ruby-lex.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/src_encoding.rb 
/Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/magic-file.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/x86_64-darwin10.8.0/readline.bundle /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/input-method.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/locale.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/version.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/requirement.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/platform.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/specification.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/path_support.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/irb/completion.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/prettyprint.rb /Users/mark/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/pp.rb /Users/mark/.rvm/scripts/irbrc.rb /Users/mark/.rvm/gems/ruby-1.9.3-p194@ruby-pthread-bug/gems/activerecord-3.2.7/lib/active_record.rb If rubygems don't get hit (as in add gem directory to $LOAD_PATH manually), bug doesn't seem to trigger. I am not sure, whether rubygems themselves are the reason or something non-trivial needs to happen for it to show itself. #14 Updated by Mark A over 1 year ago Switching require 'active_record' with require 'mysql2' still crashes the interpreter as long as required file is taken using gems mechanism. #15 Updated by Eric Hodel over 1 year ago - Status changed from Feedback to Open #16 Updated by Motohiro KOSAKI over 1 year ago At least, require 'mysql2' version nor drbrain version don't crash on my Mountain Lion environment. #17 Updated by Mark A over 1 year ago Yeah, that seems to be restricted to snow leopard. 
#18 Updated by Yusuke Endoh over 1 year ago
- Status changed from Open to Assigned
- Assignee set to Motohiro KOSAKI

Kosaki-san, do you have any idea to address this issue?

Yusuke Endoh mame@tsg.ne.jp

#19 Updated by Motohiro KOSAKI over 1 year ago
- Assignee changed from Motohiro KOSAKI to Kenta Murata

Kosaki-san, do you have any idea to address this issue?

I have no idea. unfortunately snow leopard is too old and i have no chance to get it. @mrkn, do you have any chance to see this issue?

#20 Updated by Kenta Murata over 1 year ago

I don't have snow-leopard environment, so I cannot investigate this issue.

#21 Updated by Motohiro KOSAKI over 1 year ago
- Status changed from Assigned to Closed

I don't have snow-leopard environment, so I cannot investigate this issue.

OK. Thank you. This looks like an old OS X bug but we have no way to dig. give up. I'd like to close this feature as won't fix. Anyway snow leopard is no longer supported. To Mark, please reopen when you find the exact reason and fixing way. we are very sorry for inconvenience.

#22 Updated by Benoit Daloze over 1 year ago

#23 Updated by Benoit Daloze over 1 year ago
- Status changed from Closed to Assigned
- Assignee changed from Kenta Murata to Benoit Daloze

This is also true with latest trunk (r37462). I'm assigning to myself.
bug6825.rb:31: [BUG] pthread_cond_timedwait: Invalid argument (EINVAL)
ruby 2.0.0dev (2012-11-04 trunk 37462) [x86_64-darwin10.8.0]

-- Control frame information -----------------------------------------------
c:0005 p:---- s:0013 e:000012 CFUNC :sleep
c:0004 p:0007 s:0009 e:000008 BLOCK bug6825.rb:31 [FINISH]
c:0003 p:---- s:0007 e:000006 CFUNC :loop
c:0002 p:0005 s:0004 e:000003 BLOCK bug6825.rb:31 [FINISH]
c:0001 p:---- s:0002 e:000001 TOP [FINISH]

bug6825.rb:31:in `block (2 levels) in initialize'
bug6825.rb:31:in `loop'
bug6825.rb:31:in `block (3 levels) in initialize'
bug6825.rb:31:in `sleep'

#24 Updated by Benoit Daloze over 1 year ago

I poked around and produced a core dump (the bug would not reproduce under gdb with a breakpoint set). Arguments to pthread_cond_timedwait() seem valid; in particular the timespec is about 500ms in the future. Other calls to pthread_cond_timedwait() always return ETIMEDOUT or 0.

I saw rb_thread_t::native_thread_data.sleep_cond was weirdly initialized. It is not initialized in native_thread_init() if HAVE_PTHREAD_CONDATTR_INIT is undefined. And it is used in any case in ubf_pthread_cond_signal(). Maybe checks for HAVE_PTHREAD_CONDATTR_INIT should not be done in native_thread_init() and native_thread_destroy() since these functions already do the right checks? This should be unrelated though, since OS X has pthread_condattr_init().

It might be related to GVL release by multiple threads but I have no clue. It does not seem related directly to the parallel DNS resolution, since some traces have only threads in native_cond_timedwait(). And from the "only reproducible on snow-leopard" argument, it seems to be a snow-leopard pthread bug.

@kosaki @mrkn Would it be useful if I could provide you the core dump and other info?

#25 Updated by Motohiro KOSAKI over 1 year ago

#26 Updated by Motohiro KOSAKI over 1 year ago
- Status changed from Assigned to Closed
- % Done changed from 0 to 100

This issue was solved with changeset r37474. Mark, thank you for reporting this issue.
Your contribution to Ruby is greatly appreciated. May Ruby be with you.

#27 Updated by Motohiro KOSAKI over 1 year ago
- Status changed from Closed to Assigned

#28 Updated by Usaku NAKAMURA over 1 year ago
- Tracker changed from Bug to Backport
- Project changed from ruby-trunk to Backport93
- Assignee changed from Benoit Daloze to Usaku NAKAMURA

Kosaki-san, you can move a ticket to Backport because you are a committer. So, please do so instead of only changing the status to Open.

#29 Updated by Benoit Daloze over 1 year ago

kosaki (Motohiro KOSAKI) wrote:

@kosaki @mrkn Would it be useful if I could provide you the core dump and other info?

Thanks! r35672 seems broke this area and I'll fix it soon. However there is no r35672 in 1.9.3 branch and 1.9.3 seems correct. hmm... Could you please try 1.9.3 branch too?

Unfortunately, r37474 does not seem to solve the problem (but it was definitely a potential problem). This is expected because snow leopard has pthread_condattr_init(). So I don't know the reason for the bug.

#30 Updated by Motohiro KOSAKI over 1 year ago
- Tracker changed from Backport to Bug
- Project changed from Backport93 to ruby-trunk
- Assignee changed from Usaku NAKAMURA to Benoit Daloze

Hi Eregon,

Oops, I'm sorry. Perhaps I'm still overlooking anything else. Can you please share me your config.h and core file and build revision number? I'm willing to look at the core file myself.

Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/6825
From: Jonathan Turkanis (technews_at_[hidden]) Date: 2004-09-06 11:13:59 "Eric Niebler" <eric_at_[hidden]> wrote in message news:413BF440.20505_at_boost-consulting.com... > > Jonathan Turkanis wrote: > > "Eric Niebler" <eric_at_[hidden]> wrote: > >>2) s1 kind of looks like $1, which is the perl equivalent. > > > > > > I didn't think of 2). Did you put in in the docs? > > > No. Guess I should. > > > > FWIW, capital 'S' looks more > > like '$' to me. E.g., > > > > '<' >> (S1= +_w) >> '>' >> -*_ >> "</" >> S1 >> '>' > > > This is true, but it runs contrary to Boost's naming conventions. ALL > CAPS is for macros. I was thinking of vecS .. I forgot that 1 is a capital. :-) > >>That said, I'm open to suggestions for avoiding the name conflicts. I > >>would consider switching back to _1 _2 _3 if the technical problems were > >>overcome and if people liked it better. > > > > > > May I assume you have considered reusing the placeholders from boost::bind? > > There doesn't seem to be much operator overloading involving boost::arg<>. > > > I thought of that. Trouble is, xpressive's placeholders need to have an > operator=, which must be a member. Besides, I rely on ADL to find > xpressive's operators, and bind's placeholders are not in the correct > namespace for my purposes. This class of placeholders is a big problem. Jonathan Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/09/71500.php
(and that Abraham Wald solved by inventing sequential analysis)

We begin by importing some packages called by the code that we will be using in this notebook.

import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate as interp
import scipy.stats as st
import seaborn as sb
import quantecon as qe
from ipywidgets import interact, widgets
%matplotlib inline

import warnings
warnings.filterwarnings('ignore')

#-Download supporting Wald_Friedman_utils.py file from GitHub-#
qe.fetch_nb_dependencies(["Wald_Friedman_utils.py"])
from Wald_Friedman_utils import *

A file named Wald_Friedman_utils.py already exists in the specified directory ... skipping download.

Key ideas in play

On pages 137-139 of his book Two Lucky People with Rose Friedman, Milton Friedman described a problem presented to him and Allen Wallis during World War II when they worked at the U.S. government's Statistical Research Group at Columbia University. Let's listen to Milton Friedman tell us what happened: a sequential test can terminate early, "either because the new method is obviously inferior or because it is obviously superior beyond what was hoped for $\ldots$"

Friedman and Wallis struggled with the problem but, after realizing that they were not able to solve it themselves, told Abraham Wald about it. That started Wald on the path that led him to Sequential Analysis. We'll formulate the problem using dynamic programming.

The following presentation of the problem closely follows Dimitri Bertsekas's treatment in Dynamic Programming and Stochastic Control.

An i.i.d. random variable $z$ can take on values $z \in [ v_1, v_2, \ldots, v_n]$ when $z$ is a discrete-valued random variable, or $z \in V$ when $z$ is a continuous random variable. A decision maker wants to know which of two probability distributions governs $z$.
To formalize this idea, let $x \in [x_0, x_1]$ be a hidden state that indexes the two distributions:

$$ P(v_k \mid x) = \begin{cases} f_0(v_k) & \mbox{if } x = x_0, \\ f_1(v_k) & \mbox{if } x = x_1. \end{cases} $$

when $z$ is a discrete random variable, and a density

$$ P(v \mid x) = \begin{cases} f_0(v) & \mbox{if } x = x_0, \\ f_1(v) & \mbox{if } x = x_1. \end{cases} $$

when $v$ is continuously distributed.

Before observing any outcomes, a decision maker believes that the probability that $x = x_0$ is $p_{-1}\in (0,1)$:

$$ p_{-1} = \textrm{Prob}(x=x_0 \mid \textrm{ no observations}) $$

After observing $k+1$ observations $z_k, z_{k-1}, \ldots, z_0$ he believes that the probability that the distribution is $f_0$ is

$$ p_k = {\rm Prob} ( x = x_0 \mid z_k, z_{k-1}, \ldots, z_0) $$

We can compute this $p_k$ recursively by applying Bayes' law:

$$ p_0 = \frac{ p_{-1} f_0(z_0)}{ p_{-1} f_0(z_0) + (1-p_{-1}) f_1(z_0) } $$

and then

$$ p_{k+1} = \frac{ p_k f_0(z_{k+1})}{ p_k f_0(z_{k+1}) + (1-p_k) f_1 (z_{k+1}) }. $$

After observing $z_k, z_{k-1}, \ldots, z_0$, the decision maker believes that $z_{k+1}$ has probability distribution

$$ p(z_{k+1}) = p_k f_0(z_{k+1}) + (1-p_k) f_1 (z_{k+1}). $$

This is evidently a mixture of distributions $f_0$ and $f_1$, with the weight on $f_0$ being the posterior probability $p_k$ that the distribution is $f_0$.

Remark: Because the decision maker believes that $z_{k+1}$ is drawn from a mixture of two i.i.d. distributions, he does not believe that the sequence $[z_{k+1}, z_{k+2}, \ldots]$ is i.i.d. Instead, he believes that it is exchangeable. See David Kreps' Notes on the Theory of Choice, chapter 11, for a discussion.

Let's look at some examples of two distributions. Here we'll display two beta distributions. First, we'll show the two distributions, then we'll show mixtures of these same two distributions with various mixing probabilities $p_k$.
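The Bayes' law recursion for $p_{k+1}$ can be sketched in a few lines (a hedged illustration; the function name and the example likelihood values are mine, not from the lecture):

```python
def bayes_update(p, f0_z, f1_z):
    """One step of Bayes' law: posterior probability that x = x_0
    after observing a draw z with likelihoods f0(z) and f1(z)."""
    return p * f0_z / (p * f0_z + (1 - p) * f1_z)

# A draw four times likelier under f0 pushes the belief p = 0.5 up to 0.8
p_next = bayes_update(0.5, 0.8, 0.2)
print(p_next)  # → 0.8
```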
# Create two distributions over 50 values for k
# We are using a discretized beta distribution
p_m1 = np.linspace(0, 1, 50)
f0 = np.clip(st.beta.pdf(p_m1, a=1, b=1), 1e-8, np.inf)
f0 = f0 / np.sum(f0)
f1 = np.clip(st.beta.pdf(p_m1, a=9, b=9), 1e-8, np.inf)
f1 = f1 / np.sum(f1)

fig = make_distribution_plots(f0, f1)
fig.show()

After observing $z_k, z_{k-1}, \ldots, z_0$, the decision maker chooses among three distinct actions:

He decides that $x = x_1$ and draws no more $z$'s

He decides that $x = x_0$ and draws no more $z$'s

He postpones deciding now and instead chooses to draw a $z_{k+1}$

Associated with these three actions, the decision maker suffers three kinds of losses:

A loss $L_0$ if he decides $x = x_0$ when actually $x=x_1$

A loss $L_1$ if he decides $x = x_1$ when actually $x=x_0$

A cost $c$ if he postpones deciding and chooses instead to draw another $z$

For example, suppose that we regard $x=x_0$ as a null hypothesis. Then

We can think of $L_1$ as the loss associated with a type I error

We can think of $L_0$ as the loss associated with a type II error

Let $J_k(p_k)$ be the total loss for a decision maker with posterior probability $p_k$ who chooses optimally. The loss functions $\{J_k(p_k)\}_k$ satisfy the Bellman equations

$$ J_k(p_k) = \min \left[ (1-p_k) L_0, p_k L_1, c + E_{z_{k+1}} \left\{ J_{k+1} (p_{k+1}) \right\} \right] $$

where $E_{z_{k+1}}$ denotes a mathematical expectation over the distribution of $z_{k+1}$ and the minimization is over the three actions, accept $x_0$, accept $x_1$, and postpone deciding and draw a $z_{k+1}$. Let

$$ A_k(p_k) = E_{z_{k+1}} \left\{ J_{k+1} \left[\frac{ p_k f_0(z_{k+1})}{ p_k f_0(z_{k+1}) + (1-p_k) f_1 (z_{k+1}) } \right] \right\} $$

Then we can write our Bellman equation as

$$ J_k(p_k) = \min \left[ (1-p_k) L_0, p_k L_1, c + A_k(p_k) \right] $$

where $p_k \in [0,1]$.
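The predictive mixture distribution $p(z_{k+1}) = p_k f_0 + (1-p_k) f_1$ can be checked numerically with a tiny sketch (the two toy discrete densities below are made up for illustration; they are not the beta distributions used in the notebook):

```python
import numpy as np

# Toy discrete densities on three outcomes (assumed values, mine)
f0 = np.array([0.5, 0.3, 0.2])
f1 = np.array([0.1, 0.3, 0.6])
p_k = 0.25

# Predictive distribution of z_{k+1}: a p_k-weighted mixture of f0 and f1
predictive = p_k * f0 + (1 - p_k) * f1
print(predictive)                 # a valid probability distribution
print(np.isclose(predictive.sum(), 1.0))  # mixtures of densities sum to 1
```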
Evidently, the optimal decision rule is characterized by two numbers $\alpha_k, \beta_k \in (0,1) \times (0,1)$ that satisfy

$$ (1- p_k) L_0 < \min \left[ p_k L_1, c + A_k(p_k) \right] \textrm { if } p_k \geq \alpha_k $$

and

$$ p_k L_1 < \min \left[ (1-p_k) L_0, c + A_k(p_k) \right] \textrm { if } p_k \leq \beta_k $$

The optimal decision rule is then

$$ \textrm { accept } x=x_0 \textrm{ if } p_k \geq \alpha_k \\ \textrm { accept } x=x_1 \textrm{ if } p_k \leq \beta_k \\ \textrm { draw another } z \textrm{ if } \beta_k \leq p_k \leq \alpha_k $$

An infinite horizon version of this problem is associated with the limiting Bellman equation

$$ J(p_k) = \min \left[ (1-p_k) L_0, p_k L_1, c + A(p_k) \right] \quad (*) $$

where

$$ A(p_k) = E_{z_{k+1}} \left\{ J \left[\frac{ p_k f_0(z_{k+1})}{ p_k f_0(z_{k+1}) + (1-p_k) f_1 (z_{k+1}) } \right] \right\} $$

and again the minimization is over the three actions, accept $x_1$, accept $x_0$, and postpone deciding and draw a $z_{k+1}$. Here

$(1-p_k) L_0$ is the expected loss associated with accepting $x_0$ (i.e., the cost of making a type II error)

$p_k L_1$ is the expected loss associated with accepting $x_1$ (i.e., the cost of making a type I error)

$c + A(p_k)$ is the expected cost associated with drawing one more $z$

Now the optimal decision rule is characterized by two probabilities $0 < \beta < \alpha < 1$ and

$$ \textrm { accept } x=x_0 \textrm{ if } p_k \geq \alpha \\ \textrm { accept } x=x_1 \textrm{ if } p_k \leq \beta \\ \textrm { draw another } z \textrm{ if } \beta \leq p_k \leq \alpha $$

One sensible approach is to write the three components of the value function that appear on the right side of the Bellman equation as separate functions. Later, doing this will help us obey the don't repeat yourself (DRY) rule of coding.
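The two-threshold rule above can be written directly as a small sketch (the threshold values below are made-up illustrations, not the solved $\alpha$ and $\beta$):

```python
def decide(p, alpha, beta):
    """Wald's sequential rule: accept x0 above alpha, accept x1 below
    beta, and otherwise draw another observation."""
    assert 0 < beta < alpha < 1
    if p >= alpha:
        return "accept x0"
    elif p <= beta:
        return "accept x1"
    else:
        return "draw another z"

print(decide(0.9, alpha=0.8, beta=0.2))  # → accept x0
print(decide(0.1, alpha=0.8, beta=0.2))  # → accept x1
print(decide(0.5, alpha=0.8, beta=0.2))  # → draw another z
```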
Here goes:

def expect_loss_choose_0(p, L0):
    "For a given probability return expected loss of choosing model 0"
    return (1-p)*L0

def expect_loss_choose_1(p, L1):
    "For a given probability return expected loss of choosing model 1"
    return p*L1

def EJ(p, f0, f1, J):
    """
    We will need to be able to evaluate the expectation of our Bellman
    equation J. In order to do this, we need the current probability
    that model 0 is correct (p), the distributions (f0, f1), and a
    function that can evaluate the Bellman equation
    """
    # Get the current distribution we believe (p*f0 + (1-p)*f1)
    curr_dist = p*f0 + (1-p)*f1
    # Get tomorrow's expected distribution through Bayes law
    tp1_dist = np.clip((p*f0) / (p*f0 + (1-p)*f1), 0, 1)
    # Evaluate the expectation
    EJ = curr_dist @ J(tp1_dist)

    return EJ

def expect_loss_cont(p, c, f0, f1, J):
    return c + EJ(p, f0, f1, J)

To approximate the solution of the Bellman equation (*) above, we can deploy a method known as value function iteration (iterating on the Bellman equation) on a grid of points. Because we are iterating on a grid, the current probability, $p_k$, is restricted to a set number of points. However, in order to evaluate the expectation of the Bellman equation for tomorrow, $A(p_{k})$, we must be able to evaluate at various $p_{k+1}$ which may or may not correspond with points on our grid. The way that we resolve this issue is by using linear interpolation. This means to evaluate $J(p)$ where $p$ is not a grid point, we must use two points: first, we use the largest of all the grid points smaller than $p$, and call it $p_i$, and, second, we use the grid point immediately after $p$, named $p_{i+1}$, to approximate the function value in the following manner:

$$ J(p) = J(p_i) + (p - p_i) \frac{J(p_{i+1}) - J(p_i)}{p_{i+1} - p_{i}}$$

In one dimension, you can think of this as simply drawing a line between each pair of points on the grid.
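The interpolation formula above can be checked against numpy.interp with a quick sketch (the grid and function values below are made up for illustration):

```python
import numpy as np

grid = np.array([0.0, 0.5, 1.0])
vals = np.array([2.0, 1.0, 3.0])   # J evaluated at the grid points

def lin_interp(p, grid, vals):
    """J(p) = J(p_i) + (p - p_i) * (J(p_{i+1}) - J(p_i)) / (p_{i+1} - p_i)"""
    # index of the largest grid point <= p, clamped to a valid segment
    i = np.searchsorted(grid, p) - 1
    i = min(max(i, 0), len(grid) - 2)
    slope = (vals[i + 1] - vals[i]) / (grid[i + 1] - grid[i])
    return vals[i] + (p - grid[i]) * slope

p = 0.25
print(lin_interp(p, grid, vals))   # → 1.5
print(np.interp(p, grid, vals))    # → 1.5, numpy's built-in agrees
```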
For more information on both linear interpolation and value function iteration methods, see the Quant-Econ lecture on the income fluctuation problem.

def bellman_operator(pgrid, c, f0, f1, L0, L1, J):
    """
    Evaluates the value function for a given continuation value
    function; that is, evaluates

        J(p) = min((1-p)L0, pL1, c + E[J(p')])

    Uses linear interpolation between points
    """
    m = np.size(pgrid)
    assert m == np.size(J)

    J_out = np.zeros(m)
    J_interp = interp.UnivariateSpline(pgrid, J, k=1, ext=0)

    for (p_ind, p) in enumerate(pgrid):
        # Payoff of choosing model 0
        p_c_0 = expect_loss_choose_0(p, L0)
        p_c_1 = expect_loss_choose_1(p, L1)
        p_con = expect_loss_cont(p, c, f0, f1, J_interp)

        J_out[p_ind] = min(p_c_0, p_c_1, p_con)

    return J_out

# To solve
pg = np.linspace(0, 1, 251)
bell_op = lambda vf: bellman_operator(pg, 0.5, f0, f1, 5.0, 5.0, vf)
J = qe.compute_fixed_point(bell_op, np.zeros(pg.size), error_tol=1e-6,
                           verbose=True, print_skip=5)

Iteration    Distance       Elapsed (seconds)
---------------------------------------------
5            8.042e-02      4.618e-02
10           6.418e-04      7.645e-02
15           4.482e-06      1.071e-01

Now for some gentle criticisms of the preceding code. Although it works fine, by writing the code in terms of functions, we have to pass around some things that are constant throughout the problem, i.e., $c$, $f_0$, $f_1$, $L_0$, and $L_1$. Now that we have a working script, let's turn it into a class. This will allow us to simplify the function calls and make the code more reusable. So to illustrate a good alternative approach, we write a class that stores all of our parameters for us internally and incorporates many of the same functions that we used above.
Parameters ---------- c : scalar(Float64) Cost of postponing decision L0 : scalar(Float64) Cost of choosing model 0 when the truth is model 1 L1 : scalar(Float64) Cost of choosing model 1 when the truth is model 0 f0 : array_like(Float64) A finite state probability distribution f1 : array_like(Float64) A finite state probability distribution m : scalar(Int) Number of points to use in function approximation """ def __init__(self, c, L0, L1, f0, f1, m=25): self.c = c self.L0, self.L1 = L0, L1 self.m = m self.pgrid = np.linspace(0.0, 1.0, m) # Renormalize distributions so nothing is "too" small f0 = np.clip(f0, 1e-8, 1-1e-8) f1 = np.clip(f1, 1e-8, 1-1e-8) self.f0 = f0 / np.sum(f0) self.f1 = f1 / np.sum(f1) self.J = np.zeros(m) def current_distribution(self, p): """ This function takes a value for the probability with which the correct model is model 0 and returns the mixed distribution that corresponds with that belief. """ return p*self.f0 + (1-p)*self.f1 def bayes_update_k(self, p, k): """ This function takes a value for p, and a realization of the random variable and calculates the value for p tomorrow. """ f0_k = self.f0[k] f1_k = self.f1[k] p_tp1 = p*f0_k / (p*f0_k + (1-p)*f1_k) return np.clip(p_tp1, 0, 1) def bayes_update_all(self, p): """ This is similar to `bayes_update_k` except it returns a new value for p for each realization of the random variable """ return np.clip(p*self.f0 / (p*self.f0 + (1-p)*self.f1), 0, 1) def payoff_choose_f0(self, p): "For a given probability specify the cost of accepting model 0" return (1-p)*self.L0 def payoff_choose_f1(self, p): "For a given probability specify the cost of accepting model 1" return p*self.L1 def EJ(self, p, J): """ This function evaluates the expectation of the value function at period t+1. 
It does so by taking the current probability distribution over outcomes: p(z_{k+1}) = p_k f_0(z_{k+1}) + (1-p_k) f_1(z_{k+1}) and evaluating the value function at the possible states tomorrow J(p_{t+1}) where p_{t+1} = p f0 / ( p f0 + (1-p) f1) Parameters ---------- p : Scalar(Float64) The current believed probability that model 0 is the true model. J : Function The current value function for a decision to continue Returns ------- EJ : Scalar(Float64) The expected value of the value function tomorrow """ # Pull out information f0, f1 = self.f0, self.f1 # Get the current believed distribution and tomorrows possible dists # Need to clip to make sure things don't blow up (go to infinity) curr_dist = self.current_distribution(p) tp1_dist = self.bayes_update_all(p) # Evaluate the expectation EJ = curr_dist @ J(tp1_dist) return EJ def payoff_continue(self, p, J): """ For a given probability distribution and value function give cost of continuing the search for correct model """ return self.c + self.EJ(p, J) def bellman_operator(self, J): """ Evaluates the value function for a given continuation value function; that is, evaluates J(p) = min( (1-p)L0, pL1, c + E[J(p')]) Uses linear interpolation between points """ payoff_choose_f0 = self.payoff_choose_f0 payoff_choose_f1 = self.payoff_choose_f1 payoff_continue = self.payoff_continue c, L0, L1, f0, f1 = self.c, self.L0, self.L1, self.f0, self.f1 m, pgrid = self.m, self.pgrid J_out = np.empty(m) J_interp = interp.UnivariateSpline(pgrid, J, k=1, ext=0) for (p_ind, p) in enumerate(pgrid): # Payoff of choosing model 0 p_c_0 = payoff_choose_f0(p) p_c_1 = payoff_choose_f1(p) p_con = payoff_continue(p, J_interp) J_out[p_ind] = min(p_c_0, p_c_1, p_con) return J_out def solve_model(self, tol=1e-7): J = qe.compute_fixed_point(self.bellman_operator, np.zeros(self.m), error_tol=tol, verbose=False) self.J = J return J def find_cutoff_rule(self, J): """ This function takes a value function and returns the corresponding cutoffs of where 
you transition between continue and choosing a specific model """ payoff_choose_f0 = self.payoff_choose_f0 payoff_choose_f1 = self.payoff_choose_f1 m, pgrid = self.m, self.pgrid # Evaluate cost at all points on grid for choosing a model p_c_0 = payoff_choose_f0(pgrid) p_c_1 = payoff_choose_f1(pgrid) # The cutoff points can be found by differencing these costs with # the Bellman equation (J is always less than or equal to p_c_i) lb = pgrid[np.searchsorted(p_c_1 - J, 1e-10) - 1] ub = pgrid[np.searchsorted(J - p_c_0, -1e-10)] return (lb, ub) def simulate(self, f, p0=0.5): """ This function takes an initial condition and simulates until it stops (when a decision is made). """ # Check whether vf is computed if np.sum(self.J) < 1e-8: self.solve_model() # Unpack useful info lb, ub = self.find_cutoff_rule(self.J) update_p = self.bayes_update_k curr_dist = self.current_distribution drv = qe.discrete_rv.DiscreteRV(f) # Initialize a couple useful variables decision_made = False p = p0 t = 0 while decision_made is False: # Maybe should specify which distribution is correct one so that # the draws come from the "right" distribution k = drv.draw()[0] t = t+1 p = update_p(p, k) if p < lb: decision_made = True decision = 1 elif p > ub: decision_made = True decision = 0 return decision, p, t def simulate_tdgp_f0(self, p0=0.5): """ Uses the distribution f0 as the true data generating process """ decision, p, t = self.simulate(self.f0, p0) if decision == 0: correct = True else: correct = False return correct, p, t def simulate_tdgp_f1(self, p0=0.5): """ Uses the distribution f1 as the true data generating process """ decision, p, t = self.simulate(self.f1, p0) if decision == 1: correct = True else: correct = False return correct, p, t def stopping_dist(self, ndraws=250, tdgp="f0"): """ Simulates repeatedly to get distributions of time needed to make a decision and how often they are correct. 
""" if tdgp=="f0": simfunc = self.simulate_tdgp_f0 else: simfunc = self.simulate_tdgp_f1 # Allocate space tdist = np.empty(ndraws, int) cdist = np.empty(ndraws, bool) for i in range(ndraws): correct, p, t = simfunc() tdist[i] = t cdist[i] = correct return cdist, tdist Now let's use our class to solve the Bellman equation (*) and check whether it gives the same answer as the one attained above. wf = WaldFriedman(0.5, 5.0, 5.0, f0, f1, m=251) wfJ = qe.compute_fixed_point(wf.bellman_operator, np.zeros(251), error_tol=1e-6, verbose=True, print_skip=5) print("\nIf this is true then both approaches gave same answer") print(np.allclose(J, wfJ)) Iteration Distance Elapsed (seconds) --------------------------------------------- 5 8.042e-02 4.080e-02 10 6.418e-04 7.262e-02 15 4.482e-06 1.044e-01 If this is true then both approaches gave same answer True Now let's specify the two probability distributions (the ones that we plotted earlier): for $f_0$ we'll assume a beta distribution with parameters $a=2.5, b=3$; for $f_1$ we'll assume a beta distribution with parameters $a=3, b=2.5$. The density of a beta probability distribution with parameters $a$ and $b$ is$$ f(z; a, b) = \frac{\Gamma(a+b) z^{a-1} (1-z)^{b-1}}{\Gamma(a) \Gamma(b)}$$ where $\Gamma$ is the gamma function$$\Gamma(t) = \int_{0}^{\infty} x^{t-1} e^{-x} dx$$ # Choose parameters c = 1.25 L0 = 27.0 L1 = 27.0 # Choose n points and distributions m = 251 # f0 = np.ones(n)/n f0 = st.beta.pdf(np.linspace(0, 1, m), a=2.5, b=3) f0 = f0 / np.sum(f0) f1 = st.beta.pdf(np.linspace(0, 1, m), a=3, b=2.5) f1 = f1 / np.sum(f1) # Make sure sums to 1 # Create an instance of our WaldFriedman class wf = WaldFriedman(c, L0, L1, f0, f1, m=m) # Solve using qe's `compute_fixed_point` function J = qe.compute_fixed_point(wf.bellman_operator, np.zeros(m), error_tol=1e-6, verbose=True, print_skip=5) Iteration Distance Elapsed (seconds) --------------------------------------------- 5 9.200e-01 5.442e-02 10 4.523e-01 1.026e-01 15 1.223e-01 1.468e-01 20 2.126e-02
1.917e-01 25 3.316e-03 2.690e-01 30 4.752e-04 3.602e-01 35 6.826e-05 4.373e-01 40 9.806e-06 4.816e-01 45 1.409e-06 5.392e-01 The value function equals $p L_1$ for $p \leq \alpha$, and $(1-p)L_0$ for $p \geq \beta$. Thus, the slopes of the two linear pieces of the value function are determined by $L_1$ and $-L_0$. The value function is smooth in the interior region in which the probability assigned to distribution $f_0$ is in the indecisive region $p \in (\alpha, \beta)$. The decision maker continues to sample until the probability that he attaches to model $f_0$ falls below $\alpha$ or rises above $\beta$. Now to have some fun, you can use the slider and watch the effects on the smoothness of the value function in the middle range as you increase the number of functions in the piecewise linear approximation. The slider lets you choose the cost parameters $L_0, L_1, c$, the parameters of the two beta distributions $f_0$ and $f_1$, and the number of points and linear functions $m$ to use in our piecewise linear approximation to the value function. It then draws a number of simulations from $f_0$, computes a distribution of waiting times to making a decision, and displays a histogram of correct and incorrect decisions. (Here the correct decision occurs when $p_k$ eventually exceeds $\beta$.)
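The beta density formula given above can be evaluated directly with the standard library's gamma function; this quick check (parameter values chosen only for illustration) confirms that $a = b = 1$ gives the uniform density and that the density integrates to one:

```python
from math import gamma

def beta_pdf(z, a, b):
    """Beta density f(z; a, b), computed straight from the formula above."""
    return gamma(a + b) * z**(a - 1) * (1 - z)**(b - 1) / (gamma(a) * gamma(b))

print(beta_pdf(0.3, 1, 1))   # a = b = 1 is the uniform density, so this is 1.0

# Crude midpoint-rule check that the density integrates to one
n = 1000
total = sum(beta_pdf((i + 0.5) / n, 2.5, 3) for i in range(n)) / n
print(round(total, 4))       # very close to 1
```

Normalizing such pointwise evaluations over a grid, as the code above does with `f0 / np.sum(f0)`, turns the continuous density into the finite-state distribution the model needs.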
col_slide = list(map(convert_rgb_hex, sb.color_palette("dark", 7))) col_num = list(map(convert_rgb_hex, sb.color_palette("hls", 7))) sliders = map(lambda a,b,c,d,e,f,g: widgets.FloatSlider(min=a, max=b, step=c, value=d, slider_color=e, color=f, description=g), [0.5, 5.0, 5.0, 1.0, 1.0, 1.0, 1.0], [2.5, 50.0, 50.0, 9.0, 9.0, 9.0, 9.0], [0.25, 2.5, 2.5, 0.5, 0.5, 0.5, 0.5], [1.25, 27.5, 27.5, 2.0, 2.5, 2.5, 2.0], col_num, col_num, ["c", "L0", "L1", "a0", "b0", "a1", "b1"]) cslide, L0slide, L1slide, a0slide, b0slide, a1slide, b1slide = list(sliders) mslide = widgets.IntSlider(min=15, max=251, step=2, value=133, description="m") interact(all_param_interact, c=cslide, L0=L0slide, L1=L1slide, a0=a0slide, b0=b0slide, a1=a1slide, b1=b1slide, m=mslide) For several reasons, it is useful to describe the theory underlying how $\alpha$ and $\beta$ characterize the cut-off rules that determine $n$ as a random variable. Laws of large numbers make no appearance in the sequential construction. In chapter 1 of Sequential Analysis, restricting what is unknown, Wald uses the following simple structure to illustrate the main ideas. A decision maker wants to decide which of two distributions $f_0$, $f_1$ governs an i.i.d. random variable $z$. The null hypothesis $H_0$ is the statement that $f_0$ governs the data. The alternative hypothesis $H_1$ is the statement that $f_1$ governs the data. The problem is to devise and analyze a test of the hypothesis $H_0$ against the alternative hypothesis $H_1$ on the basis of a sample of a fixed number $n$ of independent observations $z_1, z_2, \ldots, z_n$ of the random variable $z$. To quote Abraham Wald, Let's listen to Wald longer: Let's listen carefully to how Wald applies a law of large numbers to interpret $\alpha$ and $\beta$: The quantity $\alpha$ is called the size of the critical region, and the quantity $1-\beta$ is called the power of the critical region.
Wald notes that Wald summarizes Neyman and Pearson's setup as follows: Wald goes on to discuss Neyman and Pearson's concept of a uniformly most powerful test. Here is how Wald introduces the notion of a sequential test: on the basis of the first trial, one of three decisions is made (accept the hypothesis, reject the hypothesis, or make an additional observation). If the first or second decision is made, the process is terminated. If the third decision is made, a second trial is performed. Again, on the basis of the first two observations, one of the three decisions is made.
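The three-decision scheme Wald describes is exactly what the simulate method above implements: after each draw, update the belief by Bayes' law and stop once it leaves the continuation region. A stripped-down sketch (the two-point distributions and the cutoffs 0.1 and 0.9 here are made-up illustrations, not values from the notebook):

```python
def bayes_update(p, k, f0, f1):
    """Posterior probability of model 0 after observing outcome k."""
    return p * f0[k] / (p * f0[k] + (1 - p) * f1[k])

def sequential_test(draws, f0, f1, p0=0.5, alpha=0.1, beta=0.9):
    """Sample until the belief in model 0 exits (alpha, beta), as in Wald's test."""
    p = p0
    for t, k in enumerate(draws, start=1):
        p = bayes_update(p, k, f0, f1)
        if p <= alpha:
            return ("accept f1", p, t)
        if p >= beta:
            return ("accept f0", p, t)
    return ("undecided", p, len(draws))

# Outcome 0 is likelier under f0, outcome 1 under f1
f0, f1 = [0.8, 0.2], [0.2, 0.8]
print(sequential_test([0, 0], f0, f1))   # two f0-ish draws: accept f0 at t = 2
print(sequential_test([1, 1], f0, f1))   # two f1-ish draws: accept f1 at t = 2
```

The sample size at which the loop stops plays the role of Wald's random $n$: it depends on the particular draws, not on a rule fixed in advance.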
http://nbviewer.jupyter.org/github/QuantEcon/QuantEcon.notebooks/blob/master/Wald_Friedman.ipynb
Introduction to Python Multiprocessing Multiprocessing is the computing analogue of multitasking. Multitasking is the practice of handling several tasks at the same time efficiently; similarly, multiprocessing in Python is the ability to run more than one process at the same time. Just as a person who multitasks well gets more done, a Python program that uses multiprocessing can be far faster than one that does not. Python multiprocessing applies the idea of parallel processing, in which code from a single program is executed on the different cores of a computer in parallel. Multiprocessing in Python Python has a module named multiprocessing which helps us write parallel code, thus enabling parallel computing. The following classes in the multiprocessing module help us create a parallel program: - Process - Queue - Pool - Lock In parallel programming, code is run on different cores. We can find the number of cores in our system in the following way: Code: import multiprocessing print("The number of cores in the system is", multiprocessing.cpu_count()) Output: How can we create a parallel program using these classes? The sections below cover each one: 1.
Process Code: import numpy as np from multiprocessing import Process numbers = [2.1,7.5,5.9,4.5,3.5] def print_func(element=5): print('Square of the number : ', np.square(element)) if __name__ == "__main__": # confirmation that the code is under main function procs = [] proc = Process(target=print_func) # instantiating without any argument procs.append(proc) proc.start() # instantiating process with arguments for number in numbers: proc = Process(target=print_func, args=(number,)) procs.append(proc) proc.start() # complete the processes for proc in procs: proc.join() Output: The following are the two important functions of the Process class: - start(): The start() function begins executing the process object after it has been created. - join(): The join() function blocks until the process completes its execution. Calling join() ensures that the parent waits for each child to terminate, so the child's resources can be freed; otherwise cleanup would have to be handled manually. The args keyword is used to send arguments to the process. 2. Queue Code: from multiprocessing import Queue objects = ["John",34,6578.9,True] counter = 1 # We would instantiate a queue object queue = Queue() print('Pushing items to queue:') for o in objects: print('Object No: ', counter, ' ', o) queue.put(o) counter = counter + 1 print('\nPopping items from queue:') counter = 1 while not queue.empty(): print('Object No: ', counter, ' ', queue.get()) counter = counter + 1 Output: Python multiprocessing has a Queue class that helps store and retrieve data following the FIFO (First In, First Out) discipline. Queues are very useful for storing picklable Python objects and make it easy to share data among different processes, which helps in parallel programming. A Queue can be passed as a parameter to a Process target function to enable the process to consume (or produce) data during execution.
There are two main functions which help us store and fetch data to and from the Queue: - put: The put() function inserts data into the Queue. - get: The get() function retrieves data from the Queue. In a FIFO data structure, the element which is stored first is retrieved first. FIFO is analogous to a customer-service call queue: the customers are the elements, and the ones who call earliest wait the least before being connected to customer service personnel. 3. Pool Code: from multiprocessing import Pool import time work = (["1", 5], ["2", 2], ["3", 1], ["4", 3]) def work_log(work_data): print(" Process %s waiting %s seconds" % (work_data[0], work_data[1])) time.sleep(int(work_data[1])) print(" Process %s Finished." % work_data[0]) def pool_handler(): p = Pool(2) p.map(work_log, work) if __name__ == '__main__': pool_handler() Output: The Python multiprocessing Pool class helps in the parallel execution of a function across multiple input values. The declaration of the variable work specifies that tasks 1, 2, 3 and 4 shall wait for 5, 2, 1 and 3 seconds respectively. Because the Pool is created with two workers, at most two tasks run at a time; during execution the processes wait for the specified intervals, as is evident from the order of the print statements. 4. Lock The Python multiprocessing Lock class allows code to claim a lock so that no other process can execute the locked section of code until the lock is released. There are two important functions of Lock: - acquire: The acquire() function claims the lock - release: The release() function releases the lock Let us consolidate all the things that we have learned into a single example: Code: from multiprocessing import Lock, Process, Queue, current_process import time import queue def do_job(tasks_to_do, tasks_finished): while True: try: ''' get_nowait() raises a queue.Empty exception if the queue is empty; get(False) does the same.
''' task = tasks_to_do.get_nowait() except queue.Empty: break else: ''' if no exception is raised, add the task completion message to tasks_finished queue ''' print(task) tasks_finished.put(task + ' is finished by ' + current_process().name) time.sleep(0.25) return True def main(): number_of_task = 6 number_of_processes = 2 tasks_to_do = Queue() tasks_finished = Queue() processes = [] for i in range(number_of_task): tasks_to_do.put("Task no " + str(i)) # creating processes for w in range(number_of_processes): p = Process(target=do_job, args=(tasks_to_do, tasks_finished)) processes.append(p) p.start() # completing process for p in processes: p.join() # print the output while not tasks_finished.empty(): print(tasks_finished.get()) return True if __name__ == '__main__': main() Output: Conclusion It is time to draw this article to a close, as we have covered the basic concepts of multiprocessing in Python. The next time you write a large, complex program, remember to apply these multiprocessing concepts to take full advantage of your hardware.
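One gap worth noting: the consolidated example imports Lock but never actually uses it. Here is a minimal sketch of acquire() and release() on their own; the non-blocking acquire shows that a held lock cannot be claimed a second time:

```python
from multiprocessing import Lock

lock = Lock()

lock.acquire()                       # claim the lock
held = lock.acquire(block=False)     # a second, non-blocking claim fails...
print(held)                          # False
lock.release()                       # ...until the holder releases it
free = lock.acquire(block=False)     # now the claim succeeds
print(free)                          # True
lock.release()
```

In real code the usual idiom is `with lock:` around the critical section, which acquires on entry and releases on exit even if an exception is raised.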
https://www.educba.com/python-multiprocessing/
Windows Phone From Scratch In our Full Stack program we need to take a snapshot and persist it to isolated storage, for retrieval at a later time. This posed an interesting question: how do you put an image into a serializable form, yet reconstitute it to be the source for the Image control? Let's simplify the problem and strip it down to its essentials. In this demonstration program we will interact with the UI shown in the figure. [Click on the image to see full size] The following controls are shown: - A TextBlock acting as a prompt - A TextBox to gather the user's name - A Picture button to bring up the camera - A Save button to save the user and the picture - A Find button to locate the picture of the person whose name is in the text box The Save and Find buttons start out disabled, and are enabled only when there is text in the text box. When you click Save, a Person object is created and the person's name and picture are stored to a dictionary (the same state they need to be in for persisting in isolated storage). When you click Find, if the person exists in the dictionary of people, then the person and picture are retrieved. The Person object represents the name as a string and the image as a byte array: public class Person { public string FullName { get; set; } public byte[] Thumbnail { get; set; } } We begin the code section by creating a camera capture task, a dictionary to "persist" the person objects and a byte array to hold the thumbnail pending persistence: private readonly CameraCaptureTask camera = new CameraCaptureTask(); private readonly Dictionary<string, Person> people = new Dictionary<string, Person>(); private byte[] thumbnail; In the constructor we establish the event handlers for the buttons and the event handler for the camera's asynchronous callback (called when the chooser returns).
public MainPage() { InitializeComponent(); camera.Completed += camera_Completed ; TakePicture.Click += ( o, e ) => { camera.Show(); }; FullName.TextChanged += ( o, e ) => { Find.IsEnabled = Save.IsEnabled = String.IsNullOrEmpty( FullName.Text ) ? false : true; }; Save.Click += Save_Click; Find.Click += Find_Click; } Line 4 sets up the callback for the camera task. Line 5 is the event handler for the Picture button; it simply invokes the camera task. Lines 6-10 are the event handler for the TextChanged event in the text box, enabling and disabling our two buttons depending on whether or not there is text in the box. Lines 11 and 12 set up the event handlers for the buttons. Camera Completed The Camera Completed task is responsible for retrieving the picture taken and displaying it to the user. We also take that opportunity to stash the picture into the member variable thumbnail for storage later when we have a Person object to store. private void camera_Completed( object sender, PhotoResult e ) { var _imageBytes = new byte[e.ChosenPhoto.Length]; e.ChosenPhoto.Read( _imageBytes, 0, _imageBytes.Length ); e.ChosenPhoto.Seek( 0, SeekOrigin.Begin ); Picture.Source = PictureDecoder.DecodeJpeg( e.ChosenPhoto ); thumbnail = _imageBytes; } The process of making the image ready for display is to create a byte array of the size of the image, as shown on line 3. On line 4 we read the correct number of bytes into that byte array, and on line 5 we seek back to the start of the stream so that DecodeJpeg can read it from the beginning. We then assign to the Picture (image control) Source property the image created by calling the static method DecodeJpeg on PictureDecoder, passing in the photo as retrieved from the task. When the Save button is clicked our job is to create a new Person object, populate it with the text from the TextBox and the byte array we stored in the member variable thumbnail, and then store the person into the Dictionary.
void Save_Click( object sender, RoutedEventArgs e ) { Person p = new Person() { FullName = FullName.Text, Thumbnail = thumbnail }; Picture.Source = null; FullName.Text = ""; people.Add(p.FullName,p); } While we are at it, we set the Picture (the image control) Source property to null, removing the picture from display, and blank the text box – good feedback that the person has been saved. When the Find button is clicked we check to see if the name in the TextBox matches a name in the dictionary. If not, we show a MessageBox that the name was not found. If so, however, we retrieve that person object from the dictionary. void Find_Click( object sender, RoutedEventArgs e ) { Picture.Source = null; if ( ! people.Keys.Contains(FullName.Text) ) { MessageBox.Show(FullName.Text + " not found."); return; } var per = people[FullName.Text]; if (per.Thumbnail == null) { Picture.Source = null; return; } byte[] data = per.Thumbnail; using (Stream memStream = new MemoryStream( data )) { WriteableBitmap wbimg = PictureDecoder.DecodeJpeg( memStream ); Picture.Source = wbimg; } } If the Thumbnail property is null, there is no picture to display and we set the Source property of the Image control to null and return. Otherwise, we create a local copy of the byte array held by the Person object, which we pass into a new MemoryStream. That memory stream is used to create a WriteableBitmap which is the source for displaying the image.
http://jesseliberty.com/2011/05/21/persisting-an-image/
Uniform Function Call Syntax in D On the one hand, UFCS is nice syntactic sugar for those who hate free function interfaces (a group to which I do not belong). But it's more than that. It's also an easy way to extend the functionality of existing types, while maintaining the appearance that the new functionality actually belongs to the type. Here's how it works. Given a free function that accepts at least one parameter, the function can be called on the first argument using dot notation as if it were a method of that type. Some code will make it clear. import std.stdio; void print(int i) { writeln(i); } void main() { int i = 10; i.print(); 8.print(); } Notice that it works on both variables and literals (see the output over on DPaste, where you can compile and run D code online). For a long time, I was rather ambivalent about UFCS. I didn't see the need. After all, I have no problem with free functions. Then I found a situation where it's a perfect fit. I'm using SDL2 in one of the many projects I've managed to overload myself with. The SDL rendering interface has several methods accepting SDL_Rect objects as parameters. While implementing a simple GUI, I wanted to maintain bounds information using a rect object. But I also need functionality SDL_Rect doesn't provide out of the box, like routines to determine the intersection of two rects, or if a rect contains a point. And despite not having a beef with free functions, it really does make a difference in the appearance of code when you have a bunch of free function calls mixed in with object method calls. So I started implementing my own Rect type, giving it an opCast method to easily pass it anywhere an SDL_Rect is expected. Then I realized how silly that is when I've got UFCS. So I scrapped my Rect struct and reimplemented the methods as free functions taking an SDL_Rect as the first parameter. And now I can do things like this. SDL_Rect rect = SDL_Rect(0, 0, 100, 100); if(rect.contains(10, 10)) ... 
auto irect = rect.intersect(rect2); And so on. I also had need of a Point type, which SDL doesn't have. But it was ugly mixing 'Point' and 'SDL_Rect', so I aliased the SDL_ bit away and it's now just 'Rect'. With the combination of aliasing and UFCS, it's possible to hide implementation details without using a full-on wrapper to do so. Of course, it's not entirely hidden as the SDL_Rect is still directly accessible and you can still use the type by name. But it certainly can come in handy.
https://www.gamedev.net/blogs/blog/1140-d-bits/?page=1&sortby=entry_views&sortdirection=desc
Lookup failed in SessionBean I get an exception in the method call "Object objref = ctx.lookup("ejb/test/MyTestSessionBean");" because NetBeans can't find "ejb/test/MyTestSessionBean".
http://www.roseindia.net/tutorialhelp/comment/28556
CC-MAIN-2015-06
refinedweb
735
58.69
1. Introduction

In the previous articles you saw a SingleCall remote object and a Singleton remote object. In this article I will show you the usage of generics in a remote object, how the server registers it, and how the client consumes it. For the previous articles, from the web site's home page select the remoting section from the side bar and navigate; you will see other good articles on this topic from other authors as well. Let us first begin with the server. I suggest that you first read the basic article here; it will be easy to understand this article once you know the basics.

Search Tags: Search the below tags in the downloaded application to know the sequence of code changes.

//Server 0
//Client 0

2. The Generic Interface

Start a Visual C# console project called GenRemSrv. Once the project is started, add a generic interface. Our remote object will implement this interface. Below is the code:

//Server 001: Generic interface whose AddData method takes a generic
//type; GetData returns the collected data as a string
public interface IGenericIface<T>
{
    void AddData(T Data);
    string GetData();
}

Note the usage of the letter T. It indicates that the function accepts any data type. In our example we are going to use this interface for the int as well as the string data type.

3. Remote Class using the Generic Interface

Add a new class to the GenRemSrv project and name it InputKeeper<T>. Here, once again, T stands for some data type. Also note how the generic interface is inherited here by specifying the T substitution. Below is the code:

//Server 002: Public class that derives from MarshalByRefObject and
//implements the generic interface for both int and string
public class InputKeeper<T> : MarshalByRefObject, IGenericIface<int>, IGenericIface<string>

Next, the constructor and the variables required for this are coded.
Below is the code for that:

//Server 003: Variable declaration
int CollectedInt;
string CollectedString;

//Server 004: Constructor
public InputKeeper()
{
    CollectedInt = 0;
    CollectedString = "";
    System.Console.WriteLine("Input Keeper Constructed");
}

Finally the interface functions are implemented as shown below (a reconstructed sketch of the listing, following the description in the next paragraph):

//Server 005: Interface implementation for both data types
public void AddData(int Data)
{
    CollectedInt = CollectedInt + Data;
}

public void AddData(string Data)
{
    CollectedString = CollectedString + Data;
}

string IGenericIface<int>.GetData()
{
    return CollectedInt.ToString();
}

string IGenericIface<string>.GetData()
{
    return CollectedString;
}

In the above code, note that the AddData function is implemented twice: once using the int data type and again using the string data type. As we derived the class from the generic interface closed over both data types (IGenericIface<int>, IGenericIface<string>), it becomes necessary to implement the generic interface function void AddData(T Data); twice by substituting the required data types.

4. Hosting the remote objects

I hope you read my first article; I am not going to explain everything which I already explained in the article here. As our remote object is itself a generic type (InputKeeper<T>), we need to register the object after resolving the type T. In our case we are using two different types, integer and string, so we need two registrations. The code below registers InputKeeper for both data types on the TCP channel identified by the port 14750 (a reconstructed sketch; the endpoint names "IntKeeper" and "StringKeeper" are assumptions):

//Server 006: Register the generic object once per resolved type
TcpChannel channel = new TcpChannel(14750);
ChannelServices.RegisterChannel(channel, false);
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(InputKeeper<int>), "IntKeeper", WellKnownObjectMode.Singleton);
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(InputKeeper<string>), "StringKeeper", WellKnownObjectMode.Singleton);
System.Console.ReadLine();

5. The client application

Add a new Visual C# Windows application and name the project Generic User. Use File > Add > New Project without closing the server project, so that both projects are available under one solution. The form design is shown below:

The first send button will contact the generic remote object for integer, and the second send button will contact the generic remote object for string. The data collected will be displayed in the multi-select list box. Also note that we have two generic objects in the server's remote pool and they are independent. Enter some integer in the left group box and click the send button, then type some other integer and click the send button again. Do the same for the string also.
This is just for the test: data persists on each object in its own context because we registered the objects as singletons. Provide the references for the Remoting runtime and the server project, as we already did in our first .NET remoting project; the reference to that article is given in the introduction section.

1) Include the following namespaces in the form code:

//Client 001: Namespace inclusion
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using GenRemSrv;

2) The client declares generic interfaces, one for integer and one for string. The integer interface is used by the left send button click and the right send button uses the string interface.

//Client 002: Object declaration using the generic types
private IGenericIface<int> intObject;
private IGenericIface<string> stringObject;

3) In the Form Load event handler, after registering the communication channel, the remote objects (proxies) are retrieved and stored in the interface members declared in the previous step (a reconstructed sketch; the URIs are assumptions that must match the server's registration):

//Client 003: Retrieve the proxies for both generic remote objects
ChannelServices.RegisterChannel(new TcpChannel(), false);
intObject = (IGenericIface<int>)Activator.GetObject(
    typeof(IGenericIface<int>), "tcp://localhost:14750/IntKeeper");
stringObject = (IGenericIface<string>)Activator.GetObject(
    typeof(IGenericIface<string>), "tcp://localhost:14750/StringKeeper");

4) The left and right send buttons each make a call to the relevant remote (generic) object. Remember, the functions exposed by our remote generic interface are AddData and GetData. AddData for integer simply sums the supplied integers, and AddData for string appends the given string input. Note that for simplicity I collected the data in string and integer variables on the server; collecting the input in a variable of type T would be the proper implementation. However, I hope this explains how to use generic remote objects in a remoting environment. The code below is a reconstructed sketch (the control names are assumptions):

//Client 004: Send button handlers
private void btnSendInt_Click(object sender, EventArgs e)
{
    intObject.AddData(int.Parse(txtInt.Text));
    lstData.Items.Add(intObject.GetData());
}

private void btnSendString_Click(object sender, EventArgs e)
{
    stringObject.AddData(txtString.Text);
    lstData.Items.Add(stringObject.GetData());
}

Running the application is shown below:

Note: The attached solution was created in VS2005. If you have a later version, say yes to the conversion UI that is displayed.
.NET Remoting - Generic Remote Objects and Generic Interfaces
http://www.c-sharpcorner.com/uploadfile/6897bc/net-remoting-generic-remote-objects-and-generic-interfaces/
crawl-003
refinedweb
955
55.24
traceparser() Process trace data Synopsis: #include <sys/traceparser.h> extern int traceparser ( struct traceparser_state * stateptr, void * userdata, const char * filename ); Arguments: - stateptr - A pointer to the parser's state information, obtained by calling traceparser_init() . - userdata - A pointer to arbitrary user data to pass to any event-processing callback that doesn't have its own user data. - filename - The name of the trace file that you want to parse. You can create this file by using tracelogger , TraceEvent() , or some combination of the two. Library: libtraceparser Use the -l traceparser option to qcc to link against this library. Description: The traceparser() function starts the parsing of the trace data in filename. You'll use this function if you're creating your own utility for parsing trace data (as an alternative to traceprinter ). Before calling this function, you must have called: - traceparser_init() to initialize the parser - traceparser_cs() , traceparser_cs_range() , or both to set up callbacks - traceparser_debug() to set the debugging mode (optional) When you've finished parsing the data, call traceparser_destroy() to destroy the parser. Returns: - 0 - Success. - -1 - Failure; errno is set. See also traceparser_get_info() for further details.
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/t/traceparser.html
CC-MAIN-2020-34
refinedweb
186
56.45
I am trying to convert an array into a percent-change array. It is simple, but I do not know why I am getting a zero division error. I tried putting from __future__ import division at the top of my file, but no dice.

My code:

def convert(anarr):
    x = 1
    while(x < len(anarr)):
        anarr[x] = (anarr[1] - anarr[x])/anarr[1]
        x += 1
    print anarr

main:

>>> myarr = [20130101.0,34.75,34.66,34.6,34.6,34.61,34.65,34.69]
>>> convert(myarr)
Traceback (most recent call last):
  File "<pyshell#10>", line 1, in <module>
    convert(myarr)
  File "C:\Users\viral\Desktop\python\mapping.py", line 38, in convert
    anarr[x] = (anarr[1] - anarr[x])/anarr[1]
ZeroDivisionError: float division by zero

Since you are modifying the array as you go, the baseline itself gets destroyed: on the first pass (x = 1) the line sets anarr[1] = (anarr[1] - anarr[1]) / anarr[1], which is 0.0, so every later iteration divides by anarr[1] == 0 and raises the error. Put the changes into a new array (or save the baseline in a separate variable) instead of overwriting anarr in place.
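A corrected convert, sketched with the question's data, builds a new list so the baseline at index 1 is never overwritten (slot 0 is kept as the date, as in the original array):

```python
def convert(anarr):
    # anarr[0] is the date; anarr[1] is the baseline value.
    # Build a new list instead of mutating anarr, so the baseline
    # used in the division is never overwritten.
    base = anarr[1]
    return [anarr[0]] + [(base - v) / base for v in anarr[1:]]

myarr = [20130101.0, 34.75, 34.66, 34.6, 34.6, 34.61, 34.65, 34.69]
result = convert(myarr)
```

The first data value maps to 0.0 and every later value to its fractional change from the baseline, with no division by zero.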
https://www.codesd.com/item/python-zero-division-error-float-division-by-zero-simple-script.html
CC-MAIN-2021-04
refinedweb
174
56.15
A lot of open source programs come with configure scripts these days. One purpose of such a script is to automate the guesswork of targeting a new system. In times of yore, programs came with a Makefile that had half a dozen different sets of compiler flags or options, all but one of which were commented out, and a note saying "select the appropriate flags for your system." For more comprehensive sets of configuration options, there might also have been a large C header called config.h containing a few dozen flags to set, depending on host system variables.

The first approach was simply to have #ifdefs in code for the two systems supported; for instance, BSD and System V. As the number of Unix variants increased, it became more practical to have #ifdefs for each feature.

Per-system code produced this:

Listing 1. Per-system code

#if defined(SUNOS4) || defined(NEXT) || defined(NETBSD) || defined(FREEBSD) || defined(OPENBSD)
#include <string.h>
#else
#include <strings.h>
#endif

Per-feature code produced this:

Listing 2. Per-feature code

#ifdef HAS_STRING_H
#include <string.h>
#else
#include <strings.h>
#endif

The second was easier to adapt to a new system, but required a great deal of work from the developer. Now, with dozens of potential target systems, developers benefit a lot from using the second method, except they can now build the configuration header file automatically. One way to do this is by using GNU autoconf code to build a configure script. This script does the necessary tests and creates a configuration header file with the right values. Another function of such a script is to set up predefined variables in a consistent way. One of the persistent problems with hand-editing flags was modifying the Makefile (for instance to install it in /usr/gnu instead of /usr/local) and then forgetting to modify the corresponding values in header files. Of course, this resulted in the compiled program not knowing where to find its own data files.
One of the benefits of a configure script (if the maintainer has done things right) is that it automatically creates a consistent installation. Developers, please note that another benefit of a good configure script is that it should allow users to specify things like a preference for /usr/gnu over /usr/local. Finally, a configure script can do a lot of the work of guessing which optional packages are installed, or which requirements are missing. For instance, a program designed to work with the X Window System may well want to know where X has been installed, or even whether X has been installed. How is this all possible?

Compile and try again

A great deal of what configure does happens by a simple mechanism. To see this yourself, design a small test program that will compile if and only if the desired condition is true. Save it in a temporary file and try to compile it. For instance, imagine that you wish to know whether or not the X Windowing System has been installed in the path /usr/X11R6/. One way to do it would be to make a test program like this:

#include <X11/X.h>
int main(void) {
    return 0;
}

Now, if you tell the compiler to try to compile this, it will succeed if and only if <X11/X.h> is in the compiler's include path. So, for each directory you think X might be installed in, you try compiling the program with (directory)/include in the compiler's include path. If you get a value that allows the sample file to compile, then you've got the right include path.

Note that there are predefined tests for all sorts of things in autoconf. Whenever possible, use these tests instead of writing your own. This has multiple benefits. First, new versions of autoconf may improve these tests and fix bugs in them that you'd otherwise have to fix yourself. Second, it saves you time. Of course, better still is avoiding a test entirely.
If you can reasonably conclude that a test is unnecessary (for instance, even machines with bytes larger than 8 bits are still required to have sizeof(char) be 1), you can omit the test entirely.

Some tests are functional tests; it's not enough to know that a function called memcmp() is provided, it has to actually have the correct semantics. Often, these are tests for very obscure bugs that have only ever been noted on one or two platforms. These tests will actually run the test program and check its output. Test programs generally use the standard Unix conventions: they return 0 in the event of a successful run and some non-zero value in the event of failure.

Once you have enough of these, you can use them to automatically determine necessary compiler flags and definitions to put in a header file somewhere. Often, configure scripts will allow users to override some or all of the guesswork and provide known-good answers. Look especially at a case like, say, a system with a broken memmove(). If you don't know that it has a bug that affects only a few programs, you may build a program and put it in production without knowing that you'll see occasional catastrophic failures.

In many cases, the net result of a long and complicated configure script is this: the target system provides every standard feature used by this program, and they all work correctly. Why not just set the flags by hand, in such cases? This is often quite reasonable for developers, but it is less reasonable for many users. Users may not be aware that their Linux distribution happens to have a given bug. They may not be clear on what packages are installed, or where. The script helps the people who need the most help do the most common thing. The extra work it can produce when the script goes wrong is probably a fair price to pay.

What goes wrong?

Now that you've got a basic idea of what configure does, you can start learning what goes wrong. There are two ways configure can fail.
One is when configure is correct, and your system really does lack a prerequisite. Most often, this will be correctly diagnosed by the configure script. A more disturbing case is when configure is incorrect. This can result either in failing to produce a configuration, or producing an incorrect configuration.

When configure guesses right, and you lack a prerequisite, all you have to do is obtain the missing prerequisite. Once you've found and installed it, re-run the configure script that was complaining about the missing prerequisite, and all should be well. (Be sure to remove the file config.cache, which contains cached results from previous tests; you want configure to start over from the top.)

If you're developing a configure script, make sure you give a meaningful error message. If you're testing for a function that is part of a popular add-on package, don't tell the user the name of the missing function -- tell the user the name of the package they need. Make sure to put prerequisite information in the README file. And please, please, tell people what versions of other packages you tested with.

Of course, even after the prerequisite has been installed, the config script might not find the newly installed program. In that case, you're back to what to do when configure guesses wrong.

Read the documentation

Almost always, the first thing you should try when configure fails is to run configure -h and check the list of arguments. If it's not finding a library you're sure is installed, there may be an option to let you specify an alternative location for that library. You may also be able to disable or enable certain features. For instance, the configure script used for Angband (a Roguelike game) has an optional flag, --enable-gtk, to tell it to build with GTK support. Without this flag, it won't even try.
You may have to set up some fairly elaborate variables for a configure script if your system is strangely configured, and you will almost certainly have to do something pretty weird if you're cross-compiling. A lot of problems can be solved by specifying a value for CC, the variable configure uses for the C compiler. If you specify a compiler, configure will use that one, rather than trying to guess which one to use. Note that you can specify options and flags on the command line this way. For instance, if you want to compile with debugging symbols, try this:

CC="gcc -g -O1" ./configure

(This assumes you're using a sh-family shell; in csh, use setenv to set the environment variable CC.)

Reading config.log

When the configure script runs, it creates a file called config.log, which contains a log of tests run and the results it encounters. For instance, a typical stretch of config.log might look like this:

Listing 3. Typical contents of config.log

configure:2826: checking for getpwnam in -lsun
configure:2853: gcc -o conftest -g -O2 -fno-strength-reduce conftest.c -lsun >&5
ld: cannot find -lsun
configure:2856: $? = 1
configure: failed program was:
(a listing of the test program follows)

If I were on a system where -lsun ought to provide getpwnam(), I'd have been able to look at the exact command line used to check for it, and the test program used. Debugging these a bit would then give me enough information to tweak the configure script. Note the helpful line numbers; this test starts on line 2,826 of the configure script. (If you're a shell programmer, you may enjoy reading the section of the configure script that arranges to print line numbers; in shells that don't automatically expand $LINENO to a reasonable value, the script makes a copy of itself using sed, with the line numbers filled in!) Reading the log file is the best starting point for understanding a test that failed or produced surprising results.
Note that sometimes the test that fails is not actually important and is only being run because a previous test failed. For instance, configure may abort because it can't find an obscure library you've never heard of, which it is only trying to find because a test program for a feature in your standard C library failed. In such cases, fixing the earlier problem will eliminate the second test entirely.

Buggy test programs

There are a few other ways in which configure can occasionally guess wrong. One is when the test program isn't correctly designed and may fail to compile on some systems. As an example, consider the following proposed test program for the availability of the strcmp() function:

Listing 4. Test program for availability of strcmp()

extern int strcmp();
int main(void) {
    strcmp();
}

This program is written to avoid using the <string.h> header. The intent is that if strcmp() is present in the library, the program will compile and link correctly; if it isn't, the linker will be unable to resolve the reference to strcmp(), and the program will fail to compile.

On one version of the UnixWare compiler, references to strcmp() were translated automatically into an Intel processor's native string compare instruction. This was done by simple substitution of the arguments passed to strcmp() into a line of assembly code. Unfortunately, the sample program called strcmp() with no arguments, so the resulting assembly code was invalid and the compile failed. In fact, you could indeed use strcmp() on that system, but the test program incorrectly thought it was missing.

Buggy tests are rare on the mainstream platforms autoconf is targeted at (notably Linux varieties, but also the major Unix distributions), and are most often a result of running a test on a compiler or platform that isn't widely tested. For instance, gcc on UnixWare didn't trigger the above bug; only the compiler that came with the system's native development package did.
Often, the easiest thing to do is simply to comment out the relevant test in configure, and set the variable in question directly.

Compiler not really working

A particularly pernicious variation occurs when the compiler flags selected in the early phases of configure are able to link executables, but the resulting executables won't run. This can cause tests to fail gratuitously. For instance, if the linker command you're using is wrong, you might get programs that link correctly but don't run. As of this writing, a configure script can fail to spot this, so only those tests that require the target program to be run will report failure. This can be pretty surprising to debug, but the config.log file will make clear what went wrong. For instance, on one test system, I got this output:

Listing 5. Test config.log output

configure:5644: checking for working memcmp
configure:5689: gcc -o conftest -g -O2 -fno-strength-reduce -I/usr/X11R6/include -L/usr/X11R6/lib conftest.c -lXaw -lXext -lSM -lICE -lXmu -lXt -lX11 -lcurses >&5
configure:5692: $? = 0
configure:5694: ./conftest
Shared object "libXaw.so.7" not found
configure:5697: $? = 1
configure: program exited with status 1

What really went wrong is that the compiler needed a separate flag to tell it that /usr/X11R6/lib needed to be in the list of directories to search at runtime for dynamically-linked libraries. However, this was the first test that actually ran the compiled test program instead of stopping once the program was compiled successfully. This is a pretty subtle problem. The solution, on this system, was to add -Wl,-R/usr/X11R6/lib to the CFLAGS variable. The command line:

$ CFLAGS="-Wl,-R/usr/X11R6/lib" ./configure

allowed configure to run this test correctly. This is especially pernicious for cross-compiling, since you probably can't run an executable actually created with the cross compiler.
More recent versions of autoconf try very hard to avoid tests that require the test program to actually get executed.

Finding missing libraries and includes

Another common problem you're likely to find with configure scripts is the case where a given package is installed in an unlikely place, and configure can't find it. A good configure script will often allow you to specify the locations of files that are necessary, but that may be installed in unusual locations. For instance, many configure scripts provide a standard way to tell the script where to look for X libraries:

Listing 6. Finding X libraries

X features:
  --x-includes=DIR    X include files are in DIR
  --x-libraries=DIR   X library files are in DIR

If that doesn't work, you can always try the sheer brute force method: specify the necessary compiler flags as part of your CC environment variable, or as part of the CFLAGS environment variable.

Miscellaneous tricks

One workaround, if the developer has provided the configure.in file that autoconf uses to generate the configure script, is to run the newest version of autoconf. It may just work, but even if it doesn't work perfectly, it may well resolve a few problems. The goal here is to update the specific tests used; it may be simply a bug in an older version of configure that is causing the problem. With that in mind, if you're the developer in this equation, be sure to distribute the configure.in file you used.

If you're doing a lot of iteration on tweaking the arguments you pass to configure, and your shell's command-line editing isn't good enough for you, make a wrapper script that calls configure with appropriate arguments. After a bit of tweaking and a few failed tests worked around, your script might look like this:

Listing 7.
Wrapper script

./configure --with-package=/path/to/package \
    --enable-widget \
    --disable-gizmo \
    --with-x=29 \
    --with-blah-blah-blah CFLAGS="-O1 -g -mcpu=i686 -L/usr/unlikely/lib \
    -I/usr/unlikely/include -Wl,-R/usr/unlikely/lib"

Having the script in one place is a lot more convenient than typing something like that on the command line over and over -- and you can refer to it later, or mail someone a copy of it.

Developing robust configure scripts

An ounce of prevention is worth a pound of cure. The best way to make a configure script work is to make sure that, when you're generating one, you do the best you can to make it unlikely to fail. The most important thing when trying to build a robust configure script is simple: never, ever, test for anything if you don't really need to. Do not test for sizeof(char); since the sizeof operator in C returns the number of char-sized objects used to hold something, sizeof(char) is always 1 (even for machines where char is more than 8 bits). In most cases, there is no reason to test for access to functions that have been part of ANSI/ISO C since the 1989 version of the standard came out, or for availability of the standard C headers. Worse still are tests for non-standard features when a standard one exists. Don't test for the availability of <malloc.h>; you don't need it. If you want malloc(), use <stdlib.h>. In many cases, simply removing a dependency on an obscure feature is more reliable than testing extensively to figure out which one to use. Portable programs are not as hard to write as they were ten years ago.

Finally, try to make sure you're using the most current version of autoconf. Bugs get fixed pretty aggressively; it's very likely that an older version of autoconf will have bugs that have been removed in newer versions.

Resources

- For more information, see the autoconf home page.
- There are very detailed resources available for Creating Automatic Configuration Scripts.
- Learn more than you ever knew you wanted to about GNU Make.
- Learn more about cross-compiling from Christian Berger.
- Erik Welsh's Bookmarks for Cross Compiling is a good place to start to find more specific information.
- Learn how to convert an existing C program or module into a cross-platform shared library in this IBM developerWorks tutorial.
- Trying to write portable code? Follow the 10 Commandments for C Programmers.
- Red Hat has a section in Autobook on Writing Portable C with GNU autotools.
- Speaking of Red Hat, learn how to make an even more convenient installer with the IBM developerWorks article "Packaging software with RPM."
- If you are new to Linux, learn about installing software in "Part 9 of the Windows-to-Linux roadmap."
- You'll find rules and download instructions for Angband, the game Peter mentions in this article, at thangorodrim.net. Angband is a Roguelike game.
http://www.ibm.com/developerworks/library/l-debcon/index.html
CC-MAIN-2014-42
refinedweb
3,156
62.07
StealJS 0.10.0 just landed with a new feature that could change the way you develop: live-reload. Live-reload is an extension for Steal that speeds up development by eliminating the need to ever refresh your browser. It does this by intelligently reloading modules that become stale as you change your code. This technique is also known as "hot swapping" of modules. Steal doesn't refresh the page, but only re-imports modules that are marked as dirty. The result is a blazing fast development experience. See live-reload in action:

In this post I'll explain how you can add live-reload to your development workflow.

Setup

Let's start by installing the latest versions of steal and steal-tools. To do so you'll need an npm project:

npm init # Specify main.js as your "main"
npm install steal-tools -g
npm install steal --save-dev

We'll use CanJS to set up a Hello World but you can use any framework with live-reload:

npm install can --save

Next, we're going to create a small application that demonstrates rendering HTML and responding to reload events by re-rendering. We'll create: an html file that loads steal and your application, a main module that renders a template, and a template that says "Hello world". Your folder should look something like:

node_modules/
  steal/
  jquery/
  can/
index.html
main.js
hello.stache

index.html

<div id="app"></div>
<script src="node_modules/steal/steal.js"></script>

main.js

import $ from "jquery";
import helloTemplate from "./hello.stache!";

function render() {
  $("#app").html(helloTemplate({ name: "world" }));
}

render();

hello.stache

<div>Hello {{name}}!</div>

Open index.html in your browser and you should see Hello world!. Cool, now that you've gotten a skeleton app let's wire together live-reload for instant editing.

Configuration

Back in your package.json add a system.configDependencies section and add live-reload.
{
  "system": {
    "configDependencies": [
      "live-reload"
    ]
  }
}

This will import live-reload before your main loads, and set up hot-swapping of modules. In order to utilize live-reload we want to re-render after each reload cycle. A reload cycle is any time Steal tears down stale modules and re-imports fresh versions. How to do this varies depending on the framework you're using. For this simple example we're just going to replace our #app element's html by rendering our template. To do this we need to import live-reload in our main and call the render() function after reload cycles. Change your main.js to look like:

main.js v2

import $ from "jquery";
import helloTemplate from "./hello.stache!";
import reload from "live-reload";

function render() {
  $("#app").html(helloTemplate({ name: "world" }));
}

render();

// Re-render on reloads
reload(render);

Notice that on reloads we are simply calling render(). You can perform more advanced transformations, such as only responding when certain modules are reloaded, and you can define a function to tear down side effects when your module changes. All of this is defined in the live-reload docs.

Start using live-reload

Now that our app is configured to be live-reloadable we need to start a local server that will notify the client on module changes. StealTools comes with this. You can start it with:

steal-tools live-reload

within your project folder. After a second or so you'll see a message that says something like:

Live-reload server listening on port 8012

Now reopen your browser and refresh index.html. You'll see a message in your console that a connection was made. You're all set! Make any changes to main.js or hello.stache and they should reflect in the browser almost instantly. Each time a message will be logged in your console letting you know which module was reloaded. I'm personally very excited to start using live-reload day-to-day. I think it's going to speed up the development code/debug cycle tremendously.
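Under the hood, the reload(render) pattern is just a callback registry that each reload cycle drains. Here is a framework-free sketch of that idea; it is a hypothetical stand-in for illustration, not live-reload's actual implementation:

```javascript
// Hypothetical stand-in for the reload-callback pattern: modules register
// render functions, and each reload cycle invokes every registered one.
function createReloader() {
    var callbacks = [];
    return {
        onReload: function (fn) { callbacks.push(fn); },
        triggerReload: function () { callbacks.forEach(function (fn) { fn(); }); }
    };
}

var reloader = createReloader();
var renders = 0;
reloader.onReload(function () { renders += 1; }); // stands in for reload(render)
reloader.triggerReload(); // simulate one reload cycle
```

Each simulated cycle re-runs every registered render function, which is exactly the contract the article relies on: re-render after each reload.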
https://www.bitovi.com/blog/hot-module-replacement-comes-to-stealjs
CC-MAIN-2017-30
refinedweb
662
67.96
My CSV File

[sourcecode language="text"]
ZIPCODE, CITY, STATECODE, STATENAME
02111, BOSTON, MA, MASSACHUSETTS
02481, WELLESLEY HILLS, MA, MASSACHUSETTS
05819, ST. JOHNSBURY, VT, VERMONT
etc...
[/sourcecode]

My Django “ZipCode” model:

[sourcecode language="python"]
import datetime

from django.db import models


class ZipCode(models.Model):
    zipcode = models.CharField(max_length=5)
    city = models.CharField(max_length=64)
    statecode = models.CharField(max_length=2)
    statename = models.CharField(max_length=32)
    create_date = models.DateTimeField(default=datetime.datetime.now)

    def __unicode__(self):
        return "%s, %s (%s)" % (self.city, self.statecode, self.zipcode)

    class Meta:
        ordering = ['zipcode']
[/sourcecode]

My “load_data.py” Python script:

[sourcecode language="python"]
# Full path and name to your csv file
csv_filepathname = "/home/mitch/projects/wantbox.com/wantbox/zips/data/zipcodes.csv"
# Full path to your django project directory
your_djangoproject_home = "/home/mitch/projects/wantbox.com/wantbox/"

import sys, os
sys.path.append(your_djangoproject_home)
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

from zips.models import ZipCode

import csv
dataReader = csv.reader(open(csv_filepathname), delimiter=',', quotechar='"')

for row in dataReader:
    if row[0] != 'ZIPCODE':  # Ignore the header row, import everything else
        zipcode = ZipCode()
        zipcode.zipcode = row[0]
        zipcode.city = row[1]
        zipcode.statecode = row[2]
        zipcode.statename = row[3]
        zipcode.save()
[/sourcecode]

Importing a TSV File (tab separated values) into a Django Model

This script will work for importing Excel TSV files into Django as well. Simply change the python script’s “dataReader” line to this:

[sourcecode language="python"]
dataReader = csv.reader(open(csv_filepathname), delimiter='\t', quotechar='"')
[/sourcecode]

About Wantbox

Wantbox is a consumer information website where users publish the things they want and other users supply purchasing recommendations, reviews and typical costs. The service covers a wide variety of consumer and business products and services.
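The CSV-versus-TSV switch described above is entirely in the delimiter; a standalone sketch (no Django required) shows that csv.reader yields identical rows either way:

```python
import csv
import io

# The same reader handles CSV and TSV; only the delimiter differs.
csv_rows = list(csv.reader(io.StringIO("02111,BOSTON,MA,MASSACHUSETTS\n"), delimiter=","))
tsv_rows = list(csv.reader(io.StringIO("02111\tBOSTON\tMA\tMASSACHUSETTS\n"), delimiter="\t"))
```

Both readers produce the same row of four fields, so the rest of the import loop needs no changes.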
On Wantbox you can find someone to remove ice dams, remove a fallen tree limb or help you install new hardwood floors.

12 Responses to "How To Import a CSV (or TSV) file into a Django Model"

>>> import sys, os
>>> sys.path.append(your_djangoproject_home)
>>> os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
This code violates the DRY principle; a better practice is to use something like this:
from django.core.management import setup_environ
import settings
setup_environ(settings)

Hi, I wrote a third party app, django-csv-importer, which can do the job for you.

Hello, nice script. How would you do it if I wanted to upload the file on the website and have it do the db store on the backend, without using ftp to upload the file and launching a terminal script to run load_data.py? thanks

Nice script. Thanks

If you are getting this error:
_csv.Error: line contains NULL byte
Replace:
import csv
dataReader = csv.reader(open(csv_filepathname), delimiter=',', quotechar='"')
With:
import csv, codecs
dataReader = csv.reader(codecs.open(csv_filepathname, 'rU', 'utf-16'))

Hi, Up the subject ^^ To make it easier, use django-extensions with the runscript command.

Awesome. Works like a charm. Thank you!

Thanks a lot for this script, saved me a lot of time. For anyone getting an error like:
django AppRegistryNotReady("Apps aren't loaded yet.")
when you are using Django 1.7, try adding:
import django
django.setup()
Details at:

Thanks. The script was very helpful

nice it works for me thank you 🙂

Very helpful. Thank you so much.

Thanks! I was stuck with debugging an error and your post really helped!

See the following on how:

on November 18th, 2018 at 4:15 am # […] after I get the csv importing to work but the header itself was creating the problem (thanks to this post which (indirectly) helped a lot). The values in the header did not match the schema […]
http://mitchfournier.com/2011/10/11/how-to-import-a-csv-or-tsv-file-into-a-django-model/
CC-MAIN-2020-05
refinedweb
564
52.76
JavaScript class browser: once again with jQuery

I've already posted twice about that little class browser application. The first iteration was mostly declarative and can be found here: The second one was entirely imperative and can be found here: This new version builds on top of the code for the imperative version and adds the jQuery dependency in an attempt to make the code leaner and simpler. I invite you to refer to the imperative code (included in the archive for this post) and compare it with the jQuery version, which shows a couple of ways the Microsoft Ajax Library lights up when jQuery is present.

The first thing I want to do here is convert the plain function I was using to build the browser's namespace and class tree into a jQuery plug-in:

$.fn.classBrowserTreeView = function (options) {
    var opts = $.extend({},
        $.fn.classBrowserTreeView.defaults, options);
    return this;
};

That plug-in will have two options: the data to render (which will default to the root namespaces in the Microsoft Ajax Library), and the node template selector (which will default to "#nodeTemplate"):

$.fn.classBrowserTreeView.defaults = {
    data: Type.getRootNamespaces(),
    nodeTemplate: "#nodeTemplate"
};

For the moment, as you can see, this plug-in does nothing. We want it to create a DataView control on each of the elements of the current wrapped set. We will do this by calling into the dataView plug-in. You may be wondering where this plug-in might come from. Well, that's the first kind of lighting up that the Microsoft Ajax Library's script loader (start.js) will do in the presence of jQuery: every control and behavior will get surfaced as a jQuery plug-in, and components will get added as methods on the jQuery object.
This is similar to what I had shown a while ago in this post, only much easier. For example, we can write this in our own plug-in to create DataView components over the current jQuery wrapped set:

return this.dataView({
    data: opts.data,
    itemTemplate: opts.nodeTemplate
});

Now we can wire up the itemRendered event of the data view and start enriching the markup that the DataView control rendered for each data item. First, let's get hold of the nodes in the rendered template and wrap them:

var elt = $(args.nodes);

Then, if the current node is representing a namespace, we want to hook up the expansion button's click event so that it toggles visibility of the list of children, and we want to "unhide" the button itself (it has a "hidden" class in the default markup):

if (Type.isNamespace(args.dataItem)) {
    elt.find(".toggleButton").click(function (e) {
        e.preventDefault();
        return toggleVisibility(this);
    }).removeClass("hidden");
}

You can see here that we're taking advantage of chaining. The next thing is to set up the node link itself. We start by inhibiting the link's default action. Then we set the text for the link, and finally we set the command that will bubble up to the DataView when the link gets clicked:

elt.find(".treeNode").click(function (e) {
    e.preventDefault();
    return false;
})
.text(getSimpleName(args.dataItem.getName()))
.setCommand("select");

Here, I'm using a small plug-in to set the command:

$.fn.setCommand = function (options) {
    var opts = $.extend({}, $.fn.setCommand.defaults, options);
    return $(this).each(function () {
        $.setCommand(this, opts.commandName,
            opts.commandArgument, opts.commandTarget);
    });
}

$.fn.setCommand.defaults = {
    commandName: "select",
    commandArgument: null,
    commandTarget: null
};

I'm using $.setCommand here, which does get created by the framework for me, but I still need to create that small plug-in to make it work on a wrapped set instead of a static method off jQuery.
I've sent feedback to the team that setCommand and bind should get created as plug-ins by the framework, and hopefully it will happen in a future version. The last thing we need to do here is to recursively create the child branches of our tree:

elt.find("ul").classBrowserTreeView({
    data: getChildren(args.dataItem)
});

This just finds the child UL element of the current branch and calls our plug-in on the results, with the children namespaces and classes as the data. And this is it for the tree; we can now create it with this simple call:

$("#tree").classBrowserTreeView();

The details view rendering will only differ in minor ways from the code we had in our previous imperative version. The only difference is the use of jQuery to traverse and manipulate the DOM instead of the mix of native DOM APIs and Sys.get that we were using before. For example,

args.get("li").innerHTML = args.dataItem.getName ?
    args.dataItem.getName() : args.dataItem.name;

becomes:

$(args.nodes).filter("li").text(
    args.dataItem.getName ?
        args.dataItem.getName() : args.dataItem.name);

Notice how jQuery's text method makes things a little more secure than the innerHTML we had used before. Updating the details view with the data for the item selected in the tree is done by handling the select command of the tree from the following function:

function onCommand(sender, args) {
    if (args.get_commandName() === "select") {
        var dataItem = sender.findContext(
            args.get_commandSource()).dataItem;
        var isClass = Type.isClass(dataItem) &&
            !Type.isNamespace(dataItem);
        var childData = (isClass ? getMembers : getChildren)(dataItem);
        var detailsChild = Sys.Application.findComponent("detailsChild");
        detailsChild.onItemRendering = isClass ?
            onClassMemberRendering : onNamespaceChildRendering;
        detailsChild.onItemRendered = onDetailsChildRendered;
        detailsChild.set_data(childData);
        $("#detailsTitle").text(dataItem.getName());
        $(".namespace").css("display", isClass ? "none" : "block");
        $(".class").css("display", isClass ? "block" : "none");
        $("#details").css("display", "block");
    }
}

Not much change here from the previous version, again, except for the use of jQuery and some chaining. And that is pretty much it. I've made other changes in the script to make use of the new script loader in the Microsoft Ajax Library, but that will be the subject of a future post. Hopefully, this has shown you how the Microsoft Ajax Library can light up with jQuery. The automatic creation of plug-ins feels very much like native jQuery plug-ins and brings all the power of client templates to jQuery. Once we have bind and setCommand plug-ins as well, the Microsoft Ajax Library may become a very useful tool to jQuery programmers, just as much as jQuery itself is a very useful tool to Microsoft Ajax programmers. The code can be found here:

Update: fixed a problem in Firefox & Chrome.
http://weblogs.asp.net/bleroy/javascript-class-browser-once-again-with-jquery
CC-MAIN-2016-07
refinedweb
1,044
55.84
perltutorial cLive ;-) Well, this confused the hell out of me, so I thought I'd spend some time getting my head around it. <P>Probably best to show by example (apologies to Joseph Hall and [merlyn] for borrowing heavily here from <A HREF="">Effective Perl Programming</A>) <P>Quick summary: <I>'my' creates a new variable, 'local' temporarily amends the value of a variable</I> <P>There is a subtle difference. <P>In the example below, $::a refers to $a in the 'global' namespace. <CODE>
$a = 3.14159;
{
    local $a = 3;
    print "In block, \$a = $a\n";
    print "In block, \$::a = $::a\n";
}
print "Outside block, \$a = $a\n";
print "Outside block, \$::a = $::a\n";

# This outputs
In block, $a = 3
In block, $::a = 3
Outside block, $a = 3.14159
Outside block, $::a = 3.14159
</CODE> <P>ie, 'local' <B>temporarily changes the value of the variable</B>, but <I>only within the scope it exists in</I>. <P>So how does that differ from 'my'? 'my' creates a variable that does not appear in the symbol table, and does not exist outside of the scope that it appears in. So using similar code: <CODE>
$a = 3.14159;
{
    my $a = 3;
    print "In block, \$a = $a\n";
    print "In block, \$::a = $::a\n";
}
print "Outside block, \$a = $a\n";
print "Outside block, \$::a = $::a\n";

# This outputs
In block, $a = 3
In block, $::a = 3.14159
Outside block, $a = 3.14159
Outside block, $::a = 3.14159
</CODE> <P>ie, 'my' has no effect on the global $a, even inside the block. <H4>But in real life, they work virtually the same?</H4> <P>Yes. Sort of. So when should you use them? <UL> <LI>use 'my' when you can (it's faster than local) <LI>use 'local' when: <UL> <LI>you're amending code written in Perl 4, unless you are sure that changing 'local' to 'my' will not cause any lexical problems <LI>you want to amend a special Perl variable, eg $/ when reading in a file.
<B>my $/;</B> throws a compile-time error </UL> </UL> <P>If you use Perl 5 and strict (and I know you do :), you probably haven't noticed any difference between using 'my' and 'local', but will hopefully only use 'local' in the second instance above. <P>EPP also suggests you use 'local' when messing with variables in another module's namespace, but I can't think of a RL situation where that could be justified - why not just scope a local variable? Perhaps someone could enlighten me? <P>But, if you ever end up amending some old Perl 4 code that uses local, you need to be aware of the issues and not just do a <CODE>s/\blocal\b/my/gs</CODE> on the script :) - sometimes people use the 'features' of local in unusual ways... <P>Hope that's cleared a few things up. <P>cLive ;-)
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=94007
CC-MAIN-2014-23
refinedweb
486
77.06
The QTestLib framework, provided by Trolltech, is a tool for unit testing Qt based applications and libraries. QTestLib provides all the functionality commonly found in unit testing frameworks as well as extensions for testing graphical user interfaces.

Table of contents:

QTestLib is designed to ease the writing of unit tests for Qt based applications and libraries:

Note: For higher-level GUI and application testing needs, please see the Qt testing products provided by Trolltech partners.

All public methods are in the QTest namespace. In addition, the QSignalSpy class provides easy introspection for Qt's signals and slots.

To create a test, subclass QObject and add one or more private slots to it. Each private slot is a testfunction in your test. QTest::qExec() can be used to execute all testfunctions in the test object. In addition, there are four private slots that are not treated as testfunctions. They will be executed by the testing framework and can be used to initialize and clean up either the entire test or the current test function. If initTestCase() fails, no testfunction will be executed. If init() fails, the following testfunction will not be executed, and the test will proceed to the next testfunction.

Example:

class MyFirstTest: public QObject
{
    Q_OBJECT
private slots:
    void initTestCase()
    { qDebug("called before everything else"); }
    void myFirstTest()
    { QVERIFY(1 == 1); }
    void mySecondTest()
    { QVERIFY(1 != 2); }
    void cleanupTestCase()
    { qDebug("called after myFirstTest and mySecondTest"); }
};

For more examples, refer to the QTestLib Tutorial.

If you are using qmake as your build tool, just add the following to your project file:

CONFIG += qtestlib

If you are using other build tools, make sure that you add the location of the QTestLib header files to your include path (usually include/QtTest under your Qt installation directory). If you are using a release build of Qt, link your test to the QtTest library. For debug builds, use QtTest_debug.
See Writing a Unit Test for a step by step explanation. The syntax to execute an autotest takes the following simple form: testname [options] [testfunctions[:testdata]]... Substitute testname with the name of your executable. testfunctions can contain names of testfunctions to be executed. If no testfunctions are passed, all tests are run. If the name of an entry in the test function's test data is appended to the test function's name, the test function will be run only with that testdata. For example: /myTestDirectory$ testQString toUpper Runs the test function called toUpper with all available test data. /myTestDirectory$ testQString toUpper toInt:zero Runs the toUpper test function with all available test data, and the toInt test function with the testdata called zero (if the specified test data doesn't exist, the associated test will fail). /myTestDirectory$ testMyWidget -vs -eventdelay 500 Runs the testMyWidget function test, outputs every signal emission and waits 500 milliseconds after each simulated mouse/keyboard event. The following command line arguments are understood:
http://doc.qt.nokia.com/4.1/qtestlib-manual.html
crawl-003
refinedweb
478
54.02
Subject: Re: [boost] [Boost-users] [Review] Lockfree review starts today, July 18th
From: Klaim - Joël Lamotte (mjklaim_at_[hidden])
Date: 2011-07-20 14:04:29

On Wed, Jul 20, 2011 at 18:51, Tim Blechmann <tim_at_[hidden]> wrote:
> >> If Boost.Lockfree will be accepted, it won't be merged into trunk before
> >> Boost.Atomic is accepted.
> >
> > That seems unnecessary. Can't Boost.Lockfree simply include (a version
> > of) Boost.Atomic as an implementation detail for now?
>
> this would somehow mean to fork boost.atomic, move everything to a `namespace
> detail'. might introduce some maintenance overhead to keep it in sync with
> the original library.

Excuse me if it's common knowledge around here, but I've seen several libraries relying on boost.atomic that have been reviewed, so may I ask: why hasn't Boost.Atomic been reviewed yet? It looks like it's already finished and reliable...

Joël Lamotte

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2011/07/184044.php
CC-MAIN-2021-39
refinedweb
174
62.85
Comparing Strings in Java – Complete Tutorial

August 31, 2014 | Strings | 2 Comments |

I have already discussed how to initialise Strings and the different ways of creating a String in Java. Let's discuss the different ways of comparing Strings and their parts in Java. The main methods used for String comparison are:

boolean endsWith(String suffix) – This method checks whether a string ends with the specified sequence of characters or not.

boolean startsWith(String prefix) – This method checks whether a string starts with the specified sequence of characters or not.

boolean startsWith(String prefix, int offset) – It checks the String beginning at the offset and sees whether the String starts with the string in the argument.

int compareTo(String anotherString) – This method compares the two Strings and returns an integer based on whether the String is greater (> 0), equal (= 0) or smaller (< 0) than the argument; the value will be equal to the difference between the character values of the first non-matching character (or, if one string is a prefix of the other, the difference of their lengths).

int compareToIgnoreCase(String anotherString) – This method is the same as the above one except that it ignores differences in the case of the strings being compared.

boolean equals(Object anObject) – This method returns true if the String is exactly equal to the parameter passed, otherwise it returns false.

boolean equalsIgnoreCase(String anotherString) – This is the same as the above method but it ignores differences in case.

boolean matches(String regex) – This matches the String against the given regex pattern and returns true if the String follows the pattern, otherwise false.

All these methods are fairly self-explanatory, but let's see a common example which covers all of them.
package codingeekStringTutorials;

public class CompareStringsExample {

    public static void main(String[] args) {
        String name = "CODINGEEK- A PROGRAMMING PORTAL";
        String site = "codingeek- a programming portal";
        String prefix = "CODINGEEK";
        String suffix = "PORTAL";

        System.out.println("Do name starts with defined prefix - " + name.startsWith(prefix));
        System.out.println("Do name ends with defined suffix - " + name.endsWith(suffix));
        // 25 is the index of P of PORTAL
        System.out.println("Do name STARTS with defined SUFFIX at the end - " + name.startsWith(suffix, 25));
        System.out.println("Is name and site are equal - " + name.equals(site));
        System.out.println("Is name and site are equal(Ignoring Case) - " + name.equalsIgnoreCase(site));
        System.out.println("Is name and site are comparable - " + name.compareTo(site));
        System.out.println("Is name and site are comparable(Ignoring Case) - " + name.compareToIgnoreCase(site));

        String email = "[email protected]";
        String regex = "^[_A-Za-z0-9-\\+]+(\\.[_A-Za-z0-9-]+)*@[A-Za-z0-9-]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$";
        System.out.println("does this email matches the Regex - " + email.matches(regex));
    }
}

Output:-

Do name starts with defined prefix - true
Do name ends with defined suffix - true
Do name STARTS with defined SUFFIX at the end - true
Is name and site are equal - false
Is name and site are equal(Ignoring Case) - true
Is name and site are comparable - -32
Is name and site are comparable(Ignoring Case) - 0
does this email matches the Regex - true

If you have any doubt or need any other help, just comment below and I will try to solve it asap.
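To make the compareTo() return values described above concrete, here is a small standalone check (my own example, not from the original tutorial):

```java
public class CompareToDemo {
    public static void main(String[] args) {
        // 'a' (97) vs 'b' (98): the difference of the first
        // non-matching characters is returned
        System.out.println("apple".compareTo("bpple")); // -1
        // identical strings compare as 0
        System.out.println("same".compareTo("same"));   // 0
        // prefix case: the difference of the lengths is returned
        System.out.println("abcd".compareTo("ab"));     // 2
    }
}
```

That is also why "CODINGEEK..." compared to "codingeek..." gives -32 above: 'C' (67) minus 'c' (99).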
https://www.codingeek.com/java/strings/comparing-strings-in-java-complete-tutorial/
CC-MAIN-2018-26
refinedweb
542
53.21
I need to let the user input a line of words and make all the 4 letter words in the sentence "love"...

For example: input "I hate programming", output "I love programming".

Does anybody know if there is any library, or string member functions, that would help doing this task? Below is what I have so far; I have a function to return the index of the space... I don't know what else I can do.

# include <iostream>
# include <string>
using namespace std;

int indexCounter(string sentence, int startingIndex);

int main()
{
    string line;
    int startIndex=0;
    int endIndex=0;
    int size= line.size();
    cout<<"Please enter a sentence."<<endl;
    getline(cin,line);

    return 0;
}

int indexCounter(string sentence, int startIndex, int& endIndex)
{
    for(int i=startingIndex; i<size; i++)
    {
        if(sentence[i]==' ')
        {
            return i; //white space index
        }
        endIndex=i;
    }
}
https://www.daniweb.com/programming/software-development/threads/109017/help-on-string
CC-MAIN-2018-17
refinedweb
138
70.73
#include<math.h>
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
#include<iostream.h>

// these constants save calculation time
const unsigned long power10[10] = {1,10,100,1000,10000,100000,1000000,
    10000000,100000000,1000000000};

// define a Boolean type
typedef enum BOOL {TRUE, FALSE};

typedef struct _NETWORK
{
    unsigned char nodes;          // the 2 connected nodes
    unsigned long arcFlow;
    unsigned long arcFloworiginal;
} NETWORK;

// function prototypes
double permutation(int,int);
double numPossiblePaths(char);
unsigned long padPath(unsigned long,char);
unsigned long convertBase(unsigned long, unsigned char);

/*
 * This function returns the number of possible paths using
 * permutation, not including restrictions posed on arcs and nodes
 *----------------------------------------------------------*/
double numPossiblePaths(char nodes)
{
    double sum=0;
    char i;
    for (i=2; i<=(nodes-2); i++)
        sum += permutation(nodes-2,i);
    // nP0 and nP1 are always 1 and n, so save time by doing this
    return (sum+nodes-1); // or 1+(nodes-2), whatever
}

/*
 * This function returns nPk (permutation)
 *----------------------------------------------------------*/
double permutation(int n, int k)
{
    double sum=0;
    int i;
    if (k>n || n<2)
        return -1; // return with error if k>n
    else
    {
        // do n!
        for (i=n; i>1; i--)
            sum += log(i);
        // do (n-k)!
        for (i=(n-k); i>1; i--)
            sum -= log(i);
    }
    // return nPk
    return exp(sum);
}

/*
 * This function pads a path with the start and end nodes
 *----------------------------------------------------------*/
unsigned long padPath(unsigned long num, char nodes)
{
    unsigned char ctr=10;
    while ((num / power10[--ctr]) == 0);
    num = num + nodes * power10[ctr+1];
    num = (num*10)+1;
    return num;
}

/*
 * This function takes in a decimal number and a base to convert it to
 *----------------------------------------------------------*/
unsigned long convertBase(unsigned long num, unsigned char numBase)
{
    int ctr=0;
    unsigned long convertedNum=0;
    while (num!=0) // don't bother converting when quotient is 0
    {
        convertedNum = convertedNum+((num % numBase) * power10[ctr]);
        num /= numBase; // go down 1 digit in old base
        ctr++;          // go up 1 digit in new base
    }
    return convertedNum;
}

/*==========================================================*/
int main(void)
{
    char nodes=6;                 // number of nodes in network
    unsigned long countTo=0,      // number to count in decimal
        ctr,ctr2,ctr3,n,n2,n3,    // general counters and holders
        path,                     // a path
        networkFlow=0,            // initially network flow is 0
        smallestArcCapacity;      // Duh.
    NETWORK network[34] = {1};    // holds network information
    // this value changes along the process
    BOOL nodePath,validPath=TRUE; // of path selection, and determines whether
                                  // a path makes it into the path array.
    network[0].nodes = 21;  network[0].arcFlow = 2;
    network[1].nodes = 31;  network[1].arcFlow = 6;
    network[2].nodes = 41;  network[2].arcFlow = 3;
    network[3].nodes = 12;  network[3].arcFlow = 0;
    network[4].nodes = 42;  network[4].arcFlow = 1;
    network[5].nodes = 52;  network[5].arcFlow = 4;
    network[6].nodes = 13;  network[6].arcFlow = 0;
    network[7].nodes = 43;  network[7].arcFlow = 3;
    network[8].nodes = 63;  network[8].arcFlow = 2;
    network[9].nodes = 14;  network[9].arcFlow = 0;
    network[10].nodes = 24; network[10].arcFlow = 1;
    network[11].nodes = 34; network[11].arcFlow = 3;
    network[12].nodes = 54; network[12].arcFlow = 1;
    network[13].nodes = 64; network[13].arcFlow = 3;
    network[14].nodes = 25; network[14].arcFlow = 4;
    network[15].nodes = 45; network[15].arcFlow = 1;
    network[16].nodes = 65; network[16].arcFlow = 6;
    network[17].nodes = 36; network[17].arcFlow = 0;
    network[18].nodes = 46; network[18].arcFlow = 0;
    network[19].nodes = 56; network[19].arcFlow = 0;

    // get the number to count to
    countTo = (unsigned long) pow(nodes,nodes-2);

    // start the path building
    for (ctr=1; ctr<countTo; ctr++)
    {
        validPath = TRUE; // path is valid so far
        // convert number to path by converting to base NODES number
        n = path = convertBase(ctr, nodes); // n is temp var
        do
        {
            n3 = n % 10;  // number that is checked for doubles
            n2 = path;    // whole path is checked each time
            ctr2 = 0;     // used to count doubles, none yet
            do
            {
                if ((n2%10)==n3) // if digit = number we're looking for,
                    ctr2++;      // found double, so increment
                n2 /= 10;        // go down 1 digit
            } while (n2!=0);
            // check if there were doubles and if number is less than 2
            if (ctr2>1 || (n%10)<=1)
                validPath = FALSE;
            n /= 10; // go down 1 digit
        } while (n!=0 && validPath!=FALSE); // stop converting when quotient is 0
        // add the start and end nodes to path. e.g.
        // if path is 423 and there are 6 nodes, the path becomes 64231
        path = padPath(path,nodes);

        // check if path is proper by checking the restrictions given
        // by the user
        if (validPath==TRUE)
        {
            ctr2 = 0;
            smallestArcCapacity = 255;
            nodePath = TRUE; // variable used to check path at each node
            while ((n=(path % power10[ctr2+2] / power10[ctr2]))>9 &&
                   (nodePath!=FALSE))
            {
                nodePath = FALSE;
                // check each path with the information given by user
                for (ctr3=0; ctr3<20; ctr3++)
                    // check if nodes correct and arc flow capacity>0
                    if ((n == network[ctr3].nodes) &&
                        (network[ctr3].arcFlow != 0))
                    {
                        nodePath = TRUE; // in the network array, path is bad
                        if (network[ctr3].arcFlow < smallestArcCapacity)
                            smallestArcCapacity = network[ctr3].arcFlow;
                    }
                ctr2++; // as soon as path is known to be bad, loop quits
            }
            validPath = nodePath; // result of path, bad or good
        }

        // if path is valid, decrease all flows on path by smallestArcCapacity
        // and increase in opposite direction
        if (validPath==TRUE)
        {
            ctr2 = 0;
            while ((n=(path % power10[ctr2+2]) / power10[ctr2])>9)
            {
                ctr3 = 0;
                while (n != network[ctr3++].nodes);
                network[ctr3-1].arcFlow -= smallestArcCapacity;
                ctr3 = 0;
                while ((((n%10)*10)+(n/10)) != network[ctr3++].nodes);
                network[ctr3-1].arcFlow += smallestArcCapacity;
                ctr2++; // as soon as path is known to be bad, loop quits
            }
            networkFlow += smallestArcCapacity; // Duh.
        }

        if (validPath==TRUE)
            cout << path << " " << smallestArcCapacity << endl;
    }
    cout << networkFlow;
    return 0;
}

Graph G = new Graph();
Node Start = G.AddNode(0,0,0);
G.AddNode(5,0,0);
G.AddNode(5,5,0);
Node Ziel = G.AddNode(5,5,5);
G.AddArc(new Node(0,0,0),new Node(5,0,0),1);
G.AddArc(new Node(5,0,0),new Node(5,5,0),1);
. . .
G.AddArc(new Node(5,5,0),new Node(5,5,5),1);
G.AddArc(new Node(0,0,0),new Node(5,5,0),1);
http://www.codeproject.com/Articles/4391/C-A-Star-is-born?msg=2383444
CC-MAIN-2014-35
refinedweb
1,049
59.19
Feature #16663

Add block or filtered forms of Kernel#caller to allow early bail-out

Description

There are many libraries that use caller or caller_locations to gather stack information for logging or instrumentation. These methods generate an array of informational stack frames based on the current call stack. Both methods accept parameters for level (skip some number of Ruby frames) and length (only return this many frames). However many use cases are unable to provide one or both of these. Instrumentation uses, for example, may need to skip an unknown number of frames at the top of the trace, such as to dig out of rspec plumbing or active_record internals and report the first line of user code. In such cases, the typical pattern is to simply request all frames and then filter out the one that is desired. This leads to a great deal of wasted work gathering those frames and constructing objects to carry them to the user. On optimizing runtimes like JRuby and TruffleRuby, it can have a tremendous impact on performance, since each frame has a much higher cost than on CRuby.

I propose that we need a new form of caller that takes a block for processing each element.

def find_matching_frame(regex)
  caller do |frame|
    return frame if frame.file =~ regex
  end
end

An alternative API would be to allow passing a query object as a keyword argument, avoiding the block dispatch by performing the match internally:

def find_matching_frame(regex)
  caller(file: regex)
end

This API would provide a middle ground between explicitly specifying a maximum number of stack frames and asking for all frames. Most common, hot-path uses of caller could be replaced by these forms, reducing overhead on all Ruby implementations and drastically reducing it where stack traces are expensive.
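For contrast, the gather-everything-then-filter pattern the proposal wants to avoid looks like this with today's API (the regex and method names are illustrative, not from the ticket):

```ruby
# Current idiom: materialize every frame, then search.
# All Location objects are built even though only one is wanted.
def first_matching_frame(regex)
  caller_locations.find { |loc| loc.path =~ regex }
end

def library_entry_point
  first_matching_frame(/./) # illustrative regex: match any path
end

frame = library_entry_point
puts "#{frame.path}:#{frame.lineno}" if frame
```

caller_locations returns Thread::Backtrace::Location objects, so every frame object here is allocated up front; the proposed block/keyword forms would let the search stop at the first hit.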
https://bugs.ruby-lang.org/issues/16663
CC-MAIN-2020-24
refinedweb
307
56.49
23 October 2012 03:35 [Source: ICIS news] SINGAPORE (ICIS)-- The official said, "We will reduce the cracker operating rate in November and December to below 90% because PP demand is very low as supply is long. "So maybe ethylene supply will decline because we had plans to export ethylene in the fourth quarter." The cracker is currently operating at 90% capacity, the official added. "We have no spot ethylene cargoes for November because of the low price of ethylene and propylene," he said, adding that the co-product butadiene prices are also low and cracker operators' cost of production is high. The PP
http://www.icis.com/Articles/2012/10/23/9606219/japans-showa-denko-to-cut-oita-cracker-ops-below-90-in-nov-dec.html
CC-MAIN-2013-48
refinedweb
106
50.77
#include <ResilientConnection.h>

ResilientConnection represents a Qpid connection that is resilient.

Upon creation, ResilientConnection attempts to establish a connection to the messaging broker. If it fails, it will continue to retry at an interval that increases over time (to a maximum interval). If an established connection is dropped, a reconnect will be attempted.

Create a new resilient connection.
Bind a queue to an exchange.
Create a new AMQP session.
Declare an exclusive, auto-delete queue for a session.
Delete a queue.
Destroy a created session.
Get the next event (if present) from the connection.
Get the connected status of the resilient connection.
Send a byte into the notify file descriptor. This can be used to wake up the event processing portion of the engine from either the wrapped implementation or the engine itself.
Discard the event on the front of the queue. This should be invoked after processing the event from getEvent.
Send a message into the AMQP broker via a session.
Establish a file descriptor for event notification.
Remove a binding.
http://qpid.apache.org/releases/qpid-0.24/qmf/cpp/api/classqmf_1_1engine_1_1ResilientConnection.html
CC-MAIN-2015-35
refinedweb
172
62.44
This command creates a context that can be used for associating lights to shading groups. You can put the context into shading-centric mode by using the -shadingCentric flag and specifying true. This means that the shading group is selected first then lights associated with the shading group are highlighted. Subsequent selections result in assignments. Specifying -shadingCentric false means that the light is to be selected first. The shading groups associated with the light will then be selected and subsequent selections will result in assignments being made. Derived from mel command maya.cmds.shadingLightRelCtx Example: import pymel.core as pm pm.shadingLightRelCtx() # Result: u'shadingLightRelCtx1' #
http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/functions/pymel.core.context/pymel.core.context.shadingLightRelCtx.html#pymel.core.context.shadingLightRelCtx
crawl-003
refinedweb
105
52.97
Simple iPhone Calculator App using hooks in React Native

Nabendu

Continuing with React Native, the next app I am making is a simple iPhone calculator. This post is inspired by this video by Carl Spencer. Let's head over to a terminal and type expo init CalculatorReactNative. Press enter to choose blank in the selection. In the next screen, you can give the same name as the project; in my case it is CalculatorReactNative. Then change into the directory and do an npm start. The project will be started by Expo in a web browser. Next, open the project in VSCode and create a new folder App and a file index.js inside it. Copy all content from App.js to index.js and delete App.js. Next, open the App on a physical device. I had mentioned the steps in my blog for the Restaurant Search app. You can also open the app on a laptop by configuring the Android emulator. I have mentioned the steps in my previous blog, Setup Android Emulator on Mac. Let's put some basic code to display a button and change the default styling. The StatusBar and SafeAreaView are to avoid the notch on the iPhone X and other Android models with a notch for the camera. It will show as below on the physical Android device. Next, create a folder components inside App and two files, Row.js and Button.js, inside it. In Row.js put the below, which will show any children passed to it. Next, let's add some code in Button.js.
If it is equal to double, we add another style called buttonDouble to the button. So, now move to index.js and add that prop to the Button 0 as below. It will now show our Button 0 taking double space.

Next, we will update the colors of some of the buttons as in the iPhone calculator. First let's add a new prop in index.js for the buttons whose color will be updated.

Next, we will update it in our Button.js. First let's add the code for the theme. We are also going to update the color of our secondary buttons, so we are creating textStyles. Now, let's add styles for the themes and text. Now, our App styling is complete and it looks like the iPhone calculator.

Now, we will write the logic for the calculator. We are using React hooks for state management. For details on hooks go through my earlier post on hooks here.

First let's call a function handleTap() for all buttons. It passes different parameters depending on the button.

Next, we will use the useState hook. We will declare a variable currVal and update it with setCurrVal. So, whenever the type is number we update currVal with whatever the user inputs.

Next, we will add logic for operator, clear, posneg, and percentage. We have declared two additional state variables, operator and prevVal.

Next, we will add the logic for equal. It will do the calculation depending on the operator. This completes our App. So, go ahead and play with it. You can find the code for the same in the link here.
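The core of the calculator logic described above (a state holding currVal, operator, and prevVal, updated on each tap) can be sketched in plain JavaScript, independent of React Native. The calculate() helper below is hypothetical; the post keeps this logic inside the component with useState, but the branches for number, operator, clear, posneg, percentage, and equal follow the same idea:

```javascript
// Plain-JS sketch of the calculator state logic. The state shape
// (currVal, operator, prevVal) follows the post; calculate() itself
// is an illustrative helper, not the post's exact code.
function calculate(state, type, value) {
  const { currVal, operator, prevVal } = state;
  switch (type) {
    case "number":
      // Append digits; replace a leading "0".
      return { ...state, currVal: currVal === "0" ? String(value) : currVal + value };
    case "operator":
      // Stash the current value and remember the operator.
      return { currVal: "0", operator: value, prevVal: currVal };
    case "clear":
      return { currVal: "0", operator: null, prevVal: null };
    case "posneg":
      return { ...state, currVal: String(parseFloat(currVal) * -1) };
    case "percentage":
      return { ...state, currVal: String(parseFloat(currVal) * 0.01) };
    case "equal": {
      const a = parseFloat(prevVal);
      const b = parseFloat(currVal);
      let result;
      if (operator === "+") result = a + b;
      else if (operator === "-") result = a - b;
      else if (operator === "*") result = a * b;
      else if (operator === "/") result = a / b;
      else return state;
      return { currVal: String(result), operator: null, prevVal: null };
    }
    default:
      return state;
  }
}

// Usage: tap 1, 2, +, 3, = (i.e. 12 + 3)
let s = { currVal: "0", operator: null, prevVal: null };
s = calculate(s, "number", 1);
s = calculate(s, "number", 2);
s = calculate(s, "operator", "+");
s = calculate(s, "number", 3);
s = calculate(s, "equal");
console.log(s.currVal); // "15"
```

Keeping the logic as a pure function like this also makes it easy to unit-test outside the UI.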
https://dev.to/nabendu82/simple-iphone-calculator-app-using-hooks-in-react-native-1lgk
Quaternions and Key Bindings: Simple 3D Visualization in Matplotlib

Matplotlib is a powerful framework, but its 3D capabilities still have a lot of room to grow: its rendering can lead to very unnatural-looking results. I decided to see if I could create a simple script that addresses this. Though it would be possible to use the built-in mplot3d architecture to take care of rotating and projecting the points, I decided to do it from scratch for the sake of my own education. We'll step through it below: by the end of this post we will have created a 3D viewer in matplotlib which I think is quite nice. The first step to creating an interactive 3D object is to decide on a representation for the orientation of the object. mplot3d uses azimuthal and elevation angles: these are the familiar $\theta$ and $\phi$ of spherical coordinate systems. While this is a common system, it has the disadvantage that things get a little funky near the poles. Another common framework for mathematical representation of solid-body rotations is a rotation matrix. This is nice because multiple rotations can be composed quickly, but extracting axis and angle information can require matrix decompositions that are relatively expensive. Finally, perhaps the best option is the use of quaternions, four-dimensional generalizations of complex numbers which can be used to compactly represent 3D rotations. A friend who works in the video game industry (in particular the Halo series) told me that quaternions are what the industry generally prefers in many situations: they are compact, fast to compose, and very powerful. On top of that, they're generally cool enough that I wanted an excuse to play around with them a bit. There have been a few attempts to include quaternions natively in the scipy universe, but none have stuck. It's a shame, because it would be a very useful feature. In particular, I'd love to see something happen with the quaternion dtype, discussed here on the numpy-dev email list.
It looks promising, but has not yet been included in numpy. With the lack of available implementations, I decided to write a basic quaternion class which implements the functionality I need: multiplication of quaternions, and transformation to and from axis-angle representation:

import numpy as np

class Quaternion:
    """Quaternions for 3D rotations"""
    def __init__(self, x):
        self.x = np.asarray(x, dtype=float)

    @classmethod
    def from_v_theta(cls, v, theta):
        """
        Construct quaternion from unit vector v and rotation angle theta
        """
        theta = np.asarray(theta)
        v = np.asarray(v)

        s = np.sin(0.5 * theta)
        c = np.cos(0.5 * theta)
        vnrm = np.sqrt(np.sum(v * v))

        q = np.concatenate([[c], s * v / vnrm])
        return cls(q)

    def __repr__(self):
        return "Quaternion:\n" + self.x.__repr__()

    def __mul__(self, other):
        # multiplication of two quaternions.
        prod = self.x[:, None] * other.x
        return self.__class__([(prod[0, 0] - prod[1, 1]
                                - prod[2, 2] - prod[3, 3]),
                               (prod[0, 1] + prod[1, 0]
                                + prod[2, 3] - prod[3, 2]),
                               (prod[0, 2] - prod[1, 3]
                                + prod[2, 0] + prod[3, 1]),
                               (prod[0, 3] + prod[1, 2]
                                - prod[2, 1] + prod[3, 0])])

    def as_v_theta(self):
        """Return the v, theta equivalent of the (normalized) quaternion"""
        # compute theta
        norm = np.sqrt((self.x ** 2).sum(0))
        theta = 2 * np.arccos(self.x[0] / norm)

        # compute the unit vector
        v = np.array(self.x[1:], order='F', copy=True)
        v /= np.sqrt(np.sum(v ** 2, 0))

        return v, theta

    def as_rotation_matrix(self):
        """Return the rotation matrix of the (normalized) quaternion"""
        v, theta = self.as_v_theta()
        c = np.cos(theta)
        s = np.sin(theta)

        return np.array([[v[0] * v[0] * (1. - c) + c,
                          v[0] * v[1] * (1. - c) - v[2] * s,
                          v[0] * v[2] * (1. - c) + v[1] * s],
                         [v[1] * v[0] * (1. - c) + v[2] * s,
                          v[1] * v[1] * (1. - c) + c,
                          v[1] * v[2] * (1. - c) - v[0] * s],
                         [v[2] * v[0] * (1. - c) - v[1] * s,
                          v[2] * v[1] * (1. - c) + v[0] * s,
                          v[2] * v[2] * (1. - c) + c]])

The mathematics of quaternions (and the use of unit quaternions to represent rotations) is fascinating in itself, but the details beyond this simple implementation are too much for this short post. I'd suggest this paper for a more complete introduction.

Let's use these quaternions to draw a cube in matplotlib. A cube is made of six faces, each rotated from the other in multiples of ninety degrees. With this in mind, we'll define a fiducial face, and six rotators which will put the face in place. Once we have these, we can concatenate a viewing angle to all six, project the results, and display them as polygons on an axes. An important piece here is to get the zorder attribute correct, so that faces in the back do not cover faces in the front. We'll use the z-coordinate from the projection to do this correctly. For later use, we'll do this all within a class derived from plt.Axes, and make it so that the set of polygons can be updated if needed:

# don't use %pylab inline, because we want to interact with plots below.
%pylab

Welcome to pylab, a matplotlib-based Python environment [backend: TkAgg]. For more information, type 'help(pylab)'.
class CubeAxes(plt.Axes):
    """An Axes for displaying a 3D cube"""
    # fiducial face is perpendicular to z at z=+1
    one_face = np.array([[1, 1, 1], [1, -1, 1], [-1, -1, 1], [-1, 1, 1], [1, 1, 1]])

    # construct six rotators for the face
    x, y, z = np.eye(3)
    rots = [Quaternion.from_v_theta(x, theta) for theta in (np.pi / 2, -np.pi / 2)]
    rots += [Quaternion.from_v_theta(y, theta) for theta in (np.pi / 2, -np.pi / 2)]
    rots += [Quaternion.from_v_theta(y, theta) for theta in (np.pi, 0)]

    # colors of the faces
    colors = ['blue', 'green', 'white', 'yellow', 'orange', 'red']

    def __init__(self, fig, rect=[0, 0, 1, 1], *args, **kwargs):
        # We want to set a few of the arguments
        kwargs.update(dict(xlim=(-2.5, 2.5), ylim=(-2.5, 2.5), frameon=False,
                           xticks=[], yticks=[], aspect='equal'))
        super(CubeAxes, self).__init__(fig, rect, *args, **kwargs)
        self.xaxis.set_major_formatter(plt.NullFormatter())
        self.yaxis.set_major_formatter(plt.NullFormatter())

        # define the current rotation
        self.current_rot = Quaternion.from_v_theta((1, 1, 0), np.pi / 6)

    def draw_cube(self):
        """draw a cube rotated by theta around the given vector"""
        # rotate the six faces
        Rs = [(self.current_rot * rot).as_rotation_matrix() for rot in self.rots]
        faces = [np.dot(self.one_face, R.T) for R in Rs]

        # project the faces: we'll use the z coordinate
        # for the z-order
        faces_proj = [face[:, :2] for face in faces]
        zorder = [face[:4, 2].sum() for face in faces]

        # create the polygons if needed.
        # if they're already drawn, then update them
        if not hasattr(self, '_polys'):
            self._polys = [plt.Polygon(faces_proj[i], fc=self.colors[i],
                                       alpha=0.9, zorder=zorder[i])
                           for i in range(6)]
            for i in range(6):
                self.add_patch(self._polys[i])
        else:
            for i in range(6):
                self._polys[i].set_xy(faces_proj[i])
                self._polys[i].set_zorder(zorder[i])

        self.figure.canvas.draw()

With this in place, we can draw the cube as follows:

fig = plt.figure(figsize=(4, 4))
ax = CubeAxes(fig)
fig.add_axes(ax)
ax.draw_cube()
display(fig)

Now comes the fun part.
We can add events and callbacks to the axes which will allow us to manipulate the projection by clicking and dragging on the figure. We'll add the callbacks in the construction of the class, using the mpl_connect method of the canvas. There are several types of callback events that can be hooked: they can be found by typing plt.connect? and seeing the doc-string:

plt.connect?

The available options are:

'button_press_event'
'button_release_event'
'draw_event'
'key_press_event'
'key_release_event'
'motion_notify_event'
'pick_event'
'resize_event'
'scroll_event'
'figure_enter_event'
'figure_leave_event'
'axes_enter_event'
'axes_leave_event'
'close_event'

Each of these can be connected to a function that performs some operation when those events happen. Below, we'll use the button events, the motion event, and the key press event to create an interactive 3D plot. To be as concise as possible, we'll derive from the class we created above:

class CubeAxesInteractive(CubeAxes):
    """An Axes for displaying an Interactive 3D cube"""
    def __init__(self, *args, **kwargs):
        super(CubeAxesInteractive, self).__init__(*args, **kwargs)

        # define axes for Up/Down motion and Left/Right motion
        self._v_LR = (0, 1, 0)
        self._v_UD = (-1, 0, 0)
        self._active = False
        self._xy = None

        # connect some GUI events
        self.figure.canvas.mpl_connect('button_press_event', self._mouse_press)
        self.figure.canvas.mpl_connect('button_release_event', self._mouse_release)
        self.figure.canvas.mpl_connect('motion_notify_event', self._mouse_motion)
        self.figure.canvas.mpl_connect('key_press_event', self._key_press)
        self.figure.canvas.mpl_connect('key_release_event', self._key_release)

        self.figure.text(0.05, 0.05, ("Click and Drag to Move\n"
                                      "Hold shift key to adjust rotation"))

    #----------------------------------------------------------
    # when the shift button is down, change the rotation axis
    # of left-right movement
    def _key_press(self, event):
        """Handler for key press events"""
        if event.key == 'shift':
            self._v_LR = (0, 0, -1)

    def _key_release(self, event):
        """Handler for key release event"""
        if event.key == 'shift':
            self._v_LR = (0, 1, 0)

    #----------------------------------------------------------
    # while the mouse button is pressed, set state to "active"
    # so that motion event will rotate the plot
    def _mouse_press(self, event):
        """Handler for mouse button press"""
        if event.button == 1:
            self._active = True
            self._xy = (event.x, event.y)

    def _mouse_release(self, event):
        """Handler for mouse button release"""
        if event.button == 1:
            self._active = False
            self._xy = None

    def _mouse_motion(self, event):
        """Handler for mouse motion"""
        if self._active:
            dx = event.x - self._xy[0]
            dy = event.y - self._xy[1]
            self._xy = (event.x, event.y)

            rot1 = Quaternion.from_v_theta(self._v_UD, 0.01 * dy)
            rot2 = Quaternion.from_v_theta(self._v_LR, 0.01 * dx)

            self.current_rot = (rot2 * rot1 * self.current_rot)
            self.draw_cube()

To see the interactive cube, run the following code (interactivity will not work within the web browser). You should be able to use the mouse to change the orientation of the cube, and hold the shift key to change the type of rotation used.

fig = plt.figure(figsize=(4, 4))
ax = CubeAxesInteractive(fig)
fig.add_axes(ax)
ax.draw_cube()
display(fig)

Note that the image above is a static view: To experience the interactivity, download the notebook and run it on your own machine. If you're anything like me, your mind is probably spinning with all the possibilities of these tools. My eventual goal is to make a fully functional Rubik's cube simulator using these sorts of techniques. It will take some thinking, but I think it can be done, and it would be a very cool demonstration of Matplotlib's capabilities! I'll be working on this with David Hogg over at the Magic Cube github repository: much of the simple script shown above was adapted from code I first wrote there. The possibilities are endless, and I hope you have fun with these tools!
This post was written entirely in an IPython Notebook: the notebook file is available for download here. For more information on blogging with notebooks in octopress, see my previous post on the subject.
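As a quick sanity check on the quaternion algebra used in the post, here is a standalone pure-Python sketch (independent of the numpy-based class above; the helper names are mine): composing two rotations about the same axis should simply add their angles.

```python
import math

def quat_from_axis_angle(v, theta):
    # v is assumed to be a unit 3-vector; returns (w, x, y, z)
    s = math.sin(0.5 * theta)
    return (math.cos(0.5 * theta), s * v[0], s * v[1], s * v[2])

def quat_mul(p, q):
    # Hamilton product of two quaternions (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# Rotating by pi/6 and then pi/3 about z should equal one pi/2 rotation.
z_axis = (0.0, 0.0, 1.0)
q1 = quat_from_axis_angle(z_axis, math.pi / 6)
q2 = quat_from_axis_angle(z_axis, math.pi / 3)
composed = quat_mul(q2, q1)
expected = quat_from_axis_angle(z_axis, math.pi / 2)
print(all(abs(a - b) < 1e-12 for a, b in zip(composed, expected)))  # True
```

This is exactly the property that makes quaternions pleasant for interactive rotation: each mouse drag just multiplies a small rotation onto the current one.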
http://jakevdp.github.io/blog/2012/11/24/simple-3d-visualization-in-matplotlib/
About tgext.scss

SCSS is a cool and useful extension to CSS, but it has always required some effort to use, and even specific production-environment configuration on some systems. tgext.scss was born to make life easier for TurboGears2 developers: it relies on an internal minimal SCSS compiler (based on the Zeta-Library SCSS parser) to serve all the files in your public directory that end with .scss as text/css, converting and minifying them.

Installing

tgext.scss can be installed both from PyPI and from Bitbucket:

easy_install tgext.scss

should just work for most of the users.

Enabling tgext.scss

If tgext.pluggable is available, enabling tgext.scss is just a matter of appending to your config/app_cfg.py:

from tgext.pluggable import plug
plug(base_config, 'tgext.scss')

Otherwise manually using tgext.scss is really simple: edit your config/middleware.py and, just after the #Wrap your base TurboGears 2 application with custom middleware here comment, wrap app with SCSSMiddleware:

from tgext.scss import SCSSMiddleware
app = SCSSMiddleware(app)
return app

Now you just have to put your .scss files inside public/css and they will be served as CSS.

@import Support

tgext.scss provides minimal support for the @import command. The required syntax is in the form:

@import url('/css/file.scss');

The specified path is relative to your project public files directory. Nested imports are not implemented right now; this means that imported files cannot import another scss file.

How much will it slow me down?

Actually, as tgext.scss uses aggressive caching it won't slow you down at all; indeed it might even be able to serve your CSS files even faster.
Here is the report of a benchmark (absolutely not reliable, as every other benchmark) made on paster serving the same file as plain CSS or as SCSS:

$ /usr/sbin/ab -n 1000
Requests per second: 961.26 [#/sec] (mean)
$ /usr/sbin/ab -n 1000
Requests per second: 1200.34 [#/sec] (mean)

In this case SCSS is even faster than directly serving the same CSS file, as it is served from memory (due to the caching performed by tgext.scss) and is also minified, resulting in less bandwidth usage. Of course this means that tgext.scss will require a bit more memory than serving your CSS files alone, but as CSS files are usually small this amount is negligible.
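For reference, here is a hypothetical public/css/style.scss (the file name and rules are illustrative only) combining the @import form above with ordinary SCSS nesting:

```scss
@import url('/css/reset.scss');

#header {
    color: #333;

    /* nested rule: compiles to "#header .title { ... }" */
    .title {
        font-weight: bold;
    }
}
```

A request for /css/style.css-style content at /css/style.scss would then be served as compiled, minified text/css.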
https://bitbucket.org/_amol_/tgext.scss/src
NAME
setuid - set user identity
SYNOPSIS
#include <sys/types.h>
#include <unistd.h>
int setuid(uid_t uid);
DESCRIPTION
setuid() sets the effective user ID of the calling process. If the effective UID of the caller is root, the real UID and saved set-user-ID are also set. Thus, a set-user-ID-root program wishing to temporarily drop root privileges, assume the identity of a non-root user, and then regain root privileges afterwards cannot use setuid(). You can accomplish this with the (non-POSIX, BSD) call seteuid(2).
SEE ALSO
getuid(2), seteuid(2), setfsuid(2), setreuid(2), capabilities(7), credentials(7)
COLOPHON
This page is part of release 2.77 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/hardy/en/man2/setuid32.2.html
What a beautiful baby. Congratulations!!!!

You are right, the -keepold option processes the file again, no matter if it is the same. But the problem is that archives.inf only maintains the last entry, and it should have both files. Let me explain why: we have a customer that every day generates a set of images, image001.jpg to image100.jpg. Every day those 100 images are copied to the import directory, so every day we will have the same filenames, with the only difference that after running the import process, we clean the import folder just to get it ready to receive the new photos. But if import -keepold is executed, the older 100 photos don't appear in archive.inf. If I use -incremental, it says: 0 documents were considered for processing.

So, I think it would be better to get an extra option for import.pl, something like -import_again, so for same filenames we get 2 entries in archive.inf. Could it be possible?

Thanks a lot.

Diego

-----Original Message-----
From: Katherine Don [mailto:kjdon64@cs.waikato.ac.nz]
Sent: Friday, February 22, 2008 06:12 p.m.
To: Diego Spano
CC: 'Greenstone WAIKATO'
Subject: Re: [greenstone-devel] Problem with hash and archives.inf

Hi Diego
http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0gsarch--00-0----0-10-0---0---0direct-10---4-----dfr--0-1l--11-en-50---20-preferences-djwhite3%26%2364%3Bbuffalo.edu--00-0-21-00-0--4----0-0-11-10-0utfZz-8-00&a=d&cl=CL3.7.2&d=001601c87892$8c84a240$7c3401c8-diegos
The Wilcoxon Signed-Rank Test is the non-parametric version of the paired samples t-test. It is used to test whether or not there is a significant difference between two population means when the distribution of the differences between the two samples cannot be assumed to be normal. This tutorial explains how to conduct a Wilcoxon Signed-Rank Test in Python.

Example: Wilcoxon Signed-Rank Test in Python

Researchers want to know if a new fuel treatment leads to a change in the average mpg of a certain car. To test this, they measure the mpg of 12 cars with and without the fuel treatment.

Use the following steps to perform a Wilcoxon Signed-Rank Test in Python to determine if there is a difference in the mean mpg between the two groups.

Step 2: Conduct a Wilcoxon Signed-Rank Test.

Next, we'll use the wilcoxon() function from the scipy.stats library to conduct a Wilcoxon Signed-Rank Test, which uses the following syntax:

wilcoxon(x, y, alternative='two-sided')

where:
- x: an array of sample observations from group 1
- y: an array of sample observations from group 2
- alternative: defines the alternative hypothesis. Default is 'two-sided' but other options include 'less' and 'greater.'

Here's how to use this function in our specific example:

import scipy.stats as stats

#perform the Wilcoxon Signed-Rank Test
stats.wilcoxon(group1, group2)

(statistic=10.5, pvalue=0.044)

The test statistic is 10.5 and the corresponding two-sided p-value is 0.044.

Step 3: Interpret the results.

In this example, the Wilcoxon Signed-Rank Test uses the following null and alternative hypotheses:

H0: The mpg is equal between the two groups
HA: The mpg is not equal between the two groups

Since the p-value (0.044) is less than 0.05, we reject the null hypothesis. We have sufficient evidence to say that the true mean mpg is not equal between the two groups.
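To see what the statistic reported above actually is, here is a pure-Python sketch (not scipy's implementation) of the signed-rank computation: rank the non-zero paired differences by absolute value, sum the ranks of the positive and negative differences separately, and report the smaller sum. The data below are hypothetical and are not the article's mpg measurements, so the value differs from the 10.5 in the example:

```python
# Sketch of the statistic scipy.stats.wilcoxon reports for the
# two-sided test: min(W+, W-), the smaller sum of signed ranks.

def signed_rank_statistic(x, y):
    # Paired differences, discarding zeros as the classic test does.
    diffs = [a - b for a, b in zip(x, y) if a != b]
    # Rank the absolute differences (average ranks for ties).
    by_abs = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(by_abs):
        j = i
        while j + 1 < len(by_abs) and abs(diffs[by_abs[j + 1]]) == abs(diffs[by_abs[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[by_abs[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

group1 = [21, 18, 25, 20, 24]   # hypothetical mpg without treatment
group2 = [20, 20, 22, 24, 19]   # hypothetical mpg with treatment
print(signed_rank_statistic(group1, group2))  # 6.0
```

A small statistic means most of the large-magnitude differences share one sign, which is exactly the evidence the test weighs against the null hypothesis.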
https://www.statology.org/wilcoxon-signed-rank-test-python/
Sequences (F#) A sequence is a logical series of elements all of one type. Sequences are particularly useful when you have a large, ordered collection of data but do not necessarily expect to use all the elements. Individual sequence elements are computed only as required, so a sequence can provide better performance than a list in situations in which not all the elements are used. Sequences are represented by the seq<'T> type, which is an alias for IEnumerable<T>. Therefore, any .NET Framework type that implements System.IEnumerable can be used as a sequence. The Seq module provides support for manipulations involving sequences. Sequence Expressions A sequence expression is an expression that evaluates to a sequence. Sequence expressions can take a number of forms. The simplest form specifies a range. For example, seq { 1 .. 5 } creates a sequence that contains five elements, including the endpoints 1 and 5. You can also specify an increment (or decrement) between two double periods. For example, the following code creates the sequence of multiples of 10. // Sequence that has an increment. seq { 0 .. 10 .. 100 } Sequence expressions are made up of F# expressions that produce values of the sequence. They can use the yield keyword to produce values that become part of the sequence. Following is an example. seq { for i in 1 .. 10 do yield i * i } You can use the -> operator instead of yield, in which case you can omit the do keyword, as shown in the following example. seq { for i in 1 .. 10 -> i * i } The following code generates a list of coordinate pairs along with an index into an array that represents the grid. let (height, width) = (10, 10) seq { for row in 0 .. width - 1 do for col in 0 .. height - 1 do yield (row, col, row*width + col) } An if expression used in a sequence is a filter. For example, to generate a sequence of only prime numbers, assuming that you have a function isprime of type int -> bool, construct the sequence as follows. seq { for n in 1 .. 
100 do if isprime n then yield n } When you use yield or -> in an iteration, each iteration is expected to generate a single element of the sequence. If each iteration produces a sequence of elements, use yield!. In that case, the elements generated on each iteration are concatenated to produce the final sequence. You can combine multiple expressions together in a sequence expression. The elements generated by each expression are concatenated together. For an example, see the "Examples" section of this topic. Examples The first example uses a sequence expression that contains an iteration, a filter, and a yield to generate an array. This code prints a sequence of prime numbers between 1 and 100 to the console. // Recursive isprime function. let isprime n = let rec check i = i > n/2 || (n % i <> 0 && check (i + 1)) check 2 let aSequence = seq { for n in 1..100 do if isprime n then yield n } for x in aSequence do printfn "%d" x The following code uses yield to create a multiplication table that consists of tuples of three elements, each consisting of two factors and the product. let multiplicationTable = seq { for i in 1..9 do for j in 1..9 do yield (i, j, i*j) } The following example demonstrates the use of yield! to combine individual sequences into a single final sequence. In this case, the sequences for each subtree in a binary tree are concatenated in a recursive function to produce the final sequence. // Yield the values of a binary tree in a sequence. type Tree<'a> = | Tree of 'a * Tree<'a> * Tree<'a> | Leaf of 'a // inorder : Tree<'a> -> seq<'a> let rec inorder tree = seq { match tree with | Tree(x, left, right) -> yield! inorder left yield x yield! inorder right | Leaf x -> yield x } let mytree = Tree(6, Tree(2, Leaf(1), Leaf(3)), Leaf(9)) let seq1 = inorder mytree printfn "%A" seq1 Using Sequences Sequences support many of the same functions as lists. Sequences also support operations such as grouping and counting by using key-generating functions. 
Sequences also support more diverse functions for extracting subsequences. Many data types, such as lists, arrays, sets, and maps are implicitly sequences because they are enumerable collections. A function that takes a sequence as an argument works with any of the common F# data types, in addition to any .NET Framework data type that implements IEnumerable<T>. Contrast this to a function that takes a list as an argument, which can only take lists. The type seq<'a> is a type abbreviation for IEnumerable<'a>. This means that any type that implements the generic IEnumerable<T>, which includes arrays, lists, sets, and maps in F#, and also most .NET Framework collection types, is compatible with the seq type and can be used wherever a sequence is expected. Module Functions The Seq module in the Microsoft.FSharp.Collections namespace contains functions for working with sequences. These functions work with lists, arrays, maps, and sets as well, because all of those types are enumerable, and therefore can be treated as sequences. Creating Sequences You can create sequences by using sequence expressions, as described previously, or by using certain functions. You can create an empty sequence by using Seq.empty, or you can create a sequence of just one specified element by using Seq.singleton. let seqEmpty = Seq.empty let seqOne = Seq.singleton 10 You can use Seq.init to create a sequence for which the elements are created by using a function that you provide. You also provide a size for the sequence. This function is just like List.init, except that the elements are not created until you iterate through the sequence. The following code illustrates the use of Seq.init. let seqFirst5MultiplesOf10 = Seq.init 5 (fun n -> n * 10) Seq.iter (fun elem -> printf "%d " elem) seqFirst5MultiplesOf10 The output is 0 10 20 30 40 By using Seq.ofArray and Seq.ofList<'T> Function (F#), you can create sequences from arrays and lists. 
However, you can also convert arrays and lists to sequences by using a cast operator. Both techniques are shown in the following code. // Convert an array to a sequence by using a cast. let seqFromArray1 = [| 1 .. 10 |] :> seq<int> // Convert an array to a sequence by using Seq.ofArray. let seqFromArray2 = [| 1 .. 10 |] |> Seq.ofArray By using Seq.cast, you can create a sequence from a weakly typed collection, such as those defined in System.Collections. Such weakly typed collections have the element type Object and are enumerated by using the non-generic IEnumerable type. The following code illustrates the use of Seq.cast to convert an ArrayList into a sequence. open System let mutable arrayList1 = new System.Collections.ArrayList(10) for i in 1 .. 10 do arrayList1.Add(10) |> ignore let seqCast : seq<int> = Seq.cast arrayList1 You can define infinite sequences by using the Seq.initInfinite function. For such a sequence, you provide a function that generates each element from the index of the element. Infinite sequences are possible because of lazy evaluation; elements are created as needed by calling the function that you specify. The following code example produces an infinite sequence of floating point numbers, in this case the alternating series of reciprocals of squares of successive integers. let seqInfinite = Seq.initInfinite (fun index -> let n = float( index + 1 ) 1.0 / (n * n * (if ((index + 1) % 2 = 0) then 1.0 else -1.0))) printfn "%A" seqInfinite Seq.unfold generates a sequence from a computation function that takes a state and transforms it to produce each subsequent element in the sequence. The state is just a value that is used to compute each element, and can change as each element is computed. The second argument to Seq.unfold is the initial value that is used to start the sequence. Seq.unfold uses an option type for the state, which enables you to terminate the sequence by returning the None value.
The following code shows two examples of sequences, seq1 and fib, that are generated by an unfold operation. The first, seq1, is just a simple sequence with numbers up to 100. The second, fib, uses unfold to compute the Fibonacci sequence. Because each element in the Fibonacci sequence is the sum of the previous two Fibonacci numbers, the state value is a tuple that consists of the previous two numbers in the sequence. The initial value is (1,1), the first two numbers in the sequence. The output is as follows: The following code is an example that uses many of the sequence module functions described here to generate and compute the values of infinite sequences. The code might take a few minutes to run. // infiniteSequences.fs // generateInfiniteSequence generates sequences of floating point // numbers. The sequences generated are computed from the fDenominator // function, which has the type (int -> float) and computes the // denominator of each term in the sequence from the index of that // term. The isAlternating parameter is true if the sequence has // alternating signs. let generateInfiniteSequence fDenominator isAlternating = if (isAlternating) then Seq.initInfinite (fun index -> 1.0 /(fDenominator index) * (if (index % 2 = 0) then -1.0 else 1.0)) else Seq.initInfinite (fun index -> 1.0 /(fDenominator index)) // The harmonic series is the series of reciprocals of whole numbers. let harmonicSeries = generateInfiniteSequence (fun index -> float index) false // The harmonic alternating series is like the harmonic series // except that it has alternating signs. let harmonicAlternatingSeries = generateInfiniteSequence (fun index -> float index) true // This is the series of reciprocals of the odd numbers. let oddNumberSeries = generateInfiniteSequence (fun index -> float (2 * index - 1)) true // This is the series of reciprocals of the squares.
let squaresSeries = generateInfiniteSequence (fun index -> float (index * index)) false // This function sums a sequence, up to the specified number of terms. let sumSeq length sequence = Seq.unfold (fun state -> let subtotal = snd state + Seq.nth (fst state + 1) sequence if (fst state >= length) then None else Some(subtotal,(fst state + 1, subtotal))) (0, 0.0) // This function sums an infinite sequence up to a given value // for the difference (epsilon) between subsequent terms, // up to a maximum number of terms, whichever is reached first. let infiniteSum infiniteSeq epsilon maxIteration = infiniteSeq |> sumSeq maxIteration |> Seq.pairwise |> Seq.takeWhile (fun elem -> abs (snd elem - fst elem) > epsilon) |> List.ofSeq |> List.rev |> List.head |> snd // Compute the sums for three sequences that converge, and compare // the sums to the expected theoretical values. let result1 = infiniteSum harmonicAlternatingSeries 0.00001 100000 printfn "Result: %f ln2: %f" result1 (log 2.0) let pi = Math.PI let result2 = infiniteSum oddNumberSeries 0.00001 10000 printfn "Result: %f pi/4: %f" result2 (pi/4.0) // Because this is not an alternating series, a much smaller epsilon // value and more terms are needed to obtain an accurate result. let result3 = infiniteSum squaresSeries 0.0000001 1000000 printfn "Result: %f pi*pi/6: %f" result3 (pi*pi/6.0) Searching and Finding Elements Sequences support functionality available with lists: Seq.exists, Seq.exists2, Seq.find, Seq.findIndex, Seq.pick, Seq.tryFind, and Seq.tryFindIndex. The versions of these functions that are available for sequences evaluate the sequence only up to the element that is being searched for. For examples, see Lists. Obtaining Subsequences Seq.filter and Seq.choose are like the corresponding functions that are available for lists, except that the filtering and choosing does not occur until the sequence elements are evaluated. 
Seq.truncate creates a sequence from another sequence, but limits the sequence to a specified number of elements. Seq.take creates a new sequence that contains only a specified number of elements from the start of a sequence. If there are fewer elements in the sequence than you specify to take, Seq.take throws an InvalidOperationException. The difference between Seq.take and Seq.truncate is that Seq.truncate does not produce an error if the number of elements is fewer than the number you specify. The following code shows the behavior of and differences between Seq.truncate and Seq.take. let mySeq = seq { for i in 1 .. 10 -> i*i } let truncatedSeq = Seq.truncate 5 mySeq let takenSeq = Seq.take 5 mySeq let truncatedSeq2 = Seq.truncate 20 mySeq let takenSeq2 = Seq.take 20 mySeq let printSeq seq1 = Seq.iter (printf "%A ") seq1; printfn "" // Up to this point, the sequences are not evaluated. // The following code causes the sequences to be evaluated. truncatedSeq |> printSeq truncatedSeq2 |> printSeq takenSeq |> printSeq // The following line produces a run-time error (in printSeq): takenSeq2 |> printSeq The output, before the error occurs, is as follows. 1 4 9 16 25 1 4 9 16 25 36 49 64 81 100 1 4 9 16 25 1 4 9 16 25 36 49 64 81 100 By using Seq.takeWhile, you can specify a predicate function (a Boolean function) and create a sequence from another sequence made up of those elements of the original sequence for which the predicate is true, but stop before the first element for which the predicate returns false. Seq.skip returns a sequence that skips a specified number of the first elements of another sequence and returns the remaining elements. Seq.skipWhile returns a sequence that skips the first elements of another sequence as long as the predicate returns true, and then returns the remaining elements, starting with the first element for which the predicate returns false.
The following code example illustrates the behavior of and differences between Seq.takeWhile, Seq.skip, and Seq.skipWhile. // takeWhile let mySeqLessThan10 = Seq.takeWhile (fun elem -> elem < 10) mySeq mySeqLessThan10 |> printSeq // skip let mySeqSkipFirst5 = Seq.skip 5 mySeq mySeqSkipFirst5 |> printSeq // skipWhile let mySeqSkipWhileLessThan10 = Seq.skipWhile (fun elem -> elem < 10) mySeq mySeqSkipWhileLessThan10 |> printSeq The output is as follows. 1 4 9 36 49 64 81 100 16 25 36 49 64 81 100 Transforming Sequences Seq.pairwise creates a new sequence in which successive elements of the input sequence are grouped into tuples. let printSeq seq1 = Seq.iter (printf "%A ") seq1; printfn "" let seqPairwise = Seq.pairwise (seq { for i in 1 .. 10 -> i*i }) printSeq seqPairwise printfn "" let seqDelta = Seq.map (fun elem -> snd elem - fst elem) seqPairwise printSeq seqDelta Seq.windowed is like Seq.pairwise, except that instead of producing a sequence of tuples, it produces a sequence of arrays that contain copies of adjacent elements (a window) from the sequence. You specify the number of adjacent elements you want in each array. The following code example demonstrates the use of Seq.windowed. In this case the number of elements in the window is 3. The example uses printSeq, which is defined in the previous code example. let seqNumbers = [ 1.0; 1.5; 2.0; 1.5; 1.0; 1.5 ] :> seq<float> let seqWindows = Seq.windowed 3 seqNumbers let seqMovingAverage = Seq.map Array.average seqWindows printfn "Initial sequence: " printSeq seqNumbers printfn "\nWindows of length 3: " printSeq seqWindows printfn "\nMoving average: " printSeq seqMovingAverage The output is as follows. Operations with Multiple Sequences Seq.zip and Seq.zip3 take two or three sequences and produce a sequence of tuples. These functions are like the corresponding functions available for lists. There is no corresponding functionality to separate one sequence into two or more sequences. 
If you need this functionality for a sequence, convert the sequence to a list and use List.unzip. Sorting, Comparing, and Grouping The sorting functions supported for lists also work with sequences. This includes Seq.sort and Seq.sortBy. These functions iterate through the whole sequence. You compare two sequences by using the Seq.compareWith function. The function compares successive elements in turn, and stops when it encounters the first unequal pair. Any additional elements do not contribute to the comparison. The following code shows the use of Seq.compareWith. let sequence1 = seq { 1 .. 10 } let sequence2 = seq { 10 .. -1 .. 1 } let compareSequences = Seq.compareWith (fun elem1 elem2 -> if elem1 > elem2 then 1 elif elem1 < elem2 then -1 else 0) let compareResult = compareSequences sequence1 sequence2 match compareResult with | 1 -> printfn "Sequence1 is greater." | -1 -> printfn "Sequence2 is greater." | _ -> printfn "The sequences are equal." In the previous code, only the first element is computed and examined, and the result is -1. Seq.countBy takes a function that generates a value, called a key, for each element. Seq.countBy then returns a sequence that contains the key values, and a count of the number of elements that generated each value of the key. let mySeq1 = seq { 1.. 100 } let printSeq seq1 = Seq.iter (printf "%A ") seq1; printfn "" let seqResult = Seq.countBy (fun elem -> if elem % 3 = 0 then 0 elif elem % 3 = 1 then 1 else 2) mySeq1 printSeq seqResult The output is as follows. (1, 34) (2, 33) (0, 33) The previous output shows that there were 34 elements of the original sequence that produced the key 1, 33 values that produced the key 2, and 33 values that produced the key 0. You can group elements of a sequence by calling Seq.groupBy. Seq.groupBy takes a sequence and a function that generates a key from an element. The function is executed on each element of the sequence. Seq.groupBy returns a sequence of tuples, where the first element of each tuple is the key and the second is a sequence of elements that produce that key. The following code example shows the use of Seq.groupBy to partition the sequence of numbers from 1 to 100 into three groups that have the distinct key values 0, 1, and 2. let sequence = seq { 1 ..
100 } let printSeq seq1 = Seq.iter (printf "%A ") seq1; printfn "" let sequences3 = Seq.groupBy (fun index -> if (index % 3 = 0) then 0 elif (index % 3 = 1) then 1 else 2) sequence sequences3 |> printSeq The output is as follows. (1, seq [1; 4; 7; 10; ...]) (2, seq [2; 5; 8; 11; ...]) (0, seq [3; 6; 9; 12; ...]) You can create a sequence that eliminates duplicate elements by calling Seq.distinct. Or you can use Seq.distinctBy, which takes a key-generating function to be called on each element. The resulting sequence contains elements of the original sequence that have unique keys; later elements that produce a duplicate key to an earlier element are discarded. The following code example illustrates the use of Seq.distinct. Seq.distinct is demonstrated by generating sequences that represent binary numbers, and then showing that the only distinct elements are 0 and 1. let binary n = let rec generateBinary n = if (n / 2 = 0) then [n] else (n % 2) :: generateBinary (n / 2) generateBinary n |> List.rev |> Seq.ofList printfn "%A" (binary 1024) let resultSequence = Seq.distinct (binary 1024) printfn "%A" resultSequence The following code demonstrates Seq.distinctBy by starting with a sequence that contains negative and positive numbers and using the absolute value function as the key-generating function. The resulting sequence is missing all the positive numbers that correspond to the negative numbers in the sequence, because the negative numbers appear earlier in the sequence and therefore are selected instead of the positive numbers that have the same absolute value, or key. Readonly and Cached Sequences Seq.readonly creates a read-only copy of a sequence. Seq.readonly is useful when you have a read-write collection, such as an array, and you do not want to modify the original collection. This function can be used to preserve data encapsulation. In the following code example, a type that contains an array is created. 
A property exposes the array, but instead of returning an array, it returns a sequence that is created from the array by using Seq.readonly. type ArrayContainer(start, finish) = let internalArray = [| start .. finish |] member this.RangeSeq = Seq.readonly internalArray member this.RangeArray = internalArray let newArray = new ArrayContainer(1, 10) let rangeSeq = newArray.RangeSeq let rangeArray = newArray.RangeArray // These lines produce an error: //let myArray = rangeSeq :> int array //myArray.[0] <- 0 // The following line does not produce an error. // It does not preserve encapsulation. rangeArray.[0] <- 0 Seq.cache creates a stored version of a sequence. Use Seq.cache to avoid reevaluation of a sequence, or when you have multiple threads that use a sequence, but you must make sure that each element is acted upon only one time. When you have a sequence that is being used by multiple threads, you can have one thread that enumerates and computes the values for the original sequence, and remaining threads can use the cached sequence. Performing Computations on Sequences Simple arithmetic operations are like those of lists, such as Seq.average, Seq.sum, Seq.averageBy, Seq.sumBy, and so on. Seq.fold, Seq.reduce, and Seq.scan are like the corresponding functions that are available for lists. Sequences support a subset of the full variations of these functions that lists support. For more information and examples, see Lists (F#).
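The grouping and deduplication functions described earlier (Seq.countBy and Seq.distinctBy) have close analogues in other languages. A Python sketch (cross-language illustration only; the `distinct_by` helper is an assumption mirroring Seq.distinctBy's first-key-wins behavior):

```python
from collections import Counter

nums = range(1, 101)

# Seq.countBy analogue: count elements by a computed key.
counts = Counter(n % 3 for n in nums)
assert counts == {1: 34, 2: 33, 0: 33}   # same counts as the F# example above

# Seq.distinctBy analogue: keep the first element seen for each key.
def distinct_by(key, iterable):
    seen = set()
    for item in iterable:
        k = key(item)
        if k not in seen:
            seen.add(k)
            yield item

# Negative numbers appear first, so they win over positives with the same |key|.
assert list(distinct_by(abs, [-1, 1, -2, 3, 2])) == [-1, -2, 3]
```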
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/dd233209(v=vs.100)
CC-MAIN-2019-04
refinedweb
3,420
59.09
How To Auto-Generate DTOs Don't like writing DTOs? Automate it. See how you can use some Java and SQL together to automatically generate DTOs. Background Usually during database development, few people are willing to use JDBC unless they're learning. As we all know, Java code is verbose, even more so with JDBC! So we often use frameworks/libraries to access a database. There are many ORM frameworks available, for example: Hibernate, MyBatis, JdbcTemplate, DbUtils, ActiveRecord, JavaLite, etc. Many times, the framework can help us avoid writing SQL, but for some complex queries, hand-coded SQL is a must. Although the framework can do the mapping from SQL's ResultSet to a DTO (Data Transfer Object), we have to write those DTOs by hand, one by one. Problems In general, we need to write SQL and the corresponding DTOs. Of course, in order to save time, a Map is sometimes used to store data. However, although a Map is a very lightweight object, it will bring a few more important problems: The caller needs to remember each of the keys inside the Map, which will bring some of the so-called memory burden. If the memory burden is too heavy, it will cause the system to have complex logic, be difficult to understand, be even more difficult to maintain. If the SQL is changed, the key may be changed, too. The programmer needs to handle these changes very carefully. If you want to avoid these problems using a Map, we need to write DTOs for each SQL query. Writing these DTOs is boring; on the other hand, if the SQL query is changed, we must remember to come back to modify the DTOs, too.
Idea All of those problems can be solved if there is a tool that automatically does the following: Use the SQL code to directly generate the corresponding DTO. When the SQL code is changed, automatically modify the corresponding DTO. It saves the trouble of manually programming DTOs and synchronizes between SQL and DTO. This article attempts to solve the problem of how to automatically generate DTOs with the SQL code and improve development efficiency. Solution There is hope, but the reality is cruel! So, how can we create this tool? Let's first analyze the feasibility about this problem. To generate these DTOs automatically, the key is to get each name and data type of the SQL result set. If that is done, it will be very easy to generate these DTOs. As we all know, once the SQL finished, maybe the final run of the SQL statement is different, but the names and data types of the SQL result set are relatively fixed. In a very few cases, these names and data types of the SQL results set are not fixed. In this case, a Map maybe is more suitable, but there is no need to discuss that here. So, how do we get these column names and data types? One solution is to analyze the SQL code and find which columns are between "SELECT" and "FROM". However, there are some important difficulties: For parameterized dynamic SQL, this will be difficult to analyze. It is difficult to determine the column's data type. "SELECT * ..."; "CALL ..."; are very difficult to analyze. Like Mybatis, SQL queries written in configuration files, the above solution is somewhat feasible. I don't have a specific study, but I think there will be a lot of problems to be solved. Another solution is to find a way to run the SQL code directly. By running the SQL code, we can intercept the original SQL code. 
With the original SQL code, it is easy to get each name and data type, like this: ResultSet rs=statement.executeQuery("The original SQL code"); ResultSetMetaData rsmd = rs.getMetaData(); int cc = rsmd.getColumnCount(); for (int i = 1; i <= cc; i++) { int type = rsmd.getColumnType(i); String name = rsmd.getColumnName(i); //... } //... In general, the SQL query interface is a method with some parameters. To run the method directly, it is necessary to automatically initialize these parameters. To automate the solution to this problem, let's take a look at how this tool will face some of the challenges and their corresponding solutions. How to Define a Section of SQL Code First, we need to identify this code so that the code generator can run the code. Normally, our data interface is at the method level, so we can annotate on the method. How to Define the Class Name of a DTO A simple way is to use the combined class name and method name to make a new name. Sometimes, to be flexible, you should be allowed to specify a name. How to Run the SQL Code To execute code, the key is to construct the appropriate parameters of the method. First, we need to analyze the code of this method to extract the parameter name and type. A code parser can use a tool such as JavaCC or some syntax analyzer. Let's discuss how to construct these parameters. In order to simplify the problem, we will be building according to the following rules by default: Numeric parameter, default value: 0, e.g: int arg = 0; String parameter, default value: "", e.g: String arg = ""; Boolean parameter, default value: false, e.g: Boolean arg = false; Array parameter, default value: [0], e.g: int[] arg = new int[0]; Object parameter, default value: new(), e.g: User arg = new User(); ... In the case of some simple parameters, the above structural rules are basically able to work. 
However, for some special parameters, such as if a parameter is an interface, or some special values to run SQL, etc., the constructed parameters according to the above rules will throw a wrench into the program. This is a problem, but we can provide a parameter on the annotation — the parameter to help the code generator complete the parameter initialization. How to Generate the DTO Class After the previous processing, we can finally run the method. But we haven't got the DTO class that we want yet. One possible way is to wrap the JDBC, when running the method, to intercept the SQL, but the problem is that if the method has multiple queries, it will cause problems. Another way depends on the framework's support. We could intercept the method's return statement to get the SQL statement. With the SQL statement, it is not difficult to generate the DTO class. How to Modify the Called Method In order to minimize the work of developers, after the DTO class generated, we will also need to modify the method's return value as the corresponding DTO class. Generally, the returned DTO object has 3 types of manifestation: Single object List collection Collection of pages Because the method has to be run before generating DTO, the return value of the method should be denoted by common types. Besides, return value can be anyone of the above manifestations. Therefore, for the purpose of realizing automatic modification of return value of method, a simple agreement has to be made: Object represents that return value is a single DTO object. List represents that return value is a list collection. Page represents that return value is a collection of pages. In this case, tools can automatically make the following modifications according to the agreed return types(supposing that the generated type name of DTO is UserBlogs) : Object will be modified as UserBlogs. List will be modified as List<UserBlogs>. Page will be modified as Page<UserBlogs>. 
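The core idea of the solution — run the query once, read the column names from the result-set metadata, and emit a DTO source file — can be sketched in a few lines. The following is a Python/sqlite3 analogue of the JDBC ResultSetMetaData approach shown above (all names here are illustrative, not part of the article's tool):

```python
import sqlite3

def generate_dto(conn, sql, class_name):
    """Run the query once, then emit a DTO-like class from the result metadata."""
    cur = conn.execute(sql)
    # cursor.description plays the role of JDBC's ResultSetMetaData
    columns = [d[0] for d in cur.description]
    lines = [f"class {class_name}:",
             f"    def __init__(self, {', '.join(columns)}):"]
    lines += [f"        self.{c} = {c}" for c in columns]
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER, name TEXT)")
src = generate_dto(conn, "SELECT id, name FROM user", "UserDto")
assert "class UserDto:" in src
assert "self.name = name" in src
```

Note that, as the article observes, this only needs the query to be *runnable* once with placeholder parameters; the column names and types of the result set are stable even when the parameter values vary.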
How to Deal With Changes in SQL The simple way is: Once the code has changed in that class, all of the DTO classes are regenerated. However, it is clear that when there are a lot of query methods, the DTO code generation process will be very slow. There's also another suitable way: adding a fingerprint field in DTO class — its value, maybe length of the code and hashcode(or the MD5 value of code). First, calculate the method of the fingerprint and compare to the existing method's fingerprint. If it's the same, move on. Otherwise, the program considers the method to have changed and the DTO class needs an update. Implementation Finally, we use a specific implementation as an example. It needs to introduce two projects : Monalisa-orm: This is a simple ORM framework. It introduces the database using an annotation: @DB(jdbc_url, username, password). Monalisa-eclipse: This is an eclipse plugin. It can: Interface with annotation @DB, and when the interface is saved, it automatically generates model classes. Method with annotation @Select, and when the class is saved, it automatically creates or updates the DTO. Easily write multi-line strings. Here are the instructions for installation and setup. Here is an example of how to automatically generated DTOs. The full example is here. package test.dao; public class UserBlogDao { // @Select indicating that the method will generate a DTO // Optional parameter: name // specifies the name of the class generated DTO, // if not specified, using the default: "Result" + the method's name // Optional parameter: build // a Java snippet for initializing the method parameters, // replace the default initialization rule @Select (name = "test.dao.userblogdao.UserBlogs") // !!! After saving, the plugin will automatically modify the return value: List -> List <UserBlogs> // // The first time, DTO class does not exist. In order to compile the code correctly, // the return value and the result of the query must be replaced by a generic value. 
// If saved, the plugin will automatically modify the results to the corresponding values. // // Here is the corresponding relationship between the return value and the results of the query: // 1. List query // Public DataTable method_name (...) {... return query.getList ();} or // Public List method_name (...) {... return query.getList ();} // // 2. Page query // Public Page method_name (...) {... return query.Page (); } // // 3. Single record // Public Object method_name (...) {... return query.getResult ();} // public List selectUserBlogs(int user_id){ Query q=TestDB.DB.createQuery(); q.add(""/**~{ SELECT a.id,a.name,b.title, b.content,b.create_time FROM user a, blog b WHERE a.id=b.user_id AND a.id=? }*/, user_id); return q.getList(); } } Saved with the above code, the plugin will generate the DTO automatically: test.dao.userblogdao.UserBlogs as follows: /** * Auto generated code by monalisa 1.7.0 * * @see test.dao.UserBlogDao#selectUserBlogs(int) */ public class UserBlogs implements java.io.Serializable{ /** * Some comments from database ... */ @Column(table=User.M.TABLE, jdbcType=4, name=User.M.id$name, ...) private Integer id; //Other fields and get/set methods ... } Method declaration: public List<UserBlogs> selectUserBlogs (int user_id) { ... return q.getList(UserBlogs.class); } Of course, if the method changes (even if it is just a whitespace), after the file saved, UserBlogs will be updated automatically. In order to keep debugging easy, the following message will output in an Eclipse console named monalisa: 2016-06-27 17:00:31 [I] ****** Starting generate result classes from: test.dao.UserBlogDao ****** 2016-06-27 17:00:31 [I] Create class: test.result.UserBlogs, from: [selectUserBlogs (int)] SELECT a.id, a.name, b.title, b.content, b.create_time FROM user a, blog b WHERE a.id = b.user_id AND a.id = 0 Additionally... When writing SQL in Java, multi-line strings can be troublesome. 
A large segment of the SQL code with many new lines and escape symbols looked uncomfortable. Monalisa-eclipse plugin also solves the problem of writing multi-line strings. E.g: System.out.println (""/**~{ SELECT * FROM user WHERE name = "zzg" }*/); Output: SELECT * FROM user WHERE name = "zzg" Of course, in order to write multi-line string quickly, you can set up Java editor templates in Eclipse. For more details about multi-line syntax, please refer here. So much for that, "How To Auto Generate DTO According SQL Code" is introduced. Welcome to discuss with me, thank you!
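The fingerprint idea from the "How to Deal With Changes in SQL" section — store the length plus a hash of the method's code, and regenerate the DTO only when the fingerprint changes — can be sketched generically (Python; names are illustrative, not the monalisa implementation):

```python
import hashlib

def fingerprint(method_source: str) -> str:
    """Length + MD5 of the method's source, as suggested in the article."""
    digest = hashlib.md5(method_source.encode("utf-8")).hexdigest()
    return f"{len(method_source)}:{digest}"

v1 = fingerprint("SELECT a.id FROM user a WHERE a.id = ?")
v2 = fingerprint("SELECT a.id, a.name FROM user a WHERE a.id = ?")

# The same source always yields the same fingerprint...
assert fingerprint("SELECT a.id FROM user a WHERE a.id = ?") == v1
# ...and any change (even whitespace, as the article notes) changes it,
# signaling that the corresponding DTO class needs regeneration.
assert v1 != v2
```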
https://dzone.com/articles/how-to-auto-generate-dtos?fromrel=true
CC-MAIN-2019-04
refinedweb
2,042
56.96
NAME
getcwd, getwd, get_current_dir_name - get current working directory
SYNOPSIS
#include <unistd.h> char *getcwd(char *buf, size_t size); char *getwd(char *buf); char *get_current_dir_name(void);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)): get_current_dir_name(): _GNU_SOURCE
RETURN VALUE
On success, these functions return a pointer to a string containing the pathname of the current working directory. In the case of getcwd() and getwd() this is the same value as buf. On failure, these functions return NULL, and errno is set to indicate the error. The contents of the array pointed to by buf are undefined on error.
ERRORS
ERANGE The size argument is less than the length of the absolute pathname of the working directory, including the terminating null byte. You need to allocate a bigger array and try again.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO
NOTES
Under Linux, these functions make use of the getcwd() system call (available since Linux 2.1.92). On older systems they would query /proc/self/cwd.
C library/kernel differences
SEE ALSO
pwd(1), chdir(2), fchdir(2), open(2), unlink(2), free(3), malloc(3)
COLOPHON
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
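The ERRORS section advises allocating a bigger array and retrying when the buffer is too small. That grow-and-retry pattern can be sketched by calling the C getcwd() directly through Python's ctypes (assuming Linux/glibc; this is an illustration, not part of the man page):

```python
import ctypes
import ctypes.util
import errno
import os

# Load the C library; libc.so.6 is a glibc-specific fallback assumption.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6", use_errno=True)
libc.getcwd.restype = ctypes.c_char_p
libc.getcwd.argtypes = [ctypes.c_char_p, ctypes.c_size_t]

size = 1
while True:
    buf = ctypes.create_string_buffer(size)
    if libc.getcwd(buf, size):
        cwd = buf.value.decode()
        break
    err = ctypes.get_errno()
    if err != errno.ERANGE:
        raise OSError(err, os.strerror(err))
    size *= 2  # buffer too small: allocate a bigger array and try again

assert cwd == os.getcwd()
```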
https://manpages.debian.org/unstable/manpages-dev/getcwd.3.en.html
CC-MAIN-2021-43
refinedweb
198
65.83
Creating and Editing File Templates IntelliJ IDEA: ${PACKAGE_NAME} - the name of the target package where the new class or interface will be created. IntelliJ IDEA provides a set of additional variables for PHP include templates, that is, templates of reusable fragments that can be included in other PHP file templates. The built-in PHP include templates are intended for generating file headers and PHPDoc documentation comments. The following variables are available in PHP include templates: ${NAME} - the name of the class, field, or function (method) for which the PHPDoc comment will be generated. ${NAMESPACE} - the fully qualified name (without a leading slash) of the class or field namespace. ${CLASS_NAME} - the name of the class where the field to generate the PHPDoc comment for is defined. ${STATIC} - gets the value static if the function (method) or field to generate the comment for is static. Otherwise evaluates to an empty string. ${TYPE_HINT} - a prompt for the return value of the function (method) to generate the comment for. If the return type cannot be detected through the static analysis of the function (method), evaluates to void. ${PARAM_DOC} - a documentation comment for parameters. Evaluates to a set of lines @param type name. If the function to generate comments for does not contain any parameters, the variable evaluates to empty content. ${THROWS_DOC} - a documentation comment for exceptions. Evaluates to a set of lines @throws type. If the function to generate comments for does not throw any exceptions, the variable evaluates to empty content. ${DS} - a dollar character ( $). The variable evaluates to a plain dollar character ( $) and is used when you need to escape this symbol so it is not treated as a prefix of a variable. ${CARET} - indicates the position of the caret after generating and adding the comment. IntelliJ IDEA cannot "choose" the block to apply the ${CARET} variable in, therefore in this case the ${CARET} variable is ignored.
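The substitution model behind these templates is simple ${VAR} expansion. A minimal Python sketch of that model (illustrative only — the `expand` function and its empty-string fallback for unknown variables are assumptions, not IntelliJ's actual template engine):

```python
import re

def expand(template: str, variables: dict) -> str:
    """Replace ${VAR} placeholders; ${DS} yields '$', unknown names yield ''."""
    def substitute(match):
        name = match.group(1)
        if name == "DS":
            return "$"          # escape hatch for a literal dollar sign
        return str(variables.get(name, ""))
    return re.sub(r"\$\{(\w+)\}", substitute, template)

header = "/** ${NAME} in ${NAMESPACE}, costs ${DS}5 */"
out = expand(header, {"NAME": "UserRepository", "NAMESPACE": "App\\Data"})
assert out == "/** UserRepository in App\\Data, costs $5 */"
```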
https://www.jetbrains.com/help/idea/2016.3/creating-and-editing-file-templates.html
CC-MAIN-2017-04
refinedweb
315
55.64
projects so you need not worry about them. have you read document...struts Hi, i want to develop a struts application,iam using eclipse... as such. Moreover war files are the compressed form of your projects JAva Projects - Java Magazine JAva Projects I need Some Java Projects J2EE - Struts J2EE what is Struts Architecture? Hi Friend, Please visit the following links: Java Project Outsourcing, Java Outsourcing Projects, Oursource your Java development projects Java Project Outsourcing - Outsource Java development projects Java... the quality products in less time. Outsource your Java Projects to our... projects. We use latest software frameworks such as Spring j2ee - Struts j2ee hi can you explain what is proxy interface in delegater design pattern struts struts <p>hi here is my code in struts i want to validate my... }//execute }//class struts-config.xml <struts..."/> </plug-in> </struts-config> validator Tutorials - Jakarta Struts Tutorial Struts Tutorials - Jakarta Struts Tutorial Learn Struts Framework with the help of examples and projects. Struts 2 Training! Get..., Struts Projects, Struts Presentations, Struts MappingDispatchAction Example Struts Books for applying Struts to J2EE projects and generally accepted best practices as well... covers everything you need to know about Struts and its supporting technologies...- projects: AppFuse - A baseline Struts application to be j2ee j2ee I want program for login page with database connectivity using struts framework. that application should session management and cookies Building Projects - Maven2 Building Projects - Maven2  ... Source build tool that made the revolution in the area of building projects... is non trivial because all file references need to be relative, environment must java projects java projects i have never made any projects in any language. i want to make project in java .i don't know a bit about this .i am familar with java.please show me the path please...... 
Hi, You can develop How to build a Struts Project - Struts How to build a Struts Project Please Help me. i will be building a small Struts Projects, please give some Suggestion & tips Java projects Easy Projects .NET Easy Projects .NET Easy Projects .NET is - AJAX-based project management and team collaboration... and whistles. No need to worry about a sophisticated setup process. It's as easy Java Marketing projects Java Marketing projects Java Marketing Threads in realtime projects Threads in realtime projects Explain where we use threads in realtime projects with example java projects - Java Beginners java projects hi, im final yr eng student.plz give me latest java or web related topics for last yr projects Struts - Struts Java Bean tags in struts 2 i need the reference of bean tags in struts 2. Thanks! Hello,Here is example of bean tags in struts 2: Struts 2 UI java - Struts java hi sir, i need Structs Architecture and flow of the Application. Hi friend, Struts is an open source framework used for developing J2EE web applications using Model View Controller (MVC) design - History of Struts 2 framework. So, the team of Apache Struts and another J2EE framework, WebWork...; Strut2 contains the combined features of Struts Ti and WebWork 2 projects... Struts 2 History 2 project samples struts 2 project samples please forward struts 2 sample projects like hotel management system. i've done with general login application and all. Ur answers are appreciated. Thanks in advance Raneesh struts - Struts struts Hi, I need the example programs for shopping cart using struts with my sql. Please send the examples code as soon as possible. please send it immediately. 
Regards, Valarmathi Hi Friend, Please projects on cyber cafe projects on cyber cafe To accept details from user like name Birth date address contact no etc and store in a database Hi Friend, Try this: import java.awt.*; import java.sql.*; import javax.swing.*; import struts - Struts knows about all the data that need to be displayed. It is model who is aware about.... ----------------------------------------------- Read for more information. Architecture - Struts Struts Architecture Hi Friends, Can u give clear struts architecture with flow. Hi friend, Struts is an open source framework used for developing J2EE web applications using Model View Controller tiles - Struts Struts Tiles I need an example of Struts Tiles Open Source projects Open Source projects Mono Open Source Project Mono provides...; SGI Open Source Project List The following projects...; Open-source projects get free checkup More open-source software - JDBC struts-taglib.jar I am not able to locate the struts-taglib.jar in downloaded struts file. Why? Do i need to download it again Outsourcing PHP Projects, Outsource PHP Projects Outsourcing PHP Projects - Outsource your PHP development projects Outsource your PHP projects to our PHP development in India. We have dedicated... outsourcing needs. Looking for outsourcing your PHP projects to company? Our Why Struts in web Application - Struts Why Struts in web Application Hi Friends, why struts introduced in to web application. Plz dont send any links . Need main reason for implementing struts. Thanks Prakash Free Java Projects - Servlet Interview Questions Free Java Projects Hi All, Can any one send the List of WebSites which will provide free JAVA projects with source code on "servlets" and "Jsp" relating to Banking Sector?
http://roseindia.net/tutorialhelp/comment/91453
CC-MAIN-2013-20
refinedweb
892
65.01
Append data xml file aspnet työt Fix upload error "Invalid File" File attached I need some to configure a delta printer smoothieware config file to work with the machine. It is for an Azteeg X5 GT board. I need to create a animation of my logo. ...development. :). [kirjaudu nähdäksesi URL:n]. We are planning to build website with for Soccer Live score with Content XML or JSON . Must manage can for user Friendly and Responsive website. Please refer to [kirjaudu nähdäksesi URL:n] for more details. an older version of outlook to do this project. (2010 or earlier version of Outlook) Here is one sample, ... import 2 xml of different language , WPML setup , site is already done I have 1 file save_ip .php (the task is to save log ip to txt file). I want to be anyone (including google bots) visiting my newspaper page (eg news .com, [kirjaudu nähdäksesi URL:n] / [kirjaudu nähdäksesi URL:n], news .com / [kirjaudu nähdäksesi URL:n] /, news. Com / *) then htaccess click activate file save_ip. php runs to save the ip. The ser... I am looking to make a ·3D printable file from photos and drawing of a prorotype of a swimming fin that would attach to bicycle shoes
https://www.fi.freelancer.com/job-search/append-data-xml-file-aspnet/
CC-MAIN-2019-22
refinedweb
204
76.52
Okay, I'm trying to get my head round modules here. Kinda embarrassing. So here is a module from a program I am writing: - Code: Select all def load_game(): file = open(lvl) game_progress = file.read() file.close() print(" ") print("Game loaded at level " + str(game_progress) + "!") print(" ") game_play() The part I am struggling at is variables. I want to use 'game_progress' at other points in the program, however it doesn't carry over the whole program, as it's local(?). I tried making it global by adding 'global game_progress' at the start of modules that used it, however, this had it's problems, and I have heard global variables are bad practice. I know the brackets at 'def load_game():' can help with this but I don't quite know how. Do I put game_progress in there? No idea! Sorry for my vague knowledge of python, I'm a beginner, and I've been doing well, but now I've started using modules I need to learn more about them. Thanks for the help!
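A common answer to this kind of question is to return the value from the function instead of relying on a global, and pass it along to whoever needs it. One idiomatic sketch (illustrative — the save-file format is assumed from the snippet above):

```python
import os
import tempfile

def load_game(path):
    """Read saved progress and return it, instead of setting a global."""
    with open(path) as f:
        game_progress = f.read().strip()
    print(f"Game loaded at level {game_progress}!")
    return game_progress

def game_play(game_progress):
    # The caller passes the value in explicitly.
    return f"playing level {game_progress}"

# demo: save level 3, then load it back
with tempfile.NamedTemporaryFile("w", suffix=".sav", delete=False) as f:
    f.write("3")
    save_path = f.name

level = load_game(save_path)
os.unlink(save_path)
```

This way `game_progress` never needs to be global: `load_game` returns it, and `game_play(level)` receives it as a parameter.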
http://www.python-forum.org/viewtopic.php?p=3284
CC-MAIN-2017-17
refinedweb
172
80.72
Haskell Quiz/Animal Quiz/Solution Ninju From HaskellWiki < Haskell Quiz | Animal Quiz module Main where import System.IO -- nodes are questions and leaves are animals data QuestionTree = Animal String | Question String QuestionTree QuestionTree - data Answer = Yes | No main :: IO () main = do hSetBuffering stdin NoBuffering play (Animal "Dog") return () play :: QuestionTree -> IO QuestionTree play root = do putStrLn "Think of an animal, I will try to guess what it is..." newRoot <- play' root playAgain <- ask "Do you want to play again?" case playAgain of Yes -> play newRoot No -> do putStrLn "Thanks for playing.." return newRoot play' :: QuestionTree -> IO QuestionTree play' animal@(Animal _) = do ans <- ask $ "Are you thinking of " ++ show animal ++ "?" case ans of Yes -> do putStrLn "I win this time." return animal No -> do putStrLn "I give up, you win!" getNewAnimal animal -- returns a new question play' question@(Question s y n) = do ans <- ask s case ans of Yes -> do newYes <- play' y return $ Question s newYes n No -> do newNo <- play' n return $ Question s y newNo getNewAnimal :: QuestionTree -> IO QuestionTree getNewAnimal animal = do putStrLn "Please help me improve my guesses!" putStrLn "What is the name of the animal you were thinking of?" name <- getLine let newAnimal = Animal name putStrLn $ "Now please enter a question that answers yes for " ++ show newAnimal ++ " and no for " ++ show animal question <- getLine return $ Question question newAnimal animal ask :: String -> IO Answer ask s = do putStrLn $ s ++ " (y/n)" getAnswer getAnswer :: IO Answer getAnswer = do ans <- getChar putStrLn "" case ans of 'y' -> return Yes 'n' -> return No _ -> putStrLn "That is not a valid response, please enter 'y' or 'n'..." >> getAnswer instance Show QuestionTree where show (Animal name) = (if elem (head name) "AEIOUaeiou" then "an " else "a ") ++ name show (Question s _ _) = s
http://www.haskell.org/haskellwiki/index.php?title=Haskell_Quiz/Animal_Quiz/Solution_Ninju&direction=prev&oldid=22565
CC-MAIN-2014-10
refinedweb
291
53.65
To solve the problem, add the resource file afxprint.rc located in the msc/include directory. The best place to insert it is in the rc2 file in the res directory under your project.

    //
    // TESTAPP.RC2 - resources Microsoft Visual C++ does not edit directly
    //
    #ifdef APSTUDIO_INVOKED
    #error this file is not editable by Microsoft Visual C++
    #endif //APSTUDIO_INVOKED
    #include "afxprint.rc"

Originally posted by: Macky
It seems that you can print just one page.

Originally posted by: Jack
I put all the code in a lib, and when I execute the program I get a crash while doing RUNTIME_CLASS(CViewPrintPreview). Any idea?

Originally posted by: Mark Donoghue
Seems like the change to ViewPrev.cpp in .NET caused this to break. Has anybody else had a similar problem, or any workaround? ViewPrev.cpp in VS.NET has been fairly heavily modified and moved to ATLMFC.

Originally posted by: situ
It would be perfect if it were possible to use the keyboard in your preview window for the scrollbar.

Originally posted by: salina
Hi, thank you for your code, it has been very helpful. I just have a problem: my dialog box accepts values entered by the user, and I want to print out just the text. How can I change from a dialog with buttons, statics, or list boxes to the text that I want? Thank you.

Originally posted by: Sujit
How can I get Landscape page orientation in Print as well as Print Preview?

Originally posted by: Sunita
Hi, I have a dialog based application. I have added the WM_COMMAND, ID_FILE_PRINT. But when I click on the print button in print preview, nothing happens. I tried using PreTranslate, but nothing happens. It seems ID_FILE_PRINT is not being caught; it goes straight into CViewPrintPreview::OnEndPrintPreview. What am I doing wrong? What should be done? The user goes through several dialogs before the final dialog where they can preview or print the document. Please help if possible. Thanks.

Originally posted by: sikander Rafiq
Hi, I have read your article on the net and I want to ask one question. I have a CView class, and in its OnPaint function I created one rectangle and drew text, then called OnFilePrintPreview of CView, but it can't show the actual preview. Please help me with how I can show an exact preview which draws things in their exact places as shown by the OnPaint function. Thanks for your kind co-operation. Sikander

Originally posted by: DongJiang
It's very good, thank you. But when I linked with the MFC static lib, an ERROR happened at runtime: Cannot find dialog template named.
http://www.codeguru.com/comment/get/48214510/
CC-MAIN-2014-42
refinedweb
453
65.12
RELEASENOTES

This file contains a brief overview of the changes made in each release. A detailed description of changes is available in the CHANGES.current and CHANGES files.

Release Notes
=============

Detailed release notes are available with the release and are also published on the SWIG web site.

SWIG-3.0.12 summary:
- Add support for Octave-4.2.
- Enhance %extend to support template functions.
- Language specific enhancements and fixes for C#, D,.

SWIG-3.0.9 summary:

SWIG-3.0.8 summary:
- pdf documentation enhancements.
- Various Python 3.5 issues fixed.
- std::array support added for Ruby and Python.
- shared_ptr support added for Ruby.
- Minor improvements for CFFI, Go, Java, Perl, Python, Ruby.

SWIG-3.0.7 summary:
- Add support for Octave-4.0.0.
- Remove potential Android security exploit in generated Java classes.
- Minor new features and bug fixes.

SWIG-3.0.6 summary:
- Stability and regression fixes.
- Fixed parsing of C++ corner cases.
- Language improvements and bug fixes for C#, Go, Java, Lua, Python, R.

SWIG-3.0.5 summary:
- Added support for Scilab.
- Important Python regression fix when wrapping C++ default arguments.
- Minor improvements for C#, Go, Octave, PHP and Python.

SWIG-3.0.4 summary:
- Python regression fix when wrapping C++ default arguments.
- Improved error messages.

SWIG-3.0.3 summary:
- Add support for C++11 strongly typed enumerations.
- Numerous bug fixes and minor enhancements for C#, D, Go, Java, Javascript, PHP, Perl and Python wrappers.

SWIG-3.0.2 summary:
- Bug fix during install and a couple of other minor changes.

SWIG-3.0.1 summary:
- Javascript module added. This supports JavascriptCore (Safari/Webkit), v8 (Chromium) and node.js currently.
- A few notable regressions introduced in 3.0.0 have been fixed - in Lua, nested classes and parsing of operator <<.
- The usual round of bug fixes and minor improvements for: C#, GCJ, Go, Java, Lua, PHP and Python.

SWIG-3.0.0 summary:
- This is a major new release focusing primarily on C++ improvements.
- C++11 support added. Please see documentation for details of supported features:
- Nested class support added. This has been taken full advantage of in Java and C#. Other languages can use the nested classes, but require further work for a more natural integration into the target language. We urge folk knowledgeable in the other target languages to step forward and help with this effort.
- Lua: improved metatables and support for %nspace.
- Go 1.3 support added.
- Python import improvements including relative imports.
- Python 3.3 support completed.
- Perl director support added.
- C# .NET 2 support is now the minimum. Generated using statements are replaced by fully qualified names.
- Bug fixes and improvements to the following languages: C#, Go, Guile, Java, Lua, Perl, PHP, Python, Octave, R, Ruby, Tcl
- Various other bug fixes and improvements affecting all languages.
- Note that this release contains some backwards incompatible changes in some languages.
- Full detailed release notes are in the changes file.

SWIG-2.0.9 summary:
- Improved typemap matching.
- Ruby 1.9 support is much improved.
- Various bug fixes and minor improvements in C#, CFFI, Go, Java, Modula3, Octave, Perl, Python, R, Ruby, Tcl and in ccache.

SWIG-2.0.6 summary:
- Regression fix for Python STL wrappers on some systems.

SWIG-2.0.3 summary:
- A bug fix release including a couple of fixes for regressions in the 2.0 series.

SWIG-2.0.2 summary:
- Support for the D language has been added.
- Various bug fixes and minor enhancements.
- Bug fixes particular to the Clisp, C#, Go, MzScheme, Ocaml, PHP, R, Ruby target languages.

SWIG-2.0.1 summary:
- Support for the Go language has been added.
- New regular expression (regex) encoder for renaming symbols based on the Perl Compatible Regular Expressions (PCRE) library.
- Numerous fixes in reporting file and line numbers in error and warning messages.
- Various bug fixes and improvements in the C#, Lua, Perl, PHP, Ruby and Python language modules.

SWIG-1.3.40 summary:
- SWIG now supports directors for PHP.
- PHP support improved in general.
- Octave 3.2 support added.
- Various bug fixes/enhancements for Allegrocl, C#, Java, Octave, Perl, Python, Ruby and Tcl.
- Other generic fixes and minor new features.

SWIG-1.3.39 summary:
- Some new small feature enhancements.
- Improved C# std::vector wrappers.
- Bug fixes: mainly Python, but also Perl, MzScheme, CFFI, Allegrocl and Ruby

SWIG-1.3.38 summary:
- Output directory regression fix and other minor bug fixes

SWIG-1.3.37 summary:
- Python 3 support added
- SWIG now ships with a version of ccache that can be used with SWIG. This enables the files generated by SWIG to be cached so that repeated use of SWIG on unchanged input files speeds up builds quite considerably.
- PHP 4 support removed and PHP support improved in general
- Improved C# array support
- Numerous Allegro CL improvements
- Bug fixes/enhancements for Python, PHP, Java, C#, Chicken, Allegro CL, CFFI, Ruby, Tcl, Perl, R, Lua.
- Other minor generic bug fixes and enhancements

SWIG-1.3.36 summary:
- Enhancement to directors to wrap all protected members
- Optimisation feature for objects returned by value
- A few bug fixes in the PHP, Java, Ruby, R, C#, Python, Lua and Perl modules
- Other minor generic bug fixes

SWIG-1.3.35 summary:
- Octave language module added
- Bug fixes in Python, Lua, Java, C#, Perl modules
- A few other generic bugs and runtime assertions fixed

SWIG-1.3.34 summary:
- shared_ptr support for Python
- Support for latest R - version 2.6
- Various minor improvements/bug fixes for R, Lua, Python, Java, C#
- A few other generic bug fixes, mainly for templates and using statements

SWIG-1.3.33 summary:
- Fix regression for Perl where C++ wrappers would not compile
- Fix regression parsing macros

SWIG-1.3.32 summary:
- shared_ptr support for Java and C#
- Enhanced STL support for Ruby
- Windows support for R
- Fixed long-standing memory leak in PHP Module
- Numerous fixes and minor enhancements for Allegrocl, C#, cffi, Chicken, Guile, Java, Lua, Ocaml, Perl, PHP, Python, Ruby, Tcl.
- Improved warning support

SWIG-1.3.29 summary:
- Numerous important bug fixes
- Few minor new features
- Some performance improvements in generated code for Python

SWIG-1.3.28 summary:
- Automatic copy constructor wrapper generation via the 'copyctor' option/feature.
- Better handling of Windows extensions and types.
- Better runtime error reporting.
- Add the %catches directive to catch and dispatch exceptions.
- Add the %naturalvar directive for more 'natural' variable wrapping.
- Better default handling of std::string variables using the %naturalvar directive.
- Add the %allowexcept and %exceptionvar directives to handle exceptions when accessing a variable.
- Add the %delobject directive to mark methods that act like destructors.
- Add the -fastdispatch option to enable smaller and faster overload dispatch mechanism.
- Template support for %rename, %feature and %typemap improved.
- Add/doc more debug options, such as -dump_module, -debug_typemaps, etc.
- Unified typemap library (UTL) potentially providing core typemaps for all scripting languages based on the recently evolving Python typemaps.
- New language module: Common Lisp with CFFI.
- Python, Ruby, Perl and Tcl use the new UTL; many old reported and hidden errors with typemaps are now fixed.
- Initial Java support for languages using the UTL via GCJ; you can now use Java libraries in your favorite script language using gcj + swig.
- Tcl support for std::wstring.
- PHP4 module update, many error fixes and actively maintained again.
- Allegrocl support for C++, also enhanced C support.
- Ruby support for bang methods.
- Ruby support for user classes as native exceptions.
- Perl improved dispatching in overloaded functions via the new cast and rank mechanism.
- Perl improved backward compatibility, 5.004 and later tested and working.
- Python improved backward compatibility, 1.5.2 and later tested and working.
- Python can use the same cast/rank mechanism via the -castmode option.
- Python implicit conversion mechanism similar to C++, via the %implicitconv directive (replaces and improves the implicit.i library).
- Python threading support added.
- Python STL support improved; iterators are supported and STL containers can now use the native PyObject type.
- Python many performance options and improvements; try the -O option to test all of them. Python runtime benchmarks show up to 20 times better performance compared to 1.3.27 and older versions.
- Python support for 'multi-inheritance' on the python side.
- Python simplified proxy classes; now swig doesn't need to generate the additional 'ClassPtr' classes.
- Python extended support for smart pointers.
- Python better support for static member variables.
- Python backward compatibility improved; many projects that used to work only with swig-1.3.21 to swig-1.3.24 are working again with swig-1.3.28
- Python test-suite is now 'valgrinded' before release, and swig also reports memory leaks due to missing destructors.
- Minor bug fixes and improvements to the Lua, Ruby, Java, C#, Python, Guile, Chicken, Tcl and Perl modules.

SWIG-1.3.27 summary:
- Fix bug in anonymous typedef structures which was leading to strange behaviour

SWIG-1.3.26 summary:
- New language modules: Lua, CLISP and Common Lisp with UFFI.
- Big overhaul to the PHP module.
- Change to the way 'extern' is handled.
- Minor bug fixes specific to C#, Java, Modula3, Ocaml, Allegro CL, XML, Lisp s-expressions, Tcl, Ruby and Python modules.
- Other minor improvements and bug fixes.

SWIG-1.3.25 summary:
- Improved runtime type system. Speed of module loading improved in modules with lots of types. SWIG_RUNTIME_VERSION has been increased from 1 to 2, but the API is exactly the same; only internal changes were made.
- The languages that use the runtime type system now support external access to the runtime type system.
- Various improvements with typemaps and template handling.
- Fewer warnings in generated code.
- Improved colour documentation.
- Many C# module improvements (exception handling, prevention of early garbage collection, C# attributes support added, more flexible type marshalling/asymmetric types).
- Minor improvements and bug fixes specific to the C#, Java, TCL, Guile, Chicken, MzScheme, Perl, Php, Python, Ruby and Ocaml modules.
- Various other bug fixes and memory leak fixes.

SWIG-1.3.24 summary:
- Improved enum handling
- More runtime library options
- More bug fixes for templates and template default arguments, directors and other areas.
- Better smart pointer support, including data members, static members and %extend.
SWIG-1.3.23 summary:
- Improved support for callbacks
- Python docstring support and better error handling
- C++ default argument support for Java and C# added.
- Improved C++ default argument support for the scripting languages plus option to use original (compact) default arguments.
- %feature and %ignore/%rename bug fixes and mods - they might need default arguments specified to maintain compatible behaviour when using the new default arguments wrapping.
- Runtime library changes: runtime code can now exist in more than one module and so need not be compiled into just one module
- Further improved support for templates and namespaces
- Overloaded templated function support added
- More powerful default typemaps (mixed default typemaps)
- Some important %extend and director code bug fixes
- Guile now defaults to using SCM API. The old interface can be obtained by the -gh option.
- Various minor improvements and bug fixes for C#, Chicken, Guile, Java, MzScheme, Perl, Python and Ruby
- Improved dependencies generation for constructing Makefiles.
https://android.googlesource.com/platform/external/swig/+/a8e1862aca759ef6159201fd61dd8536870de54c/RELEASENOTES
CC-MAIN-2019-30
refinedweb
1,859
51.24
On Tue, Oct 28, 2008 at 10:09:17PM -0700, David Wolfskill wrote:
> This seems a bit weird to me. I'll explain the context, then the
> perceived issue.
>
> I maintain a port (astro/gpsman) which can make use of a serial port to
> communicate with a GPS.
>
> The author of the program let me know earlier today that he had made a
> tarball of GPSman-6.4 available. Accordingly, I started updating the
> port to use the new version.
>
> While I was doing that, I noticed that there's a stanza in the port's
> Makefile:
>
> .if ${OSVERSION} < 600000
> GPSMAN_DEFAULT_PORT?= /dev/cuaa0
> .else
> GPSMAN_DEFAULT_PORT?= /dev/cuad0
> .endif
>
> and I recalled that the MPSAFE TTY layer was recently committed to HEAD,
> and that the serial device names changed accordingly. Thus, while it
> isn't actually essential (as the user can change the device name fairly
> readily), I thought it would be reasonable to adjust that stanza so that
> folks installing GPSman on a recent CURRENT system would at least have a
> default value that matched something on their system.
>
> Accordingly, I changed my working copy to read:
>
> .if ${OSVERSION} < 600000
> GPSMAN_DEFAULT_PORT?= /dev/cuaa0
> .elif ${OSVERSION} < 800045
> GPSMAN_DEFAULT_PORT?= /dev/cuad0
> .else
> GPSMAN_DEFAULT_PORT?= /dev/cuau0
> .endif
>
> I then rebooted my laptop from the CURRENT slice and tried installing
> the (updated) port.
>
> The install was clean (as expected). I found that I needed to move my
> old ~/.gpsman directory aside; on invocation, gpsman offered to create
> ~/.gpsman-dir (where it stashes various preferences).
> I was mildly pleased to see that the default for the serial port showed
> up as /dev/cuau0 (as desired).
>
> I turned on my GPS, plugged it in, tried getting gpsman to talk to it,
> and got a complaint: GPSman said that I didn't have permission (I
> *think* that was its whine, anyway). I looked; the device was:
>
> crw-rw---- 1 uucp dialer 0, 51 Oct 28 20:23 /dev/cuau0
>
> and output of id(1) verified that I was in group "dialer" (specifically
> so I could do this type of thing), as expected.
>
> I then de-installed gpsman, rebooted the laptop to RELENG_6 (also built
> this morning), reinstalled GPSman, tried the "talk to the GPS"
> experiment again, and it worked just as it always had before -- no
> problems (using the "Garmin" protocol, if that matters).
>
> So: the hardware works. The physical connection should be OK. It seems
> to me that either there's Something Weird going on with access to
> things in a file namespace in CURRENT (which isn't all that likely, as
> it would probably have an adverse effect on tracking CURRENT daily --
> and others would likely have been ... mentioning ... it) or GPSman is
> trying to do something to the serial port in a way that is no longer
> supported in CURRENT.
>
> Now, GPSman is a Tcl/Tk application. And as I'm not really sufficiently
> ambitious as to keep a separate set of installed ports for each of
> RELENG_6, RELENG_7, and HEAD, I set things up so that /usr/local is
> mounted from the same place regardless of which slice I boot from. I
> then maintain the ports while running RELENG_6 -- and on the other
> slices, I have the misc/compat6x port installed.
>
> While there are a few "gotchas" occasionally (RELENG_6 firefox isn't
> happy with CURRENT's threading model, though I could probably address
> that via /etc/libmap.conf), the vast majority of stuff I use Just Works.
>
> FWIW, the version of the misc/compat6x port installed is
> compat6x-i386-6.4.604000.200810.
> I'm disinclined to believe that this is an issue with the 6.4 release of
> GPSman; as such, I expect to send out the PR to update the port
> shortly. But it would be nicer if the software could be used under
> FreeBSD CURRENT. :-}
>
> I suppose I could try using ktrace(1) to get a better idea what's going
> on. Any other (better?) ideas?

Maybe the problem lies with tcl. We had to patch it before to handle our serial ports (that was also for GPSman):

John
--
John Hay -- John.Hay@meraka.csir.co.za / jhay@FreeBSD.org
_______________________________________________
freebsd-current@freebsd.org mailing list
To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
http://fixunix.com/freebsd/551398-re-apparent-permissions-issue-dev-cuau0.html
CC-MAIN-2015-40
refinedweb
760
63.8
How to Operate on Strings in C++

Overview

A string, at its core, is simply an array of characters terminated by a binary zero, or null character, as the final element of the array. The C style of dealing with strings means dealing with every bit of this yourself, and therein lies the problem. Programmers need to be extra careful in dealing with character arrays for many reasons. It is error prone because it is difficult to keep track of the difference between static quoted strings and arrays created on the stack and the heap. A string is so commonly used that it deserved a specific identity rather than being just a data structure that represents an array of characters. Unintended mistakes in dealing with individual characters can create havoc that is difficult to debug. The introduction of the string class in the standard C++ library is a later but apt addition that solves many of the problems of character array manipulation once and for all. So, like many primitive data types, if we consider the behavior and uses of the string class, it may be treated as part of the primitive data type family of C++. The string class keeps track of memory during assignment and the copy constructor, accommodates variation in character sets, features seamless string data conversion, and so on. In fact, if we sum up, there are at least three things that a string must be able to do, and the C++ string has all of them:

- Able to create and modify characters stored in a string
- Pick a character and locate it in the sequence
- Convert string characters according to various schemes of representation

C++ string Representation

C++ physically hides the sequence of characters represented as an array by using a class of object-oriented methodology. And, like all classes, it also has a defined behavior. This clearly allays the concern for array dimensions or null terminated characters because it is taken care of during class design. The string class also maintains its properties, such as the size and storage location of its data.
An object created from the string class knows its starting location in memory, its length in characters, its content, and the mechanism of its growth by resizing its internal data buffer. Therefore, problems such as accessing values that are out of bounds, or using uninitialized arrays or arrays with incorrect values, which may lead to issues such as a dangling pointer, have been addressed efficiently.

Initializing string

Using string is so simple that it can almost be treated like a primitive data type. As a result, initialization and creation are pretty simple and straightforward. A string object is initialized to be blank on creation and does not hold garbage values. There are member functions, such as size(), that can be used to report the length of the string, or empty() to check whether it contains any value. Some variations of string creation and initialization are shown below:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string s1;
    string s2("Welcome ");
    string s3 = "aboard";
    string s4(s2);
    // Copy the characters from location 0 to 5
    string s5(s3,0,5);
    string s6 = s2 + s3 + ", Thank you.";
    cout<<"s1 = "<<s1<<endl;
    cout<<"s2 = "<<s2<<endl;
    cout<<"s3 = "<<s3<<endl;
    cout<<"s4 = "<<s4<<endl;
    cout<<"s5 = "<<s5<<endl;
    cout<<"s6 = "<<s6<<endl;
    return 0;
}

Output

Figure 1 shows the output of the preceding code sample.

Figure 1: Output of the previous code

Overloaded Operators and Some Common Functions

The string class has overloaded many operators and has several other useful member functions to leverage the convenience of its use. The function empty determines whether or not the string is empty, and the function substr returns a part of an existing string. The string class overloads operators such as the += operator for string concatenation; the = operator invokes the copy constructor, and the [] operator creates lvalues that enable manipulation of characters like simple arrays.
But, note that the overloaded [] operator does not perform any bounds checking. As a result, accidental manipulation of out-of-bounds elements must be handled carefully. The at function can be used to access arbitrary elements of the string; it throws an exception if the access goes out of the bounds of the array. Here is a quick example.

#include <iostream>
#include <stdexcept>
#include <string>
using namespace std;

int main()
{
    string s1("ABCDEF");
    string s2("GHI");
    string s3;

    // Testing overloaded operators
    cout<<"\ns1: "<<s1<<"\ns2: "<<s2<<"\ns3: "<<s3<<endl;
    cout<<"Sizes s1, s2, s3: "<<s1.size()<<", "<<s2.size()
        <<", "<<s3.size()<<" respectively."<<endl;
    cout<<"Comparison operators demo:"<<endl
        <<"is s3 empty? "<<(s3.empty() ? "true":"false")<<endl;
    s3 = s2;
    cout<<"s3 = s2, s3 is: "<<s3<<endl;
    s1+=("\n"+s2);
    cout<<"s1+=(\"\\n\"+s2), s1 is: "<<s1<<endl;
    cout<<"substring of s1 location 5 through 10: "
        <<s1.substr(5,10)<<endl;
    cout<<"substring of s1 from location 10: "
        <<s1.substr(10)<<endl;
    cout<<"testing copy constructor"<<endl;
    string s4(s2);
    cout<<"string s4(s2), s4 is "<<s4<<endl;
    cout<<"access s1 with subscript operator"<<endl;
    for(size_t i=0;i<s1.size();i++)
    {
        cout<<s1[i]<<" ";
    }
    try
    {
        cout<<"\nAttempting to access out of range location"<<endl;
        s3.at(100) = 'A';
    }
    catch (out_of_range &range_exception)
    {
        cout<<"An exception occurred. "
            <<range_exception.what()<<endl;
    }
    return 0;
}

Output

Figure 2 shows the output of the previous code sample.

Figure 2: Output from the second code sample

Note that the substr function takes as its first argument the starting position of the substring to extract, and as its second argument the number of characters to select from the string. Both arguments have default values, which means that if we invoke the substr function with no arguments, it produces a copy of the entire string. This provides quite a convenience for the programmer, who can invoke the substr function with no arguments, a single argument, or both arguments.
Using Iterators with string

The string class can be treated like a container of objects where we can use iterators to indicate the start and end of a sequence of characters. It is possible to pass two iterators to the constructor of the string itself as follows:

#include <iostream>
#include <string>
#include <cassert>
using namespace std;

int main()
{
    string s1("Hello");
    string s2(s1.begin(),s1.end());
    string s3("Hi");
    assert(s1 == s2);   // Same content
    assert(s1 != s3);   // Different content
    return 0;
}

Although we can use an index to access individual characters in a string, iterators provide unified access to a collection or a data structure. After all, a string is nothing but a collection of characters. Using the index is perfectly all right, particularly for random access, but iterators provide the fine tuning and are immensely helpful, especially in code refactoring. If we want to iterate over the characters in a string, we may do so in a simple manner like this:

string s4("A simple string.");
for(size_t i = 0; i < s4.size(); i++)
    cout<<s4[i]<<' ';

Another way to do the same:

for(char c: s4)
    cout<<c<<' ';

We also can use iterators as follows:

for(auto iter = s4.begin(); iter != s4.end(); ++iter)
    cout<<*iter<<' ';

The string Operations

The string class is designed to be safe to handle and has the capability to grow as required without the programmer's intervention. The tedious housekeeping, like the tracking of bounds that we need to do with C-style strings, has gone through a huge improvement. The class has a host of member functions to help with string manipulation needs. The function names are highly intuitive, with judicious use of default arguments.

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string s1("This is a sample text.");
    cout<<"Capacity: "<<s1.capacity()<<endl;
    s1.insert(0,"Hello! ");
    cout<<s1<<endl;
    s1.reserve(128);
    cout<<"Capacity: "<<s1.capacity()<<endl;
    s1.append(" Append this text.");
    cout<<s1<<endl;
    return 0;
}

When we create a string object, it has a size according to its contents. If we want to find out the capacity of the string object before more storage is reallocated as the string grows, we simply invoke the function called capacity. If we want to make sure that the string has a specific amount of space, we invoke the reserve function. The reserve function is an optimization mechanism to request a specific amount of storage. There is a function called resize, which appends space if the new size is more than the present string size, or truncates the string if the new size is less than the current size. If we insert a new string at a specified location, existing characters move to accommodate the new characters. The append function can be used to add more characters at the end of the current string. Here is an example to illustrate some of these functions.

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string s1("A wisest ? is a ? who does not ? with another ?'s ?.");
    string s2("?");
    string s3("monkey");
    size_t i = 0;
    size_t j;
    while((j = s1.find(s2, i)) != string::npos)
    {
        s1.replace(j, s2.size(), s3);
        i = j + s3.size();
    }
    cout<<s1<<endl;
    return 0;
}

Output

Figure 3 shows the output of the code above.

Figure 3: Output from the last code sample

The preceding example demonstrates how we can use the find and replace functions to replace a string of characters within a string. As we have seen, the insert function inserts a set of characters without overwriting the existing characters in the string. The replace function, however, overwrites characters. The find function returns the first matched location of a string pattern in another string. Here, we have demonstrated how these two functions can be used effectively to replace a particular string with another string within the context of a larger text.
Conclusion

We have discussed only a few of the member functions available in the string class; there are a whole lot more if we also include the overloaded ones. Moreover, the string class has overloaded numerous operators to make it more convenient to use. One thing is for sure: the string class not only has a lot to offer the programmer, but was also designed with great care for convenience of use.
https://www.codeguru.com/cpp/cpp/string/general/how-to-operate-on-strings-in-c.html
CC-MAIN-2020-16
refinedweb
1,692
51.07
Initial GFF parser for Biopython

Generic feature format (GFF) is a nice plain text file format for storing annotations on biological sequences, and would be very useful tied in with the BioSQL relational database. Two weeks ago, I detailed the Bioperl GenBank to GFF mapping, which provided some introductory background to the problem. Here I'm continuing to explore GFF and BioSQL together, but from the opposite direction. I implemented an initial pass at a python GFF parser that will hopefully eventually be included in Biopython. You can find the current code in the git repository. I'd be very happy to have others help with development, provide usage feedback and pass along difficult GFF files for testing.

Implementation

GFF parsing is a little trickier to fit into the Biopython SeqIO system than other sequence file formats. Formats like GenBank or UniProt contain a sequence and its features combined into a single record. This allows parsers to iterate over files one record at a time, returning generic sequence objects from each record. This scales with large files, so your memory and processor worries are bounded by the most complicated record in the file. In contrast, GFF files are separate from the primary sequences and do not have any guarantees about annotations for a record being grouped together. To be sure you've picked up all features for a record, you need to parse the entire GFF file. For large real-life files this becomes a problem, as all of the features being added will rapidly fill up available memory. To solve these problems, GFF parsing is implemented here as a feature addition module with filtering. This means that you first use standard Biopython SeqIO to parse the base sequence records, and then use the GFF class to add features to these initial records. The addition function has an optional argument allowing added features to be limited to a subset of interest.
You can limit based on record names and add all features related to a specific sequence, or you can limit based on types of GFF features and add these features to all records in the file. This example demonstrates the use of the GFF parser to parse out all of the coding sequence features for chromosome one ('I'), and add them to the initial record:

from Bio import SeqIO
from BCBio.SeqIO.GFFIO import GFFFeatureAdder

with open(seq_file) as seq_handle:
    seq_dict = SeqIO.to_dict(SeqIO.parse(seq_handle, "fasta"))
feature_adder = GFFFeatureAdder(seq_dict)
cds_limit_info = dict(
    gff_types = [('Coding_transcript', 'gene'),
                 ('Coding_transcript', 'mRNA'),
                 ('Coding_transcript', 'CDS')],
    gff_id = ['I']
    )
with open(gff_file) as gff_handle:
    feature_adder.add_features(gff_handle, cds_limit_info)
final_rec = feature_adder.base['I']

This example shows the other unique aspect of GFF parsing: nested features. In the example above we pull out coding genes, mRNA transcripts, and coding sequences (CDS). These are nested, as a gene can have multiple mRNAs, and CDSs are mapped to one or more mRNA transcripts. In Biopython this is handled naturally using the sub_features attribute of SeqFeature. So when handling the record, you will dig into a gene feature to find its transcripts and coding sequences. For a more detailed description of how GFF can be mapped to complex transcripts, see the GFF3 documentation, which has diagrams and examples of different biological cases and how they are represented.

Testing

The test code features several other usage examples which should help provide familiarity with the interface. For real life testing, this was run against the latest C elegans WormBase release, WS199: GFF; sequences. On a standard single processor workstation, the code took about 2 and a half minutes to parse all PCR products and coding sequences from the 1.3G GFF file and 100M genome fasta file.
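The gene to mRNA to CDS nesting described above can be sketched with a tiny stand-in class. This is an illustration of the nested structure only, not Biopython's actual SeqFeature implementation; the Feature class and collect helper are hypothetical names.

```python
class Feature:
    """Minimal stand-in for a SeqFeature: a type plus nested children."""
    def __init__(self, ftype, sub_features=None):
        self.type = ftype
        self.sub_features = sub_features or []

# A gene with one mRNA transcript that carries two CDS segments.
gene = Feature("gene", [Feature("mRNA", [Feature("CDS"), Feature("CDS")])])

def collect(feature, ftype):
    """Walk the nested sub_features and gather all features of a given type."""
    found = [feature] if feature.type == ftype else []
    for sub in feature.sub_features:
        found.extend(collect(sub, ftype))
    return found

print(len(collect(gene, "CDS")))  # prints 2
```

Digging into a record's gene features for their transcripts and coding sequences is essentially this kind of recursive walk over sub_features.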
BioSQL To go full circle back to my initial inspiration, the parsed GFF was pushed into a BioSQL database using this script. To test on your own machine, you will have to adjust the database details at the start of the script to match your local configuration instead of my test database. Standard flattened features are well supported going into BioSQL. Nested features, like the coding sequence representation mentioned above, will need additional work. The current loader only utilizes sub_features to get location information and support the join(1..3,5..8) syntax of GenBank. The seqfeature_relationship table in BioSQL seems like the right place to start to support this. Summary This provides an initial implementation of GFF3 parsing support for Biopython. The interface is a proposal and I welcome suggestions on making it more intuitive. Code and test example contributions are also much appreciated. As we find an interface and implementation that works for the python community and the code stabilizes, we can work to integrate this into the Biopython project.
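The nested gene → mRNA → CDS handling described above can be illustrated without Biopython itself. The sketch below uses a minimal stand-in Feature class (the class and function names here are mine, not Biopython's) to show the kind of recursive walk a BioSQL loader would need, emitting parent/child pairs like those the seqfeature_relationship table stores:

```python
# Minimal stand-in for Biopython's SeqFeature; the real class also carries
# a location, qualifiers, etc. All names here are illustrative only.
class Feature:
    def __init__(self, ftype, fid, sub_features=None):
        self.type = ftype
        self.id = fid
        self.sub_features = sub_features or []

def flatten(feature, parent_id=None, depth=0):
    """Walk a nested feature, yielding (depth, type, id, parent_id) rows --
    the parent/child relationships a seqfeature_relationship table records."""
    yield (depth, feature.type, feature.id, parent_id)
    for sub in feature.sub_features:
        yield from flatten(sub, parent_id=feature.id, depth=depth + 1)

# A gene with two alternative transcripts, each with a coding sequence
gene = Feature("gene", "geneA", [
    Feature("mRNA", "geneA.1", [Feature("CDS", "geneA.1.cds")]),
    Feature("mRNA", "geneA.2", [Feature("CDS", "geneA.2.cds")]),
])

rows = list(flatten(gene))
for depth, ftype, fid, parent in rows:
    print("  " * depth + f"{ftype} {fid} (parent: {parent})")
```

The same traversal, run against real SeqFeature objects via their sub_feature attribute, would give a loader everything it needs to populate parent/child rows instead of only flattened locations.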
https://bcbio.wordpress.com/2009/03/08/initial-gff-parser-for-biopython/
Walkthrough: Test-First Support with the Generate From Usage Feature This topic demonstrates how to use the Generate From Usage feature, which supports test-first development. Test-first development is an approach to software design in which you first write unit tests based on product specifications, and then write the source code that is required to make the tests succeed. Visual Studio supports test-first development by generating new types and members in the source code when you first reference them in your test cases, before they are defined. Visual Studio generates the new types and members with minimal interruption to your workflow. You can create stubs for types, methods, properties, fields, or constructors without leaving your current location in code. When you open a dialog box to specify options for type generation, the focus returns immediately to the current open file when the dialog box closes. The Generate From Usage feature can be used with test frameworks that integrate with Visual Studio. In this topic, the Microsoft Unit Testing Framework is demonstrated.

To create a Windows Class Library project and a Test project In Visual C# or Visual Basic, create a new Windows Class Library project. Name it GFUDemo_VB or GFUDemo_CS, depending on which language you are using. In Solution Explorer, right-click the solution icon at the top, point to Add, and then click New Project. In the New Project dialog box, in the Project Types pane on the left, click Test. In the Templates pane, click Test Project and accept the default name of TestProject1. The following illustration shows the dialog box as it appears in Visual C#; in Visual Basic, the dialog box looks similar (New Project dialog box). Click OK to close the New Project dialog box. You are now ready to begin writing tests.

To generate a new class from a unit test The test project contains a file that is named UnitTest1. Double-click this file in Solution Explorer to open it in the Code Editor.
A test class and test method have been generated. Locate the declaration for class UnitTest1 and rename it to AutomobileTest. In C#, if a UnitTest1() constructor is present, rename it to AutomobileTest(). Locate the TestMethod1() method and rename it to DefaultAutomobileIsInitializedCorrectly(). Inside this method, create a new instance of a class named Automobile, as shown in the following illustrations. A wavy underline appears, which indicates a compile-time error, and a smart tag appears under the type name. The exact location of the smart tag varies, depending on whether you are using Visual Basic or Visual C#.Visual BasicVisual C# Rest the mouse pointer over the smart tag to see an error message that states that no type named Automobile is defined yet. Click the smart tag or press CTRL+. (CTRL+period) to open the Generate From Usage shortcut menu, as shown in the following illustrations.Visual BasicVisual C# Now you have two choices. You could click Generate 'Class Automobile' to create a new file in your test project and populate it with an empty class named Automobile. This is a quick way to create a new class in a new file that has default access modifiers in the current project. You can also click Generate new type to open the Generate New Type dialog box. This provides options that include putting the class in an existing file and adding the file to another project. Click Generate new type to open the Generate New Type dialog box, which is shown in the following illustration. In the Project list, click GFUDemo_VB or GFUDemo_CS to instruct Visual Studio to add the file to the source code project instead of the test project.Generate New Type dialog box Click OK to close the dialog box and create the new file. In Solution Explorer, look under the GFUDemo_VB or GFUDemo_CS project node to verify that the new Automobile.vb or Automobile.cs file is there. In the Code Editor, the focus is still in AutomobileTest.DefaultAutomobileIsInitializedCorrectly. 
You can continue to write your test with a minimum of interruption. To generate a property stub Assume that the product specification states that the Automobile class has two public properties named Model and TopSpeed. These properties must be initialized with default values of "Not specified" and -1 by the default constructor. The following unit test will verify that the default constructor sets the properties to their correct default values. Add the following line of code to DefaultAutomobileIsInitializedCorrectly. Because the code references two undefined properties on Automobile, a smart tag appears. Click the smart tag for Model and then click Generate property stub. Generate a property stub for the TopSpeed property also. In the Automobile class, the types of the new properties are correctly inferred from the context. The following illustration shows the smart tag shortcut menu.Visual BasicVisual C# To locate the source code Use the Navigate To feature to navigate to the Automobile.cs or Automobile.vb source code file so that you can verify that the new properties have been generated. The Navigate To feature enables you to quickly enter a text string, such as a type name or part of a name, and go to the desired location by clicking the element in the result list. Open the Navigate To dialog box by clicking in the Code Editor and pressing CTRL+, (CTRL+comma). In the text box, type automobile. Click the Automobile class in the list, and then click OK. The Navigate To window is shown in the following illustration.Navigate To window To generate a stub for a new constructor In this test method, you will generate a constructor stub that will initialize the Model and TopSpeed properties to have values that you specify. Later, you will add more code to complete the test. Add the following additional test method to your AutomobileTest class. Click the smart tag under the new class constructor and then click Generate constructor stub. 
In the Automobile class file, notice that the new constructor has examined the names of the local variables that are used in the constructor call, found properties that have the same names in the Automobile class, and supplied code in the constructor body to store the argument values in the Model and TopSpeed properties. (In Visual Basic, the _model and _topSpeed fields in the new constructor are the implicitly defined backing fields for the Model and TopSpeed properties.) After you generate the new constructor, a wavy underline appears under the call to the default constructor in DefaultAutomobileIsInitializedCorrectly. The error message states that the Automobile class has no constructor that takes zero arguments. To generate an explicit default constructor that does not have parameters, click the smart tag and then click Generate constructor stub. To generate a stub for a method Assume that the specification states that a new Automobile can be put into a Running state if its Model and TopSpeed properties are set to something other than the default values. Add the following lines to the AutomobileWithModelNameCanStart method. Click the smart tag for the myAuto.Start method call and then click Generate method stub. Click the smart tag for the IsRunning property and then click Generate property stub. The Automobile class now contains the following code. public class Automobile { public string Model { get; set; } public int TopSpeed { get; set; } public Automobile(string model, int topSpeed) { this.Model = model; this.TopSpeed = topSpeed; } public Automobile() { // TODO: Complete member initialization } public void Start() { throw new NotImplementedException(); } public bool IsRunning { get; set; } } To run the tests On the Test menu, point to Run, and then click All Tests in Solution. This command runs all tests in all test frameworks that are written for the current solution. In this case, there are two tests, and they both fail, as expected. 
The DefaultAutomobileIsInitializedCorrectly test fails because the Assert.IsTrue condition returns False. The AutomobileWithModelNameCanStart test fails because the Start method in the Automobile class throws an exception. The Test Results window is shown in the following illustration (Test Results window). In the Test Results window, double-click on each test result row to go to the location of each test failure.

To implement the source code Add the following code to the default constructor so that the Model, TopSpeed and IsRunning properties are all initialized to their correct default values of "Not specified", -1, and False (false). When the Start method is called, it should set the IsRunning flag to true only if the Model or TopSpeed properties are set to something other than their default value. Remove the NotImplementedException from the method body and add the following code.

To run the tests again On the Test menu, point to Run, and then click All Tests in Solution. This time the tests pass. The Test Results window is shown in the following illustration (Test Results window).
https://msdn.microsoft.com/en-us/library/dd998313(VS.100).aspx
The Source XML topic provides info on how to create source objects manually, from scratch, using a source object's XML schema. But if you already have - or are going to declare - a public .NET class you can set the Rule Editor to use that class as its source object. Any .NET public class can be used as a source object, provided that it declares at least one public property or one public method that returns a value type (in-rule method). Make sure to read the Code Effects Basics topic for details. The RuleEditor class declares several properties that you can use to set the type of your source object. The most common is SourceType. The code sample below creates an instance of the RuleEditor with the Patient class as its source type: CodeEffects.Rule.Web.RuleEditor editor = new RuleEditor("divRuleEditor") { SourceType = typeof(Patient) }; You can also use the full type name of the source object and the name of its declaring assembly: CodeEffects.Rule.Web.RuleEditor editor = new RuleEditor("divRuleEditor") { SourceTypeName = "CodeEffects.Rule.Demo.Asp.Core.Models.Patient", SourceAssembly = "CodeEffects.Rule.Demo.Asp.Core" }; The SourceAssembly property can be the name of the referenced assembly that declares the source object, or the fully qualified name of the assembly that you plan to load programmatically. It can be retrieved as Assembly.GetAssembly(SourceObjectType).FullName. The SourceTypeName property is the full name of the source object's type. It can be retrieved as sourceInstance.GetType().FullName. The RuleEditor class declares more source-related properties that can be used to set the source object. Our demo projects provide details on how to use .NET classes as source objects in Code Effects.
https://codeeffects.com/Doc/Business-Rule-Source-Object-Class
Hi ;, you are using the syntax to read an object from a root file. If the file is an ascii one, you have to read its content and fill the histogram with the Fill function. See for example Simple pyroot example problem Cheers, Danilo Actually Firstly I should open my file But I cant do it I write but I cant open Hi I tried your example but it said .txt is not a root file and can not open file What can I do This syntax is not valid. You can find plenty documentation outside of this forum on the internet cplusplus.com/forum/beginner/8388/ . Hi I try this c1 = TCanvas("c1","c1",900,700) h1 = ROOT.TH1F("h1", "", 10, 0, 10) data = open('data.dat','r') for x in data: print x x = Double() h1.Fill(x) data.close() h1.Draw() and opened a white page Than I choose my txt file but cannot opened and said is not a root file Secondly I try this string STRING; ifstream infile; infile.open ("names.txt"); while(!infile.eof) // To get you all the lines. { getline(infile,STRING); // Saves the line in STRING. cout<<STRING; // Prints our STRING. } infile.close(); system ("pause"); MY screen is: but again I havent got a result:( I cant open my txt file and I cant draw my histogram I am not sure I understand. Clearly the reading part depends on the format of your data. In addition, the snippets posted are meant to be consistently integrated in a script, probably they will not work if copy and pasted out of the box in the interpreter as they are. Hmm I am so confused I want just a draw histogram acording to my .txt file (my txt file which include a lot of number between -3 and 3 (-2.98, 1.25,0,1…) I shuld read this txt file and I should draw histogram for these number But I can’t do it Im resarching… Hi T try this method: root[0]TGraph *t = new TGraph ("a.txt"); root[1]t->Draw("A*"); But it said that illegal number of points(0) What shoul I do? Attach your “a.txt” fle here. 
{ TTree *t = new TTree("t", "a tree"); t->ReadFile("a.txt", "v/D"); Double_t v_min = t->GetMinimum("v"); Double_t v_max = t->GetMaximum("v"); Int_t n = 100; Double_t dv = (v_max - v_min) / n; #if 1 /* 0 or 1 */ TH1D *h = new TH1D("h", "a histogram", (n + 2), (v_min - dv), (v_max + dv)); #else /* 0 or 1 */ TH1D *h = new TH1D("h", "a histogram", n, (v_min - dv / n), (v_max + dv / n)); #endif /* 0 or 1 */ t->Project("h", "v"); h->Draw(); } Hi; Thanks your method but While I was writting this code Program wtote this line You are trying to read a file which contains “binary data”, not “ascii data”. Download the “ascii file” from the “İndir” www link that you gave in your previous post here (i.e. “a.txt”) and put it in the same subdirectory in which you run ROOT and execute the macro. Hmm before your last answer I tried again and my result is: opened a white page and is this error connected to your last answer?? Yes, check the “a.txt” file (in the current subdirectory). I downloaded from “indir”… But didn’t it What is imply “execute macro” I try same code which you posted after Idownloaded file but I haven’t got a result Again program said Can not read file / can not open file On this Indir www page, there is a button which downloads an “.exe” file but there is also another button which directly downloads the “a.txt” file (in “ascii” form). dosya.co/4qd003c99hce/a.txt.html You are saying this page but I only see one button for “indir” and I downloaded it
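The binning logic in the macro above (scan for min and max, then pad the range by one bin width before filling) can be sketched in plain Python, independent of ROOT, to show what the TTree/TH1D pair is doing with the ascii data. The file name in the comment and the bin count are just placeholders:

```python
def bin_values(values, n=100):
    """Scan for min/max, pad the range by one bin width on each side
    (as the ROOT macro does with dv), then count values into n + 2 bins."""
    v_min, v_max = min(values), max(values)
    dv = (v_max - v_min) / n
    lo, hi = v_min - dv, v_max + dv
    nbins = n + 2
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for v in values:
        i = min(int((v - lo) / width), nbins - 1)  # clamp the top edge
        counts[i] += 1
    return lo, hi, counts

# Reading one number per line from an ascii file would look like:
# values = [float(line) for line in open("a.txt") if line.strip()]
values = [-2.98, 1.25, 0.0, 1.0, -1.5, 2.2]
lo, hi, counts = bin_values(values, n=10)
print(lo, hi, sum(counts))
```

ROOT's TTree::ReadFile plus TH1D does the same scan-then-fill, but also handles drawing; this version only makes the histogram arithmetic visible.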
https://root-forum.cern.ch/t/read-txt-file-and-draw-histogram/20476
Raspberry Pi DHT11 DHT22

humidity11, temp11 = Adafruit_DHT.read_retry(11, 17) # 11 is the sensor type, 17 is the GPIO pin number
if temp11 is not None:
    temp11 = "temp,c=" + str(temp11)
    mqttc.publish(topic_dht11_temp, payload=temp11, retain=True)
if humidity11 is not None:
    humidity11 = "rel_hum,p=" + str(humidity11)
    mqttc.publish(topic_dht11_humidity, payload=humidity11, retain=True)
if temp22 is not None:
    temp22 = "temp,c=" + str(temp22)
    mqttc.publish(topic_dht22_temp, payload=temp22, retain=True)
if humidity22 is not None:
    humidity22 = "rel_hum,p=" + str(humidity22)
    mqttc.publish(topic_dht22_humidity, payload=humidity22, retain=True)
time.sleep(5)

python /home/pi/python/tempsensor.py

sudo crontab -e

@reboot python /home/pi/python/tempsensor.py &

Thank you very much. I just have a question because I'm new to this whole MQTT thing. Do we have to insert any login data at this part? Greetings from Germany

Yep, this is the info from your MQTT dashboard. When you click the Add New...>Device/Widget>Bring Your Own Thing button it will display everything for you. You will also have to replace username and clientid in the topic_dht lines right below that.

Hello adam, your support is much appreciated. I have another question: if I want to use more than one DHT22, I only have to copy the script file, change the GPIO pin, change the file name & add a second (@reboot) line to the crontab... right? In short: one script for every DHT22 I want to add. Thx in advance

You can do that but I would keep it as one script and add to it. So add a new topic_dht line for each additional sensor, making sure to keep unique channels for each (the last digit in the string), and then add additional lines to check the sensors: humidity11, temp11 = Adafruit_DHT.read_retry(11, 17)

Hello to everybody, I'm a pretty greenhorn at all things concerning Linux etc. For now I'm asking for your help. I'm trying to add a DHT11 to my RaspPi 3. I followed exactly all steps written by adam but my MQTT dashboard always hangs up at "Waiting for board to connect..."
I recognized that the client-id always changes when the MQTT dashboard is started anew. Is this ok? So what ID should I insert into tempsensor.py? Thank you for helping, best regards from Germany, Martin

Yes, the clientid will be listed on each new MQTT device you create (maybe also new username/password? I'm not sure about that though). Fill in your username, password, and client id wherever required in the .py file (lines 8, 9, 12, 13, 14, 15 in the original file)

Thanks for opening my eyes! Very good explanation, adam... best regards, Martin

Hi there! Thank you very much for this awesome guide! It worked really nicely in my case. The only problem is that when I try to have a Graph representation for anything but "live" then it doesn't really update values. It only shows these circular dots as if something is loading. Interestingly, as soon as I switch back to "live" the Graph works again. I don't know if this is a problem of this particular setup or Cayenne's. Thanks!

I had the same thing. Unfortunately graph history isn't available for MQTT yet but it's on the road map.

Hello adam, I also tried your code here with the DHT11 sensor only. I am on a Raspberry 3. This is the command line error when I try to run my file testDHT.py:

Traceback (most recent call last): File "testDHT.py", line 17, in <module> humidity11, temp11 = Adafruit_DHT.read_retry(11, 17) #11 is the sensor type, 17 is the GPIO pin number <module> from . import Raspberry_Pi_2_Driver as driver ImportError: cannot import name Raspberry_Pi_2_Driver

Something that makes me sad is that it gives an error about some RPI 2 drivers, and I am using an RPI 3. Something interesting: when I go into the DHT library and then into examples, and I write the following: python AdafruitDHT.py 11 17 it shows the current temp and humidity. I moved my file with your code into this folder (examples) and when I try to execute it, the cursor just sits there and nothing happens.

Hmm that's definitely odd.
Did you follow the install procedure for the library?

sudo apt-get install build-essential python-dev python-openssl
git clone Adafruit_Python_DHT
sudo python setup.py install

Yes, I strictly followed the procedure. This is how it looks when I try to run the script. And in the Dashboard, the device is added, but with NO widgets.

Hit refresh on the dashboard while the script is running and let me know if you see anything show up.

No, nothing shows up

Can you PM me the exact script you're using? I can test later to see if you see the values show up on your dashboard.

Thanks to adam, I forgot to change the topic_dht11_temp credentials. I only changed the username, not the clientid. Thank you all guys!!!

The tutorial worked just fine for me, but... after a few hours the DHT22 appears offline in the Dashboard. My Rpi3 runs 24/7, with one reboot per day in the morning (because of the memory usage rising to 95%+). And every time I check it (in the afternoon) the DHT22 seems to be offline for 6-8 hours. Is there a log I could look into?

If you can log in to the pi you can delete the cron job with crontab -e, then open the terminal and start the script with python file.py and just leave the terminal window open to see if you get any errors. I'm having some problems with my pi as well but I think the log file filled up my SD card again.
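The tempsensor.py loop earlier in the thread formats each reading into a Cayenne-style payload string ("temp,c=23.0", "rel_hum,p=45.0") before publishing, and skips failed reads. The sketch below isolates that logic with a stub client (the stub class and topic names are invented here) so it can be checked without a Pi or a broker:

```python
def format_payload(kind, value):
    """Build a Cayenne MQTT payload like 'temp,c=23.0' or 'rel_hum,p=45.0'.
    Returns None when the sensor read failed (value is None)."""
    units = {"temp": "c", "rel_hum": "p"}
    if value is None:
        return None
    return f"{kind},{units[kind]}={value}"

class StubClient:
    """Stand-in for the paho MQTT client; records publishes instead of sending."""
    def __init__(self):
        self.sent = []
    def publish(self, topic, payload=None, retain=False):
        self.sent.append((topic, payload))

mqttc = StubClient()
for topic, kind, reading in [
    ("v1/demo/things/dev/data/1", "temp", 23.0),
    ("v1/demo/things/dev/data/2", "rel_hum", None),  # failed read is skipped
]:
    payload = format_payload(kind, reading)
    if payload is not None:
        mqttc.publish(topic, payload=payload, retain=True)

print(mqttc.sent)
```

In the real script the StubClient is replaced by a connected paho client and the readings come from Adafruit_DHT.read_retry; the skip-on-None guard is what keeps a flaky sensor from publishing garbage.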
http://community.mydevices.com/t/dht11-dht22-with-raspberry-pi/2015
Getting Started with Kubernetes (at home) — Part 3 In the first two parts of this series, we looked at setting up a production Kubernetes cluster in our labs. In part three of this series, we are going to deploy some services to our cluster, such as Guacamole and Keycloak. Step-by-step documentation and further service examples are here.

Guacamole Guacamole is a very useful piece of software that allows you to remotely connect to your devices via RDP, SSH, or other protocols. I use it extensively to access my lab resources, even when I am at home. You can use this Helm Chart to install Guacamole on your Kubernetes cluster. The steps are as follows:

- Clone the Guacamole Helm Chart from here
- Apply any changes to values.yaml, such as the annotations and ingress settings.
- Deploy the Helm Chart from the apache-guacamole-helm-chart directory: helm install . -f values.yaml --name=guacamole --namespace=guacamole
- An ingress is automatically created by the Helm Chart, and you can access it based on the hosts: section of values.yaml

Once you have deployed the Helm Chart, you should be able to access Guacamole at the ingress hostname specified in values.yaml.

Keycloak Keycloak is an open-source single sign-on solution that is similar to Microsoft’s ADFS product. You can read more about Keycloak on my blog. There is a stable Keycloak Helm Chart available in the default Helm repo, which we will be using to deploy Keycloak; you can find it here.

- Apply any changes to values.yaml
- Deploy the helm chart stable/keycloak with values: helm install --name keycloak stable/keycloak --values values.yaml
- Create an ingress: kubectl apply -f ingress.yaml
- Get the default password for the keycloak user: kubectl get secret --namespace default keycloak-http -o jsonpath="{.data.password}" | base64 --decode; echo

Though this article is on the shorter side, hopefully it exemplifies how easy it can be to run services in Kubernetes.
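The kubectl command above extracts the password with a JSONPath query and pipes it through base64 --decode. The same decode step, done by hand, is one line of Python; the secret value here is a made-up example, not a real credential:

```python
import base64

# Kubernetes stores secret values base64-encoded; this mimics the
# `| base64 --decode` stage of the kubectl pipeline above.
encoded = base64.b64encode(b"s3cret-password").decode()  # what the API would return
decoded = base64.b64decode(encoded).decode()
print(decoded)  # s3cret-password
```

This is handy when you have copied the raw .data.password value out of a manifest and just want to read it.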
We mainly looked at pre-made Helm Charts in this article; however, deploying a service without a Chart can be just as easy. I prefer using charts, as I find them easier to manage than straight Kubernetes manifest files. You can check out my public Kubernetes repo at for further information and more service examples. Originally published at on May 4, 2019.
https://medium.com/@just_insane/getting-started-with-kubernetes-at-home-part-3-537b045afd1?source=---------4------------------
This project is done by a group of students from Singapore Polytechnic, School of Electrical and Electronic Engineering. We have a total of 3 members in our group: Takuma Kabeta, Jun Qian, and Yong Hua. Our supervisor is Mr Teo Shin Jen. The purpose of our project is to create a smart door opener that unlocks the office door (fitted with a wall-mount push-release button switch) using a servo from inside the office, controlled by a smartphone.

Step 1: Items Required What is required for this project:

- Smartphone with the Evothings Viewer application installed [download from the Google Play store] (free)
- Evothings application software (free)
- 3D printer (optional)
- Sketchup2017 software (optional)
- Arduino IDE (free)
- NodeMCU 8266
- SG90 servo motor
- Jump wires and circuit board
- 5v output socket and USB to USB type B cable
- USB type B socket

Step 2: Hardware Setup for the Smart Door

- First, solder the USB B socket and header pins to the circuit board.
- Second, solder the header pins.
- Use jump wires to connect the USB type B VCC and GND to the header pins. *Make sure not to mistake the D+, D-, VCC, and GND pins on the USB B socket. (We do not need the D+ and D- pins.) We need this because the NodeMCU only provides 3.3v, which is unable to power the servo motor.*
- After soldering the socket, make sure to drill/cut the connections between VCC, GND, D- and D+. (The image above shows the cut between each connection.)

Step 3: Wire Connection Next, connect the devices with jump wires.

- Connect the VCC (white wire) and GND (grey wire) from the header pins to the NodeMCU VCC and G pins.
- Connect the VCC (red wire) and GND (brown wire) from the header pins to the servo pins.
- Connect the servo signal pin (orange wire) to the NodeMCU D1 pin. *The servo data and VCC power must use the same voltage. If you power the servo VCC at 5v but send data to the servo at 3.3v, the servo will not operate.*

Step 4: Software Setup for the Smart Door
(This software allows you to create 3D structure.) - sketchup2017 (optional) Use to establish communication with Evothings viewer application and Nodemcu to control servo. - Evothings The software is used to program NodeMCU. - Arduino IDE This application allow user to remotely send signal to Nodemcu - Evothings Viewer from Google play store Step 5: Installing File to Operate NodeMCU on Arduino IDE After completing all the software installation, open up the Arduino IDE program. Now go to "files" section on headline and click on the "preference" in the Arduino IDE Copy this Url "" and paste it to "Additional board manager URLs:" Step 6: Next, go to "Tools" section in the headline and choose "board". select "Board Manager". In the board management enter "8266" in search section on board manager. Install the file name"esp8266 by ESP8266 Community software version 2.3.0" Step 7: After installation go to tools and board, and select "NodeMCU 1.0(ESP-12E module)" Now you are able to upload code to NodeMCU Step 8: Downloading the Program Code for Nodemcu to Communicate With Evothings Viwer App After completing the previous step, download ZIP file contains source code files from following URLs. - This source file allows Nodemcu communicating with Evothings program. After downloading the file open the file name "esp8266" using Arduino IDE. This code will be the main code running in Nodemcu8266 receiving an order from Evothings viewer app from a smartphone. - This source file is required in the Arduino library to control the Servo movement. - This source file is required in Arduino library for NodeMCU communication with Access Point Step 9: Modified Code I have edited some part of code from file "esp8266" to run the servo motor when signal sent from Evothings app, You can replace the code below with the esp8266 code file or arrange it yourself. 
This will be the code I have edited on esp8266 file to control servo motor on Nodemcu with evothings viewer application ///////////////////////////////////////////////////////////////////////////////////////////////////////////// #include <ESP8266WiFi.h> #include <Servo.h> Servo myservo; const char* ssid = "Singtel_Project-TK"; //Enter your AP SSID const char* password = "Projectsch888"; //Enter your AP Password const int ledPin = 2; //This will flash the nodemcu internal led once network communication is success WiFiServer server(1337); void printWiFiStatus(); void setup(void) { Serial.begin(115200); //serial monitor is displayed(please set to 115200 to see message displayed by nodemcu) WiFi.begin(ssid, password); // Configure GPIO2 as OUTPUT. pinMode(ledPin, OUTPUT); myservo.attach(5); //The pin 5 will be D1 pin in Nodemcu // Start TCP server. server.begin(); } void loop(void) { int pos; // Check if module is still connected to WiFi. if (WiFi.status() != WL_CONNECTED) { while (WiFi.status() != WL_CONNECTED) { delay(500); } // Print the new IP to Serial. 
printWiFiStatus(); } WiFiClient client = server.available(); if (client) { Serial.println("Client connected."); while (client.connected()) { if (client.available()) { char command = client.read(); if (command == 'H') { //when the singnal from mobile app is high means servo return to 0 degree position myservo.write(0); Serial.println("door is now close."); //send message door close } else if (command == 'L') { //when the singnal from mobile app is low means servo return to 180 degree position myservo.write(180); Serial.println("door is now open."); //send message door open } } } Serial.println("Client disconnected."); client.stop(); } } void printWiFiStatus() { Serial.println(""); Serial.print("Connected to "); Serial.println(ssid); Serial.print("IP address: "); Serial.println(WiFi.localIP()); } /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// Now upload the Program code to NodeMCU *Make sure to open the serial monitor with correct output "115200" so you could see the IP address your NodeMCU connected * Step 10: Setting Up Evothing Software Now open the Evothing Program on PC. In the main console of program, select Examples section. Search for "ESP8266" example file. Now go and Press Copy on ESP8266. This will create a new file tab. Name the new file and save it. The new file is now saved on My Apps console. Step 11: Setting Up Evothing Software (Establishing Smartphone to NodeMCU Connection) Now start up the mobile Evothings viewer app on your phone. On the Evothings application on computer select "Connect" headline and select "GET KEY". Enter the key displayed in Evothings application on your phone. Step 12: Go back to your EVOthings application on PC one more time and select my app headline. Select the Nodemcu program connection file you just created. Press" run " button. 
Your mobile phone will now display "Loading" logo and lead you to page where you can enter Nodemcu IP address. Step 13: Once done, they will ask for IP address of your nodeMCU. Look at the serial monitor on Arduino IDE to check the IP address of NodeMCU connected to AP(Access point). Enter the IP Address displayed. This will establish communication with Evothings and NodeMCU. *It is more convenient to provide NodeMCU with static IP Address using Access point. This prevent IP address changes cause by dynamic setting of AP* Step 14: If connected the LED button will be displayed on the mobile application. Now you are able to control the servo using your smartphone with just one button. You can now see the "open" and "close" are displayed in the Arduino IDE monitor when you press the button on the application. To adjust the appearance of the Evothings application software please go to this site for more detail. Step 15: This Is a Door Opening Holder Drawn in Sketchup. (Optional) This files contain the 3D model box to hold the servo and Nodemcu Circuits. Step 16: Reference Reference GitHub. (2017). esp8266/Arduino. [online] Available at:... [Accessed 25 April. 2017]. GitHub. (2017). esp8266/Arduino. [online] Available at: [Accessed 25 April. 2017]. GitHub. (2017). evothings/evothings-examples. [online] Available at: [Accessed 25 April. 2017]. Discussions 1 year ago I'd like to put something like this on the computer cabinet in our house :)
https://www.instructables.com/id/Smart-Door-Opener/
(This article was first published on We think therefore we R, and kindly contributed to R-bloggers) Well I shall hit the nail right on the head and not beat around the bush. I am taking programming lessons on R from my pro bro (Utkarsh Upadhyay) who agreed on teaching me only if I would disseminate my learning (a paranoia all the open-source advocates share). Hence I shall populate the web with another link, which might help other dumb programmers like me.

Question: What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20? (Which is essentially the LCM of all the numbers from 1 to 20.) I came across this problem here. You can choose any programming language to solve the problems given. I chose R.

My first attempt to solve this:

a <- 20
c <- 0
while(c == 0) {
    a <- a + 1
    i <- 1
    while(i <= 20 && a %% i == 0) {
        i <- i + 1
    }
    if(i > 20) {
        c <- 1    # 'a' is divisible by all the numbers from 1 to 20
    }
}
print(a)

Brief note on the program above: What I am essentially doing is checking if the number in the variable ‘a’ is divisible by all the numbers from 1 to 20; if it's not, then I am incrementing the value of ‘a’ and proceeding again with the loop (first while loop). So I would be checking, for all the numbers starting at 21, whether they are divisible by all the numbers from 1 to 20, and the program runs till ‘a’ takes the desired value (which came out to be 232,792,560, after the computation was over). This was a conservative way of getting the job done. It however took 9 hours of computation to blurt out the answer. Hmmm, well I could live with that number but just out of curiosity I asked Utkarsh if there was anything that I was missing. I just wish I were there to see the expression on his face; it would probably have been that of despair, or it could also have been hysterical laughter, I would never get to know (sigh, Schrodinger’s cat) but nevertheless let's focus on the task at hand. The suggestion Utkarsh gave was to use “recursive functions”. Revised program using recursive functions and pro bro’s help: We essentially define 2 functions and call one in the other.
It's easier when you look at the code.

Defining a function lcm(a, b) and storing the code in a file "LCM.R":

lcm <- function(a, b) {
  if (a > b) {
    # Swap the numbers to keep the smallest number in 'a'
    a <- a + b
    b <- a - b
    a <- a - b
  }
  i <- 2
  comb <- 1
  while (i <= a) {
    if (a %% i == 0 && b %% i == 0) {
      # Account for all the common factors
      a <- a / i
      b <- b / i
      # Count common factors only once
      comb <- comb * i
    } else {
      # i is not a common factor; carry on to the next number
      i <- i + 1
    }
  }
  return (comb * a * b)  # For the non-common factors, count all of them
}

A brief note on the above program: What we have done here is define a function lcm(a, b), as per our convenience, such that it returns the LCM of 'a' and 'b'. The logic used to calculate the LCM is what most of us already used in class 5: identify the common factors (each one landing in 'i') and accumulate them with "comb <- comb * i". Note that whenever I come across a common factor, I divide both 'a' and 'b' by 'i', so the values of 'a' and 'b' left at the end of the loop are coprime. (Think about this!) Therefore I return (comb * a * b), which is the LCM of 'a' and 'b'.

This program is stored in a file, let's say "LCM.R". Now whenever I have to refer to the function lcm(a, b), all I need to do is source the file "LCM.R" and I can conveniently use it to get the LCM of 'a' and 'b'. It is as if the function lcm(a, b) always existed.

Now I can address a question for the novice programmers: where do the 'a' and 'b' come from? Whenever I use this self-defined function lcm(a, b), I will use it in a program, right? So if I write

source('LCM.R')  # this lets you use the function you defined in "LCM.R"
l <- lcm(6, 8)   # the value of 'a' would be 6 and 'b' would be 8
print(l)

I would get 24.

Defining another function lcm1(list.num) and storing the code in "LCM2.R": Now we come to the tricky part.
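As a cross-check, the same common-factor logic can be transcribed almost line for line into Python (my own sketch; the variable names simply mirror the R version above):

```python
def lcm(a, b):
    # Keep the smaller number in 'a'
    if a > b:
        a, b = b, a
    i = 2
    comb = 1
    while i <= a:
        if a % i == 0 and b % i == 0:
            # Divide out the common factor, counting it once
            a //= i
            b //= i
            comb *= i
        else:
            i += 1
    # Whatever is left in a and b is coprime, so multiply everything
    return comb * a * b

print(lcm(6, 8))  # 24
```

Tracing lcm(6, 8) by hand: i = 2 divides both, leaving a = 3, b = 4, comb = 2; no further common factor exists, so the result is 2 * 3 * 4 = 24, matching the R run.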
What we will do now is take the function lcm(a, b) that we defined and use it to compute the LCM of a list of numbers.

source('LCM.R')  # load the file that stores the function lcm(a, b)

lcm1 <- function(list.num) {
  LCM.so.far <- 1
  for (next.number in list.num) {
    LCM.so.far <- lcm(LCM.so.far, next.number)  # Here lies the beauty
  }
  return (LCM.so.far)
}

A brief note about the above program: What we have done is define another function, lcm1(list.num), which takes a list of numbers and blurts out their LCM (which is exactly what we want!). The 'for' loop runs over all the values in the list. The beautiful logic is in the line "LCM.so.far <- lcm(LCM.so.far, next.number)", where we have cleverly reused the function lcm(a, b) defined earlier. LCM.so.far keeps updating itself as the loop moves through the list. Finally, the function returns the LCM of the whole list, which is the value stored in 'LCM.so.far' at the end of the loop. (Think why!)

This function lcm1(list.num) and its definition are stored in another file, say "LCM2.R". Just as we sourced "LCM.R" to use lcm(a, b), we can now source "LCM2.R" to use the function lcm1(list.num) that we defined. So basically we have two functions defined in two different files; to use them, all we need to do is source the files they are stored in.

Main program:

source('LCM2.R')  # this will load both files (think why!)
l <- lcm1(1:20)
print(l)

Here we have sourced the file "LCM2.R", which automatically sources "LCM.R" too, since "LCM2.R" needs the function lcm(a, b) defined in "LCM.R" (hope you catch the drift). We store the LCM of the numbers from 1 to 20 (1:20) in 'l' and display 'l'. And ta-da! The approximate computation time is under 1 second. We also get a flexible way to compute the LCM of any consecutive list of natural numbers, however long (well, not literally!).
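The final answer can be sanity-checked in a couple of lines with Python's standard library, where math.gcd and functools.reduce play the roles of lcm(a, b) and lcm1(list.num) respectively (again my own sketch, not from the original post):

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    # Standard identity: lcm(a, b) * gcd(a, b) == a * b
    return a * b // gcd(a, b)

# Fold the pairwise lcm over 1..20, exactly as lcm1 does in R
print(reduce(lcm, range(1, 21)))  # 232792560
```

The fold mirrors the R loop exactly: reduce carries an "LCM so far" and combines it with each next number in turn.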
Even if I assume that the fast computation took a full second, the program I originally came up with took 9 * 60 * 60 = 32,400 seconds. That is a speed-up of roughly 32,400x, or an efficiency improvement of about 3,239,900%, which is not bad, I say :-) Critiques, abuses, banter, and blessings are welcome. P.S.: Pardon me for the poor presentation and grammatical errors, if any.
http://www.r-bloggers.com/calculate-lcm-of-n-consecutive-natural-numbers-using-r/