text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Refactoring for the Tell Don't Ask Pattern Refactoring for the Tell Don't Ask Pattern Check out the process this developer used to refactor the code for a flash card simulation to make it easier to read and maintain. Join the DZone community and get the full member experience.Join For Free Sensu is an open source monitoring event pipeline. Try it today. Design. Here we'll focus on implementing an easy-to-read, easy-to-update code base with object-oriented design, specifically using the Tell Don't Ask principle. How Code Design Can Go Wrong Until you learn from making mistakes in design, the proper way to use design patterns tends to not entirely sink in. For example, I decided to build a command line flash card game for learning the translations of Japanese characters. So it started out pretty simply—I made the mappings of the Japanese characters available in a YAML file. Loaded those as a raw hash and then proceeded to reference that in the game loop logic. Since this was simple enough, I went ahead and abstracted the game system out into an MVC (Model/View/Controller) arrangement with ERB templates for the view and added I18n support for all of the menu and game display words. At this point, I was very proud of what I had achieved. But then an idea occurred to me. It's possible to enter Japanese characters right from the keyboard, so why not add a typing game in along with the translation game? This seemed like an easy thing to do with the current code base simply by adding details about what game modes are available to the YAML files. So I wrote in character mappings for both translation and typing modes. But then to work with the existing code base, I had to tell each object within the code base about these two states and how to handle each one individually and I needed to maintain the state of the current mode. When I got this finished and working well, I was still proud of it but one class stood out to me as a total violation of the Tell Don't Ask pattern. Every method in it was an external query method asking for current state or behavior. These methods are from that class and are an example of what not to do: def mapped_as case @mapping.first when: k @mapped_as.keys.first else @mapped_as.values.first end end def match ? input, comp_bitz case comp_bitz.mapping when: k comp_bitz.value == comp_bitz.collection[input] when: v comp_bitz.value == input end end def choose_display key, value case @mapping.last when: k key when: v value end end def choose_expected key, value case @mapping.first when: k key when: v value end end This is bad design and leads to code that is hard to maintain. This is not duck typing nor is it Tell Don't Ask. The code base depended on this for the behavior throughout and I knew I would have to refactor this...so a year later I did just that. Refactoring to Tell Don't Ask When taking up the challenge to refactor my project, I first looked at the problem area I knew I had which was the query object I've demonstrated above. After examining the code base for some time, I realized that it wasn't revisable in small steps but required a big rewrite, and I'll tell you why. The problems turned out to be more than having a query object on which the system depended. Its root problem was the lack of proper abstraction, a common occurrence when writing code that has "a love for raw types." By importing the flash cards and using them directly as a hash (a raw type) throughout the application, everything that worked with it had to have code written specifically to understand hashy things and the meanings behind the hash's structure. When you fail to abstract data into representational objects, then you get stuck with implementing the methods for the abstraction at every interaction of that object rather than just once with creating an object as the abstraction. So I had to Indiana-Jones the refactor—I had to rebuild it along the side with test-driven development and then swap the code in place, hoping not to set off the cave traps. The first step to implement and test was to create the abstraction for individual flash cards. In this instance, a flash card is a card with a Japanese character on the face and a translation on the back. A card is to have no other responsibility than to hold these two pieces of information. When the game was first implemented as only a translation game, it felt like things adhered to the Single Responsibility Principle (where each class object and method is responsible for just one thing) since only one thing was being handled. But introducing a fundamental change to the system made it evident that the current design was not following SRP, as many of the functions needed to handle a duality of behaviors. Creating the abstraction for a flash card immediately upon loading the data from a YAML file, with only the methods for its two pieces of data, follows SRP and is a core building block from which to implement Tell Don't Ask with duck types. When designing the proper types for adhering to these design patterns, I find it's best to write out the ideas for your application's abstraction on paper so you can have a clearer bird's-eye view to proceed. Having an application written at all also helps envision its design, even if it is a mess, but if you can skip the mess first by thinking through the design, it will make for an easier time when refactoring. Thinking through the design, we have flash cards now; a collection of them would be a card set. From there, we can take the card set and give it to one of two kinds of game mode, either typing practice or translation. These two types of game mode will be of the same duck type game. Now when we pass this duck type of either game through the user interface, it won't care which game is which; they'll both have the same methods defined on each of them, so there aren't any external query methods asking anything. This is our Tell Don't Ask pattern. Implementing Tell Don't Ask Here let's look at the Tell Don't Ask implementation. Instead of writing code that would query what mode of gameplay we're in, we'll just tell it the mode and have it produce what we want by returning our game. class CardSet attr_reader :card_set def initialize(card_hash) @card_set = card_set_builder(card_hash) end def game(mode) case mode when :translate Modes::Translate.new(self) when :typing_practice Modes::TypingPractice.new(self) else raise "Invalid Game Mode!" end end # ... end Here we have a class CardSet, which initializes its set of cards to hold and has no other responsibility. The game method here is our telling method. We tell it which mode we're going to use, and it returns to us the duck type of the game we're going to play. Both Modes::Translate and Modes::TypingPractice are subclassed to a Modes::Game class and only define (the same) methods where they differ. class Translate < Game def match? input current.translation.any? {|value| value == input } end def mode :translate end end class TypingPractice < Game def match? input "#{current}" == input end def mode :typing_practice end end Now when our game sends the mode method for display, we have the name provided, and the match? method is the specific logic that differentiates the two games. This behavior is specific to the game only and therefore where this logic belongs. It also provides an infallible calling mechanism between handling each game, which is the entire purpose of duck types. Now you may notice that the use of this method match? is a query method used externally which violates the Tell Don't Ask pattern. And that's with good reason. Patterns are a general rule to help you, but there are always exceptions to the rule in programming. Scoring is outside the responsibility of our game duck type and belongs to our user interface's scope of behavior. Also, the scoring has been implemented with raw types such as Boolean and integer, so we've stepped out of the role of duck typing for this game mechanic. This area of code has a very low impact on future changes to the system's code and is an acceptable and optimal time to use raw types. Summary When I finished the refactor, I had removed around 600 lines of code and added in 400. The code base went from hard to understand and maintain to easy to read and easily updatable with new features. And that's what Tell Don't Ask offers you when you follow this design principle. It gives you much easier to understand code and a much better guarantee for composability. Whenever you're building a system and you bring in data from an outside source other than the language itself, it's highly recommended you immediately abstract that data into simple abstractions. This will maximize the flexibility you have. Create objects that only have one role following the Single Responsibility Principle. Last, you can use the Tell Don't Ask principle as the code demonstrates above by telling behavior rather than querying for it, and you may have that method wrap your current object's self with your duck type and continue forward in the code path. In object-oriented design, wrapping one object in another object prevents many anti-patterns from occurring and helps keep scopes pure and untampered. And this is exactly what we want. Sensu: workflow automation for monitoring. Learn more—download the whitepaper. Published at DZone with permission of Daniel P. Clark , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/refactoring-for-the-tell-dont-ask-pattern
CC-MAIN-2019-09
refinedweb
1,711
61.46
On Thu, 2006-05-18 at 01:11 -0700, Andrew Morton.> > And for 2.6.18 we're hoping to get John's x86 timer rework merged up. > John, do those patches address this bug?> >.Hey Andrew, Sorry I've been so slow here, just starting to recover from a one week+ flu. :P Here is the PIT fix against the TOD patches that Tim pointedout. Many thanks to Tim for hunting this down.thanks-johnSigned-off-by: John Stultz <johnstul@us.ibm.com>Index: devmm/arch/i386/kernel/i8253.c===================================================================--- devmm.orig/arch/i386/kernel/i8253.c 2006-05-25 18:12:41.000000000 -0500+++ devmm/arch/i386/kernel/i8253.c 2006-05-25 18:52:31.000000000 -0500@@ -41,9 +41,25 @@ { unsigned long flags; int count;- u64 jifs;+ u32 jifs;+ static int old_count;+ static u32 old_jifs; spin_lock_irqsave(&i8253_lock, flags);+ /*+ * Although our caller may have the read side of xtime_lock,+ * this is now a seqlock, and we are cheating in this routine+ * by having side effects on state that we cannot undo if+ * there is a collision on the seqlock and our caller has to+ * retry. (Namely, old_jifs and old_count.) So we must treat+ * jiffies as volatile despite the lock. We read jiffies+ * before latching the timer count to guarantee that although+ * the jiffies value might be older than the count (that is,+ * the counter may underflow between the last point where+ * jiffies was incremented and the point where we latch the+ * count), it cannot be newer.+ */+ jifs = jiffies; outb_p(0x00, PIT_MODE); /* latch the count ASAP */ count = inb_p(PIT_CH0); /* read the latched count */ count |= inb_p(PIT_CH0) << 8;@@ -55,12 +71,29 @@ outb(LATCH >> 8, PIT_CH0); count = LATCH - 1; }- spin_unlock_irqrestore(&i8253_lock, flags); - jifs = jiffies_64;+ /*+ * It's possible for count to appear to go the wrong way for a+ * couple of reasons:+ *+ * 1. The timer counter underflows, but we haven't handled the+ * resulting interrupt and incremented jiffies yet.+ * 2. Hardware problem with the timer, not giving us continuous time,+ * the counter does small "jumps" upwards on some Pentium systems,+ * (see c't 95/10 page 335 for Neptun bug.)+ *+ * Previous attempts to handle these cases intelligently were+ * buggy, so we just do the simple thing now.+ */+ if (count > old_count && jifs == old_jifs) {+ count = old_count;+ }+ old_count = count;+ old_jifs = jifs;++ spin_unlock_irqrestore(&i8253_lock, flags); - jifs -= INITIAL_JIFFIES;- count = (LATCH-1) - count;+ count = (LATCH - 1) - count; return (cycle_t)(jifs * LATCH) + count; }@@ -69,7 +102,7 @@ .name = "pit", .rating = 110, .read = pit_read,- .mask = CLOCKSOURCE_MASK(64),+ .mask = CLOCKSOURCE_MASK(32), .mult = 0, .shift = 20, };-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2006/5/25/297
CC-MAIN-2013-48
refinedweb
444
71.04
asin, asinf, asinl − arc sine function #include <math.h> double asin(double x); float asinf(float x); long double asinl(long double x); Link with −lm. Feature Test Macro Requirements for glibc (see feature_test_macros(7)): asinf(), asinl(): _BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L; [−pi/2, pi/2]. If x is a NaN, a NaN is returned. If x is +0 (−0), +0 (−0) is returned. If x is outside the range [−1, 1], a domain error occurs, and a NaN is returned. See math_error(7) for information on how to determine whether an error has occurred when calling these functions. The following errors can occur: Domain error: x is outside the range [−1, 1]. acos(3), atan(3), atan2(3), casin(3), cos(3), sin(3), tan(3) This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/.
http://man.linuxtool.net/centos7/u2/man/3_asinf.html
CC-MAIN-2021-25
refinedweb
158
67.04
C# is a simple, general-purpose, modern, object-oriented programming language.In this post, we will be discussing about Object-oriented programming in C#, and I will provide you description about each oops concept in C# with example code to make it more clear. But before we begin to explore OOPS in C#, first discuss what is OOP? What is Object-Oriented Programming (OOP)?. Object-oriented programming is about modelling real world objects or concepts as objects in an application: - An object is something – a person - An object has data – a person has a name - An object performs actions – a person can introduce itself. There are some basic programming concepts in OOP: - Abstraction - The abstraction is simplifying complex reality by modeling classes appropriate to the problem. - Polymorphism - The polymorphism is the process of using an operator or function in different ways for different data input. - Encapsulation - The encapsulation hides the implementation details of a class from other objects. - Inheritance - The inheritance is a way to form new classes using classes that have already been defined. OOPS Concepts, Features & Fundamentals in C# with example OOPS contains list of elements that are very helpful to make object oriented programming stronger. Here is the list of OOPS concepts that we can implement in all major programming languages like c#. 1. Class A class is a collection of objects and represents description of objects that share same attributes and actions.. Here is the syntax and declaration example of Class: namespace ClassTest { public class Vehicle { //your code goes here.. } } Instantiating the Class static void Main(string[] args) { Vehicle car = new Vehicle(); Console.WriteLine(car.ToString()); // Outputs "ClassTest.Vehicle" } 2. Objects Objects are basic building blocks of a C# OOP program.. For example, A “Bike” usually has common elements such as bike color, engine, mileage etc. In OOP terminology these would be called as a Class Properties or Attributes of a Bike object. Here is the example of Object: public class Bike { //This is the class that contains all properties and behavior of an object //here is some properties of class Bike public string color; public string engine; public int mileage; //here is some behavior of class Bike public string GetColor() { return "red"; } public int GetMileage() { return 65; } } Now, to access Bike class, we need create the object of the Bike class and then access it Methods and objects. //It also considered as an "Instance of a Bike Class" Bike objBike = new Bike(); //Accessing Bike class methods objBike.GetColor(); objBike.GetMileage(); 3. Method. using System; public class Circle { private int radius; public void SetRadius(int radius) { this.radius = radius; } public double Area() { return this.radius * this.radius * Math.PI; } } public class Methods { static void Main() { Circle c = new Circle(); c.SetRadius(5); Console.WriteLine(c.Area()); } } In the code example, we have a Circle class. We define two methods. private int radius; We have one member field.. We use the dot operator to call the method. public double Area() { return this.radius * this.radius * Math.PI; } The Area() method returns the area of a circle. to achieve encapsulation? We can achieve Encapsulation by using private access modifier as shown in below example method. private string GetEngineMakeFormula() { private string formula = "a*b"; return formula; } Example – [Encapsulation].Read(); } } Output: Bike mileage is : 65 Bike color is : Black 5. Abstraction Abstraction is the process of providing only essential information to the outside real world and hiding overall background details to present an object. It relies on the separation of interface and implementation. For example, we continue with “Bike” as an example, we have no access to the piston directly, we can use start button to run the piston. Just imagine if a bike manufacturer allows direct access to piston, it would be very difficult to control actions on the piston. That’s the reason why a bike provider separates its internal implementation from its external interface.; } //Its public – so accessible outside class public string DisplayMakeFormula() { //"GetEngineMakeFormula()" is private but accessible and limited to this class only return GetEngineMakeFormula(); } }.WriteLine("Bike color is : " + objBike. Also Read: Abstract class in C# 6. Inheritance The the base classes (ancestors). using System; public class Being { public Being() { Console.WriteLine("Being is created"); } } public class Human : Being { public Human() { Console.WriteLine("Human is created"); } } public class Inheritance { static void Main() { new Human(); } } In this program, we have two classes. A base Being class and a derived Human class. The derived class inherits from the base class. public class Human : Being In C#, we use the colon (:) operator to create inheritance relations. new Human(); We instantiate the derived Human class. Output of the above Inheritance code will be Being is created Human is created We can see that both constructors were called. First, the constructor of the base class is called, then the constructor of the derived class. Also read :Interface in C# (With Example) 7. Polymorphism The polymorphism is the process of using an operator or function in different ways for different data input. In practical terms, polymorphism means that if class B inherits from class A, it does not have to inherit everything about class A; it can do some of the things that class A does differently. In general, polymorphism is the ability to appear in different forms. Technically, it is the ability to redefine methods for derived classes. Polymorphism is concerned with the application of specific implementations to an interface or a more generic base class. Polymorphism is the ability to redefine methods for derived classes. There are two types of polymorphism in C#: compile time polymorphism and runtime polymorphism. Compile time polymorphism is achieved by method overloading and operator overloading in C#. It is also known as static binding or early binding. Example of Compile time polymorphism public class clsCalculation { public int Add(int a, int b) { return a + b; } public double Add(int z, int x, int c) { return z + x + c; } } In above code we have a class "clsCalculation" having two functions with same name "Add" but having different input parameters. First function is with 2 parameters and second function having 3 parameters. So this type of polymorphism is also known as method overloading. It is a compile time polymorphism because compiler already knows what type object it is linking to, what are the methods it is going to call it. So linking a method during a compile time also called as Early binding. Runtime polymorphism in achieved by method overriding which is also known as dynamic binding or late binding. Example of the runtime polymorphism is as below using System; public abstract class Shape { protected int x; protected int y; public abstract int Area(); } public class Rectangle : Shape { public Rectangle(int x, int y) { this.x = x; this.y = y; } public override int Area() { return this.x * this.y; } } public class Square : Shape { public Square(int x) { this.x = x; } public override int Area() { return this.x * this.x; } } public class Polymorphism { static void Main() { Shape[] shapes = { new Square(5), new Rectangle(9, 4), new Square(12) }; foreach (Shape shape in shapes) { Console.WriteLine(shape.Area()); } } } In the above program, we have an abstract Shape class. This class morphs into two descendant classes: Rectangle and Square. Both provide their own implementation of the Area() method. Polymorphism brings flexibility and scalability to the OOP systems. public override int Area() { return this.x * this.y; } ... public override int Area() { return this.x * this.x; } The Rectangle and the Square classes have their own implementations of the Area() method. Shape[] shapes = { new Square(5), new Rectangle(9, 4), new Square(12) }; We create an array of three Shapes. foreach (Shape shape in shapes) { Console.WriteLine(shape.Area()); } We go through each shape and call the Area() method on it. The compiler calls the correct method for each shape. This is the essence of polymorphism.
https://qawithexperts.com/article/c-sharp/object-oriented-programming-oops-concepts-in-c-with-example/199
CC-MAIN-2021-21
refinedweb
1,304
56.66
Created on 2018-12-03 10:20 by n_rosenstein, last changed 2019-01-13 19:49 by timokau. This issue is now closed. On Python 3.7.0 lib2to3 will not parse code like this: def foo(print=None): pass and yield the following error instead lib2to3.pgen2.parse.ParseError: bad input: type=1, value='print', context=('', (1, 8)) In 2.x, 'print' is a reserved keyword and compiling foo fails with SyntaxError. AFAIK, 2to3 expects the input to be *valid* 2.x code that compiles. So it seems to me that the bug is in the input, not 2to3, and that this issue should be closed as 'not a bug'. Previous related reports: * Issue 35260: “2to3” doesn’t parse Python 3’s “print” function by default because it is supposed to translate Python 2 syntax * Issue 2412: “2to3” should support “from __future__ import print_function”. That frees up the “print” reserved keyword in Python 2.6+. I disagree that this is "not a bug". While conversion from 2 to 3 is 2to3's main intention, the documentation advertises:. () And it is used for other purposes. See - this sphinx bug: - bower: Also `print("a", file=sys.stderr)` *is* valid python2 provided that `print_function` is imported.
https://bugs.python.org/issue35383
CC-MAIN-2020-16
refinedweb
204
76.93
Using the Flash drawing tools we'll create a nice looking Radio Button that will make use of the Timeline and Mouse Events in ActionScript 3 to perform a user declared action. Final Result Preview Let's take a look at the final result we will be working towards: Click the top radio box to see how it looks in action. Step 1: Overview A radio button or option button is a type of graphical user interface element that allows the user to choose only one of a predefined set of options. In this tutorial, we will create a custom Radio Button from scratch using Flash and ActionScript 3. Read on! Step 2: Set Up Flash Launch Flash and create a new document. Set the stage size to 320x190px, #181818 for the color, and the frame rate to 24fps. (You can see some examples of Flash radio buttons in the "Match" box, above.) Step 3: Interface This is the interface we'll use: a simple background with a title, some static textfields used as user feedback, a working radio button and two static demos. This will show you how can you enable or disable the radio button. There is also a dynamic textfield (which says [Disabled] in the image) that will be modified by the working radio button. Step 4: Background Select the Rectangle Tool (R) and create a 320x40 px rectangle, place it on top of the stage and fill with this radial gradient: #D45C10 to #B43B02. Now we need something to separate the sections. With the same tool, create a 300x1 px rectangle and fill it with another gradient fill: #737173 to #181818. Duplicate this shape and place them in the center. Step 5: Title Select the Text Tool (T) and set this format in the Properties Panel: Helvetica Bold, 20pt, #FFFFFF. (If you're on Windows, you probably won't have the Helvetica font; use Arial instead.) Type a title and place the textfield in the top-left corner of the screen. To get the letterpress effect, just duplicate (Cmd + D) the textfield, change its color to #8C2D00, move it 1px up and go to Modify > Arrange > Send Backward. You should end with the following effect. Step 6: User Feedback We'll create a series of static Textfields that will tell the user what every radio button represents. There are two types of textfields; a title and a description. This is the format for the title: Myriad Pro Regular, 20pt, #DDDDDD. For the descriptions we use: Myriad Pro Regular, 14pt, #BBBBBB. Step 7: Radio Button Action The Active Radio Button will do something when activated, for this example, a dynamic textfield will be changed showing the current status of the button. Using the Text Tool (T), create a Dynamic Textfield and set statusField as its instance name, then place the textfield as shown in the next image: As the button will be disabled by default, you can write [Disabled] in the textfield. Step 8: Radio Button Next, we'll create the Radio Button. It has three states: - Normal: In this state the button works normally. - Enabled: This state is shown when the user clicks on the button - Disabled: In this state, the button can't be enabled. Step 9: Background Select the Oval Tool (O) and create a 128x128px circle (it doesn't really matter the size you create it since you'll be able to scale it, only for reference) with a 1px stroke color #AAAAAA and a #F7F3F7 to #BDBEBD gradient fill. Step 10: Enabled Area Now we'll create the area that will change when the radio button is enabled. Duplicate (Cmd+D) the background shape and resize it to 64x64px, change the stroke color to a #EEEEEE, #AAAAAA linear gradient and the fill to #C3C6C3, #B5B2B5. Convert the shapes to MovieClip and double-click it to enter edit mode. Create a new Frame (F6) and change the smaller circle fill to #D45C10, #B43B02. This will be the frame shown when enabled. Step 11: Disabled This graphic will be shown when the radio button is disabled. Create a new Frame (F6) and delete the center shape. Change the background gradient to #D4D2D4, #A2A3A2. This will make the background darker and without the part that changes when enabled. Step 12: Instance Names Three Radio Buttons are placed in stage, one for each state. Set the instance names as their section titles suggest, except for the enabled section, as that is an exclusive ActionScript keyword. Name that button enabledBox. Step 13: Document Class We'll make use of the Document Class in this tutorial, if you don't know how to use it or are a bit confused please read this QuickTip. Step 14: New ActionScript 3 Class Create a new ActionScript 3.0 Class and save it as Main.as in your class folder. Step 15: Package The package keyword allows you to organize your code into groups that can be imported by other scripts. It's recommended to name them starting with a lowercase letter using 16: Import Directive These are the classes we'll need to import for our class to work, the import directive makes externally defined classes and packages available to your code. import flash.display.Sprite; import flash.events.MouseEvent; Step 17:: Constructor The constructor is a function that runs when an object is created from a class, this code is the first to execute when you make an instance of an object or runs using the Document Class. public function Main():void { Step 19: Set Radio Button State Here we control the states of the three radio buttons in stage. The first one will operate normally. The second one will be inactive and the last one will show as enabled. active.stop(); inactive.gotoAndStop(3); enabledBox.gotoAndStop(2); Step 20: Handle Mouse Interaction This code tells the Active radio button to listen for mouse events, and every time it detects a MOUSE_UP MouseEvent (this is when the button of the mouse is released), launch the changeState() function. active.addEventListener(MouseEvent.MOUSE_UP, changeState); Step 21: Perform an Action This function will run every time the user clicks the radio button. It checks if the button is currently enabled or disabled and performs a determined action in each of the cases. In this example, it changes the value of the dynamic textfield. private function changeState(e:MouseEvent):void { if (e.target.currentFrame == 1) { e.target.gotoAndStop(2); statusField.text = "[Enabled]"; //This is the action to perform } else { e.target.gotoAndStop(1); statusField.text = "[Disabled]"; //This is the action to perform } } Step 22: Full Code If you are having trouble during any of these steps, remember that you can access the source files at the top of this tutorial. You can also take a look at the full ActionScript code to compare with yours: package { import flash.display.Sprite; import flash.events.MouseEvent; public class Main extends Sprite { public function Main():void { active.stop(); inactive.gotoAndStop(3); enabledBox.gotoAndStop(2); active.addEventListener(MouseEvent.MOUSE_UP, changeState); } private function changeState(e:MouseEvent):void { if (e.target.currentFrame == 1) { e.target.gotoAndStop(2); statusField.text = "[Enabled]"; } else { e.target.gotoAndStop(1); statusField.text = "[Disabled]"; } } } } Conclusion You have created a fully customizable Radio Button using these easy steps, use them to make your own Radio Buttons! Next step: why not combine this with André Cavallari's tutorial about creating Flash components to turn this into an object you could use in any project? And a challenge for you: you've customized the appearance and made the toggle work; try putting five radio buttons on the stage and writing code to only allow one to be enabled at any time.<<
https://code.tutsplus.com/tutorials/create-a-custom-radio-button-from-scratch-using-flash--active-4294
CC-MAIN-2021-21
refinedweb
1,283
62.17
Created on 2017-11-08 16:59 by Ronan.Lamy, last changed 2017-11-09 00:03 by steven.daprano. One would think that u.startswith(v, start, end) would be equivalent to u[start: end].startswith(v), but one would be wrong. And the same goes for endswith(). Here is the actual spec (for bytes, but str and bytearray are the same), in the form of passing pytest+hypothesis tests: from hypothesis import strategies as st, given def adjust_indices(u, start, end): if end < 0: end = max(end + len(u), 0) else: end = min(end, len(u)) if start < 0: start = max(start + len(u), 0) return start, end @given(st.binary(), st.binary(), st.integers(), st.integers()) def test_startswith_3(u, v, start, end): if v: expected = u[start:end].startswith(v) else: start0, end0 = adjust_indices(u, start, end) expected = start0 <= len(u) and start0 <= end0 assert u.startswith(v, start, end) is expected @given(st.binary(), st.binary(), st.integers(), st.integers()) def test_endswith_3(u, v, start, end): if v: expected = u[start:end].endswith(v) else: start0, end0 = adjust_indices(u, start, end) expected = start0 <= len(u) and start0 <= end0 assert u.endswith(v, start, end) is expected Fixing this behaviour to work in the "obvious" way would be simple: just add a check for len(v) == 0 and always return True in that case. Can you please give examples of what you think the problem is? The problem is the complexity of the actual behaviour of these methods. It is impossible to get it right without looking at the source (at least, it was for me), and I doubt any ordinary user can correctly make use of the v='' behaviour, or predict what the return value will be in all cases. See issue24284. `s1.startswith(s2, start, end)` for non-negative indices and non-tuple s2 is equivalent to expressions start + len(s2) <= end and s2[start: start + len(s2)] == s2 or s1.find(s2, start, end) == start Ah, thanks, I noticed the discrepancy between unicode and str in 2.7, but wondered when it was fixed. I guess I'm arguing that it was resolved in the wrong direction, then. Now, your first expression is wrong, even after fixing the obvious typo. The correct version is: start + len(s2) <= min(len(s1), end) and s1[start: start + len(s2)] == s2 If the person who implemented the behaviour can get it right, who will? ;-) The second expression is correct, but I'll argue that it shows that find() also suffers from a discrepancy between its basic one-argument form and the extended ones. For the justification of the find() behavior see msg243668. But the largest argument for this behavior is that find() have it for a long time. Changing it will break existing code that depends on it. This argument is weaker in the case of startwith() and endwith() because their behavior for bytes and Unicode was inconsistent. But the consistency with find() plays a role. Thank you for the bug report Ronan, but I'm afraid that I have no idea what you think the problematic behaviour is. I'm not going to spend the time installing the third-party hypothesis module, and learning how to use it, just to decipher your "actual spec". Where did this spec come from? The documentation is fairly sparse: so I'm not sure where your spec comes from. The title of this ticket is uninformative: what implementation details are being leaked? Saying "The problem is the complexity of the actual behaviour of these methods." explains nothing. Which actual behaviour? Please provide simple examples that contrast expected behaviour from actual behaviour, and justification for the expected behaviour. I don't have Python 3.7 available to me, but in 3.5 the behaviour of u.startswith(v) with an empty v seems consistent to me: py> "alpha".startswith("", 20, 30) True py> "alpha"[20:30].startswith("") True py> "".startswith("", 20, 30) True py> ""[20:30].startswith("") True So I can't see any inconsistency that might be fixed by always returning True in the case v="", as that appears to already be the case.
https://bugs.python.org/issue31984
CC-MAIN-2018-17
refinedweb
694
65.83
C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL),) Actual format may change based on video formats available and browser capability. Why does the MinMax function have that complexity? FUNCTION: MinMax( seq, comparer_lessthan) --> pair(,) _min = seq.begin _max = seq.begin for each element in seq if comparer_lessthan(element, _min) then _min=element if comparer_lessthan(_max,element) then _max=element next return new pair(_min,_max) O(n) or if the STL would let you do FUNCTION: MinMax( seq, comparer_lessthan) --> pair(,) _min = LB(seq.begin,seq.end) _max = _min + UB(_min.begin, seq.end) return new pair(_min,_max). FUNCTION: MinMax( seq, comparer_lessthan, value) --> pair(,) _min = LB(seq.begin,seq.end,value) _max = _min + UB(_min.begin, seq.end,value) return new pair(_min,_max): C:\Temp>type meow.cpp #include <algorithm> #include <array> #include <iostream> #include <iterator> #include <ostream> #include <string> #include <vector> using namespace std; template <typename FwdIt> void print_minmax(FwdIt first, FwdIt last) { const auto p = minmax_element(first, last); cout << "Minimum element (" << *p.first << ") found at index " << distance(first, p.first) << endl; cout << "Maximum element (" << *p.second << ") found at index " << distance(first, p.second) << endl; } template <typename FwdIt, typename Comp> void print_minmax(FwdIt first, FwdIt last, Comp comp) { const auto p = minmax_element(first, last, comp); cout << "Minimum element (" << *p.first << ") found at index " << distance(first, p.first) << endl; cout << "Maximum element (" << *p.second << ") found at index " << distance(first, p.second) << endl; } int main() { const array<int, 25> a = { 83, 79, 13, 17, 53, 59, 29, 2, 37, 11, 47, 97, 19, 31, 5, 43, 41, 89, 73, 7, 3, 23, 67, 61, 71 }; print_minmax(a.begin(), a.end()); vector<string> v; v.push_back("The Eye of the World"); v.push_back("The Great Hunt"); v.push_back("The Dragon Reborn"); v.push_back("The Shadow Rising"); v.push_back("The Fires of Heaven"); v.push_back("Lord of Chaos"); v.push_back("A Crown of Swords"); v.push_back("The Path of Daggers"); v.push_back("Winter's Heart"); v.push_back("Crossroads of Twilight"); v.push_back("Knife of Dreams"); v.push_back("The Gathering Storm"); v.push_back("Towers of Midnight"); v.push_back("A Memory of Light"); print_minmax(v.cbegin(), v.cend(), [](const string& l, const string& r) { return l.size() < r.size(); }); } C:\Temp>cl /EHsc /nologo /W4 meow.cpp meow.cpp C:\Temp>meow Minimum element (2) found at index 7 Maximum element (97) found at index 11 Minimum element (Lord of Chaos) found at index 5 Maximum element (Crossroads of Twilight) found at index 9? Good interesting stuff as always > . int main (int argc, char *argv[]) { unsigned int fact = 0; cout << endl << "What factorial do you want to calculate? "; cin >> fact; vector <unsigned int> vec(fact); iota (vec.begin(), vec.end(), 1); cout << endl; cout << "Factorial is: " << accumulate (vec.begin(), vec.end(), 1, multiplies<unsigned int>() ); cout << endl; return 0; }! ...impatiently waiting for your next video! Keep up these awesome videos! Seconded! On Friday (Nov 5), I filmed Part 8 (regex) and Part 9 (rvalue references). They'll be coming your way soon! ! 8 hours ago, Garp wrote Starting to feel the effects of withdrawal and it ain't pretty! Agreed, I need my STL fix now! 10 days later, still no video 8 hours ago, Deraynger wrote 10 days later, still no video Oh well, the joy of anticipation... 11 hours ago, NotFredSafe wrote *snip* Oh well, the joy of anticipation... Yes, except that the joy could be in anticipation for the 9th Video, after having watched the 8th video Oh Stephan, where art thou? I'm in Part 8, talking about regex: Comments have been closed since this content was published more than 30 days ago, but if you'd like to continue the conversation, please create a new thread in our Forums, or Contact Us and let us know.
https://channel9.msdn.com/Series/C9-Lectures-Stephan-T-Lavavej-Standard-Template-Library-STL-/C9-Lectures-Stephan-T-Lavavej-Standard-Template-Library-STL-7-of-n?format=html5
CC-MAIN-2017-09
refinedweb
627
61.43
Hello I made a simple timer that should work, but doesn't. I keep getting 1 second as the time. Even if I leave it on for a minute, it said it took 1 second. Have a look at my code.Can anyone help me? Thanks!Can anyone help me? Thanks!Code:#include <iostream> #include <time.h> using namespace std; int main() { time_t start, end; double timediff; char answer, answer2; while(1) { cout << "Welcome to the timer, type s to start and stop time\n"; cin >> answer; if (answer == 's') { time (&start); cout << "Timer started"; cin >> answer2; } if (answer2 == 's') { time(&end); cout << "Timer ended\n"; timediff = difftime(end, start); cout << "That took " << difftime << " seconds\n"; } } system("PAUSE"); return 0; }
http://cboard.cprogramming.com/cplusplus-programming/97427-why-isnt-my-timer-working.html
CC-MAIN-2016-30
refinedweb
120
83.25
From R. Monson-Haefel's book on EJB I understand, that EJB clients that are EJB's themselves, use a somewhat different (namingcontext-)lookup method than 'regular' clients. But why ? Created May 4, 2012 As an example, suppose an industry group defined a set of standard EJB interfaces for, say, HR applications. Different vendors would implement EJB suites that implemented these interfaces, and J2EE applications that used them. If the beans were written using the <ejb-ref> tag, rather than 'knowing' one another's JNDI names (which can be done, by the way), you could, theoretically, mix and match EJBs and applications to build a solution that best fits your company, simply by changing mappings. Of course, this is theoretical: at this stage of the technology, I'd be really surprised if everyone involved actually did things 'right'. The java:comp/env namespace was put into the 1.1 spec to simplify this and a couple of other things. Putting all of these mappable elements under a consistent JNDI context helps to ensure interoperability by encouraging consistency. Note that this can also be used for things like global settings for a bean (say a threshold of change to a salary that may only be done by someone in the 'HR Manager' role). Note also that recent J2EE specs have implied that the client should be getting references in much the same way. In fact, the 1.3 spec (available from javasoft.com as a public draft) indicates that the application client deployment descriptor can include <ejb-ref> stanzas
http://www.jguru.com/faq/view.jsp?EID=136600
CC-MAIN-2017-13
refinedweb
258
52.39
PHP as a preprocessor not only for HTML Recently I've seen a discussion developing on a certain Polish forum about mixing different programming languages in application source code - the topic author used (on purpose) a certain simplification (the discussion was about the very basics of programming): Writing the program source we cannot use instruction from several different languages which are not compatible with each other [...] Some replies stated 'hey! but you can combine C with assembler, or PHP with HTML' (let's skip the thing that HTML is not a programming language). After reading the above, a strange idea came to my mind - if you can use PHP with HTML and JS... why not use it with Python, C, assembler, or other languages as well? Everything circles around a certain PHP feature - the PHP interpreter (in the default running mode) interprets only the code between <?php and ?> tags, thanks to what, we can include PHP in anything, and use it as a powerful preprocessor! An example of usage looks like this: // PHP preprocesor test #include <time.h> #include <stdlib.h> #include <stdio.h> float fast_sin(int deg) { static const float sin_table[] = { <?php for($i = 0; $i < 359; $i++) echo(sin($i) . ", "); echo(sin($i)); ?> }; return sin_table[deg % 360]; } int main(void) { int i; srand(time(NULL)); i = rand(); printf("Do you know that sin(%i) = %f?\n", i, fast_sin(i)); return 0; } Compilation and using such a creature looks like this (as one will see, CLI for PHP is required): $ php my.c | gcc -x c - $ ./a.out Do you know that sin(802194582) = 0.994827? $ A short explanation of gcc options: the -x c option forces gcc to treat input as C language; it is a required parameter if gcc is to read from STDIN (since it cannot check the file extension). The standalone - (dash) tell gcc to read from STDIN, instead of a file. Is this really useful? In large projects certainly not - using such a hack in large projects is close to being masochistic. Small project on the other hand, especially different kind of hackish-apps, could make use such strange wonders. This might be especially useful for languages which do not have a preprocessor, but one would be useful if it would in fact exist (for example Java). The opposite of being digitally-paranoid Everyone that is interested in security is familiar with what people do to make sure they don't forget the password - pieces of paper under the keyboard, sticky notes on the LCD, and in extreme cases there was a story of a guy who has written the PIN to his credit card on the frame of the ATM he used. And now, a SOHO router manufacturer has entered the marked with a great idea (click to zoom, photo by Samlis Coldwind): Hah! I'm sure the user will find the default password once he'll try to login! However, it SHOULD force the admin to change the default password at first login. Guess what... Yep.. It didn't. In the above example the shown password had in fact worked... hosts, malware, and access rights The other day I wrote, inter alia, about a banker trojan that has worked by adding entries to the C:\Windows\system32\drivers\etc\hosts file (which works like a "local DNS" for the domains it contain). A certain idea has came to my mind - why don't we revoke the write privileged for that file from all users (well, from admins only, since normal users can't write to it anyway)? A simple command would do that (do not execute it if you don't know what you're doing): cacls c:\windows\system32\drivers\etc\hosts /g Everyone:R And thats that. However setting 'Deny Write' for everyone would be even better! Of course, once the malware authors will learn about this trick, they will start to add 'grant write' code to their code, and we're back at the starting point - so it's only a temporary solution, and it will be sufficient as long as no malware writer knows about it (I'm judging that the distance between now and the moment an anti-trick will appear in malware code is inversely proportional to the popularity of my blog - so we still have some time to sleep well ;>) And thats it for now ;> Keep the good work! Add a comment:
https://gynvael.coldwind.pl/?id=210
CC-MAIN-2019-26
refinedweb
737
67.18
Habit I guess.Thanks. Habit I guess.Thanks. I'm thinking if Im using StdIn why do I need: int N = Integer.parseInt(args[0]); int compare = Integer.parseInt(args[1]); Is there another way of programming this, with the StdIn? Thanks. It's almost solved, so to speak. I have feeling the professor wont give correct for this code. I don't understand WHY do I need to input string(just some numbers) at the command line and... When I put some integers in args I press enter, then ctr z and out comes the error: Exception in thread"Main java.util.NoSouch.ElementException at Java.util.Svanner.thrown for(Unknown Source) Why... Try to compress your code a little bit, so it will be better to read. I'm not sure if you need to import all that stuff, try looking at that. I'm writing a code that is supposed to filter a thread of integers so that repeated values are removed public class AboveBeyondPossible { public static void main(String[] args) { ... I did the trace. It's solved. With an if loop at the bottom. Hi. This where I'm at right now. The error comes when I test for sequence 1 3 3 5 4 2 3 package forrit; public class Verkefni2tplus { Ok. I have a string in the array? --- Update --- Whitespace? --- Update --- It's the commas! Thanks jps. I think directly. But my program is not correct. I'll put up a new post. --- Update --- The invariant is supposed to be something like this: 1.F-->I 2.I ^ not R}S{I} 3.I ^... Ok. Does anyone in Java-universe know why this code is giving 5 for this sequence: 0,0,5,79,8,8,7,4 --- Update --- Ahh-sorry-that is correct?! --- Update --- I'm wondering. This is a... Mr.Jps. I find it so difficult to read this kind a stuff...I'll always get the sleepreading syndrome we see this, and just give up. Sort of like when I read my terrible tabing(syntax) in me last... Ok. Is this as bad as the last one? And is'nt this the: We solve your homework forum? package forrit; public class Longesdecreasingttt { public static void main(String[] args) { Ok. Would you give correct for doing it this way? public class LongestDecreasing { public static void main(String[] args) { int N = args.length; int[] a = new int[N]; ... My problem now is that I dont ,,take'' the number in a0. But I feel like I'll have to have i=1 because if I have i=0 then I'm out of bounds....but then again I'm out of bounds at the other end... hahahah I'l admit that I'm forum timid GregBrannon. The System.out print...I'm just not cutting it args should be = i I think I'm almost there. How can I fix this code so that it works? public class VectorDecreasing { public static void main(String[] args) { int N = args.length; ... Excuse me. Code tags? I did highlight. Do you mean like explanation in the code itself? Meesa don't understand this. My problem is I dont know where start really. Not asking for soloution, just... Just started to work at this problem. Any suggestions? public class LongestDecreasing { public static void main(String[] args) { Yes it's solved. It's supposed to be like this Your a big help. Thanks. Now it's on to the next.... After a good nights sleep. Only one error left that has been with me all the time. N is just some number or numbers, proferable N>0. I have to sqare the distance and print it out. GregBrannon are you online like 24h?)... I'm taking a course. Apparantly my professor only wants to see that I can apply the formula. Which I have done. But I wonted the code to work. public class Euclediand{ public static...
http://www.javaprogrammingforums.com/search.php?s=396af7a341a20c45a7fc20da93cbc595&searchid=1273291
CC-MAIN-2014-52
refinedweb
658
87.42
How to generate an element from laplace distribution Hi, I am wondering how to generate a random element according to laplace distribution. I tried the method RealDistribution(). But it failed. According to the reference manual, laplace distribution is not supported.... In the manual, there are some examples showing how to deal with uniform distribution, Gaussian distribution, etc. My questions are: (1) How do I know exactly which distributions are supported by the RealDistribution() method? (2) Is there anyway I can simply generate an element according to laplace distribution in sage? Thank you in advance. I found a way to do this by using numpy in sage.... import numpy numpy.random.laplace(loc,scal,size) will generate an element for laplace distribution. But it is not a very "sage" way to do it. And I am still concerned about my first question.
https://ask.sagemath.org/question/26617/how-to-generate-an-element-from-laplace-distribution/
CC-MAIN-2017-39
refinedweb
141
60.82
11 November 2010 05:23 [Source: ICIS news] GUANGZHOU (ICIS)--Inflation in ?xml:namespace> On a year-on-year basis, China’s consumer price index (CPI) increased by 4.4% in October, while its producer price index (PPI) rose by 5%, data from the National Bureau of Statistics (NBS) showed. The October CPI was a 25-month high. The Chinese central bank announced late on Wednesday that it would raise the bank reserve ratio by half a percentage point in its latest move to curb accelerating inflation. In October, the country’s industrial output grew by 13.1% year on year, but the growth was down by 0.2 percentage points from September. Total retail sales increased by 18.6% year on year, but the growth was down by 0.2 percentage points compared with September, NBS data showed. Fixed-asset investments jumped 24.4% year on year during the January-October period, 0.1 percentage points lower than in the first nine months of 2010, the NBS said. Analysts said that the deceleration in investments and exports was the result of governmental controls that focus more on “growth quality” than “growth rate”. “October’s CPI exceeded the market expectation of 4% and inflation would extend throughout this year,” said Qiu Ziyuan, chief analyst at Shenzhen-based broker China Merchants Securities (CMS). “The government may hike interest rates again later this year on inflation
http://www.icis.com/Articles/2010/11/11/9409264/china-inflation-rises-again-as-price-index-hits-25-month-high.html
CC-MAIN-2015-22
refinedweb
234
58.69
Apex SOAP Callouts Learning Objectives Use WSDL2Apex to Generate Apex Code WSDL2Apex automatically generates Apex classes from a WSDL document. You download the web service’s WSDL file, and then you upload the WSDL and WSDL2Apex generates the Apex classes for you. The Apex classes construct the SOAP XML, transmit the data, and parse the response XML into Apex objects. Instead of developing the logic to construct and parse the XML of the web service messages, let the Apex classes generated by WSDL2Apex internally handle all that overhead. If you are familiar with WSDL2Java or with importing a WSDL as a Web Reference in .NET, this functionality is similar to WSDL2Apex. You’re welcome. For this example, we’re using a simple calculator web service to add two numbers. It’s a groundbreaking service that is all the rage! The first thing we need to do is download the WSDL file to generate the Apex classes. Click this link and download the calculator.xml file to your computer. Remember where you save this file, because you need it in the next step. Generate an Apex Class from the WSDL - From Setup, enter Apex Classes in the Quick Find box, then click Apex Classes. - Click Generate from WSDL. - Click Choose File and select the downloaded calculator.xml file. - Click Parse WSDL. The application generates a default class name for each namespace in the WSDL document and reports any errors. For this example, use the default class name. However, in real life it is highly recommended that you change the default names to make them easier to work with and make your code more intuitive. It’s time to talk honestly about the WSDL parser. WSDL2Apex parsing is a notoriously fickle beast. The parsing process can fail for several reasons, such as an unsupported type, multiple bindings, or unknown elements. Unfortunately, you could be forced to manually code the Apex classes that call the web service or use HTTP. - Click Generate Apex code. The final page of the wizard shows the generated classes, along with any errors. The page also provides a link to view successfully generated code. The generated Apex classes include stub and type classes for calling the third-party web service represented by the WSDL document. These classes allow you to call the external web service from Apex. For each generated class, a second class is created with the same name and the prefix Async. The calculatorServices class is for synchronous callouts. The AsyncCalculatorServices class is for asynchronous callouts. Execute the Callout Prerequisites Before you run this example, authorize the endpoint URL of the web service callout,, using the steps from the Authorize Endpoint Addresses section. Now you can execute the callout and see if it correctly adds two numbers. Have a calculator handy to check the results. - Open the Developer Console from the Setup gear ( ). - In the Developer Console, select. - Delete all existing code and insert the following snippet. calculatorServices.CalculatorImplPort calculator = new calculatorServices.CalculatorImplPort(); Double x = 1.0; Double y = 2.0; Double result = calculator.doAdd(x,y); System.debug(result); - Select Open Log, and then click Execute. - After the debug log opens, click Debug Only to view the output of the System.debug statements. The log should display 3.0. Test Web Service Callouts All experienced Apex developers know that to deploy or package Apex code, at least 75% of that code must have test coverage. This coverage includes our classes generated by WSDL2Apex. You might have heard this before, but test methods don’t support web service callouts, and tests that perform web service callouts fail. So, we have a little work to do. To prevent tests from failing and to increase code coverage, Apex provides a built-in WebServiceMock interface and the Test.setMock method. You can use this interface to receive fake responses in a test method, thereby providing the necessary test coverage. Specify a Mock Response for Callouts When you create an Apex class from a WSDL, the methods in the autogenerated class call WebServiceCallout.invoke, which performs the callout to the external service. When testing these methods, you can instruct the Apex runtime to generate a fake response whenever WebServiceCallout.invoke is called. To do so, implement the WebServiceMock interface and specify a fake response for the testing runtime to send. Instruct the Apex runtime to send this fake response by calling Test.setMock in your test method. For the first argument, pass WebServiceMock.class. For the second argument, pass a new instance of your WebServiceMock interface implementation. Test.setMock(WebServiceMock.class, new MyWebServiceMockImpl()); That’s a lot to grok, so let’s look at some code for a complete example. In this example, you create the class that makes the callout, a mock implementation for testing, and the test class itself. - In the Developer Console, select. - For the class name, enter AwesomeCalculator and then click OK. - Replace autogenerated code with the following class definition. public class AwesomeCalculator { public static Double add(Double x, Double y) { calculatorServices.CalculatorImplPort calculator = new calculatorServices.CalculatorImplPort(); return calculator.doAdd(x,y); } } - Press CTRL+S to save. Create your mock implementation to fake the callout during testing. Your implementation of WebServiceMock calls the doInvoke method, which returns the response you specify for testing. Most of this code is boilerplate. The hardest part of this exercise is figuring out how the web service returns a response so that you can fake a value. - In the Developer Console, select. - For the class name, enter CalculatorCalloutMock and then click OK. - Replace the autogenerated code with the following class definition. @isTest global class CalculatorCalloutMock implements WebServiceMock { global void doInvoke( Object stub, Object request, Map<String, Object> response, String endpoint, String soapAction, String requestName, String responseNS, String responseName, String responseType) { // start - specify the response you want to send calculatorServices.doAddResponse response_x = new calculatorServices.doAddResponse(); response_x.return_x = 3.0; // end response.put('response_x', response_x); } } - Press CTRL+S to save. Lastly, your test method needs to instruct the Apex runtime to send the fake response by calling Test.setMock before making the callout in the AwesomeCalculator class. Like any other test method, we assert that the correct result from our mock response was received. - In the Developer Console, select. - For the class name, enter AwesomeCalculatorTest and then click OK. - Replace the autogenerated code with the following class definition. @isTest private class AwesomeCalculatorTest { @isTest static void testCallout() { // This causes a fake response to be generated Test.setMock(WebServiceMock.class, new CalculatorCalloutMock()); // Call the method that invokes a callout Double x = 1.0; Double y = 2.0; Double result = AwesomeCalculator.add(x, y); // Verify that a fake result is returned System.assertEquals(3.0, result); } } - Press CTRL+S to save. - To run the test, select. The AwesomeCalculator class should now display 100% code coverage!
https://trailhead.salesforce.com/en/content/learn/v/modules/apex_integration_services/apex_integration_soap_callouts
CC-MAIN-2021-10
refinedweb
1,129
51.34
Grab siteprice and write to google spreadsheet using python By the end of this read you will be able to grab site price from siteprice.org and write it to google spreadsheet using python. Every website has it’s competition. As our website evolves, we have more competitions and the competitors website also earns good value. It is vital to know the value of our website as well as our competition’s value. Siteprice.org is one of those websites which calculates a website’s value based on different factors. Putting domain name of website in a text file with one domain per line will be our strategy for querying number of websites’s price. You may wish to put hundreds of websites in this txt file which are your competitions. Python codes to extract site price and write in google spreadsheet from bs4 import BeautifulSoup from urllib2 import urlopen import gdata.spreadsheet.service import datetime rowdict = {} rowdict['date'] = str(datetime.date.today()) spread_sheet_id = '13mX6ALRRtGlfCzyDNCqY-G_AqYV4TpE7rq1ZNNOcD_Q' worksheet_id = 'od6' client = gdata.spreadsheet.service.SpreadsheetsService() client.debug = True client.email = 'email@domain.com' client.password = 'password' client.source = 'siteprice' client.ProgrammaticLogin() with open('websitesforprice.txt') as f: for line in f: soup = BeautifulSoup(urlopen("" + line).read()) rowdict['website'] = str(line) rowdict['price'] = soup.find(id="lblSitePrice").string client.InsertRow(rowdict,spread_sheet_id, worksheet_id) 1. Line 1 to 4 These lines are import statements. Here in this program, we are using various python libraries. Gdata is used to access google spreadsheet. We are using BeautifulSoup because it allows us to get data via id which we will use to get the price of a website. Datetime is used to get the current date. Urlopen us used to open the webpage which contains the data we want. 2.Line 5 to 14 In order to write the extracted rank to google spreadsheet programmatically we are using the gdata module. In order to write to a spreadsheet we need the spreadsheet id, worksheet id and a dictionary containing values we want to write to the spreadsheet. The dictionary contains key as the column header and value as the string that is to be written to the spreadsheet(website, price, date for our program). Go to docs.google.com when logged in and create a new spreadsheet. Fill the first three columns of the first row as website, price and date respectively. All the letter should be in lower case and no whitespaces. Now when you have created a new spreadsheet, take a look to the url. The url looks something like this one The spreadsheet id(mentioned earlier) is present in the url. “13mX6ALRRtGlfCzyDNCqY-G_AqYV4TpE7rq1ZNNOcD_Q” in the above url is the spreadsheet id we need. By default the worksheet id is ‘od6‘. Basically line 5 to 14 are codes to access google spreadsheet. 3. Line 15 to 20 Since we’re writing a program that can extract alexa ranks for hundreds of websites and append it to google spreadsheet, therefore taking url from console input is never a good solution. We have to write the url of websites we want to take care of in a text file. Each website in a single line in the format. Make sure there is a valid website, one in each line because we will read the url from python line by line. Line 17 makes a soup element out of the url which has the information we are looking for. The soup element is of different websites in each iteration. Line 18 stores the value of the domain in the key “website” of json rowdict. Line 19 stores the price of the website in the key price of json rowdict. You can see we use BeutifulSoup to get data via id. Finally line 20 pushes the entire json element to google spreadsheet. This piece of codes runs for the number of times equal to the line in text file. Thanks for reading Enjoy!! . If you have any questions regarding the post, feel free to comment below. There Are 1 Comment On This Article.
http://www.thetaranights.com/grab-siteprice-and-write-to-google-spreadsheet-using-python/
CC-MAIN-2018-26
refinedweb
674
75.61
#include <db.h> int DB->set_q_extentsize(DB *db, u_int32_t extentsize); Set the size of the extents used to hold pages in a Queue database, specified as a number of pages. Each extent is created as a separate physical file. If no extent size is set, the default behavior is to create only a single underlying database file. For information on tuning the extent size, see Selecting a extent size. The DB->set_q_extentsize interface may be used only to configure Berkeley DB before the DB->open interface is called. The DB->set_q_extentsize function returns a non-zero error value on failure and 0 on success. The DB->set_q_extentsize function may fail and return a non-zero error for the following conditions: Called after DB->open was called. The DB->set_q_extentsize function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB->set_q_extentsize function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
http://pybsddb.sourceforge.net/api_c/db_set_q_extentsize.html
crawl-001
refinedweb
180
52.6
I mod_ssl. When Apache HTTP server receives a request, it checks for the requests and forward to the tomcat accordingly. This configuration is important for security and clustering. This tutorial contains following sections. - Installing and configuring Apache HTTP server. - Installing and configuring Apache tomcat. - Installing and configuring mod_jk. - Configuring mod_ssl. - Testing the environment. Installing and configuring Apache HTTP server. Run the following command to download Apache HTTP server 2.2.22 wget wget md5sum -c httpd-2.2.22.tar.bz2.md5 After executing above command, httpd-2.2.22.tar.bz2 archive will be downloaded to your “Downloads” folder. To extract the zipped archive file, run the following command from /Downloads folder. cd /home/semika/Downloads tar -xjvf httpd-2.2.22.tar.bz2 Above command will extract the zipped archive file into httpd-2.2.22 folder under Downloads folder. Now, you should decide where you are going to install Apache HTTP server. I am going to install it to /home/semika/httpd-2.2.22 folder. You have to create the folder there. Navigate to your user folder and execute the following command to create new folder. cd /home/semika mkdir httpd-2.2.22 To install Apache to your particular platform, we need to compile the source distribution that we have already downloaded. If you see carefully inside the extracted folder under Downloads/httpd-2.2.22, you can see there is a configure script. We can compile the source distribution with that script and it will create necessary stuff to install Apache HTTP server. When compiling Apache, various options can be specified that are suited to our local environment. For the complete reference of options provided, see here. Since, we need mod_ssl to be configured with Apache compilation, we need to install OpenSSL development bundle. Otherwise, compilation will raise an exception. To install OpenSSL development libraries, run the following command. sudo apt-get install openssl libssl-dev Sometime, You might need to run the following command as well, if you encounter an error while apache compilation. sudo apt-get install zlib1g-dev sudo apt-get install libxml2-dev To compile Apache source distribution, execute the following commands. cd /home/semika/Downloads/httpd-2.2.22 ./configure --prefix=/home/semika/httpd-2.2.22 --enable-mods-shared=all --enable-log_config=static --enable-access=static --enable-mime=static --enable-setenvif=static --enable-dir=static -enable-ssl=yes –prefix : You can specify the installation directory –enable-mods-shared : Setting this to ‘all’ will enable to install all the shared modules. –enable-ssl : Since we are going to configure Apache HTTP server with mod_ssl, this has been set to ‘yes’ to compile Apache with mod_ssl. By default this option is disabled. For other options specified in the configuration command, please look into full options reference documentation. After successfully running the above command, execute the following commands. Before execute the following command, just have look on your specified installation directory, ie: /home/semika/httpd-2.2.22, you can see that it is still empty. make make install Now, you can see Apache HTTP server has been installed under /home/semika/httpd-2.2.22. Look for modules folder, you can see the list of modules installed. Confirm, whether it has installed mod_ssl.so. Now, you can start the Apache HTTP server. cd /home/semika/httpd-2.2.22/bin sudo ./apachectl start If you see bellow line when executing the above command, “httpd: Could not reliably determine the server’s fully qualified domain name, using 127.0.1.1 for ServerName” edit your /httpd-2.2.22/conf/httpd.conf file as follows. Look for ” ServerName” property in httpd.conf file. You will see following line there. #ServerName Uncomment this line and modify it as follows. ServerName localhost Again start the Apache HTTP server. If you did not change the default port which Apache server is running, check the following URL to check whether server is started or not. The default Apache HTTP server running port is 80. With the above URL, if you see a page with “It works”, you are done with Apache HTTP server. Further, if you want to change the default port, you can edit the httpd.conf file as follows. Look for “Listen” property and change it as you wish. I have set it to 7000. So my Apache HTTP server is running on 7000. I have to access the following URL to get “It works” page. To stop Apache HTTP server,execute the following command. cd /home/semika/httpd-2.2.22/bin sudo ./apachectl stop Installing and configuring Apache tomcat. Installing and configuring Apache tomcat is not a big thing, if you are involving with this kind of advance configuration. For the completeness of the tutorial, I will explain that a little. I am using Apache Tomcat 7.0.25. You can download it from Apache web site and extract it to some where in you local machine. After that, you have to set the environment variable as follows. export CATALINA_HOME=/home/semika/apache-tomcat-7.0.25 export PATH=$CATALINA_HOME/bin:$PATH You can start the tomcat with following command. cd home/semika/apache-tomcat-7.0.25/bin ./startup.sh If you want to see tomcat’s console out put, execute the following commands before starting the tomcat. cd home/semika/apache-tomcat-7.0.25/logs/ tail -f catalina.out After successfully starting the tomcat, try the following URL By default, tomcat will run on port 8080. Now our Apache HTTP server is running on port 7000 and tomcat is on 8080. Further, to configure Tomcat with Apache HTTP server, we need to create workers.properties file under home/semika/apache-tomcat-7.0.25/conf/. Apache HTTP server will connect to Tomcat through port 8009. If you see server.xml file under Tomcat’s conf folder, you can see following connector declaration there. <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> Installing and configuring mod_jk If you look into httpd-2.2.22/modules folder, you can see mod_jk.so has been installed. The mod_jk is a connector for Apache HTTP server to connect to Apache Tomcat. To configure, you have to load mod_jk.so module to Apache HTTP server. Edit /httpd-2.2.22/conf/httpd.conf file as follows. You need to add these properties. # Load mod_jk module # Update this path to match your modules location LoadModule jk_module modules/mod_jk.so # Where to find workers.properties # Update this path to match your conf directory location JkWorkersFile /home/semika/apache-tomcat-7.0.25/conf/workers.properties # Where to put jk logs # Update this path to match your logs directory location JkLogFile /home/semika/apache-tomcat-7.0.25 /rainyDay to worker ajp13 JkMount /rainyDay ajp13 JkMount /rainyDay/* ajp13 I guess most of the properties defined above are clear for you. What is ‘JkMount’ and ‘/rainyDay’. The ‘rainyDay’ is one of my application deployed on Apache Tomcat. It says that ” Forward all the request to the Apache Tomcat that are coming with /rainyDay namespace”. With that, we have finished the configuration of mod_jk. Now, We will test the environment. Try the following URL’s. Nothing special about the above URL. Since I have deployed my ‘rainyDay’ application on Apache Tomcat, We can access the application even without configuring with Apache HTTP server using mod_jk. Now, try the following URL. If you can access the application with above URL, our configuration is successful. We know that, We have not deployed the ‘rainyDay’ application on Apache HTTP server, but on Apache Tomcat and also Apache HTTP server is running on port 7000. We can still access the ‘rainyDay’ application deployed on Apache Tomcat via Apache HTTP server. Now just try the following URL. With the above URL, you can not access the application since URL has https protocol. To access the application with https://, we need to configure SSL with Apache HTTP server. Configuring mod_ssl. To enable SSL on Apache HTTP server, again you have to edit httpd.conf file. Open this file and look for the following line. # Secure (SSL/TLS) connections #Include conf/extra/httpd-ssl.conf Uncomment the above line and open it. That is under /httpd-2.2.22/conf/extra/httpd-ssl.conf. Look for the following properties. SSLPassPhraseDialog builtin SSLEngine on SSLCertificateFile "/home/semika/httpd-2.2.22/conf/server.crt" SSLCertificateKeyFile "/home/semika/httpd-2.2.22/conf/server.key" Uncomment, if some are already commented out. Next, we have to generate SSL certificate files, server.crt and server.key. To generate these file, execute the following commands. cd /home/sermika/Downloads openssl genrsa -des3 -out server.key 1024 openssl req -new -key server.key -out server.csr openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt When executing above commands, it will ask for some details and also a password and you have to provide those. Also, you have keep this password in mind, because it will be needed to provide when starting Apache HTTP server. To know more about the configuring SSL with Apache HTTP server, refer the this documentation. Now carefully look into /home/semika/Downloads folder. You can see the server.key and server.crt generated. You have to copy these two files into Apache HTTP server installation directory. cd /home/semika/Downloads cp server.crt /home/semika/httpd-2.2.22/conf/server.crt cp server.key /home/semika/httpd-2.2.22/conf/server.key Again open the httpd-ssl.conf file under /httpd-2.2.22/conf/extra. You can see </VirtualHost> element and some properties are defined within that element. Like we did in mod_jk configuration, here also, we have to declare the required application context URL’s or any other URL’s that are needed to be secured with SSL. You have to add JkMount declaration as follows. </VirtualHost> ........... ........... JkMount /rainyDay ajp13 JkMount /rainyDay/* ajp13 </VirtualHost> Now try the following URL’s This should load the ” It works” page. I used the port 7000, because I have change the Apache HTTP server default port. You are successfully configured mod_ssl. Now try the following URL as well. You should be able to load the application. Testing the environment. There is no any order of starting Apache Tomcat and Apache HTTP server. After starting your servers, you can test your configuration with following URL’s. If you see, Tomcat’s home page, Tomcat configuration is successful. If you can access the application, you application is successfully deployed on Apache Tomcat. If you see “It works” page, Apache HTTP server is successfully configured. If it loads you application, mod_jk configuration with Apache HTTP server is successful. Again if you see “It works” page, mod_ssl configuration with Apache HTTP server is success. Again if you can access the application, mod_ssl configuration with Apache HTTP server is success and Apache HTTP server properly handle all secure requests to ‘rainyDay’ application successfully. Reference: How to configure Apache HTTP server with Tomcat on SSL ? from our JCG partner Semika loku kaluge at the Code Box blog. Thanks for the article, it was just what I was looking for. Hi Dimitrios, Thanks for the article, it is very easy to follow and well structured. I followed your steps exactly, but haven’t found mod_jk.so in httpd-2.2.22/modules. Is there maybe an extra configuration for that when installing httpd (that you might have left out)? Thanks! I have gone through many web sites to set this up properly for 4days. Your post saved my day finally. Thank you very much thank you for a vey discriptive artical thank you for your article, but how about apache and tomcat ports? and ssl? I need more information about this. thank you again. Very clear post thanks! Wish everyone posted like this.
https://www.javacodegeeks.com/2012/06/apache-http-server-with-tomcat-on-ssl.html/comment-page-1/
CC-MAIN-2017-09
refinedweb
1,977
60.92
Contents: Toolkit The Peer Interfaces This chapter describes the Toolkit class and the purposes it serves. It also describes the java.awt.peer package of interfaces, along with how they fit in with the general scheme of things. The most important advice I can give you about the peer interfaces is not to worry about them. Unless you are porting Java to another platform, creating your own Toolkit, or adding any native component, you can ignore the peer interfaces. The Toolkit object is an abstract class that provides an interface to platform-specific details like window size, available fonts, and printing. Every platform that supports Java must provide a concrete class that extends the Toolkit class. The Sun JDK provides a Toolkit for Windows NT/95 (sun.awt.win32.MToolkit [Java1.0] or sun.awt.windows.MToolkit [Java1.1]), Solaris/Motif (sun.awt.motif.MToolkit), and Macintosh (sun.awt.macos.MToolkit). Although the Toolkit is used frequently, both directly and behind the scenes, you would never create any of these objects directly. When you need a Toolkit, you ask for it with the static method getDefaultToolkit() or the Component.getToolkit() method. You might use the Toolkit object if you need to fetch an image in an application (getImage()), get the font information provided with the Toolkit (getFontList() or getFontMetrics()), get the color model (getColorModel()), get the screen metrics (getScreenResolution() or getScreenSize()), get the system clipboard (getSystemClipboard()), get a print job (getPrintJob()), or ring the bell (beep()). The other methods of Toolkit are called for you by the system. Because Toolkit is an abstract class, it has no usable constructor. To get a Toolkit object, ask for your environment's default toolkit by calling the static method getDefaultToolkit() or call Component.getToolkit() to get the toolkit of a component. When the actual Toolkit is created for the native environment, the awt package is loaded, the AWT-Win32 and AWT--Callback-Win32 or AWT-Motif and AWT-Input threads (or the appropriate threads for your environment) are created, and the threads go into infinite loops for screen maintenance and event handling. The getDefaultToolkit() method returns the system's default Toolkit object. The default Toolkit is identified by the System property awt.toolkit, which defaults to an instance of the sun.awt.motif.MToolkit class. On the Windows NT/95 platforms, this is overridden by the Java environment to be sun.awt.win32.MToolkit (Java1.0) or sun.awt.windows.MToolkit (Java1.1). On the Macintosh platform, this is overridden by the environment to be sun.awt.macos.MToolkit. Most browsers don't let you change the system property awt.toolkit. Since this is a static method, you don't need to have a Toolkit object to call it; just call Toolkit.getDefaultToolkit(). Currently, only one Toolkit can be associated with an environment. You are more than welcome to try to replace the one provided with the JDK. This permits you to create a whole new widget set, outside of Java, while maintaining the standard AWT API. The getColorModel() method returns the current ColorModel used by the system. The default ColorModel is the standard RGB model, with 8 bits for each of red, green, and blue. There are an additional 8 bits for the alpha component, for pixel-level transparency. The getFontList() method returns a String array of the set Java fonts available with this Toolkit. Normally, these fonts will be understood on all the Java platforms. The set provided with Sun's JDK 1.0 (with Netscape Navigator and Internet Explorer, on platforms other than the Macintosh) contains TimesRoman, Dialog, Helvetica, Courier (the only fixed-width font), DialogInput, and ZapfDingbat. In Java 1.1, getFont() reports all the 1.0 font names. It also reports Serif, which is equivalent to TimesRoman; San Serif, which is equivalent to Helvetica; and Monospaced, which is equivalent to Courier. The names TimesRoman, Helvetica, and Courier are still supported but should be avoided. They have been deprecated and may disappear in a future release. Although the JDK 1.1 reports the existence of the ZapfDingbat font, you can't use it. The characters in this font have been remapped to Unicode characters in the range \u2700 to \u27ff. The getFontMetrics() method returns the FontMetrics for the given Font object. You can use this value to compute how much space would be required to display some text using this font. You can use this version of getFontMetrics() (unlike the similar method in the Graphics class) prior to drawing anything on the screen. The getMenuShortcutKeyMask() method identifies the accelerator key for menu shortcuts for the user's platform. The return value is one of the modifier masks in the Event class, like Event.CTRL_MASK. This method is used internally by the MenuBar class to help in handling menu selection events. See Chapter 10, Would You Like to Choose from the Menu? for more information about dealing with menu accelerators. The getPrintJob() method initiates a print operation, PrintJob, on the user's platform. After getting a PrintJob object, you can use it to print the current graphics context as follows: // Java 1.1 only PrintJob p = getToolkit().getPrintJob (aFrame, "hi", aProps); Graphics pg = p.getGraphics(); printAll (pg); pg.dispose(); p.end(); With somewhat more work, you can print arbitrary content. See Chapter 17, Printing, for more information about printing. The frame parameter serves as the parent to any print dialog window, jobtitle serves as the identification string in the print queue, and props serves as a means to provide platform-specific properties (default printer, page order, orientation, etc.). If props is (Properties)null, no properties will be used. props is particularly interesting in that it is used both for input and for output. When the environment creates a print dialog, it can read default values for printing options from the properties sheet and use that to initialize the dialog. After getPrintJob() returns, the properties sheet is filled in with the actual printing options that the user requested. You can then use these option settings as the defaults for subsequent print jobs. The actual property names are Toolkit specific and may be defined by the environment outside of Java. Furthermore, the environment is free to ignore the props parameter altogether; this appears to be the case with Windows NT/95 platforms. (It is difficult to see how Windows NT/95 would use the properties sheet, since these platforms don't even raise the print dialog until you call the method getGraphics().) Table 15.1 shows some of the properties recognized on UNIX platforms; valid property values are shown in a fixed-width font. The getProperty() method retrieves the key property from the system's awt.properties file (located in the lib directory under the java.home directory). If key is not a valid property, defaultValue is returned. This file is used to provide localized names for various system resources. The getScreenResolution() method retrieves the resolution of the screen in dots per inch. The sharper the resolution of the screen, the greater number of dots per inch. Values vary depending on the system and graphics mode. The PrintJob.getPageResolution() method returns similar information for a printed page. The getScreenSize() method retrieves the dimensions of the user's screen in pixels for the current mode. For instance, a VGA system in standard mode will return 640 for the width and 480 for the height. This information is extremely helpful if you wish to manually size or position objects based upon the physical size of the user's screen. The PrintJob.getPageDimension() method returns similar information for a printed page. The getSystemClipboard() method returns a reference to the system's clipboard. The clipboard allows your Java programs to use cut and paste operations, either internally or as an interface between your program and objects outside of Java. For instance, the following code copies a String from a Java program to the system's clipboard: // Java 1.1 only Clipboard clipboard = getToolkit().getSystemClipboard(); StringSelection ss = new StringSelection("Hello"); clipboard.setContents(ss, this); Once you have placed the string "Hello" on the clipboard, you can paste it anywhere. The details of Clipboard, StringSelection, and the rest of the java.awt.datatransfer package are described in Chapter 16, Data Transfer. After checking whether the security manager allows access, this method returns a reference to the system's event queue. getSystemEventQueueImpl() does the actual work of fetching the event queue. The toolkit provider implements this method; only subclasses of Toolkit can call it. The Toolkit provides a set of basic methods for working with images. These methods are similar to methods in the Applet class; Toolkit provides its own implementation for use by programs that don't have access to an AppletContext (i.e., applications or applets that are run as applications). Remember that you need an instance of Toolkit before you can call these methods; for example, to get an image, you might call Toolkit.getDefaultToolkit().getImage(`myImage.gif`). The getImage() method with a String parameter allows applications to get an image from the local filesystem. Its argument is either a relative or absolute filename for an image in a recognized image file format. The method returns immediately; the Image object that it returns is initially empty. When the image is needed, the system attempts to get filename and convert it to an image. To force the file to load immediately or to check for errors while loading, use the MediaTracker class. NOTE: This version of getImage() is not usable within browsers since it will throw a security exception because the applet is trying to access the local filesystem. The getImage() method with the URL parameter can be used in either applets or applications. It allows you to provide a URL for an image in a recognized image file format. Like the other getImage() methods, this method returns immediately; the Image object that it returns is initially empty. When the image is needed, the system attempts to load the file specified by url and convert it to an image. You can use the MediaTracker class to monitor loading and check whether any errors occurred. The prepareImage() method is called by the system or a program to force image to start loading. This method can be used to force an image to begin loading before it is actually needed. The Image image will be scaled to be width x height. A width and height of -1 means image will be rendered unscaled (i.e., at the size specified by the image itself). The observer is the Component on which image will be rendered. As the image is loaded across the network, the observer's imageUpdate() method is called to inform the observer of the image's status. The checkImage() method returns the status of the image that is being rendered on observer. Calling checkImage() only provides information about the image; it does not force the image to start loading. The image is being scaled to be width x height. Passing a width and height of -1 means the image will be displayed without scaling. The return value of checkImage() is some combination of ImageObserver flags describing the data that is now available. The ImageObserver flags are: WIDTH, HEIGHT, PROPERTIES, SOMEBITS, FRAMEBITS, ALLBITS, ERROR, and ABORT. Once ALLBITS is set, the image is completely loaded, and the return value of checkImage() will not change. For more information about these flags, see Chapter 12, Image Processing. The following program loads an image; whenever paint() is called, it displays what information about that image is available. When the ALLBITS flag is set, checkingImages knows that the image is fully loaded, and that a call to drawImage() will display the entire image. import java.awt.*; import java.awt.image.*; import java.applet.*; public class checkingImages extends Applet { Image i; public void init () { i = getImage (getDocumentBase(), "ora-icon.gif"); } public void displayChecks (int i) { if ((i & ImageObserver.WIDTH) != 0) System.out.print ("Width "); if ((i & ImageObserver.HEIGHT) != 0) System.out.print ("Height "); if ((i & ImageObserver.PROPERTIES) != 0) System.out.print ("Properties "); if ((i & ImageObserver.SOMEBITS) != 0) System.out.print ("Some-bits "); if ((i & ImageObserver.FRAMEBITS) != 0) System.out.print ("Frame-bits "); if ((i & ImageObserver.ALLBITS) != 0) System.out.print ("All-bits "); if ((i & ImageObserver.ERROR) != 0) System.out.print ("Error-loading "); if ((i & ImageObserver.ABORT) != 0) System.out.print ("Loading-Aborted "); System.out.println (); } public void paint (Graphics g) { displayChecks (Toolkit.getDefaultToolkit().checkImage(i, -1, -1, this)); g.drawImage (i, 0, 0, this); } } Here's the output from running checkingImages under Java 1.0; it shows that the width and height of the image are loaded first, followed by the image properties and the image itself. Java 1.1 also displays Frame-bits once the image is loaded. Width Height Width Height Properties Some-bits Width Height Properties Some-bits All-bits Width Height Properties Some-bits All-bits Width Height Properties Some-bits All-bits ... (Repeated Forever More) This createImage() method creates an Image object from an ImageProducer. The producer parameter must be some class that implements the ImageProducer interface. Image producers in the java.awt.graphics package are FilteredImageSource (which, together with an ImageFilter, lets you modify an existing image) and MemoryImageSource (which lets you turn an array of pixel information into an image). The image filters provided with java.awt.image are CropImageFilter, RGBImageFilter, AreaAveragingScaleFilter, and ReplicateScaleFilter. You can also implement your own image producers and image filters. These classes are all covered in detail in Chapter 12, Image Processing. The following code uses this version of createImage() to create a modified version of an original image: Image i = Toolkit.getDefaultToolkit().getImage (u); TransparentImageFilter tf = new TransparentImageFilter (.5f); Image j = Toolkit.getDefaultToolkit().createImage ( new FilteredImageSource (i.getSource(), tf)); This createImage() method converts the entire byte array in imageData into an Image. This data must be in one of the formats understood by this AWT Toolkit (GIF, JPEG, or XBM) and relies on the "magic number" of the data to determine the image type. This createImage() method converts a subset of the byte data in imageData into an Image. Instead of starting at the beginning, this method starts at offset and goes to offset+length-1, for a total of length bytes. If offset is 0 and length is imageData.length, this method is equivalent to the previous method and converts the entire array. The data in imageData must be in one of the formats understood by this AWT Toolkit (GIF, JPEG, or XBM) and relies on the "magic number" of the data to determine the image type. NOTE: For those unfamiliar with magic numbers, most data files are uniquely identified by the first handful or so of bytes. For instance, the first three bytes of a GIF file are "GIF". This is what createImage() relies upon to do its magic. The beep() method attempts to play an audio beep. You have no control over pitch, duration, or volume; it is like putting echo ^G in a UNIX shell script. The sync() method flushes the display of the underlying graphics context. Normally, this is done automatically, but there are times (particularly when doing animation) when you need to sync() the display yourself.
http://bioinfo2.ugr.es/OReillyReferenceLibrary/java/awt/ch15_01.htm
CC-MAIN-2014-41
refinedweb
2,531
57.16
Tantalizing Remark on PHP and Web Services Davey Shafik has posted a tantalizing remark on generating WSDL from PHP code, related to some work he’s doing. There’s nothing really to see yet, other than this example; Why is this interesting? The answer takes a little explanation… If you’ve read web services demystified you’ll know WSDL (Web Services Description Language) is an XML markup that allows you to describe a web service in a manner that a computer can use. When building a client to a web service, it cuts out a lot of manual effort, as you can see by comparing the client1.php vs. client2.php examples, in Zend’s article on the new PHP SOAP Extension. So for building web service clients, when using WSDL, it’s remarkably simple with PHP (and other dynamically typed languages). The same cannot be said though, when it comes to building servers with PHP, where WSDL generation is concerned. In WSDL, when describing the arguments a SOAP method accepts and the value it returns, it’s expected that you use XML Schema. XML Schema is “strongly typed” – it has the full range of primitive types (e.g. string, int etc.) as you can see here from which you can build complex types to represent things like objects and hashes (associative arrays). The only thing XML schema doesn’t “do” is indexed arrays, which instead is defined in WSDL. At this point I could rant about SOAP / WSDL being a debacle, but I won’t, expect for one comment: if you’re ever in the situation of evaluating whether to use SOAP, begin by researching what SOAP stands for (somewhere between “Simple Object Access Protocol” and “Service Oriented Architecture” you may start to get nervous). Anyway – the problem for dynamically typed languages (those where types are explicitly declared in the source code) is how do you automate the process of generating WSDL from code? As a developer of a web service, ideally you want to be able to write your code in PHP then have some program generate the WSDL for you, to save time, eliminate errors etc. In languages like Java or C#, where you’re forced to declare types, it’s no problem, for example; public class Calculator { public int add(int i1, int i2) { return i1 + i2; } public int subtract(int i1, int i2) { return i1 - i2; } } The add and subtract methods declare what types of parameter they accept as well as their return values, so it’s simply a matter of using a tool which parses the source code and generates WSDL. But consider the same in PHP; class Calculator { function add($i1, $i2) { return $i1 + $i2; } function subtract($i1, $i2) { return $i1 - $i2; } } Although it’s probably clear to you and me, looking at this code, that the values involved should all be integers, how does a program, analysing this code, work that out? To make like even more interesting, PHP allows you to accept and return wildly different types, depending on runtime circumstances. For example I could modify the add method above to allow me to add the elements or two arrays; class Calculator { function add($i1, $i2) { if ( is_array($i1) && is_array($i2) ) { $i1count = count($i1); if ( $i1count == count($i2) ) { $sums = array(); for ($i=0; $i The value returned by the add method now depends on the values I give it. Trying to generate WSDL automatically in this case is clearly going to be a problem - the parser would need to have a very deep understand of PHP's syntax. As an aside, there are echoes of the same problem when it comes to writing "compilers" which turn PHP into some other, faster, form (see George Schlossnagle's Roadsend PCC, micro-review). Anyway, watch this space for more on generating WSDL from PHP. Would be great if Davey can pull it off.
https://www.sitepoint.com/tantalizing-remark-on-php-and-web-services/
CC-MAIN-2017-43
refinedweb
649
52.63
> > > library, for example, qname.jar. > > > > [...] > > > > > library, for example, namespace.jar > > > > What is the sense of two different jar files for three > classes or so? > The concept of those libraries comes from Sun's JWSDP 1.3, in > particular, jwsdp-shared/lib. In a certain point of view it > seems to be too much for a jar to have one or two classes, > nevertheles, the rationale for making those jars is > componentizing in order to share codes and avoid duplicates. > In addition, I guess that the reason why not having the 3 classes in one jar is that in some cases you don't need all of them at the same time. For example, QName is needed by most Apache WS projects, but the others are needed by JaxMe and EWS.
http://mail-archives.us.apache.org/mod_mbox/ws-dev/200401.mbox/%3C200401180616.i0I6G6E23202@kiku.7777.net%3E
CC-MAIN-2019-22
refinedweb
130
72.76
Right now I am in Egypt on a family holiday, I am writing this article in a diving boat between dives. I have just returned from the depth of an amazing dive site called Small Giftun and I have met a friendly family of six great Napoleon fishes. They asked me to say hello to youJ: In my last post I promised to show a few patterns demonstrating how a new Managed Package Framework could be improved by means of usability, less coding for a task, etc. In this article I show you a few patterns to be used to improve the perception of using GUIDs. GUIDs (Globally Unique Identifiers) are great due to the fact their name suggests. When creating and assigning a GUID to an object of any type we can be sure that our object has a unique identifier no other object has in the universe. However when we have to write down GUIDs, we sometimes do not feel the value held by their uniqueness. GUIDs are creatures with names not easy to remember. The Visual Studio and so VS SDK use GUIDs for many purposes. One kind of usage comes from the COM nature: GUIDs are used to identify COM types and interfaces. Using GUIDS for command IDs and context IDs are great from the architect’s point of view, but might have issues for developers. When we create a VSPackage with commands in C#, we have to write down the GUID of the package at least twice, once in the package definition file and once in the .vsct file: MyPackage.cs: [Guid("BA95C5F5-ED40-43ef-8F0E-23A72AEF63BA")] public sealed class MyPackage : Package { // ... } MyPackage.vsct: <MyPackage"> <!-- Command definitions --> </Commands> <Symbols> <GuidSymbol name="guidMyPackage" value="{BA95C5F5-ED40-43ef-8F0E-23A72AEF63BA}" /> </Symbols> </CommandTable> When we have to type the same GUID twice, we have the likelihood to mistype them. In the situation above we do not get even a warning message during the build process about mistyped GUIDs, we can find this issue only when recognized that our commands do not work with the package. We can encapsulate the package GUID string into a static class as a constant and even then System.Guid value represented by the string: public static class GuidList public const string MyPackageGuidString = "BA95C5F5-ED40-43ef-8F0E-23A72AEF63BA"; public static readonly Guid MyPackageGuid = new Guid(MyPackageGuidString); This approach would help a lot when we have to put down the same GUID in the C# source, however here we have two representations of the same identifier. When we use the GUID in an attribute, we must use the string representation, because attributes can use only constant values in their parameters known at compile time. In other cases we may use the System.Guid representation. However, even this static class would not solve the problem of mistyped GUIDs in the .VSCT file. Many Visual Studio service methods require using GUIDs defined by somewhere in VS. For example at certain points we must know the GUID that represents a VS command, a UI context ID, a logical view ID, etc. The VSConstants class in the Microsoft.VisualStudio namespace defines a few of them in order to reference them by a member name. It is good we have them in defined in this class, however, we VSConstants defines less than the one percent of all the GUIDs used in Visual Studio and less than ten percent of frequently used ones. Actually, we do not have a complete list of GUIDs used somewhere in Visual Studio, often we must look for some workarounds to obtain them. When we have to reference these GUID values from attributes, we cannot even use the values defined in VSConstants. Attributes expect GUIDs in string form while VSConstants members are System.Guid values. When an attribute or a method parameter expects a GUID value we can use any GUID value without getting any compilation error. It could happen that we use command GUID when a logical view GUID has been expected. Generally we recognize this situation only by the fact that our code does not work. Sometimes it takes hours to find the cause of the issue. Issue #1: The same GUID string value has to be typed in the code more than once. Introducing constants solves this issue only partially (no solution for the .vsct file). Issue #2: Attributes decorating types and members can use only the string forms of GUIDs due to the nature of .NET attributes. Issue #3: Finding GUIDs defined and used in Visual Studio is hard, since we do not have a complete list of them. Only a few of them is defined in VSConstants class with good naming conventions in order to find and use them. Issue #4: GUIDs are not distinguished by the semantics of objects or concepts behind them. For example, we can use a command ID where a UI context ID was expected without any compilation time warning or error. Almost all issues summarized above can be resolved with a simple trick .NET uses: let’s represent GUIDs with types! For COM interoperability reasons each .NET type has a GUID that can be queried by the Type.GUID property. This GUID is implicitly set by the compiler to a generated GUID or can be set explicitly with the Guid attribute. What are the opportunities of this approach? Everywhere a GUID is used it can be changed to a System.Type instance. System.Type instances created with the typeof() operator can be used in attributes, because typeof() is evaluated in compile time. A type can have its own metadata that can hold some information beside the simple GUID it represents. This metadata can be used to assign some kind of semantics to the GUID. A type can be used as type parameter in generic types. This behavior also adds some salt to the GUID. To show you what power is in this simple trick, let us see a sample implementation that encapsulates Visual Studio UI context identifiers into types. The essence of the solution is here: public interface IUIContextGuidType { } /// <summary> /// This class is a name provider for types representing UIContext GUIDs used /// </summary> public static class UIContext [Guid("ADFC4E60-0397-11D1-9F4E-00A0C911004F")] public sealed class SolutionBuilding: IUIContextGuidType { } [Guid("ADFC4E61-0397-11D1-9F4E-00A0C911004F")] public sealed class Debugging: IUIContextGuidType { } [Guid("ADFC4E62-0397-11D1-9F4E-00A0C911004F")] public sealed class FullScreenMode: IUIContextGuidType { } [Guid("ADFC4E63-0397-11D1-9F4E-00A0C911004F")] public sealed class DesignMode: IUIContextGuidType { } [Guid("ADFC4E64-0397-11D1-9F4E-00A0C911004F")] public sealed class NoSolution: IUIContextGuidType { } [Guid("F1536EF8-92EC-443C-9ED7-FDADF150DA82")] public sealed class SolutionExists: IUIContextGuidType { } [Guid("ADFC4E65-0397-11D1-9F4E-00A0C911004F")] public sealed class EmptySolution: IUIContextGuidType { } [Guid("ADFC4E66-0397-11D1-9F4E-00A0C911004F")] public sealed class SolutionHasSingleProject: IUIContextGuidType { } The IUIContextGuidType is a markup interface to declare that a type is used as a wrapper type for a Visual Studio UI context GUID. In order to represent the predefined UI contexts, we use the static UIContext class as a container for the GUID equivalent types. Each nested type represents the GUID with the corresponding attribute and each implements the IUIContextGuidType interface. Because it is a markup interface it has no member to implement. The intended usage of these types can be demonstrated by the ProvideAutoLoad attribute. This attribute is used by the regpkg.exe utility to register the package automatically load with a specified UI context. The attribute constructor accepts a single GUID parameter representing the UI context. The source code of ProvideAutoLoad attribute is available in the VS SDK folder under VisualStudioIntegration\Common\Source\CSharp\Shell90. Here is actually the full source code of the attribute without comments and other non-relevant elements tuned for C# 3.0: [AttributeUsage(AttributeTargets.Class, AllowMultiple = true, Inherited = true)] public sealed class ProvideAutoLoadAttribute : RegistrationAttribute public ProvideAutoLoadAttribute(string cmdUiContextGuid) { LoadGuid = new Guid(cmdUiContextGuid); } public Guid LoadGuid { get; private set; } private string RegKeyName get { return string.Format(CultureInfo.InvariantCulture, "AutoLoadPackages\\{0}", LoadGuid.ToString("B")); } public override void Register(RegistrationContext context) context.Log.WriteLine(string.Format(Resources.Culture, Resources.Reg_NotifyAutoLoad, LoadGuid.ToString("B"))); using (Key childKey = context.CreateKey(RegKeyName)) childKey.SetValue(context.ComponentType.GUID.ToString("B"), 0); public override void Unregister(RegistrationContext context) context.RemoveValue(RegKeyName, context.ComponentType.GUID.ToString("B")); Because the attribute derives from the RegistrationAttribute class, it is recognized and used by the regpkg.exe utility. The original ProvideAutoLoad attribute cannot accept a System.Type parameter to represent the UI context GUID, because it does not have the appropriate constructor. Well, let us create our own ProvideAutoLoad version in a new namespace to avoid type name collision. Let’s add the new constructor to the class: public ProvideAutoLoadAttribute(Type type) if (!typeof(IUIContextGuidType).IsAssignableFrom(type)) throw new ArgumentException( "Only types implementing IUIContextGuidType are accepted in " + "ProvideAutoLoad."); LoadGuid = type.GUID; As you see, our constructor not simply assigns the GUID behind the type to the attribute but also checks its semantics. We accept only those types which implement the IUIContextGuidType markup interface; in other cases we raise an exception. No we can write our package definition just like in the following example: [ProvideAutoLoad(typeof(UIContext.NoSolution))] // --- Other registration attributes omitted Do not forget, this time ProvideAutoLoad is the one we defined in our own namespace and not the one in VS SDK. This definition works and our package is properly registered for automatic loading. Now, let’s assume we would use the following definition: [ProvideAutoLoad(typeof(System.Int32))] This code is syntactically correct but semantically is not: System.Int32 (to be precise, the GUID behind System.Int32) does not represent any UI contexts. Our code will compile, however not built. When the build process runs regpkg.exe, we have an error as the following figure illustrates: When the C# compiler creates the assembly from the source code it does not creates an instance of the ProvideAutoLoad attribute, it simply puts the metadata used to initialize the attribute. So, we do not get any compilation error. However, when repkg.exe runs, as soon as its requests the ProvideAutoLoad attribute instance, the .NET framework reads out the metadata from the assembly and instantiates the attribute. The constructor we add to the attributes checks for the semantics of the type representing the GUID. Because System.Int32 is semantically incorrect, we got the message shown in the figure above. This time we do not have to wait while our package runs to catch this kind of semantic issue, we can recognize it during build time. Our sample implementation uses a pattern that copes with Issue #1, #2 and #4. The type representing the GUID is the one location where the GUID should be literally put down, at other locations the type should be used instead of the GUID. However, this patter only works if types, methods and attributes expecting GUIDs accept System.Type parameters. Right now the current VS SDK does not have this behavior; we should change it as the sample implementation illustrated. The pattern above does not solve Issue #3, but gives a good hint about the GUIDs of VS SDK can be represented by a good type hierarchy. The solution above fits not for the predefined GUIDs of the VS SDK but also for user defined GUIDs as well. For example, Visual Studio can use user defined UI contexts. Developers can define their UI context IDs working seamlessly with ProvideAutoLoad attributes, as the following sample illustrates: public static class MyUIContext [Guid("45EFA5FC-469A-44E5-AA5C-2D2196D1C936")] public sealed class MySolutionIsLoaded: IUIContextGuidType { } [Guid("C2349B2A-F2A8-49FD-82E2-FCC61C5564F3")] public sealed class MyProjectHasOpenFiles: IUIContextGuidType { } The COM root of Visual Studio makes it understandable why GUIDs are commonly used in VS SDK. However, the GUID approach is not really close to the programming model we can use in the .NET framework. In this post I showed you a pattern that could make our life with GUIDs easier. The essence of this pattern is to represent a GUID with a .NET type. Adding some semantics to these types (by means of metadata) allows adding semantics to plain GUIDs.
http://dotneteers.net/blogs/divedeeper/archive/2008/06/24/LearnVSXNowPart23.aspx
crawl-002
refinedweb
1,999
53.51
Details - Type: Improvement - Status: Closed - Priority: Trivial - Resolution: Fixed - Affects Version/s: None - Fix Version/s: None - Component/s: Java - Compiler - Labels:None - Patch Info:Patch Available Description Generated java code for Thrift structs should have doc strings from the .thrift file. The information is all there, it's just not made part of the generated code. In the default java generator, the doc strings would go on the field declarations. In the bean style generator, the the doc string would go on both the generated getters and setters. Issue Links - is depended upon by THRIFT-147 Ruby generated classes should include class doc strings - Closed Activity Forgive me, but where's the partial implementation? I see javadoc for structs, interfaces, and service methods. This issue is supposed to be specifically getting doc strings on the struct fields. You're right. Sorry for not reading the title carefully enough. This patch adds docstrings as described in the issue. This is a much better version that factors out some common logic to t_oop_generator. Please use braces with all if statements. Please don't put "using namespace" in a header file. Please use a separate line for each argument in the definition of generate_docstring_comment. I think the changes to ThriftTest are unnecessary because of DocTest.thrift. Do you think it makes sense to copy the full docstring for each getter and setter? This version handles all your comments. As far as whether we should add the docstring to both getter and setter, what would the alternative be? Choosing one or the other is pretty arbitrary. There's not really much cost to adding it to both. I made this (semi-)independent change:;a=commitdiff;h=2e045a6 I'm planning to commit it on its own. This allowed me to make this change to v3:;a=commitdiff;h=0350c04 The result is this diff, which I am planning on committing as a separate diff from the first:;a=treediff;h=0350c04;hp=2e045a6 Thoughts? This looks about right to me. I say commit it. Already partially implemented in r665256.
https://issues.apache.org/jira/browse/THRIFT-179
CC-MAIN-2015-48
refinedweb
346
66.44
Walkthrough: Creating a Basic Form Template with Code Last modified: December 07, 2015 Applies to: InfoPath 2013 | InfoPath Forms Services | Office 2013 | SharePoint Server 2013 In this article Prerequisites Hello World in Visual Studio Tools for Applications Getting the Current User's Name Next steps In Microsoft InfoPath, you can write business logic in Visual Basic or C# by opening a form template in the InfoPath designer, and then using one of the user interface commands to add an event handler, which will open the Visual Studio 2012 development environment for writing your code. By default, form template projects created using Visual Studio 2012 work against the managed code object model provided by the Microsoft.Office.InfoPath namespace. This walkthrough first shows you how to create a simple Hello World application using C# or Visual Basic in the Visual Studio 2012 development environment. The walkthrough concludes with a code sample that shows you how to use the UserName property of the User class to retrieve the current user's name and populate a Text Box control with that value. In the following walkthrough, you will learn how to write code in the Visual Studio 2012 development environment to display a simple alert dialog box by writing an event handler for the Clicked event of the ButtonEvent class, which is associated with the Button control. Create a new project and specify the programming language Start the InfoPath designer, and then double-click the Blank (InfoPath Editor) form template. To specify which programming language to use, click the Office Button, click Form Options, click Programming in the Category list, and then select either Visual Basic or C# from the Form template code language drop-down list. You are now ready to add a Button control and create its event handler. Add a Button control and event handler In the Controls group, click the Button control to add it the form. Double-click the Button control, type Hello for the Label property on the Properties tab of the ribbon, and then click Custom Code. When prompted, save the form and name it HelloWorld. This will open the Visual Studio Tools for Applications environment with the cursor in the event handler for the Clicked event of Button control. You are now ready to add form code to the event handler for the button. Add "Hello World" code to the event handler and preview the form In the event handler skeleton, type: The code for your form template should look similar to the following: using Microsoft.Office.InfoPath; using System; using System.Windows.Forms; using System.Xml; using System.Xml.XPath; namespace HelloWorld { public partial class FormCode { public void InternalStartup() { ((ButtonEvent)EventManager.ControlEvents["CTRL1_5"]).Clicked += new ClickedEventHandler(CTRL1_5_Clicked); } public void CTRL1_5_Clicked(object sender, ClickedEventArgs e) { MessageBox.Show("Hello World!"); } } } Imports Microsoft.Office.InfoPath Imports System Imports System.Windows.Forms Imports System.Xml Imports System.Xml.XPath Namespace HelloWorld Public Class FormCode Private Sub InternalStartup(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Startup AddHandler DirectCast(EventManager.ControlEvents("CTRL1_5"), ButtonEvent).Clicked, AddressOf CTRL1_5_Clicked End Sub Public Sub CTRL1_5_Clicked(ByVal sender As Object, ByVal e As ClickedEventArgs) MessageBox.Show("Hello World!") End Sub End Class End Namespace Switch to the InfoPath designer window. Click the Preview button on the Home tab. Click the Hello button on the form. A message box will be displayed with the text "Hello World!" The next procedure shows how to add debugging breakpoints to your form code. Debug form code Switch back to the Visual Studio 2012 window. Click the grey bar to the left of the line: A red circle is displayed and the line of code is highlighted to indicate that the runtime will pause at this breakpoint in your form code. On the Debug menu, click Start Debugging (or press F5). In the InfoPath Preview window, click the Hello button on the form. The Visual Studio 2012 code editor is given focus, and the breakpoint line is highlighted. On the Debug menu, click Step Over (or press Shift+F8) to continue stepping through the code. The event handler code is executed, and the "Hello World!" message is displayed. Click OK to return to the Visual Studio 2012 code editor, and then click Stop Debugging on the Debug menu (or press Ctrl+Alt+Break). In the following example, you will learn how to use the UserName property of the User class to retrieve the name of the current user and populate the value of a Text Box control by using an event handler for the Loading event. Populating the Text Box control is accomplished by using an instance of the XPathNavigator class to write the current user's name to the XML node that the control is bound to. First, the MainDataSource property of the XmlForm class is called to retrieve an instance of the DataSource class that represents the underlying XML document of the form. The DataSource object then calls the CreateNavigator method, which creates the XPathNavigator object and positions it at the root node of the form's main data source. The SelectSingleNode method of the XPathNavigator class is called to select the employee field in the form's data source. Finally, the SetValue method is called to set the value of the field with the UserName property. For more information on working with System.Xml in managed code form templates, see How to: Work with the XPathNavigator and XPathNodeIterator Classes. Add a Loading event handler Open the HelloWorld form template that you created in the previous walkthrough in the InfoPath designer. On the View tab, select Show Fields. Right click the myFields folder, and then click Add. In Name, type employee, and then click OK. Drag the employee field onto the view. On the Developer tab, click Loading Event. This will create an event handler for the Loading event, and move the focus to that event handler in the code editor. In the code editor, type the following: Switch to the InfoPath form design window, and then click the Preview button on the Home tab to preview the form. The employee field should automatically fill in with your user name. For information about working with event handlers for other controls and events, see How to: Add an Event Handler. For more information about previewing and debugging code in form templates, see How to: Preview and Debug InfoPath Form Templates with Code. For information about how to deploy a managed-code form template, see How to: Deploy InfoPath Form Templates with Code. For information about the InfoPath object model and common programming tasks in managed-code form templates, see Understanding the InfoPath Object Model and Common Developer Tasks
https://msdn.microsoft.com/en-us/library/office/aa942693(v=office.15).aspx
CC-MAIN-2018-13
refinedweb
1,112
53.41
Eigenvalue solver using LaPack routine dgeev (Real) zgeev (Cmplx). More... #include <SmallES.hh> Eigenvalue solver using LaPack routine dgeev (Real) zgeev (Cmplx). This solver is intended to be applied to "small" general N by N matrices, e.g. for the eigenvalue computation in the DirPotIt routine. For Sparse matrices / big eigenvalue problems consider the ArPack routines. As input this solver needs an array which represents the columnwise stored matrix: array(column 1, column 2,... column n). Definition at line 25 of file SmallES.hh. Constructor. Destructor. Compute eigenpairs. Returns the number of converged eigenpairs: (not implemented) Implements eigensolver::EigenSolver< F >. Definition at line 48 of file SmallES.hh. Returns an array with the eigenfunctions. Implements eigensolver::EigenSolver< F >. Returns an array with the real parts of the eigenvalues in the first kmax entries and the imaginary parts of the eigenvalues in the second kmax entries for real matrices and returns a complex array containing the eigenvalues for complex matrices. The last entry in both cases is the optimal size of the work array. Implements eigensolver::EigenSolver< F >. Returns information in an output stream. Reimplemented from eigensolver::EigenSolver< F >. Returns the number of iterations. Implements eigensolver::EigenSolver< F >. Definition at line 46 of file SmallES.hh. Array A as described in modus. Definition at line 53 of file SmallES.hh. Eigenpairs computed? Definition at line 59 of file SmallES.hh. Storage space for eigenvalues. Definition at line 55 of file SmallES.hh. Storage for eigenvectors. Definition at line 57 of file SmallES.hh.
http://www.math.ethz.ch/~concepts/doxygen/html/classeigensolver_1_1SmallES.html
crawl-003
refinedweb
252
53.78
Not to be a killjoy, but doesn't the -a option take care of this? Hi! I wrote the hack below in C++ to get ls to recognize ".hidden" files (files that list directory entries to be removed from the listing ala MacOS X). I even shoehorned it into ls and added "-lstdc++ hidden.o" to coreutils' Makefile. However, I'd much rather use C instead of C++ and plop the code into ls.c itself (and not depend on the standard C++ library). I don't know the first thing about memory management C, so I was hoping someone here knew the equivalent C code (or could direct me to where I could find out). Thanks! Code:// hidden.h #ifndef HIDDEN_FILE_H #define HIDDEN_FILE_H #ifdef __cplusplus extern "C" { #endif void load_hidden_file(const char* a_dir); int is_hidden(const char* a_entry); void release_hidden_files(); #ifdef __cplusplus } #endif #endif // HIDDEN_FILE_H // hidden.cpp #include <algorithm> #include <fstream> #include <string> #include <vector> #include "hidden.h" namespace { std::vector<std::string> hidden; } void load_hidden_file(const char* a_dir) { std::string line = a_dir + std::string("/.hidden"); std::ifstream file(line.c_str()); if (file) while(std::getline(file, line)) hidden.push_back(line); file.close(); } int is_hidden(const char* a_entry) { return (std::find(hidden.begin(), hidden.end(), a_entry) != hidden.end()) || (a_entry[0] == '.'); } void release_hidden_files() { hidden.clear(); } Not to be a killjoy, but doesn't the -a option take care of this? Good work anshelm!!! Yes, this can be done with 'ls -a' but it was a good project none the less. You might use that source elsewhere with an app to see the hidden files. Dunno. I don't think I was quite clear on my intent. I'm extending the functionaility of hidden files. This will allow you to put a file called ".hidden" in a directory and list files you don't want listed. So instead of renaming "bin" to ".bin", you could add "bin" to the .hidden file and it will have the same effect (such as appearing when -a is used). Oh, now that is useful. Very nice. For the sake of interest you may want to browse the source for nautilus, the Gnome file manager...I believe it does largely the same thing but in a graphical way. I poked around in the code for Nautilus, but the function for adding support for ".hidden" depends on several functions that appear to be from gnome, as well as is quite integrated with the rest of Nautilus's view of the current directory. So instead I took a shot at porting to C based solely on the info I could find on google. I'm pretty sure I made some mistakes in regards to memory management alone, although it does compile and work. Any improvements or pointers to what I did wrong would be appreciated. Code:static char** hidden_file_lines; static long int hidden_file_count; static void load_hidden_file(const char* a_dir) { char* name; FILE* fp; char line[PATH_MAX]; hidden_file_count = 0; hidden_file_lines = (char**)malloc(hidden_file_count * sizeof(char*)); name = (char*)malloc((strlen(a_dir) + 9) * sizeof(char)); name = strcpy(name, a_dir); name = strcat(name, "/.hidden"); if ((fp = fopen(name, "r")) != NULL) { while (fgets(line, PATH_MAX, fp) != NULL) { if (strlen(line) == 1) continue; hidden_file_count++; hidden_file_lines = (char**)realloc((void*)hidden_file_lines, hidden_file_count * sizeof(char*)); hidden_file_lines[hidden_file_count - 1] = (char*)calloc(strlen(line) - 1, sizeof(char)); hidden_file_lines[hidden_file_count - 1] = memcpy(hidden_file_lines[hidden_file_count - 1], line, (strlen(line) - 1)); } } fclose(fp); free(name); return; } static int is_hidden(const char* a_entry) { long int i; if (a_entry[0] == '.') { return 1; } for (i = 0; i < hidden_file_count; i++) { if (strcmp(a_entry, hidden_file_lines[i]) == 0) { return 1; } } return 0; } static void release_hidden_file() { long int i; for (i = 0; i < hidden_file_count; i++) { free(hidden_file_lines[i]); } free(hidden_file_lines); hidden_file_count = 0; } Bookmarks
http://www.linuxhomenetworking.com/forums/showthread.php/10431-hidden-files-and-ls?p=89331&viewfull=1
CC-MAIN-2014-42
refinedweb
614
57.57
Are you sure? This action might not be possible to undo. Are you sure you want to continue? • A queue is an ordered list in which all insertions take place at one end and all deletions take place at the opposite end. It is also known as First-In-First-Out (FIFO) lists. a0 a1 a2 an-1 front rear Queue: a First-In-First-Out (FIFO) list Rear front A Rear B A C rear B A front front D rear C B A front rear D C B rear front front *Figure : Inserting and deleting elements in a queue . Application: Job scheduling front rear Q[0] Q[1] Q[2] Q[3] -1 -1 -1 0 J1 -1 1 J1 J2 -1 2 J1 J2 J3 0 2 J2 J3 1 2 J3 Comments queue is empty Job 1 is added Job 2 is added Job 3 is added Job 1 is deleted Job 2 is deleted *Figure : Insertion and deletion from a sequential queue . } int IsFull(){return (rear == MAX_QUEUE_SIZE – 1). int rear = -1.} … . int IsEmpty(){return (front == rear) . int front = -1.Implementation 1: using array # define MAX_QUEUE_SIZE 100/* Maximum queue size */ element queue[MAX_QUEUE_SIZE]. Add to a queue void Insert( int item) { /* add an item to the queue */ if (! IsFull()) queue [++rear] = item. else printf(“Queue Overflow”). } *Function: Add to a queue . else { printf(“Queue Underflow”). return -1 } } *Function: Delete from a queue .Delete from a queue Int Delete() { /* remove element at the front of the queue */ if (!IsEmpty()) return queue [++ front]. Implementation 2: regard an array as a circular queue front: one position counterclockwise from the first element rear: current end EMPTY QUEUE [2] [3] [2] J2 J3 [3] [1] [4] [1] J1 [4] [0] [5] [0] [5] front = 0 rear = 0 front = 0 rear = 3 *Figure: Empty and nonempty circular queues . Problem: one space is left when queue is full FULL QUEUE FULL QUEUE [2] J2 [1] J3 [3] [2] J8 J7 J6 J9 [3] J1 J5 [0] front =0 rear = 5 J4 [4][1] [4] J5 [5] [5] [0] front =4 rear =3 *Figure: Full circular queues and then we remove the item . if (front == k) /* reset rear and print error */ { printf(“ Q Full”).Add to a circular queue void addq(element item) { /* add an item to the queue */ int k = (rear +1) % MAX_QUEUE_SIZE. return. } rear = k. queue[rear] = item. } *Function: Add to a circular queue [2] J2 J3 [3] [1] J1 [0] front = 0 rear = 3 [5] . } *Function: Delete from a circular queue [2] J2 [1] J1 [0] front = 0 rear = 3 J3 [3] [ [5] .Delete from a circular queue element deleteq() { element item. } /* queue_empty returns an error key */ front = (front+1) % MAX_QUEUE_SIZE. /* remove front element from the queue and put it in item */ if (front == rear) { printf(“ Q Empty”). return ERROR. return queue[front]. the complexity is of O(MaxSize).Queue Manipulation Issue • It’s intuitive to use array for implementing a queue. . However. In the worse case. queue manipulations (add and/or delete) will require elements in the array to move. Shifting Elements in Queue front rear front rear front rear . • Pointer front will always point one position counterclockwise from the first element in the queue. But it is also true when queue is full. circular queue assigns next element to q[0] when rear == MaxSize – 1. . This will be a problem.Circular Queue • To resolve the issue of moving elements in the queue. • Queue is empty when front == rear. ) 4 J4 J3 J2 4 n-4 J1 3 2 3 2 1 J2 J4 J3 n-4 J1 n-3 n-2 n-3 n-2 1 0 n-1 0 n-1 front = 0. rear = 0 . rear = 4 front = n-4.Circular Queue (Cont. • Each time when adding an item to the queue. newrear is calculated before adding the item. . then the queue is full. one way is to use only MaxSize – 1 elements in the queue at any time.Circular Queue (Cont.) • To resolve the issue when front == rear on whether the queue is full or empty. If newrear == front. • Another way to resolve the issue is using a flag to keep track of last operation. The drawback of the method is it tends to slow down Add and Delete function.
https://www.scribd.com/document/127545382/Queues
CC-MAIN-2018-34
refinedweb
713
83.76
On a ARM based system running Linux, I have a device that's memory mapped to a physical address. From a user space program where all addresses are virtual, how can I read content from this address? You can map a device file to a user process memory using mmap(2) system call. Usually, device files are mappings of physical memory to the file system. Otherwise, you have to write a kernel module which creates such a file or provides a way to map the needed memory to a user process. Another way is remapping parts of /dev/mem to a user memory. Edit: Example of mmaping /dev/mem (this program must have access to /dev/mem, e.g. have root rights): #include <stdio.h> #include <stdlib.h> #include <fcntl.h> #include <sys/mman.h> #include <unistd.h> int main(int argc, char *argv[]) { if (argc < 3) { printf("Usage: %s <phys_addr> <offset>\n", argv[0]); return 0; } off_t offset = strtoul(argv[1], NULL, 0); size_t len = strtoul(argv[2], NULL, 0); // Truncate offset to a multiple of the page size, or mmap will fail. size_t pagesize = sysconf(_SC_PAGE_SIZE); off_t page_base = (offset / pagesize) * pagesize; off_t page_offset = offset - page_base; int fd = open("/dev/mem", O_SYNC); unsigned char *mem = mmap(NULL, page_offset + len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, page_base); if (mem == MAP_FAILED) { perror("Can't map memory"); return -1; } size_t i; for (i = 0; i < len; ++i) printf("%02x ", (int)mem[page_offset + i]); return 0; }
https://codedump.io/share/unjIEx2vwpQr/1/accessing-physical-address-from-user-space
CC-MAIN-2017-34
refinedweb
241
64.61
Hi Thank you for your question I'm afraid that you will need to amend your 2012 tax return. Since that is the latest year that you refinanced, you should be able to claim the balance that was left from the 2005 refi (which should also include those you were still amortizing from 2003) You will need to set up the amortization for the points that you paid with the latest refi in 2012 on the amended 2012 tax return, and that should then be carried over to your 2013 amended return. Unless you refinance with the same lender. In that case, you add the points paid on the latest deal to the leftovers from the previous refinancing and deduct the expense on a pro-rated basis over the life of the new loan.. Anne: I sent a response to your answer asking for clarification. Did you get it or did I make the mistake of asking what looks like a second question that requires more money and you haven't even seen it yet. Where are we at now? I'm sorry, I do not see you're posting, and I checked to see if it posted as a separate question, and I don't see that either I apologize that your posting did not come through............would you mind posting it again? If it still does not come through, I will try changing the format to see if that helps I said that I know that I have to set up the amortization of the new points from the 2012 refi on the amended 2012 return. It will be carried forward to my 2013 return (not amended). All the lenders were different. I want to make sure that I have what you are telling right. My question was whether I can use the remaining unamortized points from 2003 on my 2012 amended return even though I didn't take them in 2005 when I did another refi. And, whether I can take the full amount of points from the 2005 refi on my 2012 amended return even though I didn't amortize them at all because I refinanced a new loan in 2012. It sounds like you are saying I can do this. Please verify that this is a correct assumption. Thank you for the clarification on the amendment. Yes, you may take those points in 2012. I hope this helps
http://www.justanswer.com/tax/7wsmp-amortizing-refi-points-2003-forgot.html
CC-MAIN-2014-35
refinedweb
403
76.86
Background: Noda Time and C# 8 Note: this blog post was written based on experimentation with Visual Studio 2019 preview 2.2. It’s possible that some of the details here will change over time. C# 8 is nearly here. At least, it’s close enough to being “here” that there are preview builds of Visual Studio 2019 available that support it. Unsurprisingly, I’ve been playing with it quite a bit. In particular, I’ve been porting the Noda Time source code to use the new C# 8 features. The master branch of the repo is currently the code for Noda Time 3.0, which won’t be shipping (as a GA release) until after C# 8 and Visual Studio 2019 have fully shipped, so it’s a safe environment in which to experiment. While it’s possible that I’ll use other C# 8 features in the future, the two C# 8 features that impact Noda Time most are nullable reference types and switch expressions. Both sets of changes are merged into master now, but the pull requests are still available so you can see just the changes: The switch expressions PR is much simpler than the nullable reference types one. It’s entirely an implementation detail… although admittedly one that confused docfx, requiring a few of those switch expressions to be backed out or moved in a later PR. Nullable reference types are a much, much bigger deal. They affect the public API, so they need to be treated much more carefully, and the changes end up being spread far wide throughout the codebase. That’s why the switch expression PR is a single commit, whereas nullable reference types is split into 14 commits – mostly broken up by project. Reviewing the public API of a nullable reference type change So I’m now in a situation where I’ve got nullable reference type support in Noda Time. Anyone consuming the 3.0 build (and there’s an alpha available for experimentation purposes) from C# 8 will benefit from the extra information that can now be expressed about parameters and return values. Great! But how can I be confident in the changes to the API? My process for making the change in the first place was to enable nullable reference types and see what warnings were created. That’s a great starting point, but it doesn’t necessarily catch everything. In particular, although I started with the main project (the one that creates NodaTime.dll), I found that I needed to make more changes later on, as I modified other projects. Just because your code compiles without any warnings with nullable reference types enabled doesn’t mean it’s “correct” in terms of the API you want to expose. For example, consider this method: public static string Identity(string input) => input; That’s entirely valid C# 7 code, and doesn’t require any changes to compile, warning-free, in C# 8 with nullable reference types enabled. But it may not be what you actually want to expose. I’d argue that it should look like one of these: // Allowing null input, and producing nullable output public static string? Identity(string? input) => input; // Preventing null input, and producing non-nullable output public static string Identity(string input) { // Convenience method for nullity checking. Preconditions.CheckNotNull(input, nameof(input)); return input; } If you were completely diligent when writing tests for the code before C# 8, it should be obvious which is required – because you’d presumably have something like: [Test] public void Identity_AcceptsNull() { Assert.IsNull(Identity(null)); } That test would have produced a warning in C# 8, and would have suggested that the null-permissive API is the one you wanted. But maybe you forgot to write that test. Maybe the test you would have written was one that would have shown up a need to put that precondition in. It’s entirely possible that you write much more comprehensive tests than I do, but I suspect most of us have some code that isn’t explicitly tested in terms of its null handling. The important part take-away here is that even code that hasn’t changed in appearance can change meaning in C# 8… so you really need to review any public APIs. How do you do that? Well, you could review the entire public API surface you’re exposing, of course. For many libraries that would be the simplest approach to take, as a “belt and braces” attitude to review. For Noda Time that’s less appropriate, as so much of the API only deals in value types. While a full API review would no doubt be useful in itself, I just don’t have the time to do it right now. Instead, what I want to review is any API element which is impacted by the C# 8 change – even if the code itself hasn’t changed. Fortunately, that’s relatively easy to do. Enter NullableAttribute The C# 8 compiler applies a new attribute to every API element which is affected by nullability. As an example of what I mean by this, consider the following code which uses the #nullable directive to control the nullable context of the code. public class Test { #nullable enable public void X(string input) {} public void Y(string? input) {} #nullable restore #nullable disable public void Z(string input) {} #nullable restore } The C# 8 compiler creates an internal NullableAttribute class within the assembly (which I assume it wouldn’t if we were targeting a framework that already includes such an attribute) and applies the attribute anywhere it’s relevant. So the above code compiles to the same IL as this: using System.Runtime.CompilerServices; public class Test { public void X([Nullable((byte) 1)] string input) {} public void Y([Nullable((byte) 2)] string input) {} public void Z(string input) {}} } Note how the parameter for Z doesn’t have the attribute at all, because that code is still oblivious to nullable reference types. But both X and Y have the attribute applied to their parameters – just with different arguments to describe the nullability. 1 is used for not-null; 2 is used for nullable. That makes it relatively easy to write a tool to display every part of a library’s API that relates to nullable reference types – just find all the members that refer to NullableAttribute, and filter down to public and protected members. It’s slightly annoying that NullableAttribute doesn’t have any properties; code to analyze an assembly needs to find the appropriate CustomAttributeData and examine the constructor arguments. It’s awkward, but not insurmountable. I’ve started doing exactly that in the Noda Time repository, and got it to the state where it’s fine for Noda Time’s API review. It’s a bit quick and dirty at the moment. It doesn’t show protected members, or setter-only properties, or handle arrays, and there are probably other things I’ve forgotten about. I intend to improve the code over time and probably move it to my Demo Code repository at some point, but I didn’t want to wait until then to write about NullableAttribute. But hey, I’m all done, right? I’ve just explained how NullableAttribute works, so what’s left? Well, it’s not quite as simple as I’ve shown so far. NullableAttribute in more complex scenarios It would be oh-so-simple if each parameter or return type could just be nullable or non-nullable. But life gets more complicated than that, with both generics and arrays. Consider a method called GetNames() returning a list of strings. All of these are valid: // Return value is non-null, and elements aren't null List<string> GetNames() // Return value is non-null, but elements may be null List<string?> GetNames() // Return value may be null, but elements aren't null List<string>? GetNames() // Return value may be null, and elements may be null List<string?>? GetNames() So how are those represented in IL? Well, NullableAttribute has one constructor accepting a single byte for simple situations, but another one accepting byte[] for more complex ones like this. Of course, List<string> is still relatively simple – it’s just a single top-level generic type with a single type argument. For a more complex example, imagine Dictionary<List<string?>, string[]?> . (A non-nullable reference to a dictionary where each key is a not-null list of nullable strings, and each value is a possibly-null array of non-nullable elements. Ouch.) The layout of NullableAttribute in these cases can be thought of in terms of a pre-order traversal of a tree representing the type, where generic type arguments and array element types are leaves in the tree. The above example could be thought of as this tree: Dictionary<,> (not null) / \ / \ List<> (not null) Array (nullable) | | | | string (nullable) string (not null) The pre-order traversal of that tree gives us these values: - Not null (dictionary) - Not null (list) - Nullable (string) - Nullable (array) - Not null (string) So a parameter declared with that type would be decorated like this: [Nullable(new byte[] { 1, 1, 2, 2, 1 })] But wait, there’s more! NullableAttribute in simultaneously-complex-and-simple scenarios The compiler has one more trick up its sleeve. When all the elements in the tree are “not null” or all elements in the tree are “nullable”, it simply uses the constructor with the single-byte parameter instead. So Dictionary<List<string>, string[]> would be decorated with Nullable[(byte) 1] and Dictionary<List<string?>?, string?[]?>? would be decorated with Nullable[(byte) 2]. (Admittedly, Dictionary<,> doesn’t permit null keys anyway, but that’s an implementation detail.) Conclusion The C# 8 feature of nullable reference types is a really complicated one. I don’t think we’ve seen anything like this since async/await. This post has just touched on one interesting implementation detail. I’m sure there’ll be more posts on nullability over the next few months…
https://codeblog.jonskeet.uk/2019/02/
CC-MAIN-2020-05
refinedweb
1,667
60.24
Lists are an alternative way of storing information in respect to arrays. The List interface has many benefits which can make it preferable to arrays. Here are some of its features: Lists do not have a fixed size, and shrink and grow to meet the number of elements The last element’s index will always be n-1 Lists can easily switch, add, and remove elements List is an interface, and therefore needs to be initialized as one of List’s subclasses. ArrayList is a commonly used subclass of the List. Here is the implementation syntax: List<E> NAME = new ArrayList<E>() Besides being an interface, List also has an unusual attribute known as a generic, Here is an example of ArrayList being used: import java.util.List; import java.util.ArrayList; // Put import statements at top of file List<Integer> a = new ArrayList<Integer>(); List<Integer> b = new ArrayList<>(); Both lists, a and b, store values of type Integer. When initializing the two, repeating the contents disclosed in the second generic is not required. Here is an example of a method which uses List and ArrayList: public void Example() { List<Integer> l = new ArrayList<>(); l.add(3); // l = {3} l.add(2); // l = {3, 2} l.get(0); // Returns 3, the value at index 0 l.get(l.indexOf(3)); // Returns 3 l.add(1, 1); // l = {3, 1, 2} Adds the value 1 at index 1 l.set(l.indexOf(2), 4); // l = {3, 1, 4} l.remove(l.indexOf(3)); // l = {1, 4} Remove the value 3 from the List l.remove(l.size() - 1); // l = {1} Remove last element from the List l.remove(0); // l = {} l.isEmpty(); // Returns true } A major difference between arrays and Lists is that ArrayLists and Lists can only store classes and not primitives. In order to store primitives, wrapper classes are used to convert the primitives into objects in the process of boxing/wrapping. Some common wrapper classes are Integer and Double which both act as ints and doubles within an object form. Luckily, with ArrayLists and many other similar classes, these primitives are wrapped to the correct type automatically in a process called auto-boxing when stored, and unboxing when being retrieved. However, it is important to understand manual boxing. Here are some examples: int a = 3 Integer i = new Integer(a); // boxing Integer i2 = a; // also valid i.intValue(); // unboxing double b = 3.0; Double d = new Double(b); // boxing Double d2 = b; // also valid d.doubleValue(); // unboxing // makes object i with int value 3 // returns int value 3 Overall, Lists are an essential part of programming that you must know well if you want to pursue a career in computer science. Lists, along with other classes/interfaces such as sets, maps, stacks, and queues, are important data structures that make up Java’s Collection interface. Once you finish the AP Computer Science course material, it is recommended that you explore these data structures next. Lesson Quiz 1. Which is an incorrect creation of an ArrayList object? 2. What is the purpose of a wrapper class? Written by James Richardson Notice any mistakes? Please email us at [email protected] so that we can fix any inaccuracies.
https://teamscode.com/learn/ap-computer-science/arraylists/
CC-MAIN-2019-04
refinedweb
538
56.86
This seems to be a fairly common case: you want to run a job from the blcli and wait for it to finish and then do something with the result. You can use something like Job.executeJobAndWait but this can be problematic if the job runs for long enough that the blcli to appserver connection will be deemed idle. The blcli sits there with no output so you aren't getting any indication anything is going on (hence the idle timeout problem), if that blcli to appserver connection is still alive or anything else. Instead maybe you can start the job and get something back you can use to check the run status and then loop over running that command until you see the job run has finished. The first thing to do is to find the right blcli command to run. There are a couple that look promising: Job.executeJobAndReturnScheduleID and Job.executeJobAndWaitForRunID. We need a command that will output something we can use to get the running status and I see a couple commands in the JobRun namespace that look like they will do that: JobRun.getJobRunStatusByScheduleId and JobRun.getJobRunIsRunningByRunKey (One thing that's a little confusing that well see later is the executeJobAndWaitForRunID actually returns the jobRunKey, not the jobRunId. And that's ok because we can use that with the getJobRunIsRunningByRunKey command) There are some other commands that let you 'execute against' a set of targets for that run and they also return the scheduleId or jobRunKey. As long as the execution command returns the scheduleId or jobRunKey or jobRunId it looks like we will be in good shape. As you can tell there is no right blcli command. We've found a couple different sets of commands that look like they will accomplish our goal of starting a job run and returning something that we can use to check status with another command. We can use either command set. The schedule one looks nice to me because it gives more information about status than running or not. Let's look at that one first. We need to get the jobKey, run it and get the scheduleId and then do a loop until the job returns a 'COMPLETE' status. That's pretty straightforward: blcli_execute NSHScriptJob getDBKeyByGroupAndName "/Workspace" "myJob" blcli_storeenv JOB_KEY blcli_execute Job executeJobAndReturnScheduleID ${JOB_KEY} blcli_storeenv JOB_SCHEDULE_ID JOB_STATUS="INCOMPLETE" while [[ -n "${${JOB_STATUS//COMPLETE}//$'\n'}" ]] do sleep 10 blcli_execute JobRun getJobRunStatusByScheduleId ${JOB_SCHEDULE_ID} blcli_storeenv JOB_STATUS echo "Schedule ${JOB_SCHEDULE_ID} status is: ${JOB_STATUS//$'\n'/ }" done There are some zsh-isms going on there I can explain: ${JOB_STATUS//$'\n'/ } removes the trailing new line from the return of getJobRunStatusByScheduleId. ${${JOB_STATUS//COMPLETE}//$'\n'} does the same thing and also removes the string COMPLETE from the output and then the while test looks to see if the variable is non-zero in size. When the job completes then it will show a status of COMPLETE only and by removing that string the variable will be zero and the loop will break. You could put a counter in there as well if you want to prevent it from getting stuck in the loop, so something like: blcli_execute NSHScriptJob getDBKeyByGroupAndName "/Workspace" "myJob" blcli_storeenv JOB_KEY blcli_execute Job executeJobAndReturnScheduleID ${JOB_KEY} blcli_storeenv JOB_SCHEDULE_ID JOB_STATUS="INCOMPLETE" COUNT=1 while [[ -n "${${JOB_STATUS//COMPLETE}//$'\n'}" ]] && [[ ${COUNT} -le 100 ]] do sleep 10 blcli_execute JobRun getJobRunStatusByScheduleId ${JOB_SCHEDULE_ID} blcli_storeenv JOB_STATUS echo "Schedule ${JOB_SCHEDULE_ID} status is: ${JOB_STATUS//$'\n'/ }" let COUNT+=1 done Now I might want to check if the job run had errors or dump the job run log items out or use one of the Utility commands to export a job result or log to a file. With the scheduleId approach I'll need to use a couple unreleased blcli commands to convert the schedule id into something i can use. I can run something like this to convert the scheduled into a job run id or job run key: blcli_execute JobRun findByScheduleId ${JOB_SCHEDULE_ID} blcli_execute JobRun getJobRunId blcli_storeenv JOB_RUN_ID or blcli_execute JobRun findByScheduleId ${JOB_SCHEDULE_ID} blcli_execute JobRun getJobRunKey blcli_storeenv JOB_RUN_KEY Now lets look at the getJobRunIsRunningByRunKey version. It's very similar: blcli_execute Job executeJobAndWaitForRunID ${JOB_KEY} blcli_storeenv JOB_RUN_KEY JOB_IS_RUNNING="true" while [[ "${JOB_IS_RUNNING}" = "true" ]] do sleep 10 blcli_execute JobRun getJobRunIsRunningByRunKey ${JOB_RUN_KEY} blcli_storeenv JOB_IS_RUNNING echo "${JOB_IS_RUNNING}" done In a normally functioning environment these will more or less work the same. A problem with the getJobRunIsRunning approach is that if the job doesn't start running, for example you have a job routing rule sending the job to a downed appserver, your appservers are maxed out on WorkItemThreads, or you've reached the MaxJobs threshold, then your executeJobAndWaitForRunID is going to sit waiting for the job to start running and you are back to the possible idle connection timeout happening. There are other commands that look interesting in the JobRun namespace (released and unreleased). The JobRun.showRunningJobs looks neat - it outputs a nice table - but it only shows the job name which is not enough to know if your job is actually running since you can have duplicate job names in different workspace folders. The above example is pretty simple. This can be extended to manage multiple job runs - for example if you wanted to kick off a number of jobs at once and then wait until they were all complete, check their status, and then move on to some other action. One approach to that would be some arrays to hold the job information an another loop around what I've done above. That may be a future post.
https://communities.bmc.com/community/bmcdn/bmc_service_automation/server_configuration_automation_bladelogic/blog
CC-MAIN-2017-43
refinedweb
911
52.94
Hello Community! I am back with another interesting task for you. Here is the task: create a Groovy script that will clear the cookies that are sent with the request. Difficulty: Assume that you have a TestCase in ReadyAPI with several test steps and for some of them, you need to maintain the HTTP session, i.e. to send the Cookie HTTP header. You can achieve this task by using the Maintain HTTP session option of the TestCase. But, for other requests in your Test Case, cookies cause failure. In this case, you may need a script that will remove the Cookie header for these test steps. For example, you have the following configuration of your ReadyAPI test: Tip: Working With Headers Good luck! Solved! Go to Solution. Task: create a Groovy script that will clear the cookies that are sent with the request. This is a solution created for [TechCorner Challenge #8] Thank you, @sonya_m. Those articles provided context that I did not have / understand, and from there I was able to make this work. This script needs to be set as an event script for "RequestFilter.filterRequest" and the target needs to be set as the steps that need to be filtered. Per the original request, an event would need to be created whose target is one of the test steps that needs the cookies cleared. Once that is in place, this script will clear the cookies. import org.apache.http.protocol.HttpContext import com.eviware.soapui.model.iface.SubmitContext import org.apache.http.impl.client.BasicCookieStore import org.apache.http.client.protocol.HttpClientContext HttpContext httpContext = context.getProperty(SubmitContext.HTTP_STATE_PROPERTY); BasicCookieStore cookieStore = httpContext.getAttribute(HttpClientContext.COOKIE_STORE) cookieStore.clear(); Task: create a Groovy script that will clear the cookies that are sent with the request. This is a solution created for [TechCorner Challenge #8] @sonya_m Thanks for the feedback. Completely ignored about automatic Cookie's earlier. Here is updated one which covers both automatic and manual set cookies This the script for SubmitListener.beforeSubmit To make the script more dynamic, using project level custom properties to avoid hard coded header and test step names in the script. i.e., user can add the comma separated values to each custom property. For example, 1. add REMOVE_COOKIE_FOR_STEPS property and values as requested, Login Server 2, Get Info Server 2 2. add HEADERS_TO_REMOVE property and value as requested COOKIE UPDATE: made few changes /() //Removes manual COOKIE def eHeaders = submit.request.requestHeaders getProjectProperty('HEADERS_TO_REMOVE')?.split(',')*.trim().each { eHeaders.remove(it) } submit.request.requestHeaders = eHeaders } NOTE: the same is having issues with Pro for SubmitListener.beforeSubmit event. Works in free edition (tested in 5.4v)with soapUIExtensions Script for RequestFilter.filterRequest /() def eHeaders = request.requestHeaders getProjectProperty('HEADERS_TO_REMOVE')?.split(',')*.trim().each { eHeaders.remove(it) } request.requestHeaders = eHeaders } @HimanshuTayal the sample you are looking for can be found here under Working with headers sample🙂 Edit: I've also attached a sample project to this comment that we created. You can download it as an alternative to the above sample. There’s a virtual service in the project, with the following behavior: 1. If the login request has no cookies – cookies with the login specified in the query returns. 2. If the login request has cookies, code 409 and the message to log out returns. 3. If any other request has no cookies, code 401 and the message to log in returns. 4. If any other request has cookies – everything’s OK. So, the first three steps are successful in the project, but step 4 gets code 409 from the server. To avoid this, we need to clear cookies at step 4. This was set up to run as a script test step before "Login User 2". It could be set to run as a project level event, but wasn't sure what the intent was. // Calling the request headers. def request = context.testCase.testSteps["Login User 2"].testRequest def headers = request.requestHeaders // Get the iterateator of the map Iterator<Map.Entry<String,Object>> iterate = headers.entrySet().iterator(); // Loop through the keys while(iterate.hasNext()){ Map.Entry<String,Object> entry = iterate.next(); // Test each Key string to see if it matches the "COOKIE" string if (entry.key.toUpperCase().contains("COOKIE")) { // Remove key since it matched iterate.remove(); } } // Set request headers back to the parsed-down list. request.requestHeaders = headers Thanks for sharing the info, below code will remove cookies from the request if it found in more than 1 test step. Below is the code: //Getting Test Step Count testStepCount = testRunner.testCase.getTestStepCount() int flag = 0 //Iterating over all Test Steps excepting 1st(which is the step in which this code is written) and skipping last step which is data sink for(int i = 1 ; i < testStepCount-1 ; i++){ //Getting Request from each Test Step def request = testRunner.testCase.getTestStepAt(i).testRequest //Getting Header from each Test Step def headers = request.requestHeaders //iterating on all Headers present in request Iterator<Map.Entry<String,Object>> header__iterator = headers.entrySet().iterator(); while(header__iterator.hasNext()){ Map.Entry<String,Object> header__map = header__iterator.next(); //checking whether "COOKIE" Text is available in Request Header or not if (header__map.key.toUpperCase().contains("COOKIE")) flag++ //Removing Header for 2nd Occurance Onwards. if(flag > 1) header__iterator.remove(); } request.requestHeaders = headers } Let me know if it meets the requirement or not. 🙂 Click "Accept as Solution" if my answer has helped, Remember to give "Kudos" 🙂 ↓↓↓↓↓ Thank you for creating the scripts! Our team took a look at them and here's a couple of ideas on how they can be improved. @msiadak The script is run in Groovy TestStep that is created before the Login User 2 Step. Before Login TestStep is launched, the list of headers to be checked does not exist. The script should be launched with the RequestFilter.filterRequest event handler. Also, the script works only for Headers that were added manually for the request, while it should work for Headers that ReadyAPI generates automatically. Hint: the RequestFilter.filterRequest event should be used🙂 @HimanshuTayal We faced an error "No such property: testRequest for class: com.eviware.soapui.impl.wsdl.teststeps.DebuggableWsdlGroovyScriptTestStep" on this line - def request = testRunner.testCase.getTestStepAt(i).testRequest This is something you might want to revise🙂 As for the method of getting Headers, I think it is similar to what msiadak is doing, so you can see my comment just above for that. Thanks for the insight @sonya_m. I was certain I was coming at this from the wrong angle and was expecting it needed to be an event. One of the problems I had though was I'm not actually sure what a "Cookie Header" looks like (short of the manual ones I used) to remove it. It makes sense to use Events, so that it can handle in any test step / test case / test suite, for this requirement instead of having extra groovy step. SubmitListener.beforeSubmit event can be used as well to achieve the same. But user should pay attention to achieve this functionality by having conditional removal of required header for the required step.
https://community.smartbear.com/t5/API-Functional-Security-Testing/TechCorner-Challenge-8-How-to-Clear-Cookies-in-API-Request/td-p/204911
CC-MAIN-2021-25
refinedweb
1,175
58.79
This document describes how a good Twisted application is structured. It should be useful for beginning Twisted developers who want to structure their code in a clean, maintainable way that reflects current best practices. Readers will want to be familiar with writing servers and clients using Twisted. TwistedQuotes is a very simple plugin which is a great demonstration of Twisted’s power. It will export a small kernel of functionality – Quote of the Day – which can be accessed through every interface that Twisted supports: web pages, e-mail, instant messaging, a specific Quote of the Day protocol, and more. See the description of setting up the TwistedQuotes example . from random import choice from zope.interface import implements from TwistedQuotes import quoteproto class StaticQuoter: """ Return a static quote. """ implements(quoteproto.IQuoter) def __init__(self, quote): self.quote = quote def getQuote(self): return self.quote class FortuneQuoter: """ Load quotes from a fortune-format file. """ implements(quoteproto.IQuoter) def __init__(self, filenames): self.filenames = filenames def getQuote(self): quoteFile = file(choice(self.filenames)) quotes = quoteFile.read().split('\n%\n') quoteFile.close() return choice(quotes) This code listing shows us what the Twisted Quotes system is all about. The code doesn’t have any way of talking to the outside world, but it provides a library which is a clear and uncluttered abstraction: “give me the quote of the day” . Note that this module does not import any Twisted functionality at all! The reason for doing things this way is integration. If your “business objects” are not stuck to your user interface, you can make a module that can integrate those objects with different protocols, GUIs, and file formats. Having such classes provides a way to decouple your components from each other, by allowing each to be used independently. In this manner, Twisted itself has minimal impact on the logic of your program. Although the Twisted “dot products” are highly interoperable, they also follow this approach. You can use them independently because they are not stuck to each other. They communicate in well-defined ways, and only when that communication provides some additional feature. Thus, you can use twisted.web with twisted.enterprise , but neither requires the other, because they are integrated around the concept of Deferreds . Your Twisted applications should follow this style as much as possible. Have (at least) one module which implements your specific functionality, independent of any user-interface code. Next, we’re going to need to associate this abstract logic with some way of displaying it to the user. We’ll do this by writing a Twisted server protocol, which will respond to the clients that connect to it by sending a quote to the client and then closing the connection. Note: don’t get too focused on the details of this – different ways to interface with the user are 90% of what Twisted does, and there are lots of documents describing the different ways to do it. from zope.interface import Interface from twisted.internet.protocol import Factory, Protocol class IQuoter(Interface): """ An object that returns quotes. """ def getQuote(): """ Return a quote. """ class QOTD(Protocol): def connectionMade(self): self.transport.write(self.factory.quoter.getQuote()+'\r\n') self.transport.loseConnection() class QOTDFactory(Factory): """ A factory for the Quote of the Day protocol. @type quoter: L{IQuoter} provider @ivar quoter: An object which provides L{IQuoter} which will be used by the L{QOTD} protocol to get quotes to emit. """ protocol = QOTD def __init__(self, quoter): self.quoter = quoter This is a very straightforward Protocol implementation, and the pattern described above is repeated here. The Protocol contains essentially no logic of its own, just enough to tie together an object which can generate quotes (a Quoter ) and an object which can relay bytes to a TCP connection (a Transport ). When a client connects to this server, a QOTD instance is created, and its connectionMade method is called. The QOTDFactory ‘s role is to specify to the Twisted framework how to create a Protocol instance that will handle the connection. Twisted will not instantiate a QOTDFactory ; you will do that yourself later, in a twistd plug-in. Note: you can read more specifics of Protocol and Factory in the Writing Servers HOWTO. Once we have an abstraction – a Quoter – and we have a mechanism to connect it to the network – the QOTD protocol – the next thing to do is to put the last link in the chain of functionality between abstraction and user. This last link will allow a user to choose a Quoter and configure the protocol. Writing this configuration is covered in the Application HOWTO .
http://twistedmatrix.com/documents/current/core/howto/design.html
CC-MAIN-2016-36
refinedweb
767
56.45
Should we include the html tag every time in the code? Should we include the html tag? Short answer, yes, always. Browsers are capable of parsing more than just HTML, and depend upon well-formed documents to deliver signals that give direction. <!DOCTYPE html> The above is the Document Type Declaration the browser (user agents) first sees when the page comes down from the server. In this instance it is the standard declaration to specify HTML5 as the namespace (specifications) the browser should follow. There is a lot going on behind the scenes when the DOM is assembled. The DOM is a construct of element nodes, all of which descend from a root element. The html in the above doctype declaration tells the browser that the root element of the document is <html></html>. In absence of that element, the browser will be left to its own devices (fallback actions). We run a risk of things going awry in some browsers, depending how well they are written. Best advice… Never take risks. The one tool to keep close at hand when composing a document is the validator… Familiarize yourself with this tool and how it works. There is a direct entry tab that allows us to paste in a document and validate it. Check ‘show code’ and ‘verbose mode’ checkboxes for the most information and a line numbered source listing in the report. Never give up on an invalid document. Fix it and take confidence in the fact that most if not all user agents will be able to correctly parse and/or render it. I think yes because the server won’t know if you’re talking about a different language like Python or CSS.
https://discuss.codecademy.com/t/should-we-include-the-html-tag/319110
CC-MAIN-2018-26
refinedweb
286
72.56
TOTD #46: Facelets with Java Server Faces 1.2 By arungupta on Sep 21, 2008. - Download Facelets from here (or specifically 1.1.14). Facelets Developer Documentation is a comprehensive source of information. - Add "jsf-facelets.jar" from the expanded directory to Project/Libraries as shown: - Change the JSF view documents to ".xhtml" by adding the a new context parameter in "web.xml" as: The updated "web.xml" looks like: - Specify Facelets as the ViewHandler of JSF application by adding the following fragment to "faces-config.xml": The updated document looks like: - Create three new XHTML pages by right-clicking on the project, selecting "New", "XHTML" and name them as "template", "welcome" and "result". This creates "template.xhtml", "welcome.xhtml" and "result.xhtml" in "Web Pages" folder. - Replace the generated code in "template.xtml" with the code given here. Change the <title> text "Facelets: What's your favorite City ?". - Replace the generated code in "welcome.xhtml" with the code given here. Refactor "welcomeJSF.jsp" such that H1 tag and the associated text goes in <ui:define and rest of the content goes in <ui:define. Also change the value of "template" attribute of <ui:composition> by removing "/". The updated page looks like: - Replace the generated code in "result.xhtml" with the code given here. Refactor "result.jsp" such that H1 tag and the associated text goes in <ui:define and rest of the content goes in <ui:define. Also add a namespace declaration for "". Optionally change the <h:form> associated with the command button to: The updated page looks like: - Add couple of more navigation rules to "faces-config.xml": Now this application is using Facelets as the view technology instead of the in-built view definition framework. Please leave suggestions on other TOTD (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here. Technorati: totd javaserverfaces facelets netbeans glassfish 看不懂,留言一下 Posted by 一卡多号 on September 23, 2008 at 04:54 AM PDT # Posted by Arun Gupta's Blog on October 13, 2008 at 10:55 PM PDT #
https://blogs.oracle.com/arungupta/entry/totd_46_facelets_with_java
CC-MAIN-2015-18
refinedweb
348
60.11
I always get this error: "reference to Date is ambiguous, both class java.sql.Date in java.sql and class java.util.Date in java.util match" Created May 7, 2012 Dermot Hennessy It is generally bad practice, from the point of view of code maintainability, to import, e.g. java.util.*. What you should import is only those classes from the package which you are using in your class. That's point 1. If you are using both java.util.Date and java.sql.Date, the easiest thing to do is to import one of them and to explicitly refer to the full package structure of the other every time you use it, i.e. import java.util.Date; ..... ... Date d = new Date(); // returns a java.util.Date java.sql.Date d = new java.sql.Date(); speaks for itselfDermot
http://www.jguru.com/faq/view.jsp?EID=479352
CC-MAIN-2019-43
refinedweb
140
70.7
GHC/Error messages From HaskellWiki Latest revision as of 13:49, 3 December 2009 GHC error messages and their meaning. [edit] 1 "`foo' is not applied to enough type arguments" TODO Example: TODO [edit] 2 "`foo' is not a (visible) method of class `Bar'" This error message occurs when one tries to instantiate a class, but did not import the functions one tries to implement. Example: import Prelude hiding ((==)) data Foo = Foo instance Eq Foo where (==) a b = True [edit] 3 "Cannot match a monotype with `Foo'" See this Haskell-Cafe thread. [edit] 4 "Parse error in pattern" TODO Example: TODO
https://wiki.haskell.org/index.php?title=GHC/Error_messages&diff=32096&oldid=32095
CC-MAIN-2015-35
refinedweb
101
55.07
The test program, compList, listed in Source Code shows the events peers pass along to the Java run-time system. You can then examine the output to see how the run-time system reacts to the different events. When you run compList, the screen looks something like the one in Figure C.1. Java does not have an automated record and playback feature, so the work is left for you to do. The program displays 10 components: Label, Button, Scrollbar, List, multiselection List, Choice, Checkbox, TextField, TextArea, and Canvas (the black box in Figure C.1). Basically, you must manually trigger every event for every component. For every component on the screen (except Done), do the following: Move the cursor over the object, press the mouse button and release, and drag the cursor over the object. Press and release an alphabetic key, press and release the Home and End keys, arrow keys, and function keys. Do this for every component, even for components like Button and Label that have no logical reason for using keyboard events. Select and deselect a few choices; double-click and single-click selections. Click on each arrow, drag the slider, and click in the paging area (the space between each arrow and the slider). Press Enter. Press the Done button, and analyze the results. Run the program again (without exiting), and check the results again. Try to trigger any specific events that you expect but didn't appear in the output from the first pass. Generating some events requires a little work. For example, on a Macintosh, in order to get the MOUSE_UP and MOUSE_DRAG events, you must do a MOUSE_DOWN off the component; otherwise, the MOUSE_DOWN/MOUSE_UP combination turns into an ACTION_EVENT, if that component can generate it. NOTE: The SunTest business unit of Sun Microsystems has an early version of a record and playback Java GUI testing tool called JavaSTAR. Information about it is available at. In the future, it may be possible to use JavaSTAR to help automate this process. The following is the source code for the test program: import java.awt.*; import java.util.*; import java.applet.*; public class compList extends Applet { Button done = new Button ("Done"); Hashtable values = new Hashtable(); public void init () { add (new Label ("Label")); add (new Button ("Button")); add (new Scrollbar (Scrollbar.HORIZONTAL, 50, 25, 0, 255)); List l1 = new List (3, false); l1.addItem ("List 1"); l1.addItem ("List 2"); l1.addItem ("List 3"); l1.addItem ("List 4"); l1.addItem ("List 5"); add (l1); List l2 = new List (3, true); l2.addItem ("Multi 1"); l2.addItem ("Multi 2"); l2.addItem ("Multi 3"); l2.addItem ("Multi 4"); l2.addItem ("Multi 5"); add (l2); Choice c = new Choice (); c.addItem ("Choice 1"); c.addItem ("Choice 2"); c.addItem ("Choice 3"); c.addItem ("Choice 4"); c.addItem ("Choice 5"); add (c); add (new Checkbox ("Checkbox")); add (new TextField ("TextField", 10)); add (new TextArea ("TextArea", 3, 20)); Canvas c1 = new Canvas (); c1.resize (50, 50); c1.setBackground (Color.blue); add (c1); add (done); } public boolean handleEvent (Event e) { if (e.target == done) { if (e.id == Event.ACTION_EVENT) { System.out.println (System.getProperty ("java.vendor")); System.out.println (System.getProperty ("java.version")); System.out.println (System.getProperty ("java.class.version")); System.out.println (System.getProperty ("os.name")); System.out.println (values); } }else { Vector v; Class c = e.target.getClass(); v = (Vector)values.get(c); if (v == null) v = new Vector(); Integer i = new Integer (e.id); if (!v.contains (i)) { v.addElement (i); values.put (c, v); } } return super.handleEvent (e); } } An HTML document to display the applet in a browser should look something like the following: <APPLET code="compList.class" height=300 width=300> </APPLET> The results of the program are sent to standard output when you click on the Done button. What happens to the output depends on the platform. It may be sent to a log file (Internet Explorer), the Java Console (Netscape Navigator), or the command line (appletviewer). The following is sample output from Internet Explorer 3.0 on a Windows 95 platform. Microsoft Corp. 1.0.2 45.3 Windows 95 {class java.awt.Canvas=[504, 503, 1004, 501, 506, 502, 505, 1005, 401, 402, 403, 404], class java.awt.Choice=[1001, 401, 402, 403, 404], class java.awt.Checkbox=[1001, 402, 401, 403, 404], class compList=[504, 503, 501, 506, 502, 505, 1004, 1005], class java. awt.TextField=[401, 402, 403, 404], class java.awt.List=[701, 1001, 401, 402, 403, 404, 702], class java.awt.Scrollbar=[602, 605, 604, 603, 601], class java.awt.TextArea=[401, 402, 403, 404], class java.awt.Button=[1001, 401, 402, 403, 404]} In addition to some identifying information about the run-time environment, the program displays a list of classes and the events they passed. The integers represent the event constants of the Event class; for example, Canvas received events with identifiers 504, 503, etc. The events are not sorted, so you can see the order in which they were sent. Unfortunately, you have to look up these constants in the source code yourself. The class listed as compList is the applet itself and shows you the events that the Applet class receives.
https://docstore.mik.ua/orelly/java/awt/appc_02.htm
CC-MAIN-2019-18
refinedweb
866
67.86
Microsoft focuses on caching and security with the release of the Enterprise Library for .NET Framework 2.0. Redmond has announced the availability of Enterprise Library for .NET Framework 2.0 as a free download from Microsoft's MSDN developer site. The update provides a bevy of Application Blocks to simplify common programming tasks under .NET. The Enterprise Library is designed to help developers follow Microsoft guidelines and streamline code tasks like caching, cryptography, data access and logging from one project to the next. There are a few changes to be found in the new library, the most significant of which is a move to employ the .NET Framework 2.0 classes and namespaces. While Microsoft assures that upgrading to the new library will demand minimal code changes, the company is quick to warn developers that swapping over won't be flawless. "Most of the changes to the application blocks are internal and will not affect your client code. However, there are changes that require you to modify your existing applications, configuration data, and custom providers," Microsoft said in a statement. Details of the changes to be found in this release are available at MSDN. Though no further updates are currently planned, Redmond will be listening to customer feedback and the library will be modified to include support for WinFX and Language Integrated Query (LINQ) when they're released.
https://www.techrepublic.com/article/enterprise-library-updated-to-net-framework-20/
CC-MAIN-2022-05
refinedweb
229
56.35
Procedural Level Generation in Games using a Cellular Automaton: Part 1 A tutorial on procedural level generation using a cellular automaton to create cave-like levels in games. Version - Other, Other, Other In this tutorial series, you’ll use a discreet model called cellular automaton to generate procedural caves for your games. You’ll also learn how to overcome some of the obstacles this model imposes on level generation, like removing unreachable areas of the level and ensuring the exit can always be reached by the player. If you have read the previous tutorial on procedural level generation on this site, you already know the basics of procedural level generation using the Drunkard Walk algorithm. The Drunkard Walk is a reliable, battle-tested algorithm that generates levels, but it’s just one of many options out there. This tutorial series uses Sprite Kit, a framework introduced with iOS 7. You will also need Xcode 5. If you are not already familiar with Sprite Kit, I recommend you read the Sprite Kit Tutorial for Beginners on this site. For readers who are using a different game framework, fear not. You can easily use the ideas from this tutorial in the game framework of your choice. By now, you’re probably asking yourself, “What on Earth is a cellular automaton, anyway?” Time to indulge in a little theory. Cellular Automata Explained A cellular automaton (pl. cellular automata) is a discreet computational model first discovered in the 1940s. Experts in mathematics, physics and biology have studied it extensively, and while it has produced mountains of complex mathematics, the basic concept is really simple. At its core, a cellular automaton consists of an n-dimensional grid with a number of cells in a finite state and a set of transition rules. Each cell of the grid can be in one of several states; in the simplest case, cells can be on or off. The initial distribution of cell states constitutes the seed of the automaton in an initial state (t0). A new generation is created (advancing t by 1) by applying the transition rules to all cells simultaneously, thereby putting every cell in a new state in terms of its current state and the current states of all the cells in its neighborhood. The neighborhood defines which cells around a given cell affect its future state. For a two-dimensional automata, the two most common types of neighborhoods are Moore neighborhoods and von Neumann neighborhoods, as illustrated below. A Moore neighborhood (a) is a square: a Moore neighborhood of size 1 consists of the eight cells surrounding c, including those surrounding it diagonally. A von Neumann neighborhood (b) is like a cross centered on P: above, below, left and right. A well-known example of cellular automata to many game developers is Conway’s Game of Life, which is a two-dimensional grid of cells with each cell in a state of either dead or alive. Four transition rules govern the grid: -. You’ll be implementing a variation of Game of Life to generate cave-like levels in this tutorial. But enough theory. Time to code. Getting Started To get started, download the starter project. Unzip it and double-click CellularAutomataStarter\CellularAutomataStarter.xcodeproj to open it in Xcode. Build and run using the iPhone Retina (3.5-inch) scheme. You’ll see the following on screen: That is one sad knight. No dragons, no damsels, no battles and no glory in sight! Good thing you’re here to change that. How about you create a treasure-laden cave labyrinth for the knight to explore? The starter project contains all the assets you’ll need for this tutorial and a few important classes: Cave: An SKNodesubclass that contains a stub implementation of the cave class. You’ll extend this throughout the tutorial. DPad: Provides a basic implementation of a joystick so the player can control the knight MyScene: Sets up the Sprite Kit scene and processes game logic Player: A SKSpriteNodesubclass that contains the knight’s logic Take a moment to browse the project and familiarize yourself with the setup. The code has plenty of comments to help you understand how it works. Are you ready to give your knight something to do? Implementing a Cellular Automaton Your first step to implement a cellular automaton is to create a grid and put cells into the grid. Open Cave.m and add the following private property to the class extension: @property (strong, nonatomic) NSMutableArray *grid; You made the property for the grid private, because you shouldn’t modify it directly from outside the class. Besides, the state of the grid will update via the transition rules you’ll add later in the tutorial. Next, create a new class to serve as a cell in your grid. Create a new file by selecting File\New\File…, choose iOS\Cocoa Touch\Objective-C class and click Next. Name the class CaveCell, make it a Subclass of NSObject, click Next, make sure the CellularAutomataStarter target is checked, and click Create. Each cell should be in a finite state, so add the following enumeration to the CaveCell.h class after the #import statement: typedef NS_ENUM(NSInteger, CaveCellType) { CaveCellTypeInvalid = -1, CaveCellTypeWall, CaveCellTypeFloor, CaveCellTypeMax }; This defines the possible states for cells in the game, along with CaveCellTypeMax that always has a value of one greater than the last real state. Adding a value like this to an enum definition makes it easy to do things like loop through all the possible values. Also in CaveCell.h, add the following code to the @interface section: @property (assign, nonatomic) CGPoint coordinate; @property (assign, nonatomic) CaveCellType type; - (instancetype)initWithCoordinate:(CGPoint)coordinate; This adds two public properties to the class, one to store the cell’s coordinates within the grid and one to store the cell’s type (or state). You also added an initializer to construct a new cell with the given coordinates. Implement the initializer in CaveCell.m by adding the following code in the @implementation section: - (instancetype)initWithCoordinate:(CGPoint)coordinate { if ((self = [super init])) { _coordinate = coordinate; _type = CaveCellTypeInvalid; } return self; } The initializer creates a new CaveCell instance with the coordinate given and sets the type to CaveCellTypeInvalid. Later, you’ll set the type of the cell to either a wall ( CaveCellTypeWall) or a floor ( CaveCellTypeFloor). You’ve now created the foundation for two of the three core parts of a cellular automaton: the grid and the cell. Next, you’ll create a method to put the cells into the grid. Open Cave.m and import the CaveCell.h header: #import "CaveCell.h" Still inside Cave.m, add the following method that initializes the grid: - (void)initializeGrid { self.grid = [NSMutableArray arrayWithCapacity:(NSUInteger)self.gridSize.height]; for (NSUInteger y = 0; y < self.gridSize.height; y++) { NSMutableArray *row = [NSMutableArray arrayWithCapacity:(NSUInteger)self.gridSize.width]; for (NSUInteger x = 0; x < self.gridSize.width; x++) { CGPoint coordinate = CGPointMake(x, y); CaveCell *cell = [[CaveCell alloc] initWithCoordinate:coordinate]; cell.type = CaveCellTypeFloor; [row addObject:cell]; } [self.grid addObject:row]; } } If you look at initWithAtlasNamed:gridSize: in Cave.m, you'll see that you initialize instances of Cave by passing in the size of the grid, in cells. You use this information in initializeGrid to create a two-dimensional mutable array (i.e. an array of arrays) for the grid and assign a floor cell to each position in the grid. Do you notice anything strange about how the grid is set up? [spoiler title="Solution"]The way you set up the arrays means you need to reference cells with the y-coordinate first. That means to get the CaveCell at column (x-coordinate) 4 and row (y-coordinate) 6 you would get it using the code (CaveCell *)self.grid[6][4].[/spoiler] To allow generating a new cave from the Cave class, add this new method declaration to Cave.h: - (void)generateWithSeed:(unsigned int)seed; Now, implement it in Cave.m: - (void)generateWithSeed:(unsigned int)seed { NSLog(@"Generating cave..."); NSDate *startDate = [NSDate date]; [self initializeGrid]; NSLog(@"Generated cave in %f seconds", [[NSDate date] timeIntervalSinceDate:startDate]); } This method initializes the grid using initializeGrid and logs the time it took to generate the cave. The method also accepts a seed parameter. Although you don't do anything with this parameter yet, soon you will implement code that allows you to generate different caves by using different values for this parameter. But if you pass the same value, it will always generate the same cave. In this tutorial, you'll always pass the same value – 0 – to make it easier to observe changes. Tip: If you want to generate a random cave every time, just pass time(0) as the seed value in your call to generateWithSeed:. To test that everything works as intended open MyScene.m and add the following import statement: #import "Cave.h" Also, add the following private property to the class extension: @property (strong, nonatomic) Cave *cave; You're going to use this property to ensure easy access to the Cave object you generate. Finally, add the following code after the inline comment // Add code to generate new cave here in initWithSize:: _cave = [[Cave alloc] initWithAtlasNamed:@"tiles" gridSize:CGSizeMake(64.0f, 64.0f)]; _cave.name = @"CAVE"; [_cave generateWithSeed:0]; [_world addChild:_cave]; This will instantiate a new Cave object using a texture atlas named "tiles" and then generate the cave with a seed value of 0. Then _cave is added as a child of _world (rather than the scene itself, which will make scrolling easier later). Build and run the game. Check to see if the console has an entry like this: At the moment, there is no visual difference to the game. You should do something about that. :] Creating Tiles Do you remember passing the name of a texture atlas to the initializer when you created _cave during initialization of MyScene? Your next task is to add the code to create the tiles for the cave using the textures in the tiles atlas. Before you add the method to create tiles, it's convenient to have a couple helper methods: one to determine if a grid coordinate is valid, and one to get a cell from the grid based on the coordinates. Add the following methods to Cave.m: - (BOOL)isValidGridCoordinate:(CGPoint)coordinate { return !(coordinate.x < 0 || coordinate.x >= self.gridSize.width || coordinate.y < 0 || coordinate.y >= self.gridSize.height); } - (CaveCell *)caveCellFromGridCoordinate:(CGPoint)coordinate { if ([self isValidGridCoordinate:coordinate]) { return (CaveCell *)self.grid[(NSUInteger)coordinate.y][(NSUInteger)coordinate.x]; } return nil; } These methods are pretty straightforward. isValidGridCoordinate:checks tile coordinates against the grid's size to ensure they are within the range of possible grid coordinates. caveCellFromGridCoordinate:returns a CaveCell instance based on the grid coordinate. Remember that this is backwards to what you might expect (it uses self.grid[y][x] instead of self.grid[x][y]) due to the way you set up the arrays, as discussed earlier. It returns nilif the grid coordinate is invalid. With these methods in place, you can now implement the method to generate the grid's tiles. Still in Cave.m add the following method: - (void)generateTiles { for (NSUInteger y = 0; y < self.gridSize.height; y++) { for (NSUInteger x = 0; x < self.gridSize.width; x++) { CaveCell *cell = [self caveCellFromGridCoordinate:CGPointMake(x, y)]; SKSpriteNode *node; switch (cell.type) { case CaveCellTypeWall: node = [SKSpriteNode spriteNodeWithTexture:[self.atlas textureNamed:@"tile2_0"]]; break; default: node = [SKSpriteNode spriteNodeWithTexture:[self.atlas textureNamed:@"tile0_0"]]; break; } // Add code to position node here: node.blendMode = SKBlendModeReplace; node.texture.filteringMode = SKTextureFilteringNearest; [self addChild:node]; } } } The code above simply loops through the grid to build sprites based on the types of cells. Note: This sets the blend mode to replace, because there is no alpha transparency in these cells. It also sets the filtering mode to nearest, which gives a nice pixel-art style. To create the tiles, add the following code to generateWithSeed: in Cave.m, just after [self initializeGrid];: [self generateTiles]; Build and run. You should now see the following, it's not much yet, but your knight is one step closer to adventure: Positioning Tiles The tiles were created based on the node count, but why can't you see them? They're stacked atop each other! Once you create a tile, you need to calculate the position for it in the cave. Take a look at this diagram. At the top, row numbers begin at 0 and increase towards the bottom, while column numbers begin at 0 on the left and increase towards the right. Remember that the grid array is indexed by (row (y), column (x)). So you're not crazy and this tutorial was not written with a whiskey in hand, the grid is purposefully reversed. As you see in the diagram, you need the size of a tile to calculate its position correctly. That is why you'll add a handy property to the Cave class to get the size. Open Cave.h and add the following property: @property (assign, nonatomic, readonly) CGSize tileSize; The property is read only; it will not change once a Cave instance generates. Open Cave.m and add this method: - (CGSize)sizeOfTiles { SKTexture *texture = [self.atlas textureNamed:@"tile0_0"]; return texture.size; } This returns the size of one of the tile textures in the cave's atlas, and assumes all are the same size. This is a fair assumption, considering how it builds tile maps. Go to initWithAtlasNamed:gridSize: in Cave.m and add the following line of code after the line _gridSize = gridSize;: _tileSize = [self sizeOfTiles]; Next, you need to create a new method to do the actual calculation of the tile position. Add this new method to Cave.m: - (CGPoint)positionForGridCoordinate:(CGPoint)coordinate { return CGPointMake(coordinate.x * self.tileSize.width + self.tileSize.width / 2.0f, (coordinate.y * self.tileSize.height + self.tileSize.height / 2.0f)); } As you see, the calculation in this method corresponds to the calculation illustrated in the diagram above. It simply multiplies the given x and y coordinates with the tile's width and height, respectively. You add half the tile's width and height to those values because you're calculating the tile's center point. The last step is to add the positioning to the tile upon creation. Go back to generateTiles and add this single line of code after the comment // Add code to position node here:: node.position = [self positionForGridCoordinate:CGPointMake(x, y)]; Build and run the game and use the joystick to move around the cave. This isn't exactly a cave, is it? More like a giant wasteland. Why are there no walls? [spoiler title="Solution"] initializeGrid initializes every cell it creates as a floor tile.[/spoiler] The Initial Seed Now that the boilerplate code is in place to manage the grid and tile creation, you can safely move on to implementing the first step in the cellular automaton creation: the initial distribution of cell states. You're going to start by randomly setting each cell to be either a wall or a floor. You'll want to tweak the chance of a cell becoming either a wall or a floor during the cave generation, so, you add the following property to Cave.h: @property (assign, nonatomic) CGFloat chanceToBecomeWall; The value of chanceToBecomeWall will be in the range of 0.0 to 1.0. A value of 0.0 means all cells in the cave will become floors, and a value of 1.0 means all cells in the cave will become walls. Inside Cave.m, set the default value of chanceToBecomeWall to 0.45 by adding the following code to initWithAtlasNamed:gridSize: after the line that initializes _tileSize: _chanceToBecomeWall = 0.45f; This will mean that there is a 45% chance that a cell become a wall. Why 0.45, you might ask? This value comes from good, old fashioned trial-and-error, and it tends to give satisfactory results. You'll need to generate random numbers quite a few times during cave generation, so add the following method to Cave.m: - (CGFloat) randomNumberBetween0and1 { return random() / (float)0x7fffffff; } This method returns a value between 0 and 1. It uses the random() function, which needs to be seeded before use. This should only happen once per cave generation, so add the following line in generateWithSeed:, just before the line that calls initializeGrid: srandom(seed); At the moment, all cells in the cave are floors, but it won't always be a two-dimensional environment. You need to take into account the fact that some cells will become a wall or a floor by modifying initializeGrid. Inside initializeGrid in Cave.m, replace the line cell.type = CaveCellTypeFloor; with the following line of code: cell.type = [self randomNumberBetween0and1] < self.chanceToBecomeWall ? CaveCellTypeWall : CaveCellTypeFloor; Instead of defaulting every cell to be a floor, each cell gets a type based on the value returned by the random number generator, taking into account the chanceToBecomeWall property. Time to see the fruit of your labor thus far. Build and run, and you should now see the following: Try moving around using the joystick. Not very cave-like, is it? So, what's next? Growing the cave The next step you take is to apply the transition rules of the cellular automaton. Remember the rules that governed the cells in Conway's Game of Life? The transition rules are applied to all cells simultaneously thereby changing the state of the cells based on their neighborhood. Exactly the same approach will grow the cave. You'll create a method that iterates each cell in the grid, and applies the transition rules to decide whether the cell will be a wall or a floor. The below figure illustrates how applying the transition rules will impact cave generation: The first thing you do is define the transition rules. In this tutorial, you apply these rules: - If a cell is a wall and less than 3 cells in the Moore neighborhood are walls, the cell changes state to a floor. - If a cell is a floor and greater than 4 cells in the Moore neighborhood are walls, the cell changes state to a wall. - Otherwise, the cell remains in its current state. To make the cave generation as flexible as possible, add these two new properties to Cave.h: @property (assign, nonatomic) NSUInteger floorsToWallConversion; @property (assign, nonatomic) NSUInteger wallsToFloorConversion; These properties allow you to set the number of floors or walls that mark the thresholds for the transition rules described earlier. You need to give them a default value, so open Cave.m and add the following code to initWithAtlasNamed:gridSize:, just after the line that initializes _chanceToBecomeWall: _floorsToWallConversion = 4; _wallsToFloorConversion = 3; The default values are the result of trial-and-error and known to give good results. But don't take my word for it, play around with these property values to see how they affect the cave generation. Your last step before applying transition rules is to create a method to count the number of walls in the Moore neighborhood of a cell. Do you remember how many cells are in the Moore neighborhood? [spoiler title="Solution"]There are eight cells in the Moore neighborhood. See the illustration below for further details. [/spoiler] Add the following method to Cave.m: - (NSUInteger)countWallMooreNeighborsFromGridCoordinate:(CGPoint)coordinate { NSUInteger wallCount = 0; for (NSInteger i = -1; i < 2; i++) { for (NSInteger j = -1; j < 2; j++) { // The middle point is the same as the passed Grid Coordinate, so skip it if ( i == 0 && j == 0 ) { break; } CGPoint neighborCoordinate = CGPointMake(coordinate.x + i, coordinate.y + j); if (![self isValidGridCoordinate:neighborCoordinate]) { wallCount += 1; } else if ([self caveCellFromGridCoordinate:neighborCoordinate].type == CaveCellTypeWall) { wallCount += 1; } } } return wallCount; } The for loops might look a bit weird at first, but there is a method to the madness. The idea is to count the number of wall cells around the grid coordinates (x, y). If you look at the illustration below, you can see coordinates for neighbors are one less, equal to, and one greater than the original coordinate. Your two for loops give you just that, starting at -1 and looping through +1. You might have noticed the code counts invalid grid coordinates (for instance, coordinates that are off the edge of the grid) as walls. This will help fill in the edges of the cave, but it is a matter of preference if you want to do this. You can experiment by not doing it, if you like. Add this method to Cave.m to perform a transition step to all the cells in the grid: - (void)doTransitionStep { // 1 NSMutableArray *newGrid = [NSMutableArray arrayWithCapacity:(NSUInteger)self.gridSize.height]; // 2 for (NSUInteger y = 0; y < self.gridSize.height; y++) { NSMutableArray *newRow = [NSMutableArray arrayWithCapacity:(NSUInteger)self.gridSize.width]; for (NSUInteger x = 0; x < self.gridSize.width; x++) { CGPoint coordinate = CGPointMake(x, y); // 3 NSUInteger mooreNeighborWallCount = [self countWallMooreNeighborsFromGridCoordinate:coordinate]; // 4 CaveCell *oldCell = [self caveCellFromGridCoordinate:coordinate]; CaveCell *newCell = [[CaveCell alloc] initWithCoordinate:coordinate]; // 5 // 5a if (oldCell.type == CaveCellTypeWall) { newCell.type = (mooreNeighborWallCount < self.wallsToFloorConversion) ? CaveCellTypeFloor : CaveCellTypeWall; } else { // 5b newCell.type = (mooreNeighborWallCount > self.floorsToWallConversion) ? CaveCellTypeWall : CaveCellTypeFloor; } [newRow addObject:newCell]; } [newGrid addObject:newRow]; } // 6 self.grid = newGrid; } Let's go over this section-by-section: - Create a new grid that will be the state of the grid after the transition step has been performed. To understand why this is needed, remember that to calculate the new value of a cell in the grid, you need to look at the eight neighbors in the Moore neighborhood. If you already calculated the new value of some of the cells and put them back in the grid, then the calculation will be a mix of new and old data. - Iterate through all the cells in the grid. You use two forloops for this purpose. - For each cell, you use the countWallMooreNeighborsFromGridCoordinate:method you added earlier to calculate the number of walls in the Moore neighborhood. - A copy ( newCell) of the current cell ( oldCell) is made for the reason stated in section 1. - The transition rules apply to the cell. Based on the cell type (wall or floor), it checks the number of wall cells in the Moore neighborhood against the limits for changing the cell. - 5a: If the cell is a wall and mooreNeighborWallCountis less than the value of wallsToFloorConversion, then the wall changes to a floor (transition rule 1). Otherwise, it remains a wall (transition rule 3). - 5b: If the cell is a floor and the MooreNeighborWallCountis greater than the value of the floorsToWallConversion, then the floor changes to a wall (transition rule 2). Otherwise, it remains a floor (transition rule 3). gridproperty. ARC will automatically free memory for the old grid. Now you've almost implemented the transition step of the cellular automaton. The only thing missing is actually performing the transition step or steps when you generate the cave. To be flexible, add another property to add the ability to set how many transition steps perform when the cave generates. Open Cave.h and add the following property: @property (assign, nonatomic) NSUInteger numberOfTransitionSteps; Experimentation revealed that using two transitions steps works well. Initialize the property by adding this to initWithAtlasNamed:gridSize: in Cave.m, just after the line that initializes _wallsToFloorConversion: _numberOfTransitionSteps = 2; Still in Cave.m, add the following for loop to generateWithSeed: between the lines [self initializeGrid]; and [self generateTiles];: for (NSUInteger step = 0; step < self.numberOfTransitionSteps; step++) { NSLog(@"Performing transition step %lu", (unsigned long)step + 1); [self doTransitionStep]; } This simply runs the desired number of transition steps by calling doTransitionStep at each iteration of the loop. Now you've implemented the cellular automaton fully. Build and run to see how the transition steps affect cave generation. At this point, you have a nice, realistic looking cave, but did you notice there are unreachable caverns? In the above image, green highlights indicate where the unreachable areas lie. Since the knight has no holy hand grenades, this leaves the player in a bad way, so you'll need to fix that. Or, maybe you could make that an In-App Purchase. ;] Identifying Caverns Once you identify individual caverns, you can then either connect or remove them all, except the largest cavern. You'll be doing both in this tutorial. The best way to identify the caverns in the cave is to apply a flood fill algorithm that can determine the area connected to a given cell in a grid. Have you ever used a flood fills in paint programs like Photoshop to fill areas with a different color? It works just as well on caves. :] First, add the following private property to the class extension in Cave.m: @property (strong, nonatomic) NSMutableArray *caverns; This allows easy access to information about the caverns later. To perform the flood fill, you need two new methods in Cave.m: - One that creates a copy of the current grid. From this, you'll loop over every floor cell. For each one you'll recursively test each floor cell in its von Neumann neighborhood. - Another you'll call recursively to test cells in the von Neumann neighborhood of a cell. This will also change each tested cell's type to a fill value. Tackle the recursive method first. Add this method to Cave.m: - (void)floodFillCavern:(NSMutableArray *)array fromCoordinate:(CGPoint)coordinate fillNumber:(NSInteger)fillNumber { // 1 CaveCell *cell = (CaveCell *)array[(NSUInteger)coordinate.y][(NSUInteger)coordinate.x]; // 2 if (cell.type != CaveCellTypeFloor) { return; } // 3 cell.type = fillNumber; // 4 [[self.caverns lastObject] addObject:cell]; // 5 if (coordinate.x > 0) { [self floodFillCavern:array fromCoordinate:CGPointMake(coordinate.x - 1, coordinate.y) fillNumber:fillNumber]; } if (coordinate.x < self.gridSize.width - 1) { [self floodFillCavern:array fromCoordinate:CGPointMake(coordinate.x + 1, coordinate.y) fillNumber:fillNumber]; } if (coordinate.y > 0) { [self floodFillCavern:array fromCoordinate:CGPointMake(coordinate.x, coordinate.y - 1) fillNumber:fillNumber]; } if (coordinate.y < self.gridSize.height - 1) { [self floodFillCavern:array fromCoordinate:CGPointMake(coordinate.x, coordinate.y + 1) fillNumber:fillNumber]; } } The intent of this method is to be recursive. As you can see, it calls itself several times. Here's a play-by-play of what it does: - Use the grid coordinatepassed to the method to get the CaveCellto test. - If the cell you're testing isn't a floor cell, the method simply returns. When dealing with recursive methods, you always need some condition like this that will eventually stop the recursion, otherwise you'll crash with an infinite loop. - You change the cell's type to the fillNumberthat passed to the method. Each cavern will have a unique fill number, and this is how you'll tell caverns apart later. It also ensures the cell is never tested again because it will no longer be considered a floor cell. This is the reason you're performing the flood fill on a copy of the cave grid. - The cell gets added to the last array in the list of caverns. The lastObjectof the cavernsarray will be the current cavern where you're performing the flood fill. - Perform a recursive test for each cell in the von Neumann neighborhood. The test runs in the order west, east, north and south. Performance-wise, this is the smartest way since the flood fill goes from top to bottom. Still in Cave.m, add the second required method: - (void)identifyCaverns { // 1 self.caverns = [NSMutableArray array]; // 2 NSMutableArray *floodFillArray = [NSMutableArray arrayWithCapacity:(NSUInteger)self.gridSize.height]; for (NSUInteger y = 0; y < self.gridSize.height; y++) { NSMutableArray *floodFillArrayRow = [NSMutableArray arrayWithCapacity:(NSUInteger)self.gridSize.width]; for (NSUInteger x = 0; x < self.gridSize.width; x++) { CaveCell *cellToCopy = (CaveCell *)self.grid[y][x]; CaveCell *copiedCell = [[CaveCell alloc] initWithCoordinate:cellToCopy.coordinate]; copiedCell.type = cellToCopy.type; [floodFillArrayRow addObject:copiedCell]; } [floodFillArray addObject:floodFillArrayRow]; } // 3 NSInteger fillNumber = CaveCellTypeMax; for (NSUInteger y = 0; y < self.gridSize.height; y++) { for (NSUInteger x = 0; x < self.gridSize.width; x++) { if (((CaveCell *)floodFillArray[y][x]).type == CaveCellTypeFloor) { [self.caverns addObject:[NSMutableArray array]]; [self floodFillCavern:floodFillArray fromCoordinate:CGPointMake(x, y) fillNumber:fillNumber]; fillNumber++; } } } NSLog(@"Number of caverns in cave: %lu", (unsigned long)[self.caverns count]); } Now here's a step-by-step : - Create an instance of a new mutable array for the caverns in the cave. Each cavern will be a mutable array of CaveCells. - Create a deep copy of the cave grid. Since the flood fill will change the typeof the cell, you'll need to work on a copy of the grid rather than the grid itself. - Perform the flood fill by looping through each cell in the copy of the grid. If the cell is a floor tile, it means the cell has not been tested already, so a new cavern starts and the flood fill starts from the current cell. Now, add the following line to generateWithSeed:, right before the [self generateTiles]; line: [self identifyCaverns]; Build and run. In the console, you should be able to see 22 tidy little caverns, for a cave generated with a seed of 0. You now know all the caverns in the cave and you're left with a few choices with increasing difficulty: - You can keep the cave as-is. If you work with destructible terrain, the player will be able to break, blast and bore through the walls to access the unconnected areas. - You can remove all the unconnected caverns by filling all but the largest cavern with wall cells. The advantage is that the organic look of the cave remains while ensuring the knight can reach all parts of the cave. The downside is that you lose some playable area. - You can connect the unconnected caverns to the largest of the caverns. The advantage is that you keep all the walkable areas of the original cave. The downside is that the connections between the caverns will be straight lines, which just doesn't flatter organic look of a cave. In this tutorial, you'll learn how to do options two and three. No matter which option you choose for the game, you'll need to know which cavern is the largest, so you know where to put the entry and exit. All you have to do is count the number of CaveCell instances in each cavern array in the caverns. So, add the following method to Cave.m to do the counting: - (NSInteger)mainCavernIndex { NSInteger mainCavernIndex = -1; NSUInteger maxCavernSize = 0; for (NSUInteger i = 0; i < [self.caverns count]; i++) { NSArray *caveCells = (NSArray *)self.caverns[i]; NSUInteger caveCellsCount = [caveCells count]; if (caveCellsCount > maxCavernSize) { maxCavernSize = caveCellsCount; mainCavernIndex = i; } } return mainCavernIndex; } This method will simply loop through each array in caverns, checking which has the most instances of CaveCell. It returns an index that identifies the largest cavern, and henceforth shall be known as the 'main cavern'. Removing Unconnected Caverns If you choose to remove all the disconnected caverns, then you need to fill them in with wall tiles. To do that, add this new method to Cave.m: - (void) removeDisconnectedCaverns { NSInteger mainCavernIndex = [self mainCavernIndex]; NSUInteger cavernsCount = [self.caverns count]; if (cavernsCount > 0) { for (NSUInteger i = 0; i < cavernsCount; i++) { if (i != mainCavernIndex) { NSArray *array = (NSArray *)self.caverns[i]; for (CaveCell *cell in array) { ((CaveCell *)self.grid[(NSUInteger)cell.coordinate.y][(NSUInteger)cell.coordinate.x]).type = CaveCellTypeWall; } } } } } First, this method gets the index of the main cavern, then it uses a for loop to go through the caverns array. It turns all the CaveCell instances into walls within the caverns, except for those in the main cavern. To see your latest efforts in action, add the following line of code to generateWithSeed: just before the line that calls generateTiles: [self removeDisconnectedCaverns]; Build and run now, and note that your cave no longer has any unconnected caverns. spriteNodeWithColor:size:with a size of 2x2 points in place of using spriteNodeWithTexture:in generateTiles, and modifying sizeOfTilesaccordingly. Where To Go From Here? Here is the finished example project from this tutorial series so far. Congratulations, you have generated your own random cave system using cellular automata! Your knight now has a mysterious cavern to explore. However, there's more you can do! Stay tuned for the second and final part of the series, where you'll learn how to connect unconnected caverns, place an exit and entrance into the cavern, add collision detection - and yes, there will be treasure. In the meantime, if you have any questions or comments, please join the forum discussion below!
https://www.raywenderlich.com/2425-procedural-level-generation-in-games-using-a-cellular-automaton-part-1
CC-MAIN-2019-35
refinedweb
5,369
56.86
Convention-based programming is an interesting model. In essence, it attempts to reduce the potential for error by handling most scenarios based on conventions or standards, and allowing the developers to focus on the exceptions. Probably one of the most thorough public resources I've seen for the convention-based model is Rob Eisenberg's Build your Own MVVM Framework which he dubs "a framework with an opinion." I've not been a huge fan or advocate of convention-based programming in the past, but that is quickly changing. What I didn't like was the secrets it keeps. In other words, the convention-based model is great, if and when you know and understand the convention. A lot of "magic" happens. I am a fan of self-documenting code, and if you aren't careful, a convention-based model might make things happen without it ever becoming clear how it happened. I've also been taking a look at the convention-based model to gain a clearer understanding and see some definite advantages. Perhaps the most intriguing "find" has been that we are always coding by convention anyway ... it's just a question of which convention and how far we take it. Let's take the simple example of taking text from a text block in Silverlight and sending it off to a service. The Old-Fashioned Code-Behind Way One possible way to do this is to simply wire an event in the code-behind and make it happen. Let's take a common scenario: enter some text, then click a button. We want the button to remain disabled until text is available, then submit the text to some service. This is the contract for the service: public interface ISubmit { void Submit(string text); } And our reference implementation is deadly simple: public class SubmissionService : ISubmit { public void Submit(string text) { MessageBox.Show(text); } } Now here is a sample control: <Grid x: <StackPanel Orientation="Horizontal" Margin="5"> <TextBlock Text="Enter Some Text: " Margin="5"></TextBlock> <TextBox x: <Button x: </StackPanel> </Grid> Next, we wire it all up in the code-behind, like this: public partial class CodeBehind { public CodeBehind() { InitializeComponent(); SubmitButton.IsEnabled = false; TextInput.TextChanged += TextInput_TextChanged; } void TextInput_TextChanged(object sender, System.Windows.Controls.TextChangedEventArgs e) { SubmitButton.IsEnabled = !string.IsNullOrEmpty(TextInput.Text); } private void Button_Click(object sender, RoutedEventArgs e) { ISubmit submit = new SubmissionService(); submit.Submit(TextInput.Text); } } That's where most people start. I've often been asked, "Jeremy, it's quick and it's simple ... what's wrong with it?" Nothing, if all you ever do is work with that single text box and single event. The problem comes when you are composing larger applications. We lose several important aspects of productive development when our code and design is intermingled like this. For example ... - Reusability — with the view and logic intertwined, this code can only ever do one thing. We can never reuse the view for something different (perhaps there is a different set up logic you'd want to apply to a view with a text box and a button) and we cannot re-use the business logic because it's all tied into the control. - Extensibility — there is no real way to extend this model, we can only modify it - Presentation Separation — this is less evident with the textbox example, but a combo box or radio button provide more clarity: what happens when I change my radio button to a check box, or my combo box to a list box? Without separation, it means I have to modify everything. - Testability — for larger applications, catching bugs as early in the process as possible is paramount. This "thing" can only be tested one way. With a cleaner separation, we could test UI and logic separately, and isolate the fixes to one place, rather than having them impact both sides every time and possibly propagate to other controls that aren't re-using the pattern. - Designer/Developer Workflow — this is also huge. More and more shops have a design team and a development team and their schedules don't always synchronize. With this model, you are forced to wait for design to toss something over the fence before development can even get started, and then end up in a message if elements change. Wouldn't it be nice if developers could build even before the design team was done, and the design team could toss over results without impacting what was being developed? Those are just a few reasons I shy from that approach when dealing with large, enterprise projects that are complex and require composability, extensibility, scalability and performance (not to mention the ability for many developers to contribute simultaneously to the success of the project). The Control Freak One method to attempt to separate these concerns a little is the control/controller pattern. I built a few applications like this and actually preferred it for ASP.NET WebForms applications (before MVC) to help separate business concerns from control logic. In Silverlight, the pattern might look a little like this ... first, the control: <Grid x: <StackPanel Orientation="Horizontal" Margin="5"> <TextBlock Text="Enter Some Text: " Margin="5"></TextBlock> <TextBox x: <Button x: </StackPanel> </Grid> The controller contract exposes ways we interact with the controller to initialize it, have it save state, etc. For our example, we keep it simple: public interface IController<in T> where T: UserControl { void Init(T userControl); } Notice that our type is contravariant ... that is, we are allowing the generic to be strongly typed to the actual type of the view itself (going from generic to more specific). This is implemented for our view like this: public class ViewController : IController<Controller> { private readonly ISubmit _submit; private Controller _controllerView; public ViewController(ISubmit submit) { _submit = submit; } public void Init(Controller userControl) { _controllerView = userControl; userControl.ButtonSubmit.Click += ButtonSubmit_Click; userControl.ButtonSubmit.IsEnabled = _HasText(); userControl.TextInput.TextChanged += (o, e) => userControl.ButtonSubmit.IsEnabled = _HasText(); } bool _HasText() { return !string.IsNullOrEmpty(_controllerView.TextInput.Text); } void ButtonSubmit_Click(object sender, System.Windows.RoutedEventArgs e) { _submit.Submit(_controllerView.TextInput.Text); } } In this case, we do manage to separate the logic from the view. We can even abstract marrying the controller to the view. There are different ways of doing this, but even a simple factory: public static class ControllerFactory { public static void InitController<T>(T userControl) where T: UserControl { if (typeof(T).Equals(typeof(Controller))) { var controller = (IController<T>)new ViewController(new SubmissionService()); controller.Init(userControl); } } } This scans the type and returns the appropriate controller, so the code-behind becomes this simple: public partial class Controller { public Controller() { InitializeComponent(); Loaded += (o,e) => ControllerFactory.InitController(this); } } (Another popular method is to init the controller, have it create the control and then return or inject it somewhere). While this allowed us to separate the logic from the code-behind, it is still an illusion. The controller knows too much about the view and how it is implemented. There is still tight coupling. I have to drag the view along everywhere the controller goes, so I've really just moved it into a separate file. This pattern can be evolved with a few steps such as providing a view contract and then acting on the interface instead of the view. This will create testability, for example. Even with this extra abstraction, however, the contract ends up mirroring events and attributes on the view and sometimes might be perceived as extra work that doesn't really go far. MVVM Comes to Town So let's take a look at the popular pattern that I've been using in my Silverlight business applications and teaching for some time now, the Model-View-ViewModel (MVVM). I've seen people roll their eyes or cringe when MVVM is mentioned because the perception is that it can entail a lot of work. In my experience, however, the benefits far outweigh the necessary infrastructure. In fact, it like the difference between leasing and buying with a large downpayment. With MVVM, you do some heavy lifting (big downpayment) up front. Once the plumbing is in place, however, building out and extending becomes very easy (small monthly payments). Other architectures might get you to the first screen faster, but come at a maintenance and extensibility cost. Here's a peek at the MVVM control: <UserControl.Resources> <ViewModel:TextViewModel x: </UserControl.Resources> <Grid x: <StackPanel Orientation="Horizontal" Margin="5"> <TextBlock Text="Enter Some Text: " Margin="5"></TextBlock> <TextBox x: <Button x: </StackPanel> </Grid> The cost of MVVM is already apparent. We've introduced another dependency in the view, namely, the view model. It's nice we can create it in XAML but now we've essentially coupled the two together. Furthermore, we now have some extra markup. This is the "glue" that holds the pieces together, instructing the controls how to bind with the underlying view model. What does that view model look like? public class TextViewModel : INotifyPropertyChanged { public TextViewModel() { var cmd = new SubmitCommand { Submit = new SubmissionService() }; CommandSubmit = cmd; } private string _text; public string InputText { get { return _text; } set { _text = value; if (PropertyChanged == null) return; PropertyChanged(this, new PropertyChangedEventArgs("InputText")); ((SubmitCommand)CommandSubmit).RaiseCanExecuteChanged(); } } public ICommand CommandSubmit { get; set; } public event PropertyChangedEventHandler PropertyChanged; } This is a fairly straightforward model. Instead of the business logic mixed in with our code-behind or controller, we now have a more data-centric class. What's nice is that our code doesn't deal with pulling fields from a text box or responding to events. Instead, this is all handled by the framework. We don't go after a text box control to get a string, instead, we simply deal with a string command. We don't have to know whether that string is in a text box or a password box or a custom third-party control. Heck, you can have all three and reuse the same view model. We can test the view model in isolation and because we are responding to commands, not events, we can also fire a "submit command" without screen-scraping or trying to force a mouse click. We've witnessed that a "con" for this approach is some extra overhead and noise in the code-behind, but the benefit is that we don't have to worry about keeping track of changes in controls or moving values into or out of the UI, we just deal with properties and that "noise" is what glues them together. We can build our view model independently and test it and wire it into services and much more. Now we come to the convention-based approach. I must admit that this appeared to me to be something a little "trendy" so I considered it with caution. My one fear of using convention-based models is that "magic" happens without really understanding it. With convention programming, the convention of how you structure and name controls can drive how they interact and behave. If you don't know the convention, this can create a bit of mystery. One moment of epiphany for me came, however, when I was teaching the concept of data-binding to someone not familiar with Silverlight. They suddenly made it very clear that data-binding was a convention they had to learn. In this case, there is a convention of how we get the view model bound to the view. There is also a convention for data-binding itself. We must make sure we have the appropriate command, that we set the mode (one way, two way, etc.) and that the path matches the view model. So in essence, most of us are already programming with convention-based models. So why not simplify things a bit? Convention to the Rescue With a convention-based engine, my control ended up looking like this: <Grid x: <StackPanel Orientation="Horizontal" Margin="5"> <TextBlock Text="Enter Some Text: " Margin="5"></TextBlock> <TextBox x: <Button x: </StackPanel> </Grid> Notice there is no data-binding and no view model. This will work perfectly well in Blend and the designers can modify it to their heart's content. I only ask that my convention is followed, which in this case means the UI elements are named in accordance with the view model. First, let's look at the code-behind. I only do one small thing here (and I can easily make a behavior and do it in XAML instead): I tag the control with a convention. I'm doing a string here, but it can easily be an enumeration or something else. I call it "example": [ControlTag("Example")] public partial class ConventionMVVM { public ConventionMVVM() { InitializeComponent(); } } Now let's take a peek at the view model. Remember, we didn't wire in a data context or instance it in our view. In fact, there is no direct relationship between the two. When you look at the view model, you'll notice that I've create some methods and properties that match the names in the control, this is their "affinity" and is no more coupled than the path on a data-binding command. I also tag the view model with the same tag as the view: [VMTag("Example")] public class ConventionViewModel : INotifyPropertyChanged { [Import] public ISubmit SubmitService { get; set; } private string _text; public string InputText { get { return _text; } set { _text = value; if (PropertyChanged == null) return; PropertyChanged(this, new PropertyChangedEventArgs("InputText")); PropertyChanged(this, new PropertyChangedEventArgs("CanSubmit")); } } public void Submit() { if (CanSubmit) { SubmitService.Submit(_text); } } public bool CanSubmit { get { return !string.IsNullOrEmpty(_text); } } public event PropertyChangedEventHandler PropertyChanged; } Notice that I'm not using commands. I implement the notify property changed event and I import the submission service using the Managed Extensibility Framework so I can code against the contract. That's it! It may seem even more puzzling when you look at my main page. I've hosted all four of the patterns represented here, but only three of the controls are referenced directly: <Grid x: <StackPanel Orientation="Vertical"> <Views:CodeBehind/> <Views:Controller/> <Views:MVVM/> <ContentControl x: </StackPanel> </Grid> Notice I don't have a reference to the view, but I do have a content control that can be filled. In fact, in the code-behind, I tag it the same way I did the view and the view model: public partial class MainPage { [RegionTag("Example")] public ContentControl ExampleRegion { get { return ConventionRegion; } } public MainPage() { InitializeComponent(); } } Hmmm ... so what is going on here? Here we decided we wanted to minimize our monthly payments. We bought the car up front and put a big down payment. In the convention framework, the bulk of the work goes into the common scenarios. We build these out and isolate them to a single place where we can tweak and troubleshoot. We publish our convention and when developers follow it, things "just work." What's nice is the conventions don't meet every situation, so scenarios that aren't covered by the convention can be addressed and dealt with as one-offs. Of course, when we see a pattern emerge that is repeated, we can revisit our convention and integrate it back. The first piece of the convention model handles gluing view models to views and populating them into regions. This is handled by a ConventionManager class that imports and routes the pieces: [Export] public class ConventionManager : IPartImportsSatisfiedNotification { [ImportMany] public Lazy<ContentControl, IRegionTagCapabilities>[] Regions { get; set; } [ImportMany] public Lazy<UserControl, IControlTagCapabilities>[] Views { get; set; } [ImportMany] public Lazy<Object, IVMTagCapabilities>[] ViewModels { get; set; } public void OnImportsSatisfied() { foreach(var regionImport in Regions) { var tag = regionImport.Metadata.Tag; var viewTag = (from v in Views where v.Metadata.Tag.Equals(tag) select v).FirstOrDefault(); if (viewTag == null) continue; var viewModelTag = (from vm in ViewModels where vm.Metadata.Tag.Equals(tag) select vm).FirstOrDefault(); if (viewModelTag == null) continue; var view = viewTag.Value; _Bind(view, viewModelTag.Value); regionImport.Value.Content = view; } } } So far this piece simply routes and glues. The regions, views, and view models with similar tags are bound and routed together. Of course, this is a place where the convention engine can be easily extended. For example, your regions would most likely have panels and items controls so that multiple views could be routed through them. Views and viewmodels might have different metadata tags and other rules for marrying them together. For our simple example, this works. Now let's take a look at the _Bind method ... private static void _Bind(FrameworkElement view, object viewModel) { var elements = _Elements(view.FindName("LayoutRoot") as Panel).ToList(); foreach(var button in (from e in elements where e is Button select e as Button).ToList()) { var nameProperty = button.GetValue(FrameworkElement.NameProperty); if (nameProperty == null) continue; var name = nameProperty.ToString(); var actionMethod = (from m in viewModel.GetType().GetMethods() where m.Name.Equals(name) select m).FirstOrDefault(); if (actionMethod != null) { button.Click += (o, e) => actionMethod.Invoke(viewModel, new object[] {}); } var enabledProperty = (from p in viewModel.GetType().GetProperties() where p.Name.Equals("Can" + name) select p).FirstOrDefault(); if (enabledProperty == null) continue; button.IsEnabled = (bool) enabledProperty.GetGetMethod().Invoke(viewModel, new object[] {}); var button1 = button; ((INotifyPropertyChanged) viewModel) .PropertyChanged += (o, e) => { if (e.PropertyName.Equals("Can" + name)) { button1.IsEnabled = (bool) enabledProperty.GetGetMethod() .Invoke(viewModel, new object[] {}); } }; } foreach(var propertyInfo in viewModel.GetType().GetProperties()) { if (propertyInfo.GetGetMethod() == null || propertyInfo.GetSetMethod() == null) continue; var propName = propertyInfo.Name; var element = (from e in elements where propName.Equals(e.GetValue(FrameworkElement.NameProperty)) select e). FirstOrDefault(); if (element == null) continue; if (element is TextBox) { var binding = new Binding { Source = viewModel, Path = new PropertyPath(propName), Mode = BindingMode.TwoWay }; ((TextBox) element).SetBinding(TextBox.TextProperty, binding); } } } private static IEnumerable<UIElement> _Elements(Panel panel) { yield return panel; foreach(var element in panel.Children) { if (element is Panel) { foreach(var child in _Elements((Panel)element)) { yield return child; } } else { yield return element; } } } This convention focuses on buttons and text boxes. It makes use of the iterator function to simplify recursion. The recursion of controls is done using the yield statements to flattern the discovered controls into a list. If we were searching or filtering, this would stop once the desired element was found so it provides some performance benefits as well (in this case we just blow through the whole list for the simple example). First, we scan the buttons on the form. We are looking for a view model method with the same name as the button, and wiring the click event to invoke the method. Furthermore, if a property exists that is prefixed with "Can" and the button name, we use that to set the IsEnabled property of the button. We could simply bind directly to a command object, but I wanted to demonstrate the flexibility of convention-based solutions and how it can end up being a simple method and property without having to introduce commands. Next, we scan the properties on the view model. For each property, we look for an element with the same name. If it exists and is a text box, we create a binding. A fully-blown convention model would bind to password boxes, text blocks, toggle buttons (radio buttons and check boxes), combo boxes, and more. Conclusion What we've done with the convention model is invested some extra effort up front to handle the common scenarios (button click and text box input, as well as routing of views and view models to regions). Once this is in place, it becomes very easy to add a new view or view model. We simply tag these and then name them. We're still creating a coupling between the view and the view model, but instead of requiring a longer, complex data-binding statement, we've simplified it to a simple name and don't have to remember the binding direction or path syntax, etc. What I really like about this model is that it truly frees the design team to move forward with minimal impact on the development workflow. In one project I've worked with, the design project is completely separate. The only "noise" is the exports of the controls, which can also be done externally. This means development can proceed with building view models and unit tests and once design submits the result, it's a simple question of matching names and properties. As I mentioned earlier in this post, this is just a taste of convention. For a more fully developed solution, be sure to refer to Rob's presentation. Here is the application for you to play with, in the order of: code behind, controller, MVVM, and convention. The source code can be downloaded here. Great article! We are using the convention based approach in our current project based on the ideas presented by Rob Eisenberg at Mix10. However, one thing that we sort of bumped into was the situation when you have more than one control in the view that wants to bind to the same command. This doesn't work since you cannot have several controls with the same name in the same view... We ended up using the traditional command binding syntax and exposing an ICommand in the view model for just those corner cases. What's your thoughts on that? You could really tackle it both ways. I've used the ICommand approach as well, but you can also sequence the names or make the convention NameCommandXXX for example GridCommandSubmit and HeaderCommandSubmit and the convention uses that "Submit" portion for the binding. Really there are no hard and fast rules for convention, and you simply should be flexible to update the model to accomodate new scenarios as they appear. Great article! Thank you. One thought - instead of reserving names for convention, we could use special attached property. To my taste it would be a litle bit safer and cleaner. Absolutely. These can become more sophisticated ... names are names, an attached property shows intention. All good feedback - keep it coming because this will be valuable as others explore their options! Thanks. Nice post! Rob's Caliburn 2.0 implements many of those "opinions". Regards, Santos Have you considered using some form of code generation to achieve a similar outcome? I prefer code generation, because it is enforced and causes static checking on the conventions. I could imagine some ways you could basically create strongly type scaffolding which reduced your code to a very small amount by using code generation. I'm very interesting in looking at T4 templates and other alternatives. For one customer, we built a very simple app that let you drag a control to the surface and would generate the view model automatically based on the convention ... but that assumes the design is there. Wow a totally unneeded framework to do what the MVVM already does? An entire engine based on strings(lol) and some convention made up by someone. Try to teach a new developer coming on a team that. Yeah sorry, its all convention here all your training won't help as this resembles nothing you have ever seen. "data-binding was a convention" errr no its syntax. That is what this article is about, changing syntax to make something simple into some confusing and unreadable. MVVM does what you need, this is just ridiculous. Am I the only one noticing that the supposed VM class is partial? How is anyone with a clue supposed to take this remotely seriously. Other Anonymous poster makes a good point, however, convention based ViewModel location is a nice idea, if it's done better than this. Jeremy, try doing this again, only... you know, good.
https://csharperimage.jeremylikness.com/2010/05/mvvm-coding-by-convention-convention.html
CC-MAIN-2017-39
refinedweb
3,894
55.54
In today's post, I'm going to show you how to use the three different parts of the speech platform on Windows Phone 8.0: Voice Commands, Speech Recognition, and Speech Synthesis. To do that, we're going to build an actual application, that has real value, but hopefully will be less than 100 lines of code. Ready? Here we go ... Here's the plan: Let's build an application that let's the user search Amazon.com for anything they'd like to find, all by simply using their voice. Here's the flow we'll enable today: User: "Search On Amazon" Phone: "What would you like to search for on Amazon?" User: "Electronics" (or whatever they want) Phone: "Searching Amazon for Electronics" (or whatever they said they wanted) User: "Search On Amazon" Phone: "What would you like to search for on Amazon?" User: "Electronics" (or whatever they want) Phone: "Searching Amazon for Electronics" (or whatever they said they wanted) Sound interesting? Great. First, let's start off by launching Visual Studio, and creating a new Windows Phone App project. Then, we'll update the application's manifest, so we can use speech in our project. On the Capabilities tab, click the checkmark next to the Microphone and Speech Recognition items. This tells the Market Place, at application submission time, that your app needs these capabilities. That way, the Market Place can be transparent to the end user, letting them know what your app is really going to do. Next, let's add the XML file that specifies what voice commands you want users to be able to say. From the Solution Explorer, add a new item to your project, and select the Voice Command Definition item template. The default template shows a few examples of what you can do. In a future post, I'll describe in more detail what each XML element does, and why they're important for your application. For the app we're building today, simply type the following XML into the editor. As you're typing, notice that intellisense is active, making suggestions for you along the way, making it fairly easy to build your own XML file from the ground up. 1: <?xml version="1.0" encoding="utf-8"?> 2: 3: <VoiceCommands xmlns=""> 4: 5: <CommandSet xml: 6: 7: <CommandPrefix>Search On</CommandPrefix> 8: <Example>Amazon</Example> 9: 10: <Command Name="searchSite"> 11: <Example>Amazon</Example> 12: <ListenFor>Amazon</ListenFor> 13: <Feedback>What would you like to search for on Amazon</Feedback> 14: <Navigate Target="MainPage.xaml" /> 15: </Command> 16: 17: </CommandSet> 18: 19: </VoiceCommands> Now that we have a Voice Command Definition file (often referred to as a VCD file), let's open up the MainPage.xaml.cs file to get ready to start writing some actual code! Now, go ahead and remove all the unnecessary comments. That'll help us achieve our 100 line count goal. :-) Add the following using namespace statements; we need all of these for the code that follows. The first three are namespaces for the three areas of the speech platform on Windows Phone 8.0. The next namespace, System.Threading.Tasks, is needed so we can create our own asynchronous methods that we can call using the new C# await keyword. The last new namespace we'll use today, Microsoft.Phone.Tasks, will help us launch the WP8's version of Internet Explorer, allowing the user to go directly to Amazon's mobile website. 1: using Windows.Phone.Speech.VoiceCommands; 2: using Windows.Phone.Speech.Recognition; 3: using Windows.Phone.Speech.Synthesis; 4: using System.Threading.Tasks; 5: using Microsoft.Phone.Tasks; Now, let's install the voice commands we've defined up above. To do that, add a line of code to the constructor of our page, after InitializeComponent, that will install our voice command definitions. 1: public MainPage() 2: { 3: InitializeComponent(); 5: VoiceCommandService.InstallCommandSetsFromFileAsync( 6: new Uri("ms-appx:///vcd.xml")); 7: } Don't worry about keeping track of whether you've installed the VCD file before, or not, for now. The VoiceCommandService will do the right thing if you've already installed the voice commands. If you update the VCD file in your application, and hit F5, the new voice commands will be registered. But if nothing has changed since the last call to InstallCommandSetsFromFileAsync, the VoiceCommandService is smart enough to detect that no work is necessary. Now that our voice commands are installed, how do we know when a user spoke one of our commands? Great question. That's where the existing OnNavigatedTo method comes in. When a user speaks a voice command, the VoiceCommandService matches what the user said, with all the phrases you've previously registered when you called InstallCommandSetsFromFileAsync. If the VoiceCommandService determines that it's one of your commands, it will launch your application, on the page specified in your Navigate XML element's Target attribute. So, let's go ahead and override the OnNavigatedTo protected method. When OnNavigatedTo is called, we'll be able to tell it's a "New" navigation, as opposed to a "Back", or a "Refresh", by using the NavigationMode enumeration. The event args also contain the entire query string, broken up into an IDictionary of strings. If a query string parameter named "voiceCommandName" is present in that dictionary, you'll know that you've been launched by the VoiceCommandService. The value of that item in the dictionary will tell you which voice command the user has spoken. So ... In our version of OnNavigatedTo, we'll check to see if it's a new navigation, and that if it's a voice command, and if so, we'll pass the query string dictionary off to another method to actually handle the voice command(s) there. If OnNavigatedTo is being called for some other reason, you wouldn't normally do what I'm doing here. However, to keep our app as simple as possible, we're simply going to launch IE on Amazon.com's mobile web page. Also, for this simple application, there's no reason for us to be on the back stack, so we'll try to get ourselves off the back stack. Since I don't know of a clean way to do that, I've opted to call GoBack on the NavigationService, which conceptually makes sense, but will actually throw an exception since we nothing is on the back stack. That's OK for this sample app, because it has the same desired effect. 1: protected override void OnNavigatedTo(NavigationEventArgs e) 3: base.OnNavigatedTo(e); 5: if (e.NavigationMode == NavigationMode.New && 6: NavigationContext.QueryString.ContainsKey("voiceCommandName")) 7: { 8: HandleVoiceCommand(NavigationContext.QueryString); 9: } 10: else if (e.NavigationMode == NavigationMode.New) 11: { 12: NavigateToUrl(""); 13: } 14: else if (e.NavigationMode == NavigationMode.Back && 15: !System.Diagnostics.Debugger.IsAttached) 16: { 17: NavigationService.GoBack(); 18: } 19: } Now, let's add the method to dispatch the voice commands. For now, we only have one voice command, but, like I said, we'll add more voice commands in the next post. Structuring the code this way up front, will save us a little time later. 1: private void HandleVoiceCommand(IDictionary<string, string> queryString) 3: if (queryString["voiceCommandName"] == "searchSite") 4: { 5: SearchSiteVoiceCommand(queryString); 6: } OK. Now, let's actually do the site search. But, wait a minute We don't know what the user wants to search for yet. Right? Let's pick up there, where the VoiceCommandService left off in the conversation with the user. After speaking, "Search On Amazon", they'll hear the prompt of "What would you like to search for on Amazon", due to our <example> element in the VCD file for the searchSite <command>. So, let's add a new method to handle the conversation, starting at that point, breaking it up into 3 logical pieces. First, we'll get some text from the speech recognizer. Then, if it recognized something, we'll let the user know what we're doing by telling them that we're about to do the search. Finally, we'll navigate to the site to actually do the search. 1: private async void SearchSiteVoiceCommand(IDictionary<string, string> queryString) 3: string text = await RecognizeTextFromWebSearchGrammar(); 4: if (text != null) 5: { 6: await Speak(string.Format("Searching Amazon for {0}", text)); 7: NavigateToUrl(string.Format("{0}", text)); 8: } 9: } OK. Time to use the Speech Recognition APIs. Again, in a future post, I'll describe in more detail what all the various parameters and features, and why we have them as options in the namespace. But for now, just type the following code into Visual Studio: 1: private async Task<string> RecognizeTextFromWebSearchGrammar() 3: string text = null; 4: try 6: SpeechRecognizerUI sr = new SpeechRecognizerUI(); 7: sr.Recognizer.Grammars.AddGrammarFromPredefinedType("web", SpeechPredefinedGrammar.WebSearch); 8: sr.Settings.ListenText = "Listening..."; 9: sr.Settings.ExampleText = "Ex. \"electronics\""; 10: sr.Settings.ReadoutEnabled = false; 11: sr.Settings.ShowConfirmation = false; 12: 13: SpeechRecognitionUIResult result = await sr.RecognizeWithUIAsync(); 14: if (result != null && 15: result.ResultStatus == SpeechRecognitionUIStatus.Succeeded && 16: result.RecognitionResult != null && 17: result.RecognitionResult.TextConfidence != SpeechRecognitionConfidence.Rejected) 18: { 19: text = result.RecognitionResult.Text; 20: } 21: } 22: catch 23: { 24: } 25: return text; 26: } Similarly, here's the method that we'll use to speak the feedback to the end user, using the built in SpeechSynthesizer. It's simpler than using the SpeechRecognizer. Synthesis, often called TTS, or Text-To-Speech, usually is more straight forward from a programmatic standpoint. 1: private async Task Speak(string text) 3: SpeechSynthesizer tts = new SpeechSynthesizer(); 4: await tts.SpeakTextAsync(text); 5: } And finally, here's the method we'll use to navigate to the specified URL. 1: private void NavigateToUrl(string url) 3: WebBrowserTask task = new WebBrowserTask(); 4: task.Uri = new Uri(url, UriKind.Absolute); 5: task.Show(); 6: } That's it. 100 lines of code. Exactly! Not bad, eh? :-) Oh yeah ... In my application, I've also removed nearly all the UI elements from the XAML file (MainPage.xaml), so it's not distracting to the user to see while the speech conversation is happening. Typically, you'd actually have a real page, but... This is just a 100 line sample. Right? :-) OK. Time to try it out. If you haven't typed it all in yet, you can go ahead and cheat copy and paste the code from below. Be sure and remember to do the first few steps, declaring the capabilities required, otherwise you’ll get an Unauthorized exception when you try to call the Speech Recognition APIs. Press F5, and you should land in the Emulator, on Amazon's web page. Great. Our Voice Commands should be installed now. Now, press and hold the Start button (the middle hardware button beneath the screen), and wait for the Speech "Listening..." screen to pop up. If you haven't run speech in the emulator yet, you'll need to accept the speech terms of use. Let's follow the script from above now: You: "Search On Amazon" Phone: "What would you like to search for on Amazon?" You: "Electronics" (or whatever you want) Phone: "Searching Amazon for Electronics" (or whatever you said you wanted) You: "Search On Amazon" Phone: "What would you like to search for on Amazon?" You: "Electronics" (or whatever you want) Phone: "Searching Amazon for Electronics" (or whatever you said you wanted) Sometimes when you're using a fresh copy of the emulator, or if you haven't accepted the terms of use yet, you'll have to wait a few seconds for the voice commands to become active. If it doesn't work the first time, try try again. Or at least, try twice. :-) That's it. You have now completed a reasonably useful speech enabled application for Windows Phone 8.0. In 100 lines of code. Here's the full code listing: using System; using System.Collections.Generic; using System.Net; using System.Windows; using System.Windows.Navigation; using Microsoft.Phone.Controls; using Windows.Phone.Speech.VoiceCommands; using Windows.Phone.Speech.Recognition; using Windows.Phone.Speech.Synthesis; using System.Threading.Tasks; using Microsoft.Phone.Tasks; namespace SearchOn { public partial class MainPage : PhoneApplicationPage { public MainPage() { InitializeComponent(); VoiceCommandService.InstallCommandSetsFromFileAsync(new Uri("ms-appx:///vcd.xml")); } protected override void OnNavigatedTo(NavigationEventArgs e) base.OnNavigatedTo(e); if (e.NavigationMode == NavigationMode.New && NavigationContext.QueryString.ContainsKey("voiceCommandName")) { HandleVoiceCommand(NavigationContext.QueryString); } else if (e.NavigationMode == NavigationMode.New) NavigateToUrl(""); else if (e.NavigationMode == NavigationMode.Back && !System.Diagnostics.Debugger.IsAttached) NavigationService.GoBack(); private void HandleVoiceCommand(IDictionary<string, string> queryString) if (queryString["voiceCommandName"] == "searchSite") SearchSiteVoiceCommand(queryString); private async void SearchSiteVoiceCommand(IDictionary<string, string> queryString) string text = await RecognizeTextFromWebSearchGrammar(); if (text != null) await Speak(string.Format("Searching Amazon for {0}", text)); NavigateToUrl(string.Format("{0}", text)); private async Task<string> RecognizeTextFromWebSearchGrammar() string text = null; try SpeechRecognizerUI sr = new SpeechRecognizerUI(); sr.Recognizer.Grammars.AddGrammarFromPredefinedType("web", SpeechPredefinedGrammar.WebSearch); sr.Settings.ListenText = "Listening..."; sr.Settings.ExampleText = "Ex. \"electronics\""; sr.Settings.ReadoutEnabled = false; sr.Settings.ShowConfirmation = false; SpeechRecognitionUIResult result = await sr.RecognizeWithUIAsync(); if (result != null && result.ResultStatus == SpeechRecognitionUIStatus.Succeeded && result.RecognitionResult != null && result.RecognitionResult.TextConfidence != SpeechRecognitionConfidence.Rejected) { text = result.RecognitionResult.Text; } catch return text; private async Task Speak(string text) SpeechSynthesizer tts = new SpeechSynthesizer(); await tts.SpeakTextAsync(text); private void NavigateToUrl(string url) WebBrowserTask task = new WebBrowserTask(); task.Uri = new Uri(url, UriKind.Absolute); task.Show(); } }
http://blogs.msdn.com/b/robch/archive/2012/11/06/how-many-lines-of-code-does-it-take-to-build-a-complete-wp8-speech-app.aspx
CC-MAIN-2015-27
refinedweb
2,206
50.12
The BlockingQueue interface of the Java Collections framework extends the Queue interface. It allows any operation to wait until it can be successfully performed. For example, if we want to delete an element from an empty queue, then the blocking queue allows the delete operation to wait until the queue contains some elements to be deleted. Classes that Implement BlockingQueue Since BlockingQueue is an interface, we cannot provide the direct implementation of it. In order to use the functionality of the BlockingQueue, we need to use classes that implement it. How to use blocking queues? We must import the java.util.concurrent.BlockingQueue package in order to use BlockingQueue. // Array implementation of BlockingQueue BlockingQueue<String> animal1 = new ArraryBlockingQueue<>(); // LinkedList implementation of BlockingQueue BlockingQueue<String> animal2 = new LinkedBlockingQueue<>(); Here, we have created objects animal1 and animal2 of classes ArrayBlockingQueue and LinkedBlockingQueue, respectively. These objects can use the functionalities of the BlockingQueue interface. Methods of BlockingQueue Based on whether a queue is full or empty, methods of a blocking queue can be divided into 3 categories: Methods that throw an exception add()- Inserts an element to the blocking queue at the end of the queue. Throws an exception if the queue is full. element()- Returns the head of the blocking queue. Throws an exception if the queue is empty. remove()- Removes an element from the blocking queue. Throws an exception if the queue is empty. Methods that return some value offer()- Inserts the specified element to the blocking queue at the end of the queue. Returns falseif the queue is full. peek()- Returns the head of the blocking queue. Returns nullif the queue is empty. poll()- Removes an element from the blocking queue. Returns nullif the queue is empty. More on offer() and poll() The offer() and poll() method can be used with timeouts. That is, we can pass time units as a parameter. For example, offer(value, 100, milliseconds) Here, - value is the element to be inserted to the queue - And we have set a timeout of 100 milliseconds This means the offer() method will try to insert an element to the blocking queue for 100 milliseconds. If the element cannot be inserted in 100 milliseconds, the method returns false. Note: Instead of milliseconds, we can also use these time units: days, hours, minutes, seconds, microseconds and nanoseconds in offer() and poll() methods. Methods that blocks the operation The BlockingQueue also provides methods to block the operations and wait if the queue is full or empty. put()- Inserts an element to the blocking queue. If the queue is full, it will wait until the queue has space to insert an element. take()- Removes and returns an element from the blocking queue. If the queue is empty, it will wait until the queue has elements to be deleted. Suppose, we want to insert elements into a queue. If the queue is full then the put() method will wait until the queue has space to insert elements. Similarly, if we want to delete elements from a queue. If the queue is empty then the take() method will wait until the queue contains elements to be deleted. Implementation of BlockingQueue in ArrayBlockingQueue import java.util.concurrent.BlockingQueue; import java.util.concurrent.ArrayBlockingQueue; class Main { public static void main(String[] args) { // Create a blocking queue using the ArrayBlockingQueue BlockingQueue<Integer> numbers = new ArrayBlockingQueue<>(5); try { // Insert element to blocking queue numbers.put(2); numbers.put(1); numbers.put(3); System.out.println("BLockingQueue: " + numbers); // Remove Elements from blocking queue int removedNumber = numbers.take(); System.out.println("Removed Number: " + removedNumber); } catch(Exception e) { e.getStackTrace(); } } } Output BlockingQueue: [2, 1, 3] Removed Element: 2 To learn more about ArrayBlockingQueue, visit Java ArrayBlockingQueue. Why BlockingQueue? In Java, BlockingQueue is considered as the thread-safe collection. It is because it can be helpful in multi-threading operations. Suppose one thread is inserting elements to the queue and another thread is removing elements from the queue. Now, if the first thread runs slower, then the blocking queue can make the second thread wait until the first thread completes its operation.
https://www.programiz.com/java-programming/blockingqueue
CC-MAIN-2021-39
refinedweb
680
57.16
Find Questions & Answers Can't find what you're looking for? Visit the Questions & Answers page! We have about a dozen function groups and 30-40 function modules which all need to commonly reference upwards of 100 constants. We currently do this: include /namespace/constants But, the extended code check tells me it is bad practice to have an include used in more than one function group. We're not going to rewrite all of our code to sit in a single function group, like it proposes. As we move to object oriented development, we face the same question. Is it best to put all of the constants into a single class and then reference it in our code like this: if ls_knvp-parvw = /namespace/cl_constant=>gc_soldto_function That feels cumbersome, but manageable. Maybe we can alias it or something? I'd like to use a type group, but we can't put it in a namespace and the usage seems to be deprecated? Basically, I want something easy to use that can be referenced from multiple classes or function groups. Shorter naming is preferable. Passing the extended syntax check is very important. Thanks for your suggestions! Having all your constants in one place, no matter whether you're writing something for Finance or for HR, is not good practice. The idea should be to be as specific as possible. Personally, I don't see anything wrong with having e.g. CONSTANTS c_end_of_time TYPE d VALUE '99991231' defined multiple times. If I didn't do it in a class, then I'd do it in an interface - but one constants interface per application. Having your constants defined in one or a few different places, used by many applications, to my mind is close to defining a global variable v_count for all your integer needs. I.e. not a good idea. Group your constants into related groups, define them in interfaces, then use aliases in the class definitions. I do agree with your statement that having all the constants in one place is a bad idea. But, I don't like the idea of having an interface per application, declaring constants for just that application. At that point you are just moving the program constants to a class (or interface) for no real gain and just another element to deal with. Reading Horst Keller's blog about enumeration, in that example, the shirt sizes are something that I would like declared in one place for the entire project / dev class / package. First that would be the definition of code reuse. Second, think about refactoring, in his example if a size XS is added, it has to be done in one place only instead of trying to find every place it was declared. Third, if the XS wasn't supposed to be there, you don't want another developer accidentally adding it to just one application. Personally I like keeping reusable ones in a properly marked globally accessible place for the dev class (as a class or interface) and application specific ones within the program. If my program is primarily class based, then it is an attribute - and if I don't anticipate any one else using it for the same purpose, I bury it in the Private section, so that any future refactoring won't impact inadvertent use in another program. I was using the word application deliberately to mean "connected group of programs". If that's your entire project than fine. The point that if you have a program for shirts, and another for coal mining, they shouldn't share the constants interface. Depending on your package regime, it might make sense to have one interface per package. Or it might not. It all depends on the granularity. I'd hate to see a constants interface for, e.g. FI applications. At some point "reusable" becomes "global" and global is bad! Burying single use constants in private is what I do also. However, if the class is like an API, and will be used by many other classes, then obviously there will be public constants. And if the class has many subclasses, there will be protected ones as well. :-) Thanks for the clarification of "application" - you seem to follow the dictionary meaning ;) In that case, we are both on the same page. And really bad, while we are commenting on it - is carrying around an include with every alphabet declared c_a = 'A', c_b = 'B', etc.... I have been in situations where I was forced to toe that line just so that I don't use... IF my_var = 'X'. OR IF my_var = c_sun_is_shining. they want me to use IF my_var = c_x. You've seen the same code as I have. I always say a constant should describe it's meaning and not it's content. And anyway - what's wrong with Abap_True ? ;-) Rich a constant should describe it's meaning and not it's content. Thank you, I hope you don't mind me stealing this phrase! I tend to forgive not using ABAP_TRUE because in past versions of ABAP, it was available only if you explicitly declared the type-pool. I guess that is not an excuse any more! *muttering darkly* it wasn't an excuse then... :-) The ABAP programming guidelines say: put'em to a class or interface. And hey, since 7.51 we have enumerations ... oooo - enumerations. Lovely. ;-) I like your blog better I guess I didn't dig hard enough on the help site :) You can create instance of your "constant class". And it will shorten code for usage :-) Like this: DATA: lo_cs1 TYPE REF TO /namespace/cl_long_class_name. CREATE OBJECT lo_cs1. IF l_variable = lo_cs1=>c_constant. "TRUE ENDIF. Thank you - that is a good, simple suggestion I dislike it. An instantiation without reason. What if the constructor is resource intensive? You're relying on the developer checking/knowing. No - I don't think it is a good idea. It could be used to make the code more readable, but at the expense of meaning. How about an interface? You aren't dealing with a constructor then. DATA: lo_cs1 TYPE REF TO /namespace/if_long_interface_name. CREATE OBJECT lo_cs1. ... lo_cs1=>constant. You can't instantiate an interface. You won't get past the syntax check. Sorry, my mistake... what caught me off guard was it did pass syntax check without the instantiation - but that doesn't work either! It was intended only for "constant class". But nevertheless I agree. It can be confusing or problem making It must be ... lo_cs1->c_constant ... n'est-ce pas ? C'est. :-) Hello Ray, as a constant is constant, so you are free to define both a global constant à la /namespace/cl_constant=>gc_soldto_function and a local scope redefinition: CONSTANTS c_soldto_function TYPE ... VALUE /namespace/cl_constant=>gc_soldto_function. JNN I think that misses the whole point of having a single place that the constants are declared. In your example you are re-declaring the constant for local use. I have found a slightly better way - so that you are declaring an ALIAS and not actually carrying around new declarations. INTERFACE const. INTERFACES /namespace/if_long_interface_name. ALIASES sun FOR /namespace/if_long_interface_name~sun. ALIASES moon FOR /namespace/if_long_interface_name~moon. ... ENDINTERFACE. ... my_variable = const=>sun. my_variable2 = const=>moon. I still don't like even this method, because it looses the elegance and you have to determine which (or all) the constants to declare the alias for. If the OP truly has a 100 constants that need to stay together in one class/interface/include (whatever), that is a lot of redeclaring! If a bit of extra effort is made to make the code readable, I don't mind if a developer has to do a lot of redeclaration. btw - I'd alias as c_sun, c_moon. Here using prefixes to indicate usage, not type. const=>c_sun. Little redundant, no? Here using prefixes to indicate usage, not type. I guess you have the same aversion to Hungarian notation (I just learned about its source yesterday in a Coffee Corner post by Jelena) as I do. With the alias, there's no const=>c_sun - just c_sun Oh, you are talking about the method Jacques Nomssi described! I was talking about using the keyword ALIASES like my example... In that case yes, no argument there. In my example, I was locally redeclaring the interface and Aliasing the used constants with that local redeclaration. You will have to use the const=> there. Personally, I like this trick of aliasing constants (eventually using a custom-defined macro like "mac_alias /namespace/if_long_interface_name : sun, moon") . Of course, I would prefer enumerations from ABAP 7.50.
https://answers.sap.com/questions/138892/abap-oop-constant-declaration-best-practice.html
CC-MAIN-2018-30
refinedweb
1,433
66.13
Tools for working with the OFX (Open Financial Exchange) file format Project description, this. Example Usage Here’s a sample program from ofxparse import OfxParser with codecs.open('file.ofx') as fileobj: ofx = OfxParser.parse(fileobj) ofx.accounts # Sample .ofx and .qfx files are very useful. If you want to help us out, please edit all identifying information from the file and then email it to jseutter dot ofxparse at gmail dot com. Development - Prerequisites:: - # Ubuntu sudo apt-get install python-beautifulsoup python-nose python-coverage-test-runner # Python 3 (pip) pip install BeautifulSoup4 six lxml nose coverage # Python 2 (pip) pip install BeautifulSoup six nose coverage The six package is required for python 2.X compatibility Tests: Simply running the nosetests command should run the tests. nosetests If you don’t have nose installed, the following might also work: python -m unittest tests.test_parse Test Coverage Report: coverage run -m unittest tests.test_parse # text report coverage report # html report coverage html firefox htmlcov/index.html License ofxparse is released under an MIT license. See the LICENSE file for the actual license text. The basic idea is that if you can use Python to do what you are doing, you can also use this library. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/ofxparse/0.17/
CC-MAIN-2018-30
refinedweb
232
57.16
people...If i need to read a sms from the m View Tutorial By: Omarsi12 at 2009-09-06 17:11:15 2. The result shown in the left shift is wrong... View Tutorial By: krishnalal.K.K at 2008-05-21 03:05:07 3. I am new in Java and got good information to under View Tutorial By: Umesh Kumar at 2012-01-25 08:42:07 4. The code for CharArrayReader is not working. So, w View Tutorial By: Maya Joshi at 2012-06-23 14:30:26 5. @ anub : This is because you need to add external View Tutorial By: Sushil at 2009-09-09 12:24:23 6. Superb many thanks to you Mashoud. View Tutorial By: orchidouest at 2009-05-10 12:58:53 7. sir i need some simple example of jdbc i cannot un View Tutorial By: Velkumar.s at 2013-05-15 07:08:37 8. Simpler example: import java.io.*; View Tutorial By: Joseph Harner at 2011-12-04 23:20:48 9. Very good examples are given by you on overloading View Tutorial By: Ankur Rautela at 2011-11-03 10:22:46 10. can any body explain me how to execute the program View Tutorial By: nikhil at 2009-06-04 23:56:11
https://www.java-samples.com/showcomment.php?commentid=34852
CC-MAIN-2020-05
refinedweb
216
77.53
October 14, 2015 Bi. To update to or install Bioconductor 3.2: Install R 3.2. Bioconductor 3.2 has been designed expressly for this version of R. Follow the instructions at . There are 80 new packages in this release of Bioconductor. ABAEnrichment - The package ABAEnrichment is designed to test for enrichment of user defined candidate genes in the set of expressed genes in different human brain regions. The core function ‘aba_enrich’ integrates the expression of the candidate gene set (averaged across donors) and the structural information of the brain using an ontology, both provided by the Allen Brain Atlas project. ‘aba_enrich’ interfaces the ontology enrichment software FUNC to perform the statistical analyses. Additional functions provided in this package like ‘get_expression’ and ‘plot_expression’ facilitate exploring the expression data. acde - ‘Multivariate Method for Inferential Identification of Differentially Expressed Genes in Gene Expression Experiments’ by J. P. Acosta, L. Lopez-Kleine and S. Restrepo (2015, pending publication). AnnotationHubData - These recipes convert a wide variety and a growing number of public bioinformatic data sets into easily-used standard Bioconductor data structures. BBCAnalyzer -. biobroom -. caOmicsV - caOmicsV package provides methods to visualize multi-dimentional cancer genomics data including of patient information, gene expressions, DNA methylations, DNA copy number variations, and SNP/mutations in matrix layout or network layout. CausalR - Causal Reasoning algorithms for biological networks, including predictions, scoring, p-value calculation and ranking ChIPComp - ChIPComp detects differentially bound sharp binding sites across multiple conditions considering matching control. CNPBayes - Bayesian hierarchical mixture models for batch effects and copy number. CNVPanelizer -. DAPAR - This package contains a collection of functions for the visualisation and the statistical analysis of proteomic data. DChIPRep - The DChIPRep package implements a methodology to assess differences between chromatin modification profiles in replicated ChIP-Seq studies as described in Chabbert et. al -. DeMAND - DEMAND predicts Drug MoA by interrogating a cell context specific regulatory network with a small number (N >= 6) of compound-induced gene expression signatures, to elucidate specific proteins whose interactions in the network is dysregulated by the compound. destiny - Create and plot diffusion maps DiffLogo - DiffLogo is an easy-to-use tool to visualize motif differences. DNABarcodes -. dupRadar - Duplication rate quality control for RNA-Seq datasets. ELMER - ELMER is designed to use DNA methylation and gene expression from a large number of samples to infere regulatory element landscape and transcription factor network in primary tissue. EnrichedHeatmap -. erma - Software and data to support epigenomic road map adventures. eudysbiome - eudysbiome a package that permits to annotate the differential genera as harmful/harmless based on their ability to contribute to host diseases (as indicated in literature) or. fCI - (f. FindMyFriends -. gcatest - GCAT is an association test for genome wide association studies that controls for population structure under a general class of trait. models. GeneBreak - Recurrent breakpoint gene detection on copy number aberration profiles. genotypeeval - Takes in a gVCF or VCF and reports metrics to assess quality of calls. GEOsearch -. GUIDEseq - The package implements GUIDE-seq analysis workflow including functions for obtaining unique cleavage events, estimating the locations of the cleavage sites, aka, peaks, merging estimated cleavage sites from plus and minus strand, and performing off target search of the extended regions around cleavage sites. Guitar - The package is designed for visualization of RNA-related genomic features with respect to the landmarks of RNA transcripts, i.e., transcription starting site, start codon, stop codon and transcription ending site. hierGWAS - Testing individual SNPs, as well as arbitrarily large groups of SNPs in GWA studies, using a joint model of all SNPs. The method controls the FWER, and provides an automatic, data-driven refinement of the SNP clusters to smaller groups or single markers. HilbertCurve - Hilbert curve is a type of space-filling curves that fold one dimensional axis into a two dimensional space, but with still keep the locality. This package aims to provide a easy and flexible way to visualize data through Hilbert curve. iCheck - QC pipeline and data analysis tools for high-dimensional Illumina mRNA expression data. iGC - This package is intended to identify differentially expressed genes driven by Copy Number Alterations from samples with both gene expression and CNA data. Imetagene - This package provide a graphical user interface to the metagene package. This will allow people with minimal R experience to easily complete metagene analysis. INSPEcT - INSPEcT (INference of Synthesis, Processing and dEgradation rates in Time-Course experiments) analyses 4sU-seq and RNA-seq time-course data in order to evaluate synthesis, processing and degradation rates and asses via modeling the rates that determines changes in mature mRNA levels. IONiseR -. ldblock - Define data structures for linkage disequilibrium measures in populations. LedPred -. lfa - LFA is a method for a PCA analogue on Binomial data via estimation of latent structure in the natural parameter. LOLA - Provides functions for testing overlap of sets of genomic regions with public and custom region set (genomic ranges) databases. This make is possible to do automated enrichment analysis for genomic region sets, thus facilitating interpretation of functional genomics and epigenomics data. MEAL - Package to integrate methylation and expression data. It can also perform methylation or expression analysis alone. Several plotting functionalities are included as well as a new region analysis based on redundancy analysis. Effect of SNPs on a region can also be estimated. metagenomeFeatures -. metaX - The package provides a integrated pipeline for mass spectrometry-based metabolomic data analysis. It includes the stages peak detection, data preprocessing, normalization, missing value imputation, univariate statistical analysis, multivariate statistical analysis such as PCA and PLS-DA, metabolite identification, pathway analysis, power analysis, feature selection and modeling, data quality assessment. miRcomp - Based on a large miRNA dilution study, this package provides tools to read in the raw amplification data and use these data to assess the performance of methods that estimate expression from the amplification curves. mirIntegrator - Tools for augmenting signaling pathways to perform pathway analysis of microRNA and mRNA expression levels. miRLAB - Provide tools exploring miRNA-mRNA relationships, including popular miRNA target prediction methods, ensemble methods that integrate individual methods, functions to get data from online resources, functions to validate the results, and functions to conduct enrichment analyses. motifbreakR -). myvariant - MyVariant.info is a comprehensive aggregation of variant annotation resources. myvariant is a wrapper for querying MyVariant.info services NanoStringDiff -. OGSA - OGSA provides a global estimate of pathway deregulation in cancer subtypes by integrating the estimates of significance for individual pathway members that have been identified by outlier analysis. OperaMate - OperaMate is a flexible R package dealing with the data generated by PerkinElmer’s Opera High Content Screening System. The functions include the data importing, normalization and quality control, hit detection and function analysis. Oscope -. Path2PPI - Package to predict pathway specific protein-protein interaction (PPI) networks in target organisms for which only a view information about PPIs is available. Path2PPI uses PPIs of the pathway of interest from other well established model organisms to predict a certain pathway in the target organism. Path2PPI only depends on the sequence similarity of the involved proteins. pathVar - This package contains the functions to find the pathways that have significantly different variability than a reference gene set. It also finds the categories from this pathway that are significant where each category is a cluster of genes. The genes are separated into clusters by their level of variability. PGA - This package provides functions for construction of customized protein databases based on RNA-Seq data, database searching, post-processing and report generation. This kind of customized protein database includes both the reference database (such as Refseq or ENSEMBL) and the novel peptide sequences form RNA-Seq data. Prize -. Prostar - This package provides a GUI interface for DAPAR. ProteomicsAnnotationHubData - These recipes convert a variety and a growing number of public proteomics data sets into easily-used standard Bioconductor data structures. RareVariantVis - Genomic variants can be analyzed and visualized using many tools. Unfortunately, number of tools for global interrogation of variants is limited. Package RareVariantVis aims to present genomic variants (especially rare ones) in a global, per chromosome way. Visualization is performed in two ways - standard that outputs png figures and interactive that uses JavaScript d3 package. Interactive visualization allows to analyze trio/family data, for example in search for causative variants in rare Mendelian diseases. rCGH - A comprehensive pipeline for analyzing and interactively visualizing genomic profiles generated through Agilent and Affymetrix microarrays. As inputs, rCGH supports Agilent dual-color Feature Extraction files (.txt), from 44 to 400K, and Affymetrix SNP6.0 and cytoScan probeset.txt, cychp.txt, and cnchp.txt files, exported from ChAS or Affymetrix Power Tools. This package takes over all the steps required for a genomic profile analysis, from reading the files to the segmentation and genes annotations, and provides several visualization functions (static or interactive) which facilitate profiles interpretation. Input files can be in compressed format, e.g. .bz2 or .gz. RCy3 - Vizualize, analyze and explore graphs, connecting R to Cytoscape >= 3.2.1. RiboProfiling -. rnaseqcomp - Several quantitative and visualized benchmarks for RNA-seq quantification pipelines. Two-replicate quantifications for genes, transcripts, junctions or exons by each pipeline with nessasery meta information should be organizd into numeric matrix in order to proceed the evaluation. ropls -). RTCGA -. RTCGAToolbox -. sbgr - R client for Seven Bridges Genomics API. SEPA -. SICtools -. SISPA - Sample Integrated Gene Set Analysis (SISPA) is a method designed to define sample groups with similar gene set enrichment profiles. SNPhood -. subSeq - Subsampling of high throughput sequencing count data for use in experiment design and analysis. SummarizedExperiment - The SummarizedExperiment container contains one or more assays, each represented by a matrix-like object of numeric or other mode. The rows typically represent genomic ranges of interest and the columns represent samples. SWATH2stats - This package is intended to transform SWATH data from the OpenSWATH software into a format readable by other statistics packages while performing filtering, annotation and FDR estimation. synlet -. TarSeqQC - The package allows the representation of targeted experiment in R. This is based on current packages and incorporates functions to do a quality control over this kind of experiments and a fast exploration of the sequenced regions. An xlsx file is generated as output. TCGAbiolinks - The aim of TCGAbiolinks is : i) facilitate the TCGA open-access data retrieval, ii) prepare the data using the appropriate pre-processing strategies, iii) provide the means to carry out different standard analyses and iv) allow the user to download a specific version of the data and thus to easily reproduce earlier research results. In more detail, the package provides multiple methods for analysis (e.g., differential expression analysis, identifying differentially methylated regions) and methods for visualization (e.g., survival plots, volcano plots, starburst plots) in order to easily develop complete analysis pipelines. traseR - traseR performs GWAS trait-associated SNP enrichment analyses in genomic intervals using different hypothesis testing approaches, also provides various functionalities to explore and visualize the results. variancePartition - Quantify and interpret multiple sources and biological and technical variation in gene expression experiments. Uses linear mixed model to quantify variation in gene expression attributable to individual, tissue, time point, or technical variables. XBSeq -. Package maintainers can add NEWS files describing changes to their packages since the last release. The following package NEWS is available: Changes in version 0.99.0: NEW FEATURES Changes in version 2.9.3 (2015-05-30): Fixed Note about require(Cairo). Fixed failure with changes in ffbase and not exporting min.ff and max.ff Changes in version 2.9.2 (2015-05-07): Changes in version 2.9.1 (2015-04-30): Added the CITATION file (which was never uploaded to the svn repos) Added in one reference in pSegement.Rd Changes in version 1.41.7 (2015-09-14): Changes in version 1.41.6 (2015-07-29): Changes in version 1.41.5 (2015-06-17): Changes in version 1.41.4 (2015-05-26): Changes in version 1.41.3 (2015-05-13): Changes in version 1.41.2 (2015-05-05): BUG FIX/ROBUSTNESS: readCelHeader() and readCel() would core dump R/affxparser if trying to read multi-channel CEL files (Issue #16). Now an error is generated instead. Multi-channel CEL files (e.g. Axiom) are not supported by affxparser. Thanks to Kevin McLoughlin (Lawrence Livermore National Laboratory, USA) for reporting on this. BUG FIX/ROBUSTNESS: readCelHeader() and readCel() on corrupt CEL files could core dump R/affparser (Issues #13 & #15). Now an error is generated instead. Thanks to Benilton Carvalho (Universidade Estadual de Campinas, Sao Paulo, Brazil) and Malte Bismarck (Martin Luther University of Halle-Wittenberg) for reports. Changes in version 1.41.1 (2015-04-25): Changes in version 1.41.0 (2015-04-16): Changes in version 1.8.0: NEW FEATURES genotype accessor requires reference and alternative allele information genotype setter requires reference allele information reference fraction is now accessed through fraction(ASEset, top.allele.criteria=”ref”) for a more robust performance unit tests are now covering the most crucial calculations, such as e.g. fraction, frequency, summary, mapbias and phase specific calculations. Changes in version 1.47: DEFUNCT probesByLL is now defunct; use AnnotationDbi::select() instead. blastSequences supports multiple sequence queries; use as=”data.frame” for output. Improve blastSequences strategy for result retrieval, querying the appropriate API for status every 10 seconds after initial estimated processing time. Changes in version 1.31: NEW FEATURES and API changes columns() and keytypes() sort their return values. ls() on a Bimap option returns keys in sort()ed order, by default * * Changes in version 2.1.26: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.0.0: BUG FIXES Changes in version 0.0.214: NEW FEATURES Have added vcf files from the following genome builds for humans “human_9606/VCF/clinical_vcf_set/”, “human_9606_b141_GRCh37p13/VCF/”, “human_9606_b142_GRCh37p13/VCF/”, “human_9606_b142_GRCh37p13/VCF/clinical_vcf_set/” For each genome build, where available, the following VCF file formats are available a) all.vcf.gz b) all_papu.vcf.gz c) common_all.vcf.gz d) clinvar.vcf.gz e) clinvar_papu f) common_and_clinical g) common_no_known_medical_impact The user can refer to for VCF file type formats Changes in version 1.9.1: Add NEWS file Add code to compute expression variability measure from Alemu, et al. NAR 2014. Changes in version 2.99.0 (2015-10-06): Changes in version 2.1.1: bug fix in texpr, eexpr, iexpr, and gexpr methods; they no longer crash with single-replicate objects Small documentation clarifications Changes in version 1.0.0: Changes in version 1.20.0: BUG FIXES Changes in version 1.8.0: R Markdown templates for Bioconductor HTML and PDF documents Suggest ‘rmarkdown’ as the default engine for .Rmd documents Simplified use with ‘rmarkdown’ - no need to include a separate code chunk calling ‘BiocStyle::markdown’ anymore Functions facilitating the inclusion of document compilation date and package version in the .Rmd document header Changes in version 1.37.2: NEW FEATURES Changes in version 1.9.8: BUG FIXES Changes in version 1.5: new function strandCollapse for collapsing forward and reverse strand data to be unstranded. Updated read.bismark() to support the cytosine report files; both formats are supported. Other minor updates (mostly internal) to read.bismark(). Greatly improved documentation of this function, paying particular attention to differences in file formats between versions of Bismark. Changes in version 1.25.3: BUG FIXES Changes in version 1.25.2: USER VISIBLE CHANGES Changes in version 1.25.1: BUG FIXES Changes in version 1.1.0: BUG FIXES Fixed bug in formatting m/z labels affecting R 3.2.2 Removed dependency on ‘fields’ because ‘maps’ is broken on Windows Changes in version 3.3.8: Changes in version 3.3.7: FIX BUGS Changes in version 3.3.6: NEW FEATURE add new function featureAlignedSignal, featureAlignedDistribution, featureAlignedHeatmap, pie1 add new dataset HOT.spots, wgEncodeTfbsV3 update annoGR class update vignettes Changes in version 3.3.5: NEW FEATURE remove all the RangedData add annoGR class update vignettes Changes in version 3.3.4: NEW FEATURE toGRanges from MACS2, narrowPeak. calculate the p value of overlapping peaks by reagioneR update documentation Changes in version 3.3.3: NEW FEATURE Changes in version 3.3.2: NEW FEATURE Changes in version 3.3.1: NEW FEATURE Changes in version 1.5.11: remove ellipsis parameter in enrichPeakOverlap function and extend it to support GRanges objects <2015-10-08, Thu> + see fixed the issue, <2015-10-05, Mon> update GEO info, now contains >18,000 bed file information <2015-09-24, Thu> Changes in version 1.5.10: dropAnno function, eg. drop nearest gene annotation that far from TSS (>10k). <2015-09-17, Thu> + see + add parameter distanceToTSS_cutoff to enrichAnnoOverlap use base::subset in plotDistToTSS instead of subsetting data within geom_bar <2015-09-17, Thu> + see + subset parameter in layer will be removed in next release of ggplot2. Changes in version 1.5.9: bug fixed of enrichAnnoOverlap <2015-08-26, Wed> change parameter order.matrix to order.by in upsetplot to meet the change of UpSetR pkg <2015-08-26, Wed> Changes in version 1.5.8: Changes in version 1.5.7: add vennpie parameter in upsetplot <2015-07-20, Mon> upsetplot function for csAnno object <2015-07-20, Mon> Changes in version 1.5.6: update citation info <2015-07-09, Thu> BED file +1 shift for BED coordinate system start at 0 <2015-07-07, Tue> Changes in version 1.5.5: Changes in version 1.5.4: Changes in version 1.5.3: Changes in version 1.5.1: Changes in version 1.4.0: Weighted voting mode that uses the distance from an observation to the nearest crossover point of the class densities added. Bartlett Test selection function included. New class SelectResult. rankPlot and selectionPlot can additionally work with lists of SelectResult objects. All feature selection functions now return a SelectResult object or a list of them. priorSelection is a new selection function for using features selected in a prior cross validation for a new dataset classification. New weighted voting mode, where the weight is the distance of the x value from the nearest crossover point of the two densities. Useful for predictions with skewed features. Changes in version 1.7.2: Changes in version 1.7.1: BUG FIXES Changes in version 1.8.0: NEW FEATURES Changes in version 2.3.8: dropGO function <2015-09-24, Thu> + see use_internal_data = TRUE in enrichKEGG example to speedup compilation of vignette and prevent error when online is not available <2015-09-23, Wed> Changes in version 2.3.7: bug fixed in sorting pvalue of compareClusterResult. <2015-09-16, Wed> For compareCluster(fun=groupGO), there is no pvalue, use Count for sorting use_internal_data= TRUE in enrichKEGG and gseKEGG demonstrated in vignette due to the issue <2015-08-26, Wed> Changes in version 2.3.6: merge_result function <2015-07-15, Wed> add citation of ChIPseeker <2015-07-09, Thu> add ‘Functional analysis of NGS data’ section in vignette <2015-06-29, Mon> update vignette <2015-06-24, Wed> Changes in version 2.3.5: Changes in version 2.3.4: Changes in version 2.3.3: Changes in version 2.3.2: bug fixed of build_Anno <2015-05-07, Thu> add plotGOgraph function <2015-05-05, Tue> Changes in version 2.3.1: remove import RDAVIDWebService <2015-04-29, Wed> update buildGOmap <2015-04-29, Wed> remove import KEGG.db <2015-04-29, Wed> Changes in version 0.99.12: Changes in version 0.99.11: Changes in version 0.99.10: Changes in version 0.99.9: Add fix for inconsistencies Remove CheckSignificance documentation for function not available Fix for vignette plot inconsistency bootstrap count Increment version number Changes in version 0.99.8: Changes in version 0.99.7: Specificity improvements BUG FIXES Improve code Styling Add documentation Changes in version 0.99.6: Initial Bioconductor release BUG FIXES Add required imports at NAMESPACE Improve code Styling Add documentation Remove unnecessary files Changes in version 1.38.0: Fixed error reading Codelink files with option type=”Raw” or type=”Norm”. Now readCodelinkSet() accepts “path=” as argument to enable reading files from a target directory. Changes in version 1.2.0 (2015-09-01): update exmaple dataset. add drug repositioning gene sets and anlaysis. Changes in version 1.3.4: Changes in version 1.3.3: Changes in version 1.3.2: Changes in version 1.5.1: oncoPrint: there are default graphics if type of alterations is less than two. anno_*: get rid of lazy loading Changes in version 2.0.6 (2015-06-15): Changes in version 2.0.5 (2015-06-10): Changes in version 2.0.4 (2015-06-05): Changes in version 2.0.3 (2015-05-06): Changes in version 2.0.2 (2015-04-24): Changes in version 2.0.1 (2015-04-18): Implemented resetting the original work directory upon exit of CopywriteR functions Corrected R dependency to version 3.2 Updated DESCRIPTION file Fixed bug resulting in failure to calculate loesses RELEASE (version 2.0) Changes in version 1.3.2: Modified the fields title and description of the file DESCRIPTION to cerrect the title in the citation of the package. Modified the field title of the file COSNet-package.Rd USER VISIBLE CHANGES BUG FIXES PLANS Changes in version 1.3.14: Added clusterFDR() function to compute the FDR for clusters of DB windows. Added checkBimodality() function to compute bimodality scores for regions. Modified normalize(), asDGEList() to allow manual specification of library sizes. Switched from normalizeCounts(), normalize() to S4 method normOffsets(). Modified default parameter specification in strandedCounts(), to avoid errors. Switched to warning from error when a restricted chromosome is specified in extractReads(). Modified extractReads() interface with improved support for extended read and paired read extraction. Added normalization options to filterWindows() when using control samples. Fixed bug for proportional filtering in filterWindows(). Allowed correlateReads() to accept paired-end specification when extracting data. Added maximizeCcf() function to estimate the average fragment length. Added support for strand-specific overlapping in detailRanges(). Increased the fidelity of retained information in dumped BAM file from dumpPE(). Modified strand specification arguments for profileSites(), allowed reporting of individual profiles. Removed param= specification from wwhm(). Switched to RangedSummarizedExperiment conventions for all relevant functions. Switched to mapqFilter for scanBam() when filtering on mapping quality. Added tests for previously untested functions. Slight updates to documentation, user’s guide. Changes in version 1.8.2: UPDATED FUNCTIONS Changes in version 1.7.2: NEW FEATURES add html documentation customize the x axis as the amino acid physical position update the documentation BUG FIXES Changes in version 1.7.1: NEW FEATURES BUG FIXES Changes in version 1.17.1: Changes in version 0.99.5: Changes in version 0.99.4: Changes in version 0.99.3: updated the python script to be executable python script help in the vignette is now “live” added negative tests for empty and flawed input in test_dataInput.R small fixes in the tests and the source code Changes in version 0.99.2: updated general package help small fixes in the vignette updated newsfile Changes in version 0.99.1: fixed typos in the documentation clarified dimensions of the count tables in the vignette added additional checks to the data import functions Changes in version 0.99.0: Changes in version 1.24.0: QuantStudio (LifeTechnologies) output files are now supported All packages that were ‘depended on’ are not explicitly ‘imported from’. This has the effect that scripts depending on ddCt may explicitly load libraries (such as RColorBrewer and Biobase) to use functions therein. ColMap is now simplied to contain only three slots: Sample, Detector, and Ct. Class ‘CSVReader’ is now renamed into ‘TSVReader’ since it supports rather tab-delimited file, instead of comma-separated file A direct way to convert a data.frame to a InputFrame object is documented. See ‘example(InputFrame)’.”) 1.4.0: 07-03-2015 Lorena Pantano lorena.pantano@gmail.com FIX SOME TEXT IN VIGNETTE, AND CLEAN DEPENDS FLAG Changes in version 1.9.0: Changes in version 1.3.3: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.3.2: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.3.1: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.3.1: NEW FEATURES Changes in version 1.3.4: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.3.3: NEW FEATURES Changes in version 1.3.2: NEW FEATURES Changes in version 1.3.1: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.10.0: Added MLE argument to plotMA(). Added normTransform() for simple log2(K/s + 1) transformation, where K is a count and s is a size factor. When the design contains an interaction, DESeq() will use betaPrior=FALSE. This makes coefficients easier to interpret. Independent filtering will be less greedy, using as a threshold the lowest quantile of the filter such that the number of rejections is within 1 SD from the maximum. See ?results. summary() and plotMA() will use ‘alpha’ from results(). Changes in version 1.16.0: Roll up bugfixes dba.plotHeatmap returns binding sites in row order Changes in version 1.1.17: Renamed normalize() to normOffsets(). Added library size specification to DIList methods normOffsets(), asDGEList(). Fixed bugs under pathological settings in plotPlaid(), plotDI(), rotPlaid(), rotDI(). Optimized C++ code for connectCounts(), squareCounts(). Streamlined various R utilities used throughout all functions. Added iter_map.py to inst/python, for iterative mapping of DNase Hi-C data. Added the neighborCounts() function, for simultaneous read counting and enrichment calculation. Added exclude for enrichedPairs(), to provide an exclusion zone in the local neighborhood. Switched default colour in rotPlaid(), plotPlaid() to black. Added compartmentalize() function to identify genomic compartments. Added domainDirections() function to help identify domains. Modified correctedContact() to allow distance correction and report factorized probabilities directly. Modified marginCounts() function for proper single-end-like treatment of Hi-C data. Extended clusterPairs() to merge bin pairs from multiple DILists. Switched to reporting ranges directly from boxPairs(), added support for minimum bounding box output. Modified consolidatePairs() to accept index vectors for greater modularity. Added reference argument for large bin pairs, in filterDirect() and filterTrended(). Added filterDiag() convenience function for filtering of (near-)diagonal bin pairs. Slight change to preparePairs() diagnostic reports when dedup=FALSE, and for unpaired reads. Added option for a distance-based threshold to define invalid chimeras in preparePairs(). Updated documentation, tests and user’s guide. Added diffHic paper entry to CITATION. Changes in version 2.7.12: Changes in version 2.7.11: check whether input geneList is sorted <2015-09-22, Tue> order first followed by showCategory subsetting in barplot <2015-09-08, Tue> + bug fixed in EXTID2NAME, keytype is TAIR for arabidopsis and ORF for malaria <2015-08-26, Wed> Changes in version 2.7.10: add Giovanni Dall’Olio as contributor in author list. <2015-07-21, Tue> update NCG data to version cancergenes_4.9.0_20150720 contributed by Giovanni Dall’Olio. <2015-07-21, Tue> geneInCategory may simplify to vector by R in very rare case, which violate the assumption of list in S4 validation checking. This issue was fixed. <2015-07-19, Sun> Changes in version 2.7.9: add citation of ChIPseeker <2015-07-09, Thu> add ‘Disease analysis of NGS data’ section in vignette <2015-06-29, Mon> convert vignette from Rnw to Rmd <2015-06-24, Wed> Changes in version 2.7.8: Changes in version 2.7.7: speed up by using sample.int instead of sample <2015-06-04, Thu> add seed = FALSE to control reproduciblility of gsea function. to make result reproducible, explicitly set seed=TRUE <2015-06-04, Thu> Changes in version 2.7.6: Changes in version 2.7.5: Changes in version 2.7.4: Changes in version 2.7.3: add setType slot in gseaResult <2015-05-15, Tue> add universe and geneSets slots in enrichResult <2015-05-05, Tue> Changes in version 2.7.2: Changes in version 2.7.1: Changes in version 2.8.0: Changes in version 0.99.3: DOCUMENTATION First release to Bioconductor NEWS file was added. Changes in version 2.5.6: Ported changes from version 2.4.5 - 2.4.7 Added a function to create the synthetic transcripts Deprecated functions fetchAnnotation and knowOrganisms are now defunct Export ‘basename’, ‘seqlevels’, ‘seqlevels<-‘ and ‘seqnames<-‘ Changes in version 2.5.5: Changes in version 2.5.4: Changes in version 2.5.3: Ported changed from release version 2.4.1 Adapted to the genomeIntervals API changes (change from seq_name to seqnames and addition of the coercion methods to GRanges and consort). Changes in version 2.5.2: Changes in version 2.5.1: Changes in version 2.5.0: Changes in version 4.12.0: NEW FEATURES SIGNIFICANT USER-VISIBLE CHANGES deprecated ‘…GreyScale’ ‘resize’ doesn’t perform bilinear filtering at image borders anymore in order to prevent the blending of image edges with the background when the image is upscaled; to switch on bilinear sampling at image borders use the function argument ‘antialias = TRUE’ ‘floodFill’ and ‘fillHull’ preserve storage mode PERFORMANCE IMPROVEMENTS all morphological operations use the efficient Urbach-Wilkinson algorithm (up to 3x faster compared to the previous implementation) ‘rotate’: perform lossless 90/180/270 degree rotations by disabling bilinear filtering BUG FIXES reimplemented the Urbach-Wilkinson algorithm used to perform grayscale morphological transformations improved pixel-level accuracy of spatial linear transformations: ‘affine’, ‘resize’, ‘rotate’ and ‘translate’ ‘display(…, method = “raster”)’: displaying of single-channel color images ‘drawCircle’: corrected x-y offset ‘equalize’: in case of a single-valued histogram issue a warning and return its argument ‘hist’: accept images with ‘colorMode = Color’ containing less than three color channels ‘image’: corrected handling of image frames ‘medianFilter’: filter size check ‘normalize’: normalization of a flat image when the argument ‘separate’ is set to ‘FALSE’ ‘reenumerate’: corrected handling of images without background ‘stackObjects’: corrected handling of blank images without any objects ‘tile’: reset dimnames Changes in version 1.9.3: Correct typos in GetDEResults help file. Include an additional method for normalization. Changes in version 1.9.2: Changes in version 1.9.1: Changes in version 2.3: Added function getGeneLengthAndGCContent to compute gene length and GC-content. Updated vignette. Changes in version 2.1.1: Moderated F-test has been added for likelihood ratio test Weights can be inputted into odp/lrt which allows it to work for RNA-Seq experiments with low samples added function apply_jackstraw fixed bug in build_study Changes in version 3.12.0: New argument tagwise for estimateDisp(), allowing users not to estimate tagwise dispersions. estimateTrendedDisp() has more stable performance and does not return negative trended dispersion estimates. New plotMD methods for DGEList, DGEGLM, DGEExact and DGELRT objects to make a mean-difference plot (aka MA plot). readDGE() now recognizes HTSeq style meta genes. Remove the F-test in glmLRT(). New argument contrast for diffSpliceDGE(), allowing users to specify the testing contrast. glmTreat() returns both logFC and unshrunk.logFC in the output table. New method implemented in glmTreat() to increase the power of the test. New kegga methods for DGEExact and DGELRT objects to perform KEGG pathway analysis of differentially expressed genes using Entrez Gene IDs. New dimnames<- methods for DGEExact and DGELRT objects. Bug fix to dimnames<- method for DGEGLM objects. User’s Guide updated. Three old case studies are replaced by two new comprehensive case studies. Changes in version 1.2.0: Removed function QCfilter Heavily modified function QCinfo Add an argument exSample to preprocessENmix Changes in version 0.99.3: anno_enrich: get rid of lazy loading smoothing by locfit Changes in version 0.99.2: Changes in version 0.99.1: system.file()now Changes in version 0.99.0: Changes in version 1.1.9: BUG FIXES Changes in version 1.1.6: BUG FIXES Changes in version 1.1.5: NEW FEATURES Changes in version 1.1.4: NEW FEATURES Changes in version 1.1.3: SIGNIFICANT USER-VISIBLE CHANGES Added method ensemblVersion that returns the Ensembl version the package bases on. Added method getGenomeFaFile that queries AnnotationHub to retrieve the Genome FaFile matching the Ensembl version of the EnsDb object. Changes in version 1.1.2: SIGNIFICANT USER-VISIBLE CHANGES Added examples to the vignette for building an EnsDb using AnnotationHub along with the matching genomic sequence. Added an example for fetching the sequences of genes, transcripts and exons to the vignette. BUG FIXES Changes in version 1.1.1: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.10.0: NEW FEATURES Changes in version 1.6.1: Changes in version 1.3.5: SIGNIFICANT USER-VISIBLE CHANGES BUG FIXES AND MINOR IMPROVEMENTS Changes in version 1.3.4: SIGNIFICANT USER-VISIBLE CHANGES BUG FIXES AND MINOR IMPROVEMENTS Fixed allReps and labelReps value assignment Fixed bug for duplicate row names in array data Fixed warning for incorrect sample names Changes in version 1.3.3: SIGNIFICANT USER-VISIBLE CHANGES BUG FIXES AND MINOR IMPROVEMENTS Added warnings to initDat function for incorrect sample names and missing 1:1 controls from userMixFileß Set minimum version requirements for ggplot2 and gridExtra Changes in version 1.3.2: SIGNIFICANT USER-VISIBLE CHANGES Vignette is updated to include use of grid.arrange for viewing figures multiplot function is deprecated and removed from the package, function is replaced with grid.arrange BUG FIXES AND MINOR IMPROVEMENTS Changes in version 1.3.1: SIGNIFICANT USER-VISIBLE CHANGES BUG FIXES AND MINOR IMPROVEMENTS Changes in version 1.9.2: Changes in version 1.9.1: integrated the RMT tool for extracting the combinatorial RNA methylome from multiple MeRIP-Seq datasets, see ?RMT 3 authors: Jia Meng, Lin Zhang and Lian Liu added another citation (Liu 2015) Changes in version 3.4: Added argument ‘downloadFile’ to fea_david() to choose whether to save the analysis results to the current directory DAVID now requires https. This causes errors in some systems. A (hopefully) temporary solution is to install some certificates locally. See RDAVIDWebService help: Changes in version 0.99.0: Changes in version 1.7.3: add the header ie the sample names to the output tables (created when using output.type=’table’) correct little bug concerning the SAM file when header is present (see samio.cpp first test over linecount replace by totalnumread) Changes in version 1.4.0: NEW FEATURES BUG FIXES OTHER NOTES added LazyData: true in the DESCRIPTION updated travis.yml to use native R integration vignette: improved structure, to enhance readability and usability; removed rgl hook which was not used anyway documentation: slight updates and enhancements Changes in version 1.1.1: NEW FEATURES Changes in version 2.7.0: Changes in version 1.6.0: Changes in version 1.11.1: Changes in version 2.0.0: Added functions for PC-Relate. PC-Relate provides model-free estimation of recent genetic relatedness in general samples. It can. GENESIS now imports the package gdsfmt. Changes in version 1.9.3: Changes in version 1.9: Added CITATION GenesRanking help: How to create a GenesRanking object based on scores provided by another algorithm Several message and warning improvements Bugfix: 0 IQR filter now returns the original matrix Changes in version 1.1.27: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.26: NEW FUNCTIONS AND FEATURES Changes in version 1.1.25: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.24: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.23: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.22: NEW FUNCTIONS AND FEATURES Changes in version 1.1.21: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.20: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.19: NEW FUNCTIONS AND FEATURES Arithmetic, indicator and logic operations as well as subsetting work on ScoreMatrix, ScoreMatrixBin and ScoreMatrixList objects. New functionality at “ScoreMatrix-class” and “ScoreMatrixList-class” are documented in help pages. Commented examples of functions are uncommented or \donttest{} is used. Changes in version 1.1.18: NEW FUNCTIONS AND FEATURES Changes in version 1.1.17: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.16: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.15: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.14: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.13: IMPROVEMENTS AND BUG FIXES add new argument cex.legend to plotTargetAnnotation() to specify the size of the legend. changed ScoreMatrixBin() to run faster when noCovNA=TRUE check not only for .bw but also .bigWig and .bigwig extensions of BigWig file in ScoreMatrix() and ScoreMatrixBin() Changes in version 1.1.12: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.11: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.10: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.9: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.8: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.7: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.6: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.5: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.4: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.3: IMPROVEMENTS AND BUG FIXES Functions that read data from text files re-written from data.table::fread() to readr::read_delim() and now they can read compressed files from a URL Changes in readBed(), readNarrowPeak(), readBroadPeak() and gffToGRanges() arguments; All of them have now track.line argument that can be FALSE, “auto” or an integer indicating number of first lines to skip. The zero.based argument was added to readBed() that tells whether ranges in the bed file are in 0 or 1-based coordinate system. Implemented by Katarzyna Wręczycka. Changes in version 1.1.2: IMPROVEMENTS AND BUG FIXES Changes in version 1.1.1: NEW FUNCTIONS AND FEATURES IMPROVEMENTS AND BUG FIXES Changes in version 1.25.3: Changes in version 1.25.2: Changes in version 1.25.1: Changed readGff3 to use closed intervals by default. Implemented two sub-functions that implement reading a gff3 as base-pair features only (no zero length intervals, i.e. right-closed intervals) or which allows for zero length intervals, i.e. right-open intervals, when start equals end) Deprecated the seq_name accessors in favour of the BiocGenerics seqnames Added a width accessor - similar to the IRanges functionality Added coercion to GRangesList and RangedData Edited some of the documentation (man page) and NAMESPACE generation to use roxygen2 Changes in version 1.25.0: Changes in version 1.6.0: NEW FEATURES Add strandMode() getter and setter for GAlignmentPairs objects in response to the following post: See ?strandMode for more information. The readGAlignment*() functions now allow repeated seqnames in the BAM header. Add “coverage” method for GAlignmentsList objects. The strand setter now works on a GAlignmentsList object in a restricted way (only strand(x) <- “+” or “-“ or “*” is supported). SIGNIFICANT USER-LEVEL CHANGES summarizeOverlaps() now returns a RangedSummarizedExperiment object (defined in the new SummarizedExperiment package) instead of an “old” SummarizedExperiment object (defined in the GenomicRanges package). Slightly modify the behavior of junctions() on a GAlignmentPairs object so that the returned ranges now have the “real strand” set on them. See ?junctions and the documentation of the ‘real.strand’ argument in the man page of GAlignmentPairs objects for more information. Add ‘real.strand’ argument to first() and last() getters for GAlignmentPairs objects. DEPRECATED AND DEFUNCT Deprecate left() and right() getters and strand() setter for GAlignmentPairs objects. Deprecate ‘invert.strand’ argument of first() and last() getters for GAlignmentPairs objects. Deprecate ‘order.as.in.query’ argument of “grglist” method for GAlignmentPairs objects. Deprecate ‘order.as.in.query’ argument in “rglist” method for GAlignmentsList objects (this concept is not defined for these objects in general and the argument was ignored anyway). After being deprecated in BioC 3.1, the “mapCoords” and “pmapCoords” methods are now defunct. mapToAlignments() should be used instead. After being deprecated in BioC 3.1, the readGAlignmentFromBam() functions are now defunct. Everybody says “Let’s all use the readGAlignment() functions instead! (no FromBam suffix). Yeah!” BUG FIXES Changes in version 1.22: NEW FEATURES Add coverageByTranscript() and pcoverageByTranscript(). See ?coverageByTranscript for more information. Various improvements to makeTxDbFromGFF(): - Now supports ‘format=”auto”’ for auto-detection of the file format. - Now supports naming features by dbxref tag (like GeneID). This has proven useful when importing GFFs from NCBI. Improvements to the coordinate mapping methods: - Support recycling when length(transcripts) == 1 for parallel mapping functions. - Add pmapToTranscripts,Ranges,GRangesList and pmapFromTranscripts,Ranges,GRangesList methods. Adds ‘taxonomyId’ argument to the makeTxDbFrom*() functions. Improvements to makeTxDbPackage(): - Add ‘pkgname’ argument to makeTxDbPackage() to let the user override the automatic naming of the package to be generated. - Support person objects for ‘maintainer’ and ‘author’ arguments to makeTxDbPackage(). The ‘chrominfo’ vector passed to makeTxDb() can now mix NAs and non-NAs. SIGNIFICANT USER-VISIBLE CHANGES Improve handling of ‘circ_seqs’ argument by makeTxDbFromUCSC(), makeTxDbFromGFF(), and makeTxDbFromBiomart(): no more annoying warning when none of the strings in DEFAULT_CIRC_SEQS matches the seqlevels of the TxDb object to be made. 2 minor changes to makeTxDbFromBiomart(): - Now drops unneeded chromosome info when importing an incomplete transcript dataset. - Now returns a TxDb object with ‘Full dataset’ field set to ‘no’ when makeTxDbFromBiomart() is called with user-supplied ‘filters’. makeTxDbPackage() now includes data source in the package name by default (for non UCSC and BioMart databases). The following changes were made to the coordinate mapping methods: - mapToTranscripts() now reports mapped position with respect to the transcription start site regardless of strand. - Change ‘ignore.strand’ default from TRUE to FALSE in all coordinate mapping methods for consistency with other functions that already have the ‘ignore.strand’ argument. - Name matching in mapFromTranscripts() is now done with seqnames(x) and names(transcripts). - The pmapFromTranscripts,*,GRangesList methods now return a GRangesList object. Also they no longer use ‘UNMAPPED’ seqname for unmapped ranges. - Remove uneeded ellipisis from the argument list of various coordinate mapping methods. Change behavior of seqlevels0() getter so it does what it was always intended to do. The order of the transcripts returned by transcripts() has changed: now they are guaranteed to be in the same order as in the GRangesList object returned by exonsBy(). Code improvements and speedup to the transcripts(), exons(), cds(), exonsBy(), and cdsBy() extractors. DEPRECATED AND DEFUNCT After being deprecated in BioC 3.1, the makeTranscriptDb*() functions are now defunct. After being deprecated in BioC 3.1, the ‘exonRankAttributeName’, ‘gffGeneIdAttributeName’, ‘useGenesAsTranscripts’, ‘gffTxName’, and ‘species’ arguments of makeTxDbFromGFF() are now defunct. Remove sortExonsByRank() (was defunct in BioC 3.1). BUG FIXES Fix bug in fiveUTRsByTranscript() and threeUTRsByTranscript() extractors when the TxDb object had “user defined” seqlevels and/or a set of “active/inactive” seqlevels defined on it. Fix bug in isActiveSeq() setter when the TxDb object had “user defined” seqlevels on it. Fix many issues with seqlevels() setter for TxDb objects. In particular make the ‘seqlevels(x) <- seqlevels0(x)’ idiom work on TxDb objects. Fix bug in makeTxDbFromBiomart() when using it to retrieve a dataset that doesn’t provide the cds_length attribute (e.g. sitalica_eg_gene dataset in plants_mart_26). Changes in version 1.6.0: BUG FIXES Changes in version 1.3.9: NEW FEATURES InteractionTrack class for plotting interactions with Gviz plotAvgViewpoint for plotting summarised interactions around a set of features summariseByFeaturePairs: to summarise interactions between all pairs of two feature sets SIGNIFICANT USER-LEVEL CHANGES annotateInteractions is significantly faster Data import has been made stricter and more consistent across different file formats. Homer interaction files now have data imported that was previously discarded. BUG FIXES single viewpoint plotting (plotViewpoint) is.dt, is.pt calculateDistances some import/export issues Changes in version 1.22.0: NEW FEATURES Support coercions back and forth between a GRanges object and a character vector (or factor) with elements in the format ‘chr1:2501-2800’ or ‘chr1:2501-2800:+’. Add facilities for manipulating “genomic variables”: bindAsGRanges(), mcolAsRleList(), and binnedAverage(). See ?genomicvars for more information. Add “narrow” method for GRangesList objects. Enhancement to the GRanges() constructor. If the ‘ranges’ argument is not supplied then the constructor proceeds in 2 steps: 1. An initial GRanges object is created with ‘as(seqnames, “GRanges”)’. 2. Then this GRanges object is updated according to whatever other arguments were supplied to the call to GRanges(). Because of this enhancement, GRanges(x) is now equivalent to ‘as(x, “GRanges”)’ e.g. GRanges() can be called directly on a character vector representing ranges, or on a data.frame, or on any object for which coercion to GRanges is supported. Add ‘ignore.strand’ argument to “range” and “reduce” methods for GRangesList objects. Add coercion from SummarizedExperiment to RangedSummarizedExperiment (also available via updateObject()). See 1st item in DEPRECATED AND DEFUNCT section below for more information about this. GNCList objects are now subsettable. “coverage” methods now accept ‘shift’ and ‘weight’ supplied as an Rle. SIGNIFICANT USER-LEVEL CHANGES Modify behavior of “*” strand in precede() / follow() to mimic ‘ignore.strand=TRUE’. Revisit “pintersect” methods for GRanges#GRanges, GRangesList#GRanges, and GRanges#GRangesList: - Sanitize their semantic. - Add ‘drop.nohit.ranges’ argument (FALSE by default). - If ‘drop.nohit.ranges’ is FALSE, the returned object now has a “hit” metadata column added to it to indicate the elements in ‘x’ that intersect with the corresponding element in ‘y’. binnedAverage() now treats ‘numvar’ as if it was set to zero on genomic positions where it’s not set (typically happens when ‘numvar’ doesn’t span the entire chromosomes because it’s missing the trailing zeros). GRanges() constructor no more mangles the names of the supplied metadata columns (e.g. if the column is “_tx_id”). makeGRangesFromDataFrame() now accepts “.” in strand column (treated as GNCList() constructor now propagates the metadata columns. Remove “seqnames” method for RangesList objects. DEPRECATED AND DEFUNCT The SummarizedExperiment class defined in GenomicRanges is deprecated and replaced by 2 new classes defined in the new SummarizedExperiment package: SummarizedExperiment0 and RangedSummarizedExperiment. In BioC 2.3, the SummarizedExperiment class will be removed from the GenomicRanges package and the SummarizedExperiment0 class will be renamed SummarizedExperiment. To facilitate this transition, a coercion method was added to coerce from old SummarizedExperiment to new RangedSummarizedExperiment (this coercion is performed when calling updateObject() on an old SummarizedExperiment object). makeSummarizedExperimentFromExpressionSet() and related stuff was moved to the new SummarizedExperiment package. After being deprecated in BioC 3.1, the rowData accessor is now defunct (replaced with the rowRanges accessor). After being deprecated in BioC 3.1, GIntervalTree objects and the “intervaltree” algorithm in findOverlaps() are now defunct. After being deprecated in BioC 3.1, mapCoords() and pmapCoords() are now defunct. BUG FIXES Changes in version 0.99.0: genotypeeval in use at Genentech and HLI. genotypeeval submitted to Bioconductor. 2.36: New Features * New, faster SOFT format parsing (Leonardo Gama) * Turned on unit tests in Travis CI * Test coverage metrics added Bug fixes * default download method no longer assumes that curl is installed on linux * GSEMatrix parsing from file now finds cached GPLs Changes in version 1.1.20: fixed bug in geom_tiplab when x contains NA (eg, removing by collapse function) <2015-10-01, Thu> bug fixed in %add2%, if node available use node, otherwise use label <2015-09-04, Fri> bug fixed of subview for considering aes mapping of x and y <2015-09-03, Thu> update vignette by adding r8s example <2015-09-02, Wed> bug fixed in fortify.multiPhylo, convert df$.id to factor of levels=names(multiPhylo_object) <2015-09-02, Wed> update scale_x_ggtree to support Date as x-axis <2015-09-01, Tue> add mrsd parameter for user to specify ‘most recent sampling date’ for time tree <2015-09-01, Tue> - remove ‘time_scale’ parameter. defined ‘raxml’ class for RAxML bootstrapping analysis result <2015-09-01, Tue> + see + read.raxml, parser function + plot, get.tree, get.fields, groupOTU, groupClade, scale_color, gzoom and show methods + fortify.raxml method Changes in version 1.1.19: use fortify instead of fortify.phylo in fortify.multiPhylo, so that multiPhylo can be a list of beast/codeml or other supported objects. <2015-08-31, Mon> support multiPhylo object, should use + facet_wrap or + facet_grid <2015-08-31, Mon> remove dependency of EBImage and phytools to speedup the installation process of ggtree <2015-08-31, Mon> + these two packages is not commonly used, and will be loaded automatically when needed. Changes in version 1.1.18: layout name change to ‘rectangular’, ‘slanted’, ‘circular’/’fan’ for phylogram and cladogram (if branch.length = ‘none’) ‘unroot’ is not changed. <2015-08-28. Fri> implement geom_point2, geom_text2, geom_segment2 to support subsetting <2015-08-28, Fri> see update geom_tiplab according to geom_text2 and geom_segment2 <2015-08-28, Fri> add geom_tippoint, geom_nodepoint and geom_rootpoint <2015-08-28, Fri> Changes in version 1.1.17: bug fixed in rm.singleton.newick by adding support of scientific notation in branch length <2015-08-27, Thu> bug fixed in gheatmap, remove inherit aes from ggtree <2015-08-27, Thu> add ‘width’ parameter to add_legend, now user can specify the width of legend bar <2015-08-27, Thu> add ‘colnames_position’ parameter to gheatmap, now colnames can be display on the top of heatmap <2015-08-27, Thu> theme_transparent to make background transparent <2015-08-27, Thu> subview for adding ggplot object (subview) to another ggplot object (mainview) <2015-08-27, Thu> Changes in version 1.1.16: Changes in version 1.1.15: open text angle parameter for annotation_clade/annotation_clade2 <2015-08-13, Thu> support changing size of add_legend <2015-08-13, Thu> reroot methods for phylo and beast <2015-08-07, Fri> Changes in version 1.1.14: Changes in version 1.1.13: implement annotation_image <2015-08-01, Sat> better implementation of geom_tiplab for accepting aes mapping and auto add align dotted line <2015-08-01, Sat> open group_name parameter of groupOTU/groupClade to user <2015-08-01, Sat> Changes in version 1.1.12: update vignette according to the changes <2015-07-31, Fri> add mapping parameter in ggtree function <2015-07-31, Fri> extend groupClade to support operating on tree view <2015-07-31, Fri> extend groupOTU to support operating on tree view <2015-07-31, Fri> new implementation of groupClade & groupOTU <2015-07-31, Fri> Changes in version 1.1.11: annotation_clade and annotation_clade2 functions. <2015-07-30, Thu> better add_legend implementation. <2015-07-30, Thu> add … in theme_tree & theme_tree2 for accepting additional parameter. <2015-07-30, Thu> better geom_tree implementation. Now we can scale the tree with aes(color=numVar). <2015-07-30, Thu> Changes in version 1.1.10: Changes in version 1.1.9: update add_legend to align legend text <2015-07-06, Mon> bug fixed in internal function, getChild.df, which should not include root node if selected node is root <2015-07-01, Wed> rotate function for ratating a clade by 180 degree and update vignette <2015-07-01, Wed> get_taxa_name function will return taxa name vector of a selected clade <2015-06-30, Tue> add example of flip function in vignette <2015-06-30, Tue> flip function for exchanging positions of two selected branches <2015-06-30, Tue> Changes in version 1.1.8: update get.placement <2015-06-05, Fri> edgeNum2nodeNum for converting edge number to node number for EPA/pplacer output <2015-06-04, Thu> mv scale_x_gheatmap to scale_x_ggtree, which also support msaplot <2015-06-02, Tue> add mask function <2015-06-02, Tue> Changes in version 1.1.7: add example of msaplot in vignette <2015-05-22, Fri> msaplot for adding multiple sequence alignment <2015-05-22, Fri> Changes in version 1.1.6: add vertical_only parameter to scaleClade and set to TRUE by default. only vertical will be scaled by default. <2015-05-22, Fri> update add_colorbar & add_legend <2015-05-21, Thu> add example of add_legend and gheatmap in vignette <2015-05-18, Mon> gheatmap implementation of gplot <2015-05-18, Mon> add_legend for adding evolution distance legend <2015-05-18, Mon> Changes in version 1.1.5: Changes in version 1.1.4: better performance of parsing beast tree <2015-05-11, Mon> + support beast tree begin with ‘tree tree_1 = ‘ and other forms. + support file that only contains one evidence for some of the nodes/tips update add_colorbar to auto determine the position <2015-05-04, Mon> add_colorbar function <2015-04-30, Thu> Changes in version 1.1.3: add space between residue substitution (e.g. K123R / E155D) <2015-04-30, Thu> remove slash line in heatmap legend <2015-04-30, Thu> update vignette to add example of merge_tree <2015-04-29, Wed> Changes in version 1.1.2: in addition to parsing beast time scale tree in XXX_year[\.\d], now supports XXX/year[\.\d] <2015-04-29, Wed> add examples folder in inst that contains sample data <2015-04-29, Wed> update gplot, now rowname of heatmap will not be displayed <2015-04-28, Tue> add line break if substitution longer than 50 character <2015-04-28, Tue> support calculating branch for time scale tree <2015-04-28, Tue> remove parsing tip sequence from mlb and mlc file <2015-04-28, Tue> remove tip.fasfile in read.paml_rst for rstfile already contains tip sequence <2015-04-28, Tue> scale_color accepts user specific interval and output contains ‘scale’ attribute that can be used for adding legend <2015-04-28, Tue> extend fortify methods to support additional fields <2015-04-28, Tue> extend get.fields methods to support additional fields <2015-04-28, Tue> extend tree class to support additional info by merging two tree <2015-04-28, Tue> implement merge_tree function to merge two tree objects into one <2015-04-28, Tue> Changes in version 1.1.1: minor bug fixed in extracting node ID of rst file <2015-04-27, Mon> update parsing beast time scale tree to support _year (originally supports _year.\d+) <2015-04-27, Mon> add Tommy in author <2015-04-27, Mon> Changes in version 1.21.1: Changes in version 1.3.2: BUG FIX Changes in version 1.3.1: BUG FIX Changes in version 1.27.4: Changes in version 1.27.3: Changes in version 1.27.2: Changes in version 1.27.1: Changes in version 1.15.3 (2015-10-07): Updated all pathway data. More descriptive edge types. Use the package “rappdirs” to select a directory for cached files. Changes in version 1.1.1: Changes in version 1.4.0: Changes in version 1.31: SIGNIFICANT USER-VISIBLE CHANGES BUG FIXES Changes in version 1.1.1: print message if input chromosome names are without ‘chr’ call start() and end() explictely with GenomicRanges:: unit of x-axis is correct now Changes in version 0.99.9: Changes in version 2.1.0: USER VISIBLE CHANGES GWAS catalog data curated at EMBL/EBI will now be used with makeCurrentGwascat: Updated 3 August 2015, as comma-space delim. for genes and EFO tags has been introduced Interfaces to the Experimental Factor Ontology () are provided: efo.obo.g is an annotated graphNEL with the ontology Vignette gwascatOnt.Rmd introduced Changes in version 1.15.16: Changes in version 1.15.15: Changes in version 1.15.13: Changes in version 1.15.12: Changes in version 1.15.11: Changes in version 1.15.10: Changes in version 1.15.8: Changes in version 1.15.7: Changes in version 1.15.5: Changes in version 1.15.3: Changes in version 1.15.2: Changes in version 1.15.1: Changes in version 1.3.4 (2015-09-08): Changes in version 1.3.3 (2015-07-20): Updated output format of core functions. Fixed minor bugs. Changes in version 1.3.2 (2015-05-26): Updated DESCRIPTION FILE. Updated the CITATION file. Changes in version 1.3.1 (2015-05-12): Updated DESCRIPTION FILE. Updated the CITATION file. Updated documentation. Changes in version 2.15.2: Changes in version 1.3.2: Changes in version 1.3.1: Changes in version 1.6.0: Changes in version 0.99.3: Removed hlr dependency and added the MEL function Fixed computation of r2 when the SNP_index = NULL Changes in version 0.99.3: Changes in version 0.99.2: Changes in version 0.99.1: Changes in version 0.99.0: Changes in version 1.4.0: Changes in version 1.13.2: SIGNIFICANT USER-VISIBLE CHANGES Change in the way the interaction matrix is loaded. During the import, the lazyload option will force the data to be stored as triangular matrix. Note that a symmetric matrix is not triangular by construction. Then, to avoid any error in the data processing, the triangular matrix is always converted into symmetrical matrix before being returned by the intdata method. Update of track display in mapC function. Off-set plot of adjacent features BUG FIXES Bug fixed in quality control function when empty maps are used Bug fixed when plotting empty matrix Changes in version 1.13.1: NEW FEATURES SIGNIFICANT USER-VISIBLE CHANGES getCombinedContact is now able to merge HTCexp objects for non complete HiTClist objects. Missing maps are replaced by NA matrices Update of isComplete, isPairwise, getCombinedIntervals methods for Hi-C data with no intrachromosomal maps When the maxrange argument is set in the mapC function, all maps are displayed on the same scale so that they can be compared to each other BUG FIXES Bug fixed in mapC with the contact map is empty Bug fixed in seqlevels(HTClist) method Changes in version 1.11.1: Changes in version 1.11.0: Changes in version 1.0.0: Changes in version 0.11.2 (2015-09-11): BUG FIX: readIDAT() can now read non-encrypted IDAT files with strings longer than 127 characters. BUG FIX: readIDAT() incorrectly assumed that there were exactly two blocks in RunInfo fields of non-encrypted (v3) IDAT files. Thanks to Gordon Bean (GitHub @brazilbean) for reporting on and contributing with code for the above two bugs. Changes in version 0.11.1 (2015-07-29): Changes in version 0.11.0 (2015-04-16): 1.1.1: NEW FEATRUES: * A normalization step which the sample cell-clusters to the common meta-cluster model is included an optionally activated during the major meta-clustering process. CHANGES: * The meta.ME C-binding and return value was modified in a way that the A-Posterior probability matrix Z for a cell-cluster belonging to a meta-cluster is also calculated and returned. BUFIXES: * Ellipse position were not correct when ploting a parameter subset Changes in version 1.1.9: Changes in version 1.1.8: BUG FIXES Changes in version 1.1.7: BUG FIXES Changes in version 1.1.6: BUG FIXES Changes in version 1.1.5: BUG FIXES Changes in version 1.1.4: BUG FIXES Changes in version 1.1.3: BUG FIXES Changes in version 1.1.2: BUG FIXES Changes in version 1.1.1: BUG FIXES Changes in version 1.7: NEW FEATURES BUG FIXES Changes in version 2.4.0: NEW FEATURES Add “cbind” methods for binding Rle or RleList objects together. Add coercion from Ranges to RangesList. Add “paste” method for CompressedAtomicList objects. Add “expand” method for Vector objects for expanding a Vector object ‘x’ based on a column in mcols(x). Add overlapsAny,integer,Ranges method. coverage” methods now accept ‘shift’ and ‘weight’ supplied as an Rle. SIGNIFICANT USER-VISIBLE CHANGES The following was moved to S4Vectors: - The FilterRules stuff. - The “aggregate” methods. - The “split” methods. The “sum”, “min”, “max”, “mean”, “any”, and “all” methods on CompressedAtomicList objects are 100X faster on lists with 500k elements, 80X faster for 50k elements. Tweak “c” method for CompressedList objects to make sure it always returns an object of the same class as its 1st argument. NCList() constructor now propagates the metadata columns. DEPRECATED AND DEFUNCT RangedData/RangedDataList are not formally deprecated yet but the documentation now officially declares them as superseded by GRanges/GRangesList and discourage their use. After being deprecated in BioC 3.1, IntervalTree and IntervalForest objects and the “intervaltree” algorithm in findOverlaps() are now defunct. After being deprecated in BioC 3.1, mapCoords() and pmapCoords() are now defunct. Remove seqapply(), mseqapply(), tseqapply(), seqsplit(), and seqby() (were defunct in BioC 3.1). BUG FIXES Fix FactorList() constructor when ‘compress=TRUE’ (note that the levels are combined during compression). Fix c() on CompressedFactorList objects (was returning a CompressedIntegerList object). Changes in version 1.3.4: correction of Ubuntu problem with realloc for 0 elements in linearKernel generating a sparse empty kernel matrix correction of problem with feature weights and prediction profiles for position specific gappy pair kernel correction of problem with feature weights and prediction profiles for position specific motif kernel corrections for feature weights, prediction via feature weights and prediction profile for distance weighted kernels update of KeBABS citation Changes in Changes in version 1.3.2: correction of error in kernel lists user defined sequence kernel example SpectrumKernlabKernel moved to separate directory Changes in version 1.3.1: correction of error in model selection for processing via dense LIBSVM remove problem in check for loading of SparseM Changes in version 1.3.0: Changes in version 1.27.2 (2015-05-25): Changes in version 1.27.1 (2015-05-04): Changes in version 1.1.1: A new function, the “probes2pathways” has been added. I have added to more network reconstruction approaches. The “aracne.a” and the “aracne.m” as included in the “parmigene” r-package. I have replaced in the “preprocess” function the type of the produced image from jpeg to tiff format, with resolution=300dpi. Changes in version 1.3.6: Changes in version 1.3.1: Changes in version 2.2.0: Added checks to avoid producing identical matrices or data frame when the parameters are still the same after first function call. Splitted the analysis in multiple (optionnal) intermediate steps (add_design, produce_matrices and produce_data_frame). narrowPeak and broadPeak format is now supported. Added multiple getter to access metagene members that are all now private (get_params, get_design, get_regions, get_matrices, get_data_frame, get_plot, get_raw_coverages and get_normalized_coverages. Added the NCIS algorithm for noise removal. Replaced the old datasets with promoters_hg19, promoters_hg18, promoters_mm10 and promoters_mm9 that can be accessed with data(promoters_????). Added flip_regions and unflip_regions to switch regions orientation based on the strand. Changes in version 0.0.0.9 (2015-09-14): Changes in version 1.11: Adding fitFeatureModel - a feature based zero-inflated log-normal model. Added MRcoefs,MRtable,MRfulltable support for fitFeatureModel output. Added mention in vignette. Added support for normalizing matrices instead of just MRexperiment objects. Fixed cumNormStat’s non-default qFlag option Changes in version 1.9.21 (2015-10-08): NEW FEATURES BUG FIXES Fixed broken dependency with removed CRAN package MADAM which was used for the Fisher p-value combination. The fix was done by copying the two required functions from the last MADAM archived version (1.2). Fixed a bug in make.sample.list which rendered the function unusable (credits to Marina Adamou-Tzani). Fixed a very special-case small bug in the report generation only a single gene passes the multiple testing correction cutoff (thans to Martic Reszcko, BSRC ‘Alexander Fleming’). Fixed problem with Fisher p-value combination method (returning all NA due to the inexistence of names in the p-value vecror). Fixed bug with flags of biotype filtered genes. Changes in version 1.9.0 (2015-05-27): NEW FEATURES BUG FIXES Changes in version 0.1.0: Changes in version 1.3.3: Changes in version 1.3.2: Changes in version 1.15: Adding testing for preprocessNoob, preprocessFunnorm. Fxing some verbose output of preprocessNoob. Adding non-exported function .digestVector for testing. Changes in version 1.49.8: Changes in version 1.49.7: Changes in version 1.49.1: Changes in version 1.1.1: NEW FEATURES Changes in version 1.13.8: NEW FEATURES Changes in version 1.13.7: Changes in version 1.13.6: BUG FIXES Changes in version 1.13.5: NEW FEATURES Changes in version 1.13.4: NEW FEATURES BUG FIXES Changes in version 1.13.3: NEW FEATURES BUG FIXES Changes in version 1.13.2: NEW FEATURES BUG FIXES Changes in version 1.13.1: NEW FEATURES BUG FIXES Changes in version 1.2.0: Changes in version 1.17.16: Changes in version 1.17.15: Changes in version 1.17.14: Changes in version 1.17.13: partly rewrite writeMgfData <2015-05-16 Thu> initial hmap function <2015-07-16 Thu> fix bug in plotting MS1 spectra (closes issue #59) <2015-07-16 Thu> new image implementation, based on @vladpetyuk’s vp.misc::image_msnset <2015-07-25 Sat> Changed the deprecated warning to a message when reading MzTab data version 0.9, as using the old reader can not only be achived by accident and will be kept for backwards file format compatibility <2015-07-30 Thu> Changes in version 1.17.12: Changes in version 1.17.11: adding unit tests <2015-07-01 Wed> fix abundance column selection when creating MSnSet form MzTab <2015-07-01 Wed> new mzTabMode and mzTabType shortcut accessors for mode and type of an mzTab data <2015-07-01 Wed> Changes in version 1.17.10: Changes in version 1.17.9: calculateFragments’ “neutralLoss” argument is now a list (was a logical before), see #47. <2015-06-24 Wed> add defaultNeutralLoss() function to fill calculateFragments’ modified “neutralLoss” argument, see #47. <2015-06-24 Wed> Changes in version 1.17.8: coercion from IBSpectra to MSnSet, as per user request <2015-06-23 Tue> new iPQF combineFeature method <2015-06-24 Wed> Changes in version 1.17.7: Changes in version 1.17.6: Export metadata,MzTab-method <2015-06-19 Fri> Replace spectra,MzTab-method by psms,MzTab-method <2015-06-20 Sat> Change the meaning of calculateFragments’ “modifications” argument. Now the modification is added to the mass of the amino acid/peptide. Before it was replaced. <2015-06-21 Sun> calculateFragments gains the feature to handle N-/C-terminal modifications, see #47. <2015-06-21 Sun> update readMzTabData example <2015-06-22 Mon> Changes in version 1.17.5: Changes in version 1.17.4: New lengths method for FoICollection instances <2015-06-06 Sat> New image2 function for matrix object, that behaves like the method for MSnSets <2015-06-10 Wed> image,MSnSet labels x and y axis as Samples and Features <2015-06-10 Wed> fixed bug in purityCorrect, reported by Dario Strbenac <2015-06-11 Thu> Changes in version 1.17.3: Changes in version 1.17.2: Changes in version 1.17.1: new MSnSetList class <2015-04-19 Sun> new commonFeatureNames function <2015-04-14 Tue> new compareMSnSets function <2015-04-19 Sun> splitting and unsplitting MSnSets/MSnSetLists <2015-04-19 Sun> Changes in version 1.17.0: Changes in version 1.3.1: Changes in version 3.0.12: Changes in version 3.0.9: dataProcess add options for ‘cutoffCensored=“minFeatureNRun”’. summaryMethods=“TMP” : output will have ‘more50missing’column. remove50missing=FALSE option : remove runs which has more than 50% of missing measurement. It will be affected for TMP, with censored option. MBimpute : impute censored by survival model (AFT) with cutoff censored value featureSubset option : “all”,”top3”, “highQuality” change the default. -groupComparisonPlots heatmap, for logBase=10, fix the bug for setting breaks. Changes in version 3.0.8: dataProcess : when censoredInt=“0”, intensity=0 works even though skylineReport=FALSE. dataProcess, with censored=“0” or “NA” : fix the bug for certain run has completely missing. cutoffCensored=“minRun” or “minFeature” : cutoff for each Run or each feature is little less (99%) than minimum abundance. -summaryMethod=“TMP”, censored works. censoredInt=NA or 0, and cutoffCensored=0, minFeature, minRun Changes in version 3.0.3: dataProcess : new option, skylineReport. for skyline MSstats report, there is ‘Truncated’ column. If Truncated==True, remove them. and keep zero value for summaryMethod=“skyline”. groupComparison : for skyline option, t.test, val.equal=TRUE, which is no adjustment for degree of freedom, just pooled variance. Changes in version 1.7.1: Imports generics from ProtGenerics Export base classes Changes in version 2.3.3: Changes in version 2.3.2: Changes in version 2.3.1: Changes in version 0.99.0: NEW FEATURES 1.1.1: package. Added covariance matrix as output of screen_cvglasso. Changes in version 1.99.0: Added option for the output of model parameters. Added option for multiple imputation. Added option for the model function. Changes in version 1.5.1: Changes in version 1.9.1: NEW FEATURES Changes in version 1.1.0: Changes: the modelList has been added to provide the user a list of currently implemented methods. The ‘verbose’ argument in fs.stability, fs.ensembl.stability, & fit.only.model has been changed to a character option indicating the extent of verbose output. Canberra stability has been added as the previous implementation was not compatible with RPT. See ?canberra.stability for more details. Many more tests have been implemented to make the package more stable. Changes in version 1.99.9 (2015-10-08): Fixed NEWS file. Removed empty file. Changes in version 1.99.8 (2015-10-01): Changes in version 1.99.7 (2015-09-27): Changes in version 1.99.6 (2015-09-26): Improved test coverage and removed stringsAsfactors from tests. Consistent handling of corner cases in Bozic. Miscell minor documentation improvements. Changes in version 1.99.5 (2015-06-25): Changes in version 1.99.4 (2015-06-22): Plotting true phylogenies. Tried randutils, from O’Neill. Won’t work with gcc-4.6. bool issue in Windows/gcc-4.6. Most all to all.equal in tests. Changes in version 1.99.3 (2015-06-19): More examples to vignette Using Makevars More functionality to plot.fitnessEffects Will Windoze work now? Changes in version 1.99.2 (2015-06-19): Changes in version 1.99.01 (2015-06-17): Many MAJOR changes: we are done moving to v.2 New way of specifying restrictions (v.2) that allows arbitrary epistatic interactions and order effects, and very large (larger than 50000 genes) genomes. When onlyCancer = TRUE, all iterations now in C++. Many tests added. Random DAG generation. Some defaults for v.1 changed. Changes in version 1.99.1 (2015-06-18): Try to compile in Windoze with the SSTR again. Reduce size of RData objects with resaveRdaFiles. Try to compile in Mac: mt RNG must include random in all files. Changes in version 1.99.00 (2015-04-23): Accumulated changes of former 99.1.2 to 99.1.14: changes in intermediate version 1.99.1.14 (2015-04-23): Now are things OK (I messed up the repos) changes in intermediate version 99.1.13 (2015-04-23) Added a couple of drop = FALSE. Their absence lead to crashes in some strange, borderline cases. Increased version to make unambiguous version used for anal. CBN. changes in intermediate version 99.1.12 (2015-04-22) Removed lots of unused conversion helpers and added more strict checks and tests of those checks. changes in intermediate version 99.1.11 (2015-04-18) Tests of conversion helpers now really working. changes in intermediate version 99.1.10 (2015-04-16) Added conversion helpers as separate file. Added tests of conversion helpers. Added generate-random-trees code (separate file). More strict now on the poset format and conversions. changes in intermediate version 99.1.9 (2015-04-09) added null mutation for when we run out of mutable positions, and since not clear how to use BNB then. changes in intermediate version 99.1.8 (2015-04-03) added extraTime. changes in intermediate version 99.1.7 (2015-04-03) endTimeEvery removed. Now using minDDrPopSize if needed. changes in intermediate version 99.1.6 (2015-03-20) untilcancer and oncoSimulSample working together changes in intermediate version 99.1.5 (2015-03-20) Using the untilcancer branch changes in intermediate version 99.1.4 (2014-12-24) Added computation of min. of ratio birth/mutation and death/mutation. changes in intermediate version 99.1.3 (2014-12-23) Fixed segfault when hitting wall time and sampling only once. changes in intermediate version 99.1.2 (2014-12-16) Sampling only once Changes in version 1.7.1: Enhancements Changes in version 1.3.3 (2015-07-14): GENERAL NEW FEATURES IMPROVEMENTS MODIFICATIONS BUG FIXES Changes in version 1.3.2 (2015-06-22): GENERAL Update to the latest R version which has been released a few days ago (2015-06-18). Built with R-3.2.1 NEW FEATURES IMPROVEMENTS MODIFICATIONS BUG FIXES Changes in version 1.3.1 (2015-06-17): GENERAL The optional rlm normalization (for ProtoArrays) used by the functions normalizeArrays(), plotNormMethods() and plotMAPlots() has been completely reimplemented in order to fix some bugs and simplify the usage of these functions (details: see below). Built with R-3.2.0 NEW FEATURES IMPROVEMENTS MODIFICATIONS 1.9.3: pathview can accept a vector of multiple pathway ids, and map/render the user data onto all these pathways in one call. one extra column “all.mapped” was added to pathview output data.frames as to show all the gene/compound IDs mapped to each node. add geneannot.map as a generic function for gene ID or annotation mapping. sim.mol.data now generate data with all major gene ID types for all 19 species in bods, not just human. download.kegg now let the user to choose from xml, png or both file types to download for each input pathway. In the meantime, it uses the KEGG REST API instead of the classical KEGG download links. All potential pathways including the general pathways can be downloaded this way. solve the redundant import from graph package. import specific instead of all functions from XML package. Changes in version 1.9.1: Changes in version 0.9.1: Changes in version 0.9.0: Changes in version 1.5.2 (2015-08-07): ADDED FUNCTIONS Changes in version 0.1.0: Changes in version 2.3.1: NEW FUNCTIONALITY COMPATIBILITY ISSUES Changes in version 1.13.6: BUG FIXES droplevels suggestion for sample-data DESeq2 migrated to suggests extend_metagenomeSeq functionality bugs related to previous version distance uptick, mostly in tests and vignette Changes in version 1.13.5: USER-VISIBLE CHANGES Help avoid cryptic errors due to name collision of distance with external loaded packages by making distance a formal S4 method in phyloseq. Improve documentation of distance function and the downstream procedures on which it depends Migrate the list of supported methods to a documented, exported list object, called distanceMethodList. Improved distance unit tests with detailed checks that dispatch works and gives exactly expected distance matrices for all methods defined in distanceMethodList. Improved JSD doc, performance, code, deprecated unnecessary parallel argument in JSD Changes in version 1.13.4: BUG FIXES psmelt bug if user has also loaded the original “reshape” package, due to name collision on the function called melt. psmelt now explicitly calls reshape2::melt to avoid confusion. Fix following note… There are ::: calls to the package’s namespace in its code. A package almost never needs to use ::: for its own objects: ‘JSD.pair’ Changes in version 1.10: Changes in version 1.1.3: Changes in version 1.1.2: Changes in version 1.1.1: Changes in version 1.1.9.7: New SpatProtVis visualisation class <2015-08-13 Thu> add link to explanation of supportive/uncertain reliability scores in tl vignette <2015-09-02 Wed> Changes in version 1.9.6: Update REAMDE with TL ref Update refs in lopims documentation <2015-07-30 Thu> Changes in version 1.9.5: Changes in version 1.9.4: Add reference to TL paper and link to lpSVM code <2015-07-06 Mon> highlightOnPlot throws a warning and invisibly returns NULL instead of an error when no features are in the object <2015-07-08 Wed> highlightOnPlot has a new labels argument <2015-07-10 Fri> Changes in version 1.9.3: Clarify error when no annotation params are provided <2015-05-11 Mon> support for matrix-encoded markers <2015-05-19 Tue> New default in addLegend: bty = “n” <2015-05-20 Wed> getMarkers now supports matrix markers <2015-05-20 Wed> getMarkerClasses now supports matrix markers <2015-05-20 Wed> markerMSnSet and unknownMSnSet now support matrix markers <2015-05-20 Wed> sampleMSnSet now supports matrix markers <2015-05-23 Sat> updated yeast markers and added uniprot ids <2015-05-27 Wed> plot2D support a pre-calculated dim-reduced data matrix as method parameter to avoid recalculation <2015-05-27 Wed> Changes in version 1.9.2: Changes in version 1.9.1: new plot2Ds function to overlay two data sets on the same PCA plot [2015-04-17 Fri] regenerate biomart data used by setAnnotationParams [2015-04-24 Fri] new setStockcolGui function to set the default colours manually via a simple interface [2015-04-29 Wed] new move2Ds function to produce an transition movie between two MSnSets [2015-04-29 Wed] functions to convert GO ids to/from terms. See ?goTermToId for details <2015-05-08 Fri> Changes in version 1.9.0: Changes in version 1.3.2: Changes in version 1.3.1: new plotMat2D function <2015-05-20 Wed> Fix query search in pRolocVis, contributed by pierremj <2015-05-27 Wed> pRolocVis has new method arg <2015-05-29 Fri> Changes in version 1.3.0: Changes in version 0.99.0: Changes in version 0.99.3: Changes in version 0.99.2: Changes in version 0.99.1: Changes in version 0.99.0: Changes in version 1.1.1: Changes in version 1.1.0: Changes in version 4.5.1: Convert log(P-values) back to P-values for human using a chi-sq distribution Version 4: New algorithm for human backgrounds New function: toPWM() that takes both PFMs and PPMs Changes in version 1.1.10: include threshold parameter into ‘readTFdata’ function change STRINGdb to version 10 Changes in version 1.1.1: rename consensus-based dynamic analysis adjust vignette workflow figure Changes in version 1.6.0: RELEASE IMPROVEMENTS BUG FIXES Changes in version 1.4.2 (2015-08-20): BUG FIXES Changes in version 1.4.1 (2015-06-30): IMPROVEMENTS Changes in version 2.40: BUG FIXES Changes in version 1.10.0: NEW FEATURES qExportWig gained createBigWig argument and can now create bigWig files directly qQCReport now also produces base quality plots for bam-file projects by sampling reads from the bam files qCount, qProfile and qExportWig have gained a includeSecondary argument to include/exclude secondary alignments while counting Changes in version 2.1.1: handles NA values upgraded plotting functions 1.2.0: Updates: * Removed dependency from DAVIDQuery package as it will deprected. * Fixed some bugs in DAVIDQuery functions and integrated them into R3CPET. * Updated the Readme.Rd file. * Updated the HPRD.RData and Biogrid.RData to the new igraph class. * some small changes. Changes in version 1.9.8: BUG FIXES Changes in version 1.13.5: Changes in version 1.13.4: Changes in version 1.13.3: add citation of ChIPseeker <2015-07-09, Thu> add ‘Pathway analysis of NGS data’ section in vignette <2015-06-29, Mon> convert vignette from Rnw to Rmd <2015-06-29, Mon> Changes in version 1.13.2: Changes in version 1.13.1: Changes in version 1.1.8: NEW FEATURES Added new functionality to permTest to use multiple evaluation functions with a single randomization procedure. This gives a significant speedup when comparing a single region set with multiple other features Created a new function createFunctionsList() that given a function and a list of values, creates a list of curried functions (e.g with one parameter preassigned to each of the given values) PERFORMANCE IMPROVEMENTS BUG FIXES Changes in version 1.3.8: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.3.7: NEW FEATURES Changes in version 1.3.6: NEW FEATURES Changes in version 1.3.5: SIGNIFICANT USER-VISIBLE CHANGES Merged pull request Added “template” argument to renderReport and derfinderReport to customize the knitr template used Wrapped code that works in a temporary directory in with_wd function, which evaluates in the directory but returns to the original directory in the case of a user interrupt or error (with on.exit) Changes in version 1.3.4: NEW FEATURES Changes in version 1.3.3: NEW FEATURES Now uses derfinderPlot::vennRegions() to show venn diagram of genomic states. Requires derfinderPlot version 1.3.2 or greater. derfinderReport() now has a ‘significantVar’ argument that allows users to choose between determining significant regions by P-values, FDR adjusted P-values, or FWER adjusted P-values (if FWER adjusted P-values are absent, then FDR adjusted P-values are used instead, with a warning). Changes in version 1.3.2: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.3.1: BUG FIXES Changes in version 1.1.6: Changes in version 1.1.5: Changes in version 1.1.4: Changes in version 1.1.3: error message is extract from GREAT now. default value of bgChoice depeneds on gr Changes in version 1.1.2: Changes in version 1.1.1: GreatJobobject are initialized inside submitGreatJob Changes in version 2.14.0: NEW FEATURES improved handling of error messages: HDF5 error messages are simplified and forwarded to R. When reading integer valued data, especially 64-integers and unsigned 32-bit integers, overflow values are now replaced by NA’s and a warning is thrown in this case. When coercing HDF5-integers to R-double, a warning is displayed when integer precision is lost. New low level general library function H5Dget_storage_size implemented. BUG FIXES Memory allocation on heap instead of stack for reading large datasets (Thanks to a patch from Jimmy Jia). Some bugs have been fixed for reading large 64-bit integers and unsigned 32-bit integers. A bug was fixed for reading HDF5 files containing soft links. Changes in version 0.99.7: Changes in version 0.99.6: Changes in version 0.99.5: Added @return for roxygen comment in printPCA.R Updated packages IRanges and BSgenome.Mmusculus.UCSC.mm10 before check. Changes in version 0.99.4: Non-null coverage values define the percentage of best expressed CDSs. New function: readsToReadStart - it builds the GRanges object of the read start genomic positions Acronym BAM used instead of bam Correction of read start coverage (readStartCov) for the reverse strand Title in Description in Title Case Typos corrections in vignette. Style inconsistencies solved: = replaced by <- outside named arguments No space around “=” when using named arguments to functions. This: somefunc(a=1, b=2) spaces around binary operators a space after all commas use of camelCase for both variable and function names ORFrelativePos -> orfRelativePos Replaced 1:length(x) by seq_len(length(x)) Replaced 1:nrow(x) by seq_len(NROW(x)) Replaced trailing white spaces with this command: ‘find . -type f -path ‘./*R’ -exec perl -i -pe ‘s/ +$//’ {} \;’ countsPlot, histMatchLength, plotSummarizedCov: no longer print directly the graphs. Instead they return a list of graphs. Replaced ‘class()’ tests by ‘inherits’ or ‘is’. The codonPCA function no longer prints the PCA graphs sequantially. The 5 PCA graphs are returned, together with the PCA scores. New function printPCA prints the 5 PCA plots produced by codonPCA. A BAM file is now available in the inst/extdata ctrl_sample.bam. It is used in the testriboSeqFromBAM testthat Changes in version 0.99.3: Changes in version 0.99.2: Modified the vignette: small corrections of the explanatory text. New R version: 3.2.2 and bioconductor packages update Changes in version 0.99.1: Added biocViews: Sequencing, Coverage, Alignment, QualityControl, Software, PrincipalComponent v.0.99.0 Initial release. Changes in version 1.1.2: BUG FIXES MISC Changes in version 1.1.9: Filtering report fix when no normalization is conducted Bugfixes for combine(RnBSet, RnBSet) and BigFf matrices Changes in version 1.1.8: Corrected coverage statistics in sample summary table: Sites with NA methylation values are no longer considered in the coverage statistics (makes a difference if some coverage threshold is applied) Improved method for gender prediction. Predicted genders are also included in the exported annotation table Changes in version 1.1.7: Improvements to mergeSamples function for RnBiseqSets Some more memory clean-up Changes in version 1.1.6: Differential methylation based on region level only is now supported Minor updates to the differential methylation report generation Performance improvements and minor bugfixes for using disk.dump.bigff Performance improvements (more memory clean-up) Changes in version 1.1.5: Changes in version 1.1.4: Changes in version 1.1.3: Some fixes in data loading Fixes in parallel environment setup Changes in version 1.1.2: New annotation package format Support for filtering out cross-reactive probes in Infinium 450k dataset Improved logging on a Mac Changes in version 1.11.6: Changes in version 1.11.5: add test script <2015-06-30 Tue> more unit tests <2015-06-30 Tue> Changes in version 1.11.4: Changes in version 1.11.3: Changes in version 1.11.2: Changes in version 1.11.1: Changes in version 1.11.0: Changes in version 1.1.11: NEW FEATURES Changes in version 1.1.10: NEW FEATURES Changes in version 1.1.9: NEW FEATURES opls: default number of permutations set to 10 (instead of 100) as a compromise to enable both quick computation and a first hint at model significance opls: maximum number of components in automated mode (predI = NA or orthoI = NA) set to 10 (instead of 15) plot.opls: bug corrected in case of single component model without permutation testing Changes in version 1.1.8: SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.1.7: NEW FEATURES Changes in version 1.1.6: NEW FEATURES Changes in version 1.1.5: NEW FEATURES Changes in version 1.1.4: NEW FEATURES Changes in version 1.1.3: NEW FEATURES Changes in version 1.1.2: BUG FIXES Changes in version 1.1.1: SIGNIFICANT USER-VISIBLE CHANGES The packaging was modified (but not the algorithms) to be consistent with other machine learning packages: ‘opls’ is now a class and the ‘print’, ‘plot’, ‘predict’, ‘summary’, ‘fitted’, ‘coefficients’ and ‘residuals’ methods are available (see the vignette) renamed method: roplsF -> opls renamed arguments testVi -> now ‘subset’ which indicates the indices of the training (instead of the testing) observations values: tMN -> scoreMN pMN -> loadingMN wMN -> weightMN bMN -> coefficients rMN -> rotationMN varVn -> pcaVarVn tOrthoMN -> orthoScoreMN pOrthoMN -> orthoLoadingMN wOrthoMN -> orthoWeightMN new (S3) methods for objects of class ‘opls’ print summary plot predict fitted coefficients residuals Changes in version 1.5.1: unit tests <2015-06-30 Tue> export and document pxnodes <2015-06-30 Tue> Changes in version 1.5.0: Changes in version 1.4: NEW FEATURES Custom template support added Read frequency result and plot (rqcReadFrequencyPlot) added Per file top represented reads added Top representated reads added Per file heatmap plot (rqcFileHeatmap) added checkpoint function added (experimental) USER VISIBLE CHANGES BPPARAM argument replaced by workers argument File information table added to default report template Function rqcReadWidthPlot, y-axis changed to proportion (%) Almost all plots use colorblid scheme Changes in version 1.21: SIGNIFICANT USER-VISIBLE CHANGES pileup adds query_bins arg to give strand-sensitive cycle bin behavior; cycle_bins renamed left_bins; negative values allowed (including -Inf) to specify bins based on distance from end-of-read. mapqFilter allows specification of a mapping quality filter threshold PileupParam() now correctly follows samtools with min_base_quality=13, min_map_quality=0 (previously, values were assigned as 0 and 13, respectively) Support parsing ‘B’ tags in bam file headers. BUG FIXES segfault on range iteration introduced 1.19.35, fixed in 1.21.1 BamViews parallel evaluation with BatchJobs back-end requires named arguments Changes in version 1.20.0: NEW FEATURES Fast sorting of input bam files in featureCounts. Fractional counting of multi-mapping reads in featureCounts. Detection of complex indels in Subread and Subjunc aligners. Including more candidate locations in read re-alignment step to improve mapping performance. New formula for mapping paired-end reads that takes into account paired-end distance, number of subread votes and number of mismatched bases. Changes in version 1.30: NEW FEATURES SIGNIFICANT USER-VISIBLE CHANGES Changes in version 1.3: New argument ‘isLog’ to RUV* methods: if counts provided on the log scale, normalizedCounts will also be on the log scale. Fixed a bug: epsilon is now removed from the corrected counts. New function makeGroups to make scIdx matrix for RUVs. Changes in version 1.0.0 (2015-05-01): Initial version All the APIs of the SBG platform are supported First vignette added Changes in version 1.1.8: Used Sys.info() rather than sessionInfor() to extract platform information Avoided the use of “\” in R Added a parameter “Ontology” to the function FisherTest_GO_BP_MF_CC() Changes in version 1.1.6: Corrected the python path for Linux users Activated the demo code in R help files and vignettes Changed the maintainer Changes in version 1.1.4: Update the Fisher’s exact test used mouse gene background. Replace the mm10 GENCODE database V3 with V4 in the seq2pathway.data Changes in version 1.1.2: Changes in version 1.0.2: Changes in version 1.10.0: Changes in version 1.7.9: Changes in version 1.7.7: Changes in version 1.7.6: Changes in version 1.7.5: Changes in version 1.7.4: Changes in version 1.7.3: Add methods for duplicateDiscordance with two datasets Add alternateAlleleDetection Changes in version 1.4.0: Added importTranscripts() for importing annotation from GFF format Added plotCoverage() for visualization of per-base read coverage and junction read counts Added predictVariantEffects() for predicting the effect of splice variants on annotated protein-coding transcripts findSGVariants() is now able to deal with more complex gene models SGVariants columns closed5p and closed3p now refer to individual splice variants rather than the splice event they belong to Bug fixes and other improvements Changes in version 1.27: SIGNIFICANT USER-VISIBLE CHANGES fastqFilter allows several input ‘files’ to be written to a single ‘destinations’. readAligned() for BAM files is defunct. QA and associated methods removed. srapply removed Changes in version 1.1.01: paramether tsne.theta has been added to function sc_DimensionalityReductionObj function sc_InSilicoCellsReplicatesObj has been expanded to incorporate a forth model of noise generation following the negative binomial (NB) distribution. For each gene, dispersion parameters for the NB can be estimated in three alternative ways allowing different types of estimates (e.g. technical noise provided by the user). See documentation of sc_InSilicoCellsReplicatesObj() function. Changes in version 0.99.0 (2015-08-07): Changes in version 1.4.0: Changes in version 1.3.7: USER UNVISIBLE CHANGES Changes in version 1.3.5: USER UNVISIBLE CHANGES Changes in version 1.3.4: USER VISIBLE CHANGES USER UNVISIBLE CHANGES read.bibliospec - replaced old code (for loop) by using mcmapply added time meassurements to read.bibliospec Changes in version 1.3.3: USER VISIBLE CHANGES USER UNVISIBLE CHANGES .mascot2psmSet buxfix renamed column name in spectronaut outpu from irt to irt_or_rt Changes in version 1.3.2: USER VISIBLE CHANGES added ssrc (Sequence Specific Retention Calculator) function added a CITATION file Changes in version 1.3.1: USER VISIBLE CHANGES USER UNVISIBLE CHANGES Changes in version 1.0.0: Changes in version 1.7.3: NEW FEATURES Changes in version 1.7.2: NEW FEATURES Changes in version 1.11.2: Changes in version 1.11.1: Changes in version 1.3: OVERVIEW systemPipeR is an R/Bioconductor package for building and running automated analysis workflows for a wide range of next generation sequence (NGS) applications.. The most important enhancements in the upcoming release of the package are outlined below. NGS WORKFLOWS Added new end-to-end workflows for 3 additional NGS application areas: - Ribo-Seq and polyRibo-Seq - ChIP-Seq - VAR-Seq The previous version of systemPipeR included only a complete workflow for RNA-Seq. Added the data package ‘systemPipeRdata’ to generate systemPipeR workflow environments with a single command (genWorkenvir) containing all parameter files and sample data required to quickly test and run workflows. This change will also allow evaluation of much more code examples in the vignettes during the package build/test process than this was possible in the past. About 20 new functions have been added to the package. Some examples are: - Read pre-processor function with support for SE and PE reads - Parallelization option of detailed FASTQ quality reports - Read distribution plots across all features available in a genome annotation (see ?featuretypeCounts) - Visualization of coverage trends along transcripts summarized for any number of transcripts (see ?featureCoverage) - Functionalities to predict uORFs/sORFs and to use them for expression profiling - Differential expression/binding analysis includes now DESeq2 as well as edgeR Added param templates for additional command-line software including, but not limited to: BWA-MEM, GATK, BCFtools, MACS2 Adoption of R Markdown for main vignette. Future plans are to provide for all workflows the report templates in both formats: Latex/PDF and R_Markdown/HTML. WORFLOW FRAMEWORK Simplified design of complex analysis workflows. Workflows can now include any number or combination of R and/or command-line steps Improvements to workflow automation and parallelization on single machines and computer clusters. This also includes now many additional parallelization examples in the workflow vignettes. Changes in version 1.26.0: SIGNIFICANT USER-VISIBLE CHANGES BUG FIXES Fix potential potencial pearson correlation errors that occur when the standard is deviation zero. If that is the case, replace the resulting NA/NaN by zero. Fix R check warnings. Changes in version 0.99.9: CODE Modifications in pileupCounts in order to consider stranded counts Changes in plot methods. arrangeGrob from gridExtra was changed by plot_grid form cowplot package. Update the package vignette. Changes in version 0.99.8: CODE Modifications in pileupCounts and buildFeaturePanel in order to process overlapped features. Wrap plotting examples. Changes in version 0.99.7: CODE Modifications in pileupCounts and buildFeaturePanel in order to process overlapped features. Modifications in bedFile building in order to remove duplicated features based on duplicated start, end and chromosome definitions. Changes in version 0.99.6: CODE Definition of pileupCounts as a function instead a TargetExperiment S4 method. Adaptation and optimization of pileupCounts and bplapply usage in the buildFeaturePanel method. Changes in version 0.99.4: DESCRIPTION file Changes in version 0.99.3: CODE Changes in version 0.99.2: CODE Changes in version 0.99.1: DOCUMENTATION Changes in version 0.99.0: DOCUMENTATION NEWSfile was added. Changes in version 1.9.3: changed default value. Set ‘test.method = “DESeq2” as default value when analyze multi-group without replicate data. add ‘makeFCMatrix’ function for generating the foldchange matrix that is used in ‘simulateReadCounts’ funtion. add function to simulate DEGs using fondchange matrix into ‘simulateReadCounts’ function. Changes in version 3.9.1: new parameter ‘plotchroms’ in function ‘chrom.barplot’ that allows to specify the chromosomes (and their desired order) that shall be included in the plot bug fix regarding use of ‘Offset’ bases in ‘TEQCreport’ Changes in version 1.7.2: NEW FEATURES New class TFFMFirst and TFFMDetail for next generation TFBSs. Novel TFFM sequence logo. BUG FIXES Changes in version 1.9.9: Major update to the CCR-part: It is now possible to fit and plot multiple experiments simultaneously. It is now possible to perform user-specified comparisons of different experiments. They are specified in the ‘comparison’ column of the config table. TR-part: Hypothesis testing is now separated from result table creation. Therefore, a new function was introduced (tpptrAnalyzeMeltCurves) -> see vignette. CCR-part: Curve fitting and plotting are now conducted by separate functions -> see vignette. Bug fixes Introduced color coding of the columns belonging to different experiments in the excel output. This requires openxlsx version >= 2.4.0. The CCR workflow now only returns normalized measurements if normalization was actually performed. Unmodified measurements are always returned and indicated by the suffix ‘unmodified’. Data import from tab-delimited files now ignores quotes so that protein annotation fields can contain single ‘ or “ characters. Now enabling arbitrary numbers of plot colors for melting curves or dose response curves. Changes in version 1.2.5: CCR-import: Argument nonZeroCols can be NULL if no additional filtering is needed. CCR output: To make filtering easier, the ‘passed_filter’ column now shows FALSE even if proteins could not be used for fitting (instead of NAs). CCR output: To make filtering easier, the ‘passed_filter’ column now shows FALSE even if proteins could not be used for fitting (instead of NAs). CCR output: column with normalization results was renamed from ‘normalized’ to ‘median_normalized’ to distinguish from the newly introduced normalization to value at lowest concentration. Excel export: Columns are only color-coded by experiment, when the number of experiments is > 1. Excel export: Columns are only color-coded by experiment when the number of experiments is > 1. Excel export: Relative paths to the plots work now for TR- and CCR part Bug fix for data import: Unique identifiers now treated correctly again. Bug fix for Excel output: Boolean column entries are only transformed to “yes”/”no” for non-missing values. Bugfix in TR-QC plots: do not attempt to create Tm difference histograms if only one experiment is provided. Bugfix in TR normalization: fixedReference argument works again Changes in version 1.1.4: Changes in version 1.1.3: Changes in version 1.1.2: Changes in version 1.1.1: Changes in version 1.1.0: Changes in version 1.5.6: update documentation. add new feature for optimizing the styles with theme. Changes in version 1.5.5: NEW FEATURES BUG FIXES Changes in version 1.5.4: NEW FEATURES BUG FIXES Changes in version 1.5.3: NEW FEATURES BUG FIXES Changes in version 1.5.2: NEW FEATURES Add GRanges operators: +, -, *, / export parseWIG function. BUG FIXES Changes in version 1.15.1: Changes in version 2.0.0-16: Changes in version 0.99.8: Changes in version 0.99.7: added new class varPartResults to store results of fitExtractVarPartModel() and fitExtractVarPartModel() - the user will not notice any change, only the backend is different Allow computation of adjusted ICC in addition to ICC. add warning when categorical variables are modeled as fixed effects fix computation of variance fractions for varying coefficient models add getVarianceComponents() to return variances from lmer() or lm() model fit showWarnings=FALSE suppresses warning messages add fxn argument to fitVarPartModel to evaluate any function on the model fit Changes in version 0.99.6: Changes in version 0.99.5: Changes in version 0.99.4: add documentation for example datasets convert calcVarPart() to S4 from S3 function call fix typos in vignette Changes in version 0.99.3: Changes in version 0.99.2: rename sort.varParFrac to sortCols support ExpressionSet change options for plotStratifyBy() # Before Bioconductor submission Changes in version 0.99.0: Changes in version 1.16.0: NEW FEATURES support REF and ALT values “.”, “+” and “-“ in predictCoding() return non-translated characters in VARCODON in predictCoding() output add ‘verbose’ option to readVcf() and friends writeVcf() writes ‘fileformat’ header line always readVcf() converts REF and ALT values “*” and “I” to ‘’ and ‘.’ MODIFICATIONS VRanges uses ‘*’ strand by default coerce ‘alt’ to DNStringSet for predictCoding,VRanges-method add detail to documentation for ‘ignore.strand’ in predictCoding() be robust to single requrested INFO column not present in vcf file replace old SummarizedExperiment class from GenomicRanges with the new new RangedSummarizedExperiment from SummarizedExperiment package return strand of ‘subject’ for intronic variants in locateVariants() BUG FIXES writeVcf() does not duplicate header lines when chunking remove extra tab after INFO when no FORMAT data are present filteVcf() supports ‘param’ with ranges Changes in version 1.6: USER VISIBLE CHANGES Update on the scores() method for PhastConsDb objects that enables a 10-fold faster retrieval of mean phastCons scores over genomic intervals. Added two new annotated regions coded as fiveSpliceSite and threeSpliceSite and the scoring of their binding affyinity, if scoring matrices are provided. BUG FIXES Fix on the scores() method for PhastConsDb objects, that was affecting multiple-nucleotide ranges from an input GRanges object with unordered sequence names. Fix on snpid2maf() method when decoding MafDb variants with AF values whose significant digits start with 95. Changes in version 1.45.7: USER VISIBLE CHANGES Changes in version 1.45.6: NEW FEATURE BUG FIXES Changes in version 1.45.5: USER VISIBLE CHANGES The sampclass method for xcmsSet will now return the content of the column “class” from the data.frame in the phenoData slot, or if not present, the interaction of all factors (columns) of that data.frame. The sampclass<- method replaces the content of the “class” column in the phenoData data.frame. If a data.frame is submitted, the interaction of its columns is calculated and stored into the “class” column. BUG FIXES Changes in version 1.45.4: BUG FIXES Changes in version 1.45.3: NEW FEATURE xcmsSet now allows phenoData to be an AnnotatedDataFrame. new slots for xcmsRaw: - mslevel: store the mslevel parameter submitted to xcmsRaw. - scanrange: store the scanrange parameter submitted to xcmsRaw. new slots for xcmsSet: - mslevel: stores the mslevel argument from the xcmsSet method. - scanrange: to keep track of the scanrange argument of the xcmsSet method. USER VISIBLE CHANGES show method for xcmsSet updated to display also informations about the mslevel and scanrange. Elaborated some documentation entries. rtrange and mzrange for xcmsRaw method plotEIC use by default the full RT and m/z range. Added arguments “lty” and “add” to plotEIC method for xcmsRaw. getEIC without specifying mzrange returns the ion chromatogram for the full m/z range (i.e. the base peak chromatogram). BUG FIXES Checking if phenoData is a data.frame or AnnotatedDataFrame and throw an error otherwise. xcmsSet getEIC method for water Lock mass corrected files for a subset of files did not evaluate whether the specified files were corrected. Changes in version 1.45.2: BUG FIXES Changes in version 1.45.1: NEW FEATURE plotrt now allows col to be a vector of color definition, same as the plots for retcor methods. Added $ method to access phenoData columns in a eSet/ExpressionSet like manner. Allow to use the “parallel” package for parallel processing of the functions xcmsSet and fillPeaks.chrom. Thanks to J. Rainer! Changes in version 3.2: VERSION xps-1.29.1 Changes in version 1.29.1: No packages were removed in this release.
http://bioconductor.org/news/bioc_3_2_release/
CC-MAIN-2019-13
refinedweb
17,051
51.24
#include <ggi/internal/triple-int.h> unsigned *invert_3(unsigned x[3]); unsigned *lshift_3(unsigned l[3], unsigned r); unsigned *rshift_3(unsigned l[3], unsigned r); lshift_3 shifts l to the left by r bits. Equivalent to l<<=r. rshift_3 shifts l to the right by r bits. This shift is arithmetic, so the sign of l is kept as is. Equivalent to l>>=r. Both lshift_3 and rshift_3 return a pointer to l which has been updated in place. unsigned x[3]; assign_int_3(x, -4); invert_3(x); /* x is now 3 */ lshift_3(x, 42); /* x is now 3*2^42, if that fits in a triple-int */ rshift_3(x, 17); /* x is now 3*2^25 */
http://www.makelinux.net/man/3/G/ggidev-rshift_3
CC-MAIN-2014-42
refinedweb
116
83.25
I am attempting to generate a list of elements that would generate a tree structure if visualized. Example: Element 1 -> 1, 0 //ID is 1 & Parent ID is 0 (0 = root) Element 2 -> 2, 0 //ID is 2 & Parent ID is 0 (0 = root) Element 3 -> 3, 1 //ID is 3 & Parent ID is 1 Element 4 -> 4, 3 Element 5 -> 5, 2 public class Node { public string id; public string parentId; public List<Node> children = new List<Node>(); } Similar to a tree, a node may contain a parent and it also may contain multiple other children nodes If I understand correctly, you have a flat List<Node> and you want to populate children property and end up with a list of the root nodes. This can be achieved efficiently by building a fast lookup structure by id (like Dictionary) and single iteration over the source list. For each node you'll find the parent node and add the node to the parent children list (or to root list if there is no parent). The time complexity of the algorithm will be O(N). static List<Node> BuildTree(List<Node> nodes) { var nodeMap = nodes.ToDictionary(node => node.id); var rootNodes = new List<Node>(); foreach (var node in nodes) { Node parent; if (nodeMap.TryGetValue(node.parentId, out parent)) parent.children.Add(node); else rootNodes.Add(node); } return rootNodes; }
https://codedump.io/share/XHasCxSBARc1/1/c-manually-sorting-parentchild-tree-elements-in-a-list
CC-MAIN-2018-09
refinedweb
227
58.92
Enterprise Site Discovery Tool Sharon Newman, Program Manager for Internet Explorer, looks at how developers can add drag and drop functionality to their websites using HTML5 drag and drop and the File API. The sample used in the video and the second IE10 Platform Preview are available on the IE Test Drive. Actual format may change based on video formats available and browser capability. why is this tagged with internet explorer 8? O_o Cause it don't work on IE8? That's me typing Internet Explorer and hitting tab out of a short field. Nice, but the fun with drag&drop begins by supporting shell namespaces like cameras and other devices. How about that? Video doesn't work. If you could provide a WebM version that would be great. Comments have been closed since this content was published more than 30 days ago, but if you'd like to send us feedback you can Contact Us.
https://channel9.msdn.com/posts/Internet-Explorer-10-Platform-Preview-2-A-look-at-Magnetic-Poetry-in-IE10?format=progressive
CC-MAIN-2017-26
refinedweb
156
74.49
Often we want to use Azure's scalable storage to deploy file servers. For geographically distributed infrastructures, it makes sense to establish a distributed file system (DFS) that spans across regions, both for availability and reach. Here is a summary of the steps required to do so: - Create a vnet in each desired datacenter, ideally with at least 2 subnets (1 for the domain controllers, another for the file servers) + 1 for the gateway - Establish cross-premise connectivity in Azure, for instance as explained in - Deploy domain controllers on the relevant subnets in both datacenters. Make sure that the dns option in the vnet configurations points to both domain controllers in the appropriate order (dc1, dc2 on vnet1 - dc2, dc1 on vnet2). - It is recommended that you assigned a fixed ip to the domain controllers / dns servers. You will need Powershell to do that as explained in. - Deploy servers on the relevant subnets and join them to the domain. - Add at least a data volume each to the servers to host the file shares. If you are planning to replicate its content using dfs-r, it may make sense to put the volume on a locally redundant storage account. - Log into each server and, in “configure local server”, add the DFS namespace and DFS replication features. - Alternatively, you could use Powershell desired configuration to deploy servers with those features enabled. See - In the dfs management console on server 1, create a namespace. - Select “edit settings” and assign full access to administrator, r/w to other users - Make sure that it is a domain-based namespace and “2008 mode” is enabled. Information about namespace roots is then replicated on namespace servers in the domain. - Add server 2 as a namespace server to the namespace you've just created for availability. - Select “new folder” in the namespace. - Create a new file share hosted on the local data volume (e.g.\\server1\share1). Assign permissions as desired (e.g. admins full access and rest r/w). - Click on the share you've just created and select “replicate folder”. - Add a target for the replication on server2 e.g. \\server2\share1 - Create replication group as required, by following prompts. - You may also want to verify that replication works both ways by creating a share on server2 that is replicated to a target on server1.
https://blogs.technet.microsoft.com/gmarchetti/2015/02/17/dfs-in-azure/
CC-MAIN-2017-39
refinedweb
390
63.39
Thread-safe bags in .NET Bags are very similar to Stacks and Queues. We saw that both stacks and queues order their elements in a well defined way: last-in-first-out and first-in-first-out respectively. Bags on the other hand are unordered collections. There’s no guarantee on how, i.e. in which order the elements will be retrieved. Unlike stacks and queues, bags have no one-to-one single-threaded implementation in .NET. They are however implemented as thread-safe ConcurrentBag objects in the System.Collections.Concurrent namespace. Here’s some terminology: - You can insert new items to a concurrent bag with the Add method, like in the case of a List - A single item is removed using the TryTake method. It returns false in case there was nothing to retrieve from the bag. If it returns true then the retrieved item is returned as an “out” parameter - You can check the next item using the TryPeek method. It works the same way as TryTake but it doesn’t remove the element from the collection The following example shows the ConcurrentBag in action. The code fills up a bag of integers with 1000 elements and starts 4 different threads that all read from the shared bag. There’s a very similar code example in the post on ConcurrentQueue and if you run both samples then you’ll see that ConcurrentBags and ConcurrentQueues behave very similarly in practice: public class ConcurrentBagSampleService { private ConcurrentBag<int> _integerBag = new ConcurrentBag<int>(); public void RunConcurrentBagSample() { FillUpBag(1000); Task readerOne = Task.Run(() => GetFromBag()); Task readerTwo = Task.Run(() => GetFromBag()); Task readerThree = Task.Run(() => GetFromBag()); Task readerFour = Task.Run(() => GetFromBag()); Task.WaitAll(readerOne, readerTwo, readerThree, readerFour); } private void FillUpBag(int max) { for (int i = 0; i <= max; i++) { _integerBag.Add(i); } } private void GetFromBag() { int res; bool success = _integerBag.TryTake(out res); while (success) { Debug.WriteLine(res); success = _integerBag.TryTake(out res); } } } Each thread will try to take items from the shared bag as long as TryTake returns false. In that case we know that there’s nothing more left in the collection. View the list of posts on the Task Parallel Library here. Pingback: Summary of thread-safe collections in .NET – .NET training with Jead
https://dotnetcodr.com/2015/07/21/thread-safe-bags-in-net/
CC-MAIN-2017-43
refinedweb
372
58.99
Creating a Standard C++ Program (C++) Updated: July 2009 With Visual C++ 2008, you can create Standard C++ programs by using the Visual Studio development environment. By following the steps in this topic, you can create a project, add a new file to the project, modify the file to add C++ code, and then compile and run the program by using Visual Studio. You can type your own C++ program or use one of the sample programs. The sample program that is used in this topic is a console application. The application uses the set container in the Standard Template Library (STL), which is part of the ISO C++ 98 standard. Visual C++ complies with these standards: ISO C 95 ISO C++ 98 Ecma C++/CLI 05 To create a project and add a source file On the File menu, point to New, and then click Project. Under Project types, expand Visual C++, and then select Win32. Under Templates, click Win32 Console Application. Type a project name. By default, the solution that contains the project has the same name as the new project, but you can type a different name. You can also type a different location for the project. Click OK to create the project. In the Win32 Application Wizard, click Application Settings to reveal options for Application type. Under Additional Options, select Empty Project and then click Finish. To add a new source file to the project In Solution Explorer, right-click the Source Files folder, point to Add, and then click New Item. On the Visual Studio installed templates list, select C++ File (.cpp), type a file name, and then click Add. The .cpp file appears in the Source Files folder in Solution Explorer and is automatically opened in the code editor. Copy the sample program from set::find (STL Samples) by clicking the Copy Code link under Example, and then paste the code into the empty file in the editor. You can also choose a different sample program, or type your own valid C++ program into the empty file. If you use the suggested sample program, notice the using namespace std; directive. This directive enables the program to use cout and endl without requiring fully qualified names (std::cout and std::endl). To build and examine the program On the Build menu, click Build Solution. The Output window displays information about the compilation progress, for example, the location of the build log and a message that states the build status. On the Debug menu, click Start without Debugging. If you used the sample program, a command window is displayed and shows whether certain integers are found in the set.
https://msdn.microsoft.com/en-US/library/ms235629(v=vs.90).aspx
CC-MAIN-2017-34
refinedweb
443
71.34
The Interactive Programming in Python' Course # RICE University - # by Joe Warren, John Greiner, Stephen Wong, Scott Rixner # One of the simplest two-player games is “Guess the number” # The first player thinks of a secret number in some known range while # the second player attempts to guess the number. After each guess, # the first player answers either “Higher”, “Lower” or “Correct!” # depending on whether the secret number is higher, lower or equal to the guess. # try: import simplegui except: import SimpleGUICS2Pygame.simpleguics2pygame as simplegui # To run simplegui in idle python, install SimpleGUICS2Pygame module # download module : import random import math # Global Variables secret_number = 0 limit = 100 No_of_attempts = int(math.ceil(math.log(limit + 1 ,2))) def new_game(): global secret_number,guess secret_number = random.randint(0, 100) def range100(): global secret_number, limit, No_of_attempts limit = 100 secret_number = random.randint(0, limit) No_of_attempts = int(math.ceil(math.log(limit,2))) #print "Lets Play A Game!!!\n" print "\nNew Game.\nGuess a number between 0 to 100" print "You have about %s attempts at this!" %(No_of_attempts) def range1000(): global secret_number, limit, No_of_attempts limit = 1000 secret_number = random.randint(0, limit) No_of_attempts = int(math.ceil(math.log(limit,2))) print "Lets Play A Game!!!\nGuess a number between 0 to 1000" print "You have about %s attempts at this!" %(No_of_attempts) def input_guess(guess): global No_of_attempts, secret_number try: guess = int(guess) # print No_of_attempts #print "\nYour guess was", guess # print "You have %s "%(No_of_attempts) No_of_attempts -= 1 if No_of_attempts > 0: print "\nYour guess was", guess # for No_of_attempts in range(No_of_attempts): # print guess, secret_number, No_of_attempts # No_of_attempts -= 1 if (guess > secret_number): print "Go Lower!!!!!" print "No. of Attempts left:",No_of_attempts elif(guess < secret_number): print "Go Higher!!!!!" print "No. of Attempts left:",No_of_attempts else: print "\nCorrect!!!\n" print "You guessed it right!!!\nCorrect number was %s, you had %s attempts left...\nChoose a New game. " %(secret_number, No_of_attempts) #break else : print "\nTime up, You Lost\nCorrect number was %s"%(secret_number) print "Try again a new game." #new_game() range100() except: print 'Input is not a number!!!\nTry Again!!!' def end_game(): print "\t*******************" print "\t* *" print "\t* Game Over!!! *" print "\t* *" print "\t*******************" frame.stop() def Init_screen(): print "\t***************************" print "\t* I want to Play a Game. *" print "\t* The rules are simple. *" print "\t* All you have to do is- *" print "\t* sit here and guess, a *" print "\t* number I am thinking *" print "\t* ---JigSaw *" print "\t* *" print "\t***************************" range100() frame = simplegui.create_frame('Guess the number', 200, 200) frame.add_button('Range is [ 0 - 100 ]', range100,200) frame.add_button('Range is [ 0 - 1000 ]', range1000, 200) frame.add_input('Enter A Guess', input_guess, 40) frame.add_button('End Game', end_game, 70) frame.start() # call new_game Init_screen() new_game() [/sourcecode]
https://blog.mphomphego.co.za/blog/2014/10/22/guess-the-number-python.html
CC-MAIN-2018-22
refinedweb
439
68.47
Calling a rust library with the Panama FFI Steps for setting up a “Hello, World!” example In this example we will see how to: - build a simple rust library that exposes a C API (which the Panama FFI can link against). - use cbindgento generate a C header file for this library. - use jextractto generate java bindings from the header file. - create a simple java program that invokes the rust library through the bindings. Step 1. Setup a project $ mkdir rust-panama-helloworld $ cd rust-panama-helloworld $ cargo init --lib Step 2. Write a simple rust library Edit the src/lib.rs and change the contents to: #[no_mangle] pub extern "C" fn hello_world() { println!("Hello, world!"); } The #[no_mangle] attribute is needed to make sure the function will be visible in the library, and extern "C" is used to make sure the function has the right ABI (the C ABI for the particular platform). Step 3. Add the needed project config Go into Cargo.toml and add the following: [build-dependencies] cbindgen = "0.20.0" [lib] crate_type = ["cdylib"] cbindgen is used to generate a C header file from the rust sources, which will be fed into panama’s jextract tool to generate java bindings. Step 4. Create a build script that invokes cbindgen Create a build.rs file in the top-level directory and add the following: extern crate cbindgen; use std::env; fn main() { let crate_dir = env::var("CARGO_MANIFEST_DIR").unwrap(); cbindgen::Builder::new() .with_crate(crate_dir) .with_language(cbindgen::Language::C) .generate() .expect("Unable to generate bindings") .write_to_file("lib.h"); } This is a simple build script that invokes cbindgen and writes the output to the lib.h file in the top-level directory. Step 5. Build the rust library & generate a C header file $ cargo build This should create the file (lib)rust_panama_helloworld.(dll/so/dylib) in the target/debug folder. The build should also invoke cbindgen and generate the file lib.h in the top-level directory. The contents of this file should look like this: #include <stdarg.h> #include <stdbool.h> #include <stdint.h> #include <stdlib.h> void hello_world(void); Step 6. Make sure the panama jdk is on the path $ java --version openjdk 17-panama 2021-09-14 OpenJDK Runtime Environment (build 17-panama+3-167) OpenJDK 64-Bit Server VM (build 17-panama+3-167, mixed mode, sharing) I’m using the latest snapshot available at at the time of writing. Step 7. Generate java bindings using jextract $ jextract -d classes -t org.openjdk --include-function hello_world -l rust_panama_helloworld -- lib.h In this command: -d classesspecifies the output directory for the generated bindings. -t org.openjdkspecifies the java package the generated classes will be in. --include-function hello_worldmakes it so we just include the hello_worldfunction we defined in rust. -l rust_panama_helloworldmakes it so the generated bindings will load the rust_panama_helloworldlibrary automatically. --is a separator used to indicate the start of the list of header files to process. - and finally lib.his the header file that was generated during step 5. A classes directory should be created that contains some files generated by jextract: ./classes └───org └───openjdk constants$0.class lib_h.class RuntimeHelper$VarargsInvoker.class RuntimeHelper.class To see what jextract is doing in these bindings, the --source option can also be added to the command above to generate java sources instead of classes, though we need classes for this tutorial (the source files are simply the un-compiled version of the generated class files). The sources could serve as an example of how to use the Panama FFI API directly without using jextract as well, but for this tutorial we’ll keep things simple and use jextract to do the low-level work. See also the State of foreign function support document for a broader overview of the panama foreign function API. Step 8. Create a java program that calls our library Create a Main.java file in the top-level directory and place the following inside: import static org.openjdk.lib_h.*; public class Main { public static void main(String[] args) { hello_world(); } } A simple main class that calls our library function. Step 9. Run the java program $ java --add-modules jdk.incubator.foreign --enable-native-access=ALL-UNNAMED -Djava.library.path=./target/debug -cp classes Main.java In this command: --add-modules jdk.incubator.foreignadds the jdk.incubator.foreignmodule to the module graph (incubator modules are not resolved by default). --enable-native-access=ALL-UNNAMEDenables native access for the unnamed module, which is needed for the panama API to work. -Djava.library.path=./target/debugadds the directory with our rust library to the library path, so System.loadLibrarycan find it at runtime. -cp classesspecifies the class path with our generated java binding classes (the output of jextract). Main.javaspecifies our main class, which will be compiled on the fly, and then run. Step 10. Observe the ouput WARNING: Using incubator modules: jdk.incubator.foreign warning: using incubating module(s): jdk.incubator.foreign 1 warning Hello, world! Success! (some warnings are printed because we are using an incubator module)
https://jornvernee.github.io/rust/panama-ffi/2021/09/03/rust-panama-helloworld.html
CC-MAIN-2021-39
refinedweb
839
50.33
NAME Lexical::SealRequireHints - prevent leakage of lexical hints SYNOPSIS use Lexical::SealRequireHints; DESCRIPTION This module works around two historical bugs in Perl's handling of the %^H (lexical hints) variable. One bug causes lexical state in one file to leak into another that is required/ used from it. This bug, [perl #68590], was present from Perl 5.6 up to Perl 5.10, fixed in Perl 5.11.0. The second bug causes lexical state (normally a blank %^H once the first bug is fixed) to leak outwards from utf8.pm, if it is automatically loaded during Unicode regular expression matching, into whatever source is compiling at the time of the regexp match. This bug, [perl #73174], was present from Perl 5.8.7 up to Perl 5.11.5, fixed in Perl 5.12.0. Both of these bugs seriously damage the usability of any module relying on %^H for lexical scoping, on the affected Perl versions. It is in practice essential to work around these bugs when using such modules. On versions of Perl that require such a workaround, this module globally changes the behaviour of require, including use and the implicit require performed in Unicode regular expression matching, so that it no longer exhibits these bugs. The workaround supplied by this module takes effect the first time its import method is called. Typically this will be done by means of a use statement. This should be done as early as possible, because it only affects require/ use statements that are compiled after the workaround goes into effect. For use statements, and require statements that are executed immediately and only once, it suffices to invoke the workaround when loading the first module that will set up vulnerable lexical state. Delayed-action require statements, however, are more troublesome, and can require the workaround to be loaded much earlier. Ultimately, an affected Perl program may need to load the workaround as very nearly its first action. Invoking this module multiple times, from multiple modules, is not a problem: the workaround is only applied once, and applies to everything subsequently compiled. This module is implemented in XS, with a pure Perl backup version for systems that can't handle XS modules. The XS version has a better chance of playing nicely with other modules that modify require handling. The pure Perl version can't work at all on some Perl versions; users of those versions must use the XS. On all Perl versions suffering the underlying hint leakage bug, pure Perl hooking of require breaks the use of require without an explicit parameter (implicitly using $_). PERL VERSION DIFFERENCES The history of the %^H bugs is complex. Here is a chronological statement of the relevant changes. - Perl 5.6.0 %^Hintroduced. It exists only as a hash at compile time. It is not localised by require, so lexical hints leak into every module loaded, which is bug [perl #68590]. The CORE::GLOBALmechanism doesn't work cleanly for require, because overriding requireloses the necessary special parsing of bareword arguments to it. As a result, pure Perl code can't properly globally affect the behaviour of require. Pure Perl code can localise %^Hitself for any particular requireinvocation, but a global fix is only possible through XS. - Perl 5.7.2 The CORE::GLOBALmechanism now works cleanly for require, so pure Perl code can globally affect the behaviour of requireto achieve a global fix for the bug. - Perl 5.8.7 When utf8.pmis automatically loaded during Unicode regular expression matching, %^Hnow leaks outward from it into whatever source is compiling at the time of the regexp match, which is bug [perl #73174]. It often goes unnoticed, because [perl #68590] makes %^Hleak into utf8.pmwhich then doesn't modify it, so what leaks out tends to be identical to what leaked in. If [perl #68590] is worked around, however, %^Htends to be (correctly) blank inside utf8.pm, and this bug therefore blanks it for the outer module. - Perl 5.9.4 %^Hnow exists in two forms. In addition to the relatively ordinary hash that is modified during compilation, the value that it had at each point in compilation is recorded in the compiled op tree, for later examination at runtime. It is in a special representation-sharing format, and writes to %^Hare meant to be performed on both forms. requiredoes not localise the runtime form of %^H(and still doesn't localise the compile-time form). A couple of special %^Hentries are erroneously written only to the runtime form. Pure Perl code, although it can localise the compile-time %^Hby normal means, can't adequately localise the runtime %^H, except by using a string eval stack frame. This makes a satisfactory global fix for the leakage bug impossible in pure Perl. - Perl 5.10.1 requirenow properly localises the runtime form of %^H, but still not the compile-time form. A global fix is once again possible in pure Perl, because the fix only needs to localise the compile-time form. - Perl 5.11.0 requirenow properly localises both forms of %^H, fixing [perl #68590]. This makes [perl #73174] apparent without any workaround for [perl #68590]. The special %^Hentries are now correctly written to both forms of the hash. - Perl 5.12.0 The automatic loading of utf8.pmduring Unicode regular expression matching now properly restores %^H, fixing [perl #73174]. BUGS The operation of this module depends on influencing the compilation of require. As a result, it cannot prevent lexical state leakage through a require statement that was compiled before this module was invoked. Where problems occur, this module must be invoked earlier. On all Perl versions that need a fix for the lexical hint leakage bug, the pure Perl implementation of this module unavoidably breaks the use of require without an explicit parameter (implicitly using $_). This is due to another bug in the Perl core, fixed in Perl 5.15.5, and is inherent to the mechanism by which pure Perl code can hook require. The use of implicit $_ with require is rare, so although this state of affairs is faulty it will actually work for most programs. Perl versions 5.12.0 and greater, despite having the require hooking bug, don't actually exhibit a problem with the pure Perl version of this module, because with the lexical hint leakage bug fixed there is no need for this module to hook require. SEE ALSO AUTHOR Andrew Main (Zefram) <zefram@fysh.org> Copyright (C) 2009, 2010, 2011, 2012, 2015, 2016, 2017 Andrew Main (Zefram) <zefram@fysh.org> LICENSE This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
https://web-stage.metacpan.org/pod/Lexical::SealRequireHints
CC-MAIN-2021-25
refinedweb
1,117
63.19
We all know that programming is based on math, what is perhaps not so clear is that knowing more math is going to help us become better developers. And although in all types of programming, math is fundamental, it is even more so in functional programming. A very widespread concept which I’m sure you have heard lately is the monad. It is a mathematical concept of the category theory applied later in functional programming. I first heard the term when I started programming in Scala. After much searching on the Internet, I only found articles explained by and for mathematicians or articles that talked about programming and not math. On the other hand, some abuse of language is done to refer to things that are not monads as such. So we will try to talk here, without losing the mathematical rigor, about what the monads are, and then how and why they will help us become better developers. Although they are very abstract and complex terms we will try to do it all in a simple way. In order to understand the monads, we must first learn about two concepts, categories and functors. Categories Let’s start with an example: Imagine that we have obtained the following map of the Middle Earth of the Lord of the Rings. Elves have made it. What do we see on this map? The first thing we see is a few towns. In addition, we have roads that connect each of the towns, as well as themselves (identity). These are one way or one way back. Also, as seen on the map, you cannot reach every town (Who would want to go to Mordor? Oh, that’s right… hobbits…). On the other hand, it seems clear that if Frodo wants to go from The Shire to Erebor, he can go first through Rivendell or he can go directly to Erebor. This is a category. Mathematically speaking, it has two components: - The first, a set of sets (the set of towns on Middle Earth) - The second, a type of functions/arrows that are called morphisms (the roads that connect the towns). In addition, we can compose the morphisms (as in the previous Frodo example) and this composition is such that any morphism composed with the identity is equal to morphism¹ and such that the composition is associative². Functor: Imagine now that we find another map, this time the map of the Seven Kingdoms of Game of Thrones. Again, we have towns and roads linking these towns. That is, we have another category. Look at the roads that link each town. Can you find some similarity? For example, is there any town that no one wants to go to? Of course, who would want to go to Meereen? Nobody! Another example is that there is a town that can be reached from all other towns. Look, at the Middle Earth one too! In fact, it has so many similarities we could say that if we change names and distances, it is the same map (in mathematics it is said to have the same structure and, in the real world, we say George R.R. Martin was “strongly inspired” by the Tolkien’s work)³. Seeing this, it is clear that we should be able to transform, the map of the Middle Earth in the Seven Kingdoms, but … how? The first thing we have to think about is a transformation the towns, in this case: The last thing we have to do is transforming the roads: A very important thing about this transformation is that it preserves the structure, including the composition: Frodo going from the Shire to Erebor (optionally going through Rivendell) would be similar to a trip made by John Snow from The Wall to King’s Landing (also optionally passing by The Eyrie). We can say that this transformation… It’s a Functor! In this way, we have been able to transform “completely different” worlds, exactly what the category theory in mathematics was created for: to build bridges between very different fields. Monad: Now we have found another map of the Middle Earth, but this time, drawn by a Dwarf. Of course, the Middle Earth is the same, so it must have (and has) the same populations, and the same roads. Of course with so many maps, it would be difficult It to get lost … As it is essentially the same map, it is clear that we can also construct a functor or a transformation of the Middle Earth itself, which is the same. This special type of functors is known as endofunctors. But there is more. If an endofunctor also fulfills that it has two natural transformations such as the identity function and another function that is associative, we can say that we have a monad (this we call monadic laws)⁴. Monads in functional programming History: Philip Wadler, who is the greatest contributor to the theory behind functional programming and the use of monads, was inspired by the use of Eugenio Moggi’s monads to give semantics to programming languages. The monads introduced the idea of sequence into the functional paradigm, but without contradicting it, thanks to the composition of functions.⁵ Many common programming concepts can be described in terms of monad structures, including side effects, I/O, exception handling… This allows such concepts to be defined in purely functional ways without the need to add new terms to language semantics and without resorting to imperative programming. There are languages like Haskell that provide monads in the standard kernel and, although Scala does not provide it in the kernel, you are familiar with Cats and Scalaz libraries that help us develop with monads (although we can always implement our own). Functors and monads in FP: It has been nice explaining how the category theory in mathematics works but, as developers, we already want to see an example applied to functional programming: This drawing⁶ represents a functor, in this case, composed of List and map. Surely you have already linked this example with the one of the Lord of the Rings and Game of Thrones´s functor. You may have read online that List is a functor per se, just like the Scala Option class is a monad. Both are type constructors (that would be equivalent to the transformer of towns) but… where is the transformer of roads/arrows? This is a very frequent abuse of language that you will no longer see with the same eyes. If in the previous drawing we had used flatMap instead of map (flatMap’s output is in the same category), we would have drawn a monad (although we need to check the monadic laws) instead of a functor. There are many reasons to use monads in our programs, but I will list the top 3 main reasons (in my humble opinion) to use them: - At the end of the day, we only want to use functions (remember: functional programming). - Imagine that you have a program that performs these two functions: f(x) = x+3 g(x,y) = List(x, y) We want these two functions to be executed in order, sequentially, and we want to do it using only functions. So, how could we do it? Using composition of functions. If we first want to execute g and then f on the result, we compose the functions in the next way: f ∘ g (x, y). - If instead of the previous g, we use g(x, y) = x / y, we have two problems: - If the type that accepts f is different from the one that returns g (g could return a float since it is a division, and f accepts an Int) we would clearly have a problem. - What would happen if the function g passed the values 2 and 0? Indeed, there would be a failure since it can not be divided by 0. We do not want exceptions in functional programming since an exception is not a function. What I mean is that the composition of functions needs one more ingredient to be 100% functional (this ingredient will also solve the two problems above). This ingredient, obviously (judging by the title of the post ;)), is the use of monads. The idea is to allow functions to return two types of results, so g would no longer be: g: Int, Int → Float, but we would do that g: Int, Int → Some | None and f(x): Some | None → Some | None. For example, we are going to undo the functions with the previous example: def g(x: Int, y: Int): Option[Float] = { y match { case 0 => return None case y => return Some(x.toFloat/y) } } And the same with f(x): def f(x: Option[Float]): Option[Int] = { x match { case None => None case Some(x) => Some(x.asInstanceOf[Int]+3) } } So our code would be g(x, y).flatMap (x => f (Option (x))). In this way, we have composed the two functions and avoided the exceptions, since if y is equal to 0 the result will be None. Summary and conclusions In short, we can say that a monad is an endofunctor which holds monadic laws. We can also say that thanks to the monads, we can compose functions, solving type problems and exceptions. Now that we know the category theory a little better and how some of these concepts are used in functional programming, we can use the tools provided in our programs to make a better code. Go, you can already tell your friends “I’ve seen things you people wouldn’t believe”. Notes 1. That is, if Frodo goes from The Shire to Erebor (f) and then stays in Erebor (identity) is the same as if he goes directly to Erebor. 2. That is, if we take five paths, for example that one which goes from The Shire to Erebor (f), that one which goes from Rivendell to the Shire (g), that one which goes from Mordor to Rivendell (h), that one which goes from Rivendell to Erebor (j) and that one which goes from Mordor to The Shire (k). Then (f ∘ g) ∘ h = f ∘ (g ∘ h) since f ∘ g = j and g ∘ h = k. 3. Obviously, the paths of these maps are carefully invented so that everything fits perfectly. 4. Monadic laws with composition operation: - F ∘ id = F (Left identity) - Id ∘ F = F (Right identity) - (F ∘ G) ∘ H = F ∘ (G ∘ H) (Associativity) 5. De Euclides a Java — Ricardo Peña Mari 6. Based on Nikolay Grozev’s one Resources and links - Paper from Phillip Wadler where he explains monads - Eugenio’s Moggi work about monads - The omniscient Bartosz Milewski - The Juan Pedro Villa-Isaza’s work around applications of category theory in functional programming - The Trevor Hartman’s post about monad laws in Scala - Cats library - Wikipedia’s articles on category theory and monads - A work about programming history: De Euclides a Java — Ricardo Peña Mari - The Nikolay Grozev’s post about functors - A stack overflow’s post - A post about functors and monads
https://learningactors.com/knowing-monads-through-the-category-theory/
CC-MAIN-2020-29
refinedweb
1,842
65.96
On Mar 27, 3:01 pm, "David L. Jones" <david.l.jo... at gmail.com> wrote: > On Mar 26, 8:51 pm, Kent <kent.y... at gmail.com> wrote: > > > ... Is > > there any convention how to manage python classes into .py files? > > > ... > > In above packages, each .py file contains one python class. And > > ClassName = Filename > > > ... > > Can anyone give some hint on it? would be great with reason. > > Overall, I don't think there is a single convention that anyone can > point to and everyone will at least acknowledge as convention. > > If you have multiple single-class files, then you will have > unnecessary redundancy referencing the classes from outside: > > # Module structure: mymodule/ > # __init.py__ > # someclass.py > import mymodule > c = mymodule.someclass.someclass() > > You can get around this with a Java-like statement: > > # Same module structure > from mymodule.someclass import someclass # or from ... import * > c = someclass() > > but you lose namespacing which can make code more difficult to read. I > think that this Java-style approach of pulling everything into the > current namespace is quite silly, since Python's module structure was > specifically designed in large part not to work like this. (Commence > flaming.) > > I tend to think in terms of coupling and cohesion. Within an > application, any classes, functions, data, etc. that are tightly > coupled are candidates to live in the same file. If you have a set of > classes that all inherit from a common set of base classes, then you > should probably consider putting the base and inherited classes > together in a file. That puts them in the same namespace, which makes > sense. > > Cohesion is the flip side: if a class is large, even if it is somewhat > coupled to other classes, it should probably go in its own file. In > general, use coupling as a guide to put more things into a single > file, and cohesion as a guide to break out parts into multiple files. > > D with this structure, import statements were *significantly* reduced. :)
https://mail.python.org/pipermail/python-list/2009-March/530663.html
CC-MAIN-2016-50
refinedweb
325
66.33
This chapter describes how to create custom ADF Faces rich client components. This chapter includes the following sections: Section 30.1, "Introduction to Custom ADF Faces Components" Section 30.2, "Setting Up the Workspace and Starter Files" Section 30.3, "Client-Side Development" Section 30.4, "Server-Side Development" Section 30.5, "Deploying a Component Library" Section 30.6, "Adding the Custom Component to an Application" The ADF Faces component library provides a comprehensive set of UI components that covers most of your requirements. However, there are situations when you will want to create a custom rich component that is specific to your application. A custom rich component will allow you to have custom behavior and perform actions that best suit the needs of your application. Note:Creating custom standard JSF components is covered in many books, articles, web sites, and the JavaServer Faces specification, therefore, it is not covered in this guide. This chapter is intended to describe how to create ADF Faces components. JSF technology is built to allow self-registering components and other framework parts. The core JSF runtime at web application startup accomplishes this by inspecting all JAR files in the class path. Any JAR files whose /META-INF/faces-config.xml file contains JSF artifacts will be loaded. Therefore, you can package custom ADF Faces components in a JAR file and simply add it into the web project. For each ADF Faces component, there is a server-side component and there can also be a client-side component. On the server, for JSPs, a render kit provides a base to balance the complex mixture of markup language and JavaScript. The server-side framework also adds a custom lifecycle to take advantage of the API hooks for partial page component rendering. On the client, ADF Faces provides a structured JavaScript framework for handling various nontrivial tasks. These tasks include state synchronization using partial page rendering. For more information about the ADF Faces architecture, see Chapter 3, "Using ADF Faces Architecture." ADF Faces components are derived from the Apache MyFaces Trinidad component library. Because of this, many of the classes you extend when creating a custom ADF Faces component are actually MyFaces Trinidad classes. For more information about the history of ADF Faces, including its evolution, see Chapter 1, "Introduction to ADF Faces Rich Client." Between the JSP and the JSF components is the Application class. The tag library uses a factory method on the application object to instantiate a concrete component instance using the mnemonic referred to as the componentType. A component can render its own markup but this is not considered to be a best practice. The preferred approach is to define a render kit that focuses on a strategy for rendering the presentation. The component uses a factory method on the render kit to get the renderer associated with the particular component. If the component is consumed in an application that uses Facelets, then a component handler creates the component. In addition to functionality, any custom component you create must use an ADF Faces skin to be able to be displayed properly with other ADF Faces components. To use a skin, you must create and register the skinning keys and properties for your component. This chapter describes only how to create and register skins for custom components. For more information about how skins are used and created in general, see Chapter 20, "Customizing the Appearance Using Styles and Skins." Tip:To work with ADF Faces components, your custom component must use at least the ADF Faces simpleskin, because the FusionFX, blafplus-rich, and blafplus-mediumskins inherit from the simple skin. Additionally, if there is any chance your component will be used in an Oracle WebCenter application, then your skin must also be registered with the simple.portletskin. An ADF Faces component consists of both client-side and server-side resources. On the client side, there is the client component, the component peer (the component presenter), and any events associated with the client component. On the server side, there is the server component, server component events, and event listeners. Also, there is a component renderer, a component JSP tag, a composite resource loader, a JavaScript resource loader, and a resource bundle. The component also has several configuration and support files. Together, these classes, JavaScripts, and configuration files are packaged into a JAR file, which can be imported as a library into an application and used like other components. You can use JDeveloper to set up the application workspace and project in which you develop the custom component. After you have created the workspace and project, you add starter working files for the required classes, JavaScript files, and configuration files that make up the custom component. During development, you edit and add code to each of these files, specific for the custom component. The development process is as follows: Create an application, workspace, and project as an environment for development. This includes adding library dependencies and registering XML schemas. You should not create the component in the same application in which you plan to use the component. Create a deployment profile for packaging the component into a JAR file. Create the following starter configuration and support files: faces-config.xml: Used to register many of the artifacts used by the component. trinidad-skins.xml: Used to register the skins that the component uses. Cascading style sheet: Used to define the style properties for the skins. Render kit resource loader: Allows the application to load all the resources required by the component. adf-js-features.xml: Allows the component to become part of a JavaScript partition. For more information about partitions, see Section 1.2.1.2, "JavaScript Library Partitioning." JSP tag library descriptor (TLD) (for JSP): Defines the tag used on the JSF page. Component handler (for Facelets): Defines the handler used to render the component. Create the following client-side JavaScript files: Client Component: Represents the component and its attributes on the client. Client Peer: Manages the document object model (DOM) for the component. Client Event: Invokes processing on the client and optionally propagates processing to the server. Create the following server-side Java files: Server Component class: Represents the component on the server. Server Event Listener class: Listens for and responds to events. Server Events class: Invokes events on the server. Server Renderer class: Determines the display of the component. Resource Bundle class: Defines text strings used by the component. Further develop the component by testing and debugging the JavaScript and Java code. You can use the JDeveloper debugger to set breakpoints and to step through the code. You can also use Java logging features to trace the execution of the component. Deploy the component into a JAR file. Test the component by adding it into an application. Table 30-1 lists the client-side and server-side component artifacts for a custom component. The configuration and support files are not included in the table. To help illustrate creating a custom component, a custom component named tagPane will be used as an example throughout the procedures. The tagPane custom component is created for reuse purposes. Although the tagPane presentation might have been implemented using a variety of existing components, having a single custom component simplifies the work of the page developer. In this case, there may be a trade-off of productivity between the component developer and the page developers. If this particular view composition were needed more than once, the development team would reduce costs by reducing the lines of code and simplifying the task of automating a business process. The tagPane component displays a series of tags and their weighted occurrences for a set of files. Tags that are most frequently used are displayed in the largest font size, while the least used tags are displayed in the smallest font size. Each tag is also a link that triggers an event, which is then propagated to the server. The server causes all the files that contain an occurrence of that tag to then be displayed in a table. Figure 30-1 shows how the tagPane component would be displayed if it was added below the Search pane in the File Explorer application. The tagPane component receives a collection of tags in a Java Map collection. The key of the map is the tag name. The value is a weight assigned to the tag. In the File Explorer application, the weight is the number of times the tag occurs and in most cases, the number of files associated with the tag. The tag name is displayed in the body text of a link and the font size used to display the name represents the weight. Each tag's font size will be proportionally calculated within the minimum and maximum font sizes based upon the upper and lower weights assigned to all tags in the set of files. To perform these functions, the tagPane custom component must have both client-side and server-side behaviors. On the server side, the component displays the map of tags by rendering HTML hyperlinks. The basic markup rendering is performed on the server. A custom event on the component is defined to handle the user clicking a link, and then to display the associated files. These server-side behaviors are defined using a value expression and a method expression. For example, the tagPane component includes: A tag property for setting a Map<String, Number> collection of tags. A tagSelectionListener method-binding event that is invoked on the server when the user clicks the link for the tag. An orderBy property for displaying the sequence of tags from left to right in the order of descending by weight or alternatively displaying the tag links ascending alphabetically. To allow each tag to be displayed in a font size that is proportional to its weight (occurrences), the font size is controlled using an inline style. However, each tag and the component's root markup node also uses a style class. Example 30-1 shows how the tagPane component might be used in a JSF page. Example 30-1 tagPane Custom Component Tag in a JSF Page <acme:tagPane Because the tagPane component must be used with other ADF Faces components, it must use the same skins. Therefore, any styling is achieved through the use of cascading style sheets (CSS) and corresponding skin selectors. For example, the tagPane component needs skin selectors to specify the root element, and to define the style for the container of the links and the way the hyperlinks are displayed. Example 30-2 shows a sample set of style selectors in the CSS file for the tagPane component. Example 30-2 CSS Style Selectors for the Sample Custom Component acme|tagPane - root element acme|tagPane::content - container for the links acme|tagPane::tag - tag hyperlink You may need to specify the HTML code required for the custom component on the server side. Example 30-3 shows HTML server-side code used for the tagPane component. Example 30-3 HTML Code for the Server Side <div class=" acme|tagPane"> <span class=" acme|tagPane::content "> <a class=" acme|tagPane::tag" href="#" style="font-size:9px;">Tag1</a> <a class=" acme|tagPane::tag" href="#" style="font-size:10px;">Tag2</a> </span> </div> On the client side, the component requires a JavaScript component counterpart and a component peer that defines client-side behavior. All DOM interaction goes through the peer (for more information, see Chapter 3, "Using ADF Faces Architecture"). The component peer listens for the user clicking over the hyperlinks that surround the tag names. When the links are clicked, the peer raises a custom event on the client side, which propagates the event to the server side for further processing. Table 30-2 lists the client-side and server-side artifacts for the tagPane component. Referencing the naming conventions in Table 30-1, the component_package is com.adfdemo.acme and the prefix is Acme. Use JDeveloper to set up an application and a project to develop the custom component. After your skeleton project is created, you can add a deployment profile for packaging the component into a JAR file. During the early stages of development, you create starter configuration and support files to enable development. You may add to and edit these files during the process. You create the following configuration files: META-INF/faces-config.xml: The configuration file required for any JSF-based application. While the component will use the faces-config.xml file in the application into which it is eventually imported, you will need this configuration file for development purposes. META-INF/trinidad-skins.xml: The configuration information for the skins that the component can use. Extend the simple skin provided by ADF Faces to include the new component. META-INF/ package_directory /styles/ skinName.css: The style metadata needed to skin the component. META-INF/servlets/resources/ name .resources: The render kit resource loader that loads style sheets and images from the component JAR file. The resource loader is aggregated by a resource servlet in the web application, and is used to configure the resource servlet. In order for the servlet to locate the resource loader file, it must be placed in the META-INF/servlets/resources directory. META-INF/adf-js-features.xml: The configuration file used to define a feature. The definition usually includes a component name or description of functionality that a component provides, and the files used to implement the client-side component. META-INF/ prefix_name .tld (for JSP): The tag definition library for the component. If the consuming web application is using JSP, the custom component requires a defined TLD. The TLD file will be located in the META-INF folder along with the faces-config.xml and trinidad-skins.xml files. META-INF/ prefix_name .taglib.xml (for Facelets): The tag library definition for the component when the consuming application uses Facelets. This file defines the handler for the component. For example, for the tagPane component, the following configuration files are needed: META-INF/faces-config.xml META-INF/trinidad-skins.xml META-INF/acme/styles/acme-simple-desktop.css META-INF/servlets/resources/acme.resources META-INF/acme.tld META-INF/acme.taglib.xml META-INF/adf-js-features.xml After the files are set up in JDeveloper, you add content to them. Then, you create the client-side files nd server-side files. For more information, see Section 30.3, "Client-Side Development," and Section 30.4, "Server-Side Development." This chapter assumes you have experience using JDeveloper and are familiar with the steps involved in creating and deploying an application. For more information about using JDeveloper to create applications, see Chapter 2, "Getting Started with ADF Faces." For more information about deployment, see the "Deploying Fusion Web Applications" chapter of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework. To set up the custom component development environment in JDeveloper: Create an application to serve as a development container for the component. Use JDeveloper to create a workspace and project. For procedures on creating an application, see Section 2.2, "Creating an Application Workspace." When selecting an application template, select the Generic Application template. Note:Do not select any other application template, or add any technologies to your application. Because the custom component will be packaged into a JAR file, you do not need to create unnecessary folders such as public_htmlthat JDeveloper creates by default when you use a template specifically for web applications, or add web technologies. Instead, create the starter configuration file from the XML schemas. Prepare the project to be deployed as a JAR file by creating a new deployment profile. In the Application Navigator, right-click the project and choose New. In the New Gallery, select Deployment Profile and then ADF Library JAR File, and click OK. In the Create Deployment Profile dialog, enter a name for the Deployment Profile name. For example, the tagPane component might use adf-richclient-demo-acme. In the Edit JAR Deployment Profile Properties dialog, click OK. In the Project Properties dialog, add library dependencies. Select Libraries and Classpath in the left pane. Click Add Library. In the Add Library dialog, select ADF Faces Runtime 11, Facelets Runtime (if using Facelets), JSF 1.2, and JSP Runtime, and click OK. Click OK to close the Project Properties dialog. Register XML schemas. The custom component requires several XML configuration files. You can use JDeveloper to register the XML schemas associated with these configuration files. You must add schemas for three configuration files: faces-config.xml, trinidad-skins.xml, and trinidad-config.xml. By preregistering these schemas, you can create a template XML configuration file without having to know the specifics about the markup structure. The names and locations of the schemas are assumed by the base installation of JDeveloper. Select Tools > Preferences. In the Preferences dialog, select XML Schemas in the left pane, and click Add. In the Add Schema dialog, click Browse to navigate to the XML schemas included in your JDeveloper build, as shown in Table 30-3. Note:In the Add Schema dialog, make sure Extension is set to .xml. If you change it to XSD, when you later create XML files, you will not be able to use the XML schema you have created. Although the custom component will be registered in the consuming application's faces-config.xml file, during development, the workspace requires a faces-config.xml file. Note:Do not use any of JDeveloper's declarative wizards or dialogs to create the faces-config.xmlfile. These declarative methods assume you are creating a web application, and will add uneccessary artifacts to your custom component application. To create a faces-config.xml file for the custom component: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General, select XML, and then Select XML Document from XML Schema, and click OK. In the Create XML from XML Schema dialog: XML File: Enter faces-config.xml. Directory: Append \src\META-INF to the end of the directory entry. Select Use Registered Schemas and click Next. Enter the following: Target Namespace: Select. Root Element: Select faces-config. Leave the defaults for the other fields, and click Finish. The new file will automatically open in the XML editor. Add the following schema information after the first line in the file: <?xml version="1.0" encoding="US-ASCII"?> <faces-config Adding a schema provides better WYSIWYG tool support. Add a MyFaces Trinidad skins file to register the component's CSS file, which is used to define the component's styles. To create a trinidad-skins.xml file for the custom component: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General and select XML. Select XML Document from XML Schema and click OK. In the Create XML from XML Schema dialog: XML File: Enter trinidad-skins.xml. Directory: Append \ src\META-INF to the end of the Directory entry. Select Use Registered Schemas, and click Next. Enter the following: Target Namespace: Select. Root Element: Select skins. Click Finish. The new file will automatically open in the XML editor. Add a cascading style sheet to define component's style. To create a cascading style sheet for the custom component: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General, select File and click OK. In the Create File dialog: Enter a file name, for example, acme-simple-desktop.css. Append \src\META-INF\ component_prefix \styles to the end of the Directory entry, where component_prefix is the prefix that will be used in the component library. For example, for the tagPane component, acme is the prefix, therefore, the string to append would be \META-INF\acme\styles. Create an empty file and add the fully qualified classpath to the custom resource loader. To create a resource loader for the custom component: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General and then File, and click OK. In the Create File dialog: Enter component_prefix .resources for File Name, where component_prefix will be the prefix used in the component library. For example, for the tagPane component, acme is the prefix, therefore, the string to enter is acme.resources. Append \src\META-INF\sevlets\resources\ to the end of the Directory entry. You need a JSP TLD file to work with JSF pages. To create a JavaServer Pages TLD file for the custom component: In the Application Navigator, right-click the project and select New. In the New Gallery, expand Web Tier and select JSP. Select JSP Tag Library and click OK. In the Create JavaServer Page Tag Library dialog, select Deployable and click Next. Enter the following: Tag Library Descriptor Version: Select 2.1. Short Name: A name. For example, for the tagPane component, you would enter acme. Tag Library URI: A URI for the tag library. For example, for the tagPane component, you would enter. Click Next and optionally enter additional tag library information, then click Finish. Add a features file to define the JavaScript files associated with the custom component, including the files for the client component, the client peer, and the client events. To create an adf-js-features.xml file for the custom component: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General and select XML. In the right pane, select XML Document from XML Schema and click OK. In the Create XML from XML Schema dialog: XML File: Enter adf-js-features.xml. Directory: Append \ src\META-INF to the end of the Directory entry. Select Use Registered Schemas, and click Next. Do the following: Target Namespace: Select. Root Element: Select features. Click Finish. The new file will automatically open in the XML editor. If a consuming application uses Facelets, then you must define the handler for the component. To create a Facelets tag library file: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General and select XML. In the right pane, select XML Document and click OK. In the Create XML file dialog, enter the following: File Name: Enter prefix_name .taglib.xml Directory: Append \ src\META-INF to the end of the Directory entry. Copy and paste the code shown in Example 30-4: Example 30-4 Code for Facelets Tag Library Configuration File <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE facelet-taglib PUBLIC "-//Sun Microsystems, Inc.//DTD Facelet Taglib 1.0//EN" ""> <facelet-taglib <namespace></namespace> <tag> <tag-name>tagPane</tag-name> <handler-class> oracle.adfinternal.view.faces.facelets.rich.RichComponentHandler </handler-class> </tag> </facelet-taglib> Replace the namespace and tag-name code shown in bold with code appropriate for your application. After the JDeveloper workspace and configuration files have been created, you can create and code the client-side JavaScript files. When you have finished with the client-side development, create the server-side files, as described in Section 30.4, "Server-Side Development." Best Practice:Because JavaScript libraries do not have namespaces, you should create all JavaScript object names for the custom component using the same prefix. You do not need to do this on the server because the server-side Java package names will prevent name collisions. For example, for the tagPanecomponent, the client-side JavaScript object names all have the acmeprefix. Client components hold state for properties that are not defined within the corresponding DOM element. These properties are bound to an associated DOM element using the clientId. The clientId uniquely defines a server-side component within the component tree representing a page. The DOM element holds the clientId within the Id attribute. Note:Place each JavaScript object in its own separate source file for best practice and consistency. Developing the client-side component requires creating a JavaScript file for the component, the peer, and the component event. In addition to the client component, client-side events must be defined. The tagPane component's client-side event is fired and propagated to the server when the user clicks one of the three file types. The client event passed to the server is queued so that the target server-side component can take the appropriate action. Finally, the custom component requires a client peer. The peer is the component presenter. Peers act as the links between a client component and an associated DOM element. Client peers add client behaviors. A peer must be bound to a component through a registration method. As with the client component, the associated peer is bound to a DOM element using the component's clientId. There are two types of peers, statefull and stateless. Some complex client components require the peer to hold state and thereby need to use a statefull peer. This type of peer is always bound to a DOM element. Statefull peers are less common than stateless peers. Stateless peers do not hold state and one peer can be bound to multiple components. Stateless peers are the best performance option because they reduce the client footprint. This type of peer performs lazy content delivery to the component. Peers add behavior to the component by dynamically registering and listening for DOM events. Conceptually, a peer's function is similar to the role of a managed bean. However, the client component is not bound to the peer using EL like the server-side component is bound to a view model ( #{backingbean.callback}). The peer registers client component events in the InitSubclass (AdfRichUIPeer.addComponentEventHandlers("click")) callback method. The callback is assumed by using a naming convention of ( <Peer>.prototype.HandleComponent<Event>). The peer manages DOM event callbacks where the server-side component handles the linking using EL bindings to managed beans. For more information about client-side architecture, including peers, see Section 3.1, "Introduction to Using ADF Faces Architecture." The following section assumes you have already set up a custom component development template environment. This development environment includes the setting up of application workspace, projects, deployment profiles, and registering schemas. If you have not done so, see Section 30.2, "Setting Up the Workspace and Starter Files." Use JDeveloper to create a JavaScript file for the component. In it, you will define the component type for the component. To create the component JavaScript file: In the Application Navigator, right-click the project and click New. In the New Gallery, expand Web Tier and select HTML. Select JavaScript File and click OK. In the Create JavaScript File dialog, do the following: File Name: Enter the name of the client-side component. For example, for the tagPane component, you might enter AcmeTagPane.js. Tip:To prevent naming collisions, start the name with the component prefix. Directory: Enter the directory path of the component in a subdirectory under the src directory. For example, for the tagPane component, you might enter adfrichclient-demo-acme\src\oracle\adfdemo\acme\js\component. Open the JavaScript File in the editor and add the component code to define the component type. Example 30-5 shows the code that might be used for the tagPane component. Use JDeveloper to create a JavaScript file for the event. Add code to the JavaScript to perform the functions required when a event is fired, such as a mouse click. To create the JavaScript for the event: In the Application Navigator, right-click the project and select New. In the New Gallery, expand Web Tier and select HTML. Select JavaScript File and click OK. In the Create JavaScript File dialog, do the following: File Name: Enter the name of the client-side event. For example, for the tagPane component, you might enter AcmeTagSelectEvent\event. Open the JavaScript File in the editor and add the event code. Example 30-6 shows the event code that might be added for the tagPane component. Example 30-6 tagPane Event JavaScript /** * Fires a select type event to the server for the source component * when a tag is clicked. */ function AcmeTagSelectEvent(source, tag) { AdfAssert.assertPrototype(source, AdfUIComponent); AdfAssert.assertString(tag); this.Init(source, tag); } // make AcmeTagSelectEvent a subclass of AdfComponentEvent AdfObject.createSubclass(AcmeTagSelectEvent, AdfComponentEvent); /** * The event type */ AcmeTagSelectEvent.SELECT_EVENT_TYPE = "tagSelect"; /** * Event Object constructor */ AcmeTagSelectEvent.prototype.Init = function(source, tag) { AdfAssert.assertPrototype(source, AdfUIComponent); AdfAssert.assertString(tag); this._tag = tag; AcmeTagSelectEvent.superclass.Init.call(this, source, AcmeTagSelectEvent.SELECT_EVENT_TYPE);} /** * Indicates this event should be sent to the server */ AcmeTagSelectEvent.prototype.propagatesToServer = function() { return true; } /** * Override of AddMarshalledProperties to add parameters * sent server side. */ AcmeTagSelectEvent.prototype.AddMarshalledProperties = function( properties) { properties.tag = this._tag; } /** * Convenient method for queue a AcmeTagSelectEvent. */ AcmeTagSelectEvent.queue = function(component, tag) { AdfAssert.assertPrototype(component, AdfUIComponent); AdfAssert.assertString(tag); AdfLogger.LOGGER.logMessage(AdfLogger.FINEST, "AcmeTagSelectEvent.queue(component, tag)"); new AcmeTagSelectEvent(component, tag).queue(true); } /** * returns the selected file type */ AcmeTagSelectEvent.prototype.getTag = function() { return this._tag;} /** * returns a debug string */ AcmeTagSelectEvent.prototype.toDebugString = function() { var superString = AcmeTagSelectEvent.superclass.toDebugString.call(this); return superString.substring(0, superString.length - 1) + ", tag=" + this._tag + "]"; } /* * * Make sure that this event only invokes immediate validators * on the client. */ AcmeTagSelectEvent.prototype.isImmediate = function() { return true; } Use JDeveloper to create a JavaScript file for the peer. Add code to register the peer and bind it to the component. To create the peer JavaScript file: In the Application Navigator, right-click the project and select New. In the New Gallery, expand Web Tier and select HTML. Select JavaScript File and click OK. In the Create JavaScript File dialog, do the following: File Name: Enter the name of the client-side peer. For example, for the tagPane component, you might enter AcmeTagPanePeer\component. Open the JavaScript file in the editor and add code for the peer. In this code, you must create the peer, add event handling with respect to the DOM, and register the peer with the component. Example 30-7 shows the code that might be added for the tagPane component. Example 30-7 tagPane JavaScript Peer AdfRichUIPeer.createPeerClass(AdfRichUIPeer, "AcmeTagPanePeer", true); AcmeTagPanePeer.InitSubclass = function() { AdfLogger.LOGGER.logMessage(AdfLogger.FINEST, "AcmeTagPanePeer.InitSubclass()"); AdfRichUIPeer.addComponentEventHandlers(this, AdfUIInputEvent.CLICK_EVENT_TYPE); } AcmeTagPanePeer.prototype.HandleComponentClick = function(componentEvent) { AdfLogger.LOGGER.logMessage(AdfLogger.FINEST, "AcmeTagPanePeer.HandleComponentClick(componentEvent)"); // if the left mouse button was pressed if (componentEvent.isLeftButtonPressed()) { // find component for the peer var component = this.getComponent(); AdfAssert.assertPrototype(component, AcmeTagPane); // find the native dom element for the click event var target = componentEvent.getNativeEventTarget(); if (target && target.tagName == "A") { AdfLogger.LOGGER.logMessage(AdfLogger.FINEST, "File type element (A) found: " + componentEvent.toString()); var tag = target.firstChild.nodeValue; AdfAssert.assertString(tag); AdfLogger.LOGGER.logMessage(AdfLogger.FINEST, "tag :" + tag); // fire a select event AcmeTagSelectEvent.queue(component, tag); //cancel the native dom onclick to prevent browser actions based on the //'#' hyperlink. The event is of type AdfIEUIInputEvent. This event //will cancle the native dom event by calling //AdfAgent.AGENT.preventDefault(Event) componentEvent.cancel(); } // event has dom node } } // Register the peer with the component. This bit of script must // be invoked after the AcmeTagPane and AcmeTagSelectEvent objects // are created. This is enforced by the ordering of the script files // in the oracle.asfdemo.acme.faces.resource.AcmeResourceLoader. AcmeScriptsResourceLoader.AdfPage.PAGE.getLookAndFeel() .registerPeerConstructor("oracle.adfdemo.acme.TagPane", "AcmeTagPanePeer"); Now that you have created all the JavaScript files for the component, you can add the component to the adf-js-features.xml file you created. Follow the procedures documented in Section A.9.1, "How to Create a JavaScript Feature," omitting the steps for creating the XML files, as you have already done so. Example 30-8 shows the adf-js-features.xml file used for the tagPane component. Example 30-8 adf-js-features.xml File for the tagPane Component <?xml version="1.0" encoding="UTF-8" ?> <features xmlns=""> <feature> <feature-name>AcmeTagPane</feature-name> <feature-class> oracle/adfdemo/acme/js/component/AcmeTagPane.js </feature-class> <feature-class> oracle/adfdemo/acme/js/event/AcmeTagSelectEvent.js </feature-class> <feature-class> oracle/adfdemo/acme/js/component/AcmeTagPanePeer.js </feature-class> </feature> </features> Server-side development involves creating Java classes for: Event listener: This class listens for events and then invokes processing logic to handle the event. Events: You create an event in order to invoke the logic in the associated listener. Component: This class holds the properties that define behavior for the component. Resource bundle: This class holds text strings for the component. Renderer: This class determines how the component will be displayed in the client. Resource loader: This class is required only if your component contains images needed for skinning. After you have created the classes, add the component class and the renderer class to the faces-config.xml file. Then, complete the configuration files started in Section 30.2, "Setting Up the Workspace and Starter Files." The ADF Faces event API requires an event listener interface to process the event. The custom component has a dependency with the event and the event has a dependency with an event listener interface. The Java import statements must reflect these dependencies. You also must define the componentType for the component. To create the EventListener class: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General and select Java. Select Java Interface and click OK. In the Create Java Interface File dialog, do the following: Name: Enter a listener name. For example, for the tagPane component, you might enter TagSelectListener. Package: Enter a name for the package. For example, for the tagPane component, you might enter oracle.adfdemo.acme.faces.event. Open the Java file in the editor and add the following: Have the listener extend the javax.faces.event.FacesListener interface. Add an import statement, and import the FacesListener class and any other classes on which your event is dependent. Add a method signature that will process the new event. Even though you have not created the actual event, you can enter it now so that you will not have to enter it later. Example 30-9 shows the code for the tagPane event listener. Example 30-9 tagPane Event Listener Java Code package oracle.adfdemo.acme.faces.event; import javax.faces.event.AbortProcessingException; import javax.faces.event.FacesListener; public interface TagSelectListener extends FacesListener { /** * <p>Process the {@link TagSelectEvent}.</p> * @param event fired on click of a tag link * @throws AbortProcessingException error processing {@link TagSelectEvent} */ public void processTagSelect(TagSelectEvent event) throws AbortProcessingException; } You must create a server-side event that will be the counter representation of the JavaScript event created in Section 30.3.2, "How to Create a Javascript File for an Event." Server-side JSF events are queued by the component during the Apply Request Values lifecycle phase. Events propagate up to the UIViewRoot class after all the phases but the Render Response phase. Queued events are broadcast to the associated component. The server-side Java component must raise the server-side event, so you must create the event source file first to resolve the compilation dependency. To create the server-side event class: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General and select Java. Select Java Class and click OK. In the Create Java Class File dialog, do the following: Name: Enter an event name. For example, for the tagPane component, you might enter TagSelectEvent. Package: Enter the package name. For example, for the tagPane component, you might enter oracle.adfdemo.acme.faces.event. Extends: Enter a name for the class that the event class extends. This is usually javax.faces.event.FacesEvent. In the Optional Attributes section, select the following:. In the Access Modifiers section, select public. At the bottom, select Constructors from Superclass and Implement Abstract Methods. Example 30-10 shows the code for the event class. Example 30-10 tagPane Event Java Code package oracle.adfdemo.acme.faces.event; import javax.faces.component.UIComponent; import javax.faces.event.FacesEvent; import javax.faces.event.FacesListener; public class TagSelectEvent extends FacesEvent { /** * <p>Tag selected on the client.</p> */ private String tag = null; /** * <p>Overloade constructor passing the <code>source</code> * {@link oracle.adfdemo.acme.faces.component.TagPane} component and the * selected <code>tag</code>. * </p> * @param source component firing the event * @param tag selected tag link type */ public TagSelectEvent(UIComponent source, String tag) { super(source); this.tag = tag; } /** * <p>Returns <code>true</code> if the <code>facesListener</code> is a * {@link TagSelectListener}.</p> * * @param facesListener listener to be evaluated * @return <code>true</code> * if <code>facesListener</code> instancof {@link TagSelectListener} */ public boolean isAppropriateListener(FacesListener facesListener) { return (facesListener instanceof TagSelectListener); } /** * <p>Delegates to the <code>processTagSelect</code> * method of a <code>FacesListener</code> * implementing the {@link TagSelectListener} interface. * * @param facesListener target listener realizing {@link TagSelectListener} */ public void processListener(FacesListener facesListener) { ((TagSelectListener) facesListener).processTagSelect(this); } /** * @return the tag that was selected triggering this event */ public String getTag() { return tag; } } A JSF component can be described as a state holder of properties. These properties define behavior for rendering and how a component responds to user interface actions. When you are developing the component class, you identify the types of the needed properties. You also define the base component that it will extend from the MyFaces Trinidad Framework. For example, the tagPane component extends the UIXObject in MyFaces Trinidad. Most components will have several properties that should be implemented. Some of the properties are inherited from the base class, and some are required for the rich client framework. Other properties are required because they are best practice. And finally, some properties are specific to the functionality of the custom component. For example, the tagPane component has the properties shown in Table 30-4. ADF Faces and MyFaces Trinidad component libraries are defined differently from other libraries. A JSF component has a collection called attributes that provides access to component properties (using the Java simple beans specification) through a MAP interface. The collection also holds value pairs that do not correspond to a component's properties. This concept is called attribute transparency. The JSF runtimes (both MyFaces Trinidad and the JSF reference implementation) implement this concept using the Java reflection API. My Faces Trinidad defines its own internal collection, which does not use the Java reflection API. This difference means that it is more efficient than the base implementation. The solution in MyFaces Trinidad collects more metadata about the component properties. This metadata declares state properties, which allows the base class to fully implement the StateHolder interface in a base class. My Faces Trinidad extends the javax.faces.component.UIComponent class with the org.apache.trinidad.component.UIXComponent class, followed by a complete component hierarchy. To ease code maintenance, the framework has a strategy for generating code based on configuration files and templates. This component strategy is a trade-off in terms of development. It requires more coding for defining properties, but you will not have to code the two methods ( saveState, restoreState) for the StateHolder interface for each component. Note:Do not have your custom component extend from any ADF Faces implementation packages. These implementations are private and might change. Use JDeveloper to create a Java file for the component. Create a Type bean to hold property information and define a PropertyKey for each property. Then, generate accessors for the private attributes. To create the component class: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General and select Java. Select Java Class. Click OK. In the Create Java Class File dialog, do the following: Name: Enter a component name. For example, for the tagPane component, you might enter TagPane. Package: Enter a name for the package. For example, for the tagPane component, you might enter oracle.adfdemo.acme.faces.component. Extends: Enter a name for the class the component class extends. For example, for the tagPane component, you would enter org.apache.myfaces.trinidad.component.UIXObject. In the Optional Attributes section, select the following:. In the Access Modifiers section, select public. At the bottom, select Constructors from Superclass, and Implement Abstract Methods. In the source editor, create a Type bean that contains component property information. This static class attribute shadows an attribute with the same name in the superclass. The type attribute is defined once per component class. Through the Type constructor, you pass a reference to the superclass's Type bean, which copies property information. For example, the tagPane class would contain the following constructor: static public final FacesBean.Type TYPE = new FacesBean.Type(UIXObject.TYPE); For each property, define a static PropertyKey that is used to access the properties state. Use the TYPE reference to register a new attribute. Specify the property type using the class reference. The component data type should correspond to the component property. There is another overload of the registerKey method that allows you to specify state information. The default assumes the property is persistent. Example 30-11 shows the PropertyKey methods for the tagPane component. Example 30-11 PropertyKey Definition /** * <p>Custom CSS applied to the style attribute of the root markup node.</p> */ static public final PropertyKey INLINE_STYLE_KEY = TYPE.registerKey("inlineStyle", String.class); /** * <p>Custom CSS class to the class attribute of the root markup node.</p> */ static public final PropertyKey STYLE_CLASS_KEY = TYPE.registerKey("styleClass", String.class); Right-click in the editor and choose Generate Accessors. In the Generate Accessors dialog, click Select All, ensure the Scope is set to Public, and click OK. This allows JDeveloper to generate get and set methods for the private attributes. Then, remove the private attribute and replace with calls to getProperty(PropertyKey) and getProperty(PropertyKey). Example 30-12 shows the code after replacing the private attribute. Example 30-12 Component Properties public void setInlineStyle(String newinlineStyle) { // inlineStyle = newinlineStyle; setProperty(INLINE_STYLE_KEY, newinlineStyle); } /** * <p>CSS value applied to the root component's style attribute.</p> * * @return newinlineStyle CSS custom style text */ public String getInlineStyle() { // return inlineStyle; return (String) getProperty(INLINE_STYLE_KEY); } You may need to override any methods to perform specific functions in the component. For example, to allow your component to participate in partial page rendering (PPR), you must override the getBeanType method, as shown in Example 30-13. /** * <p>Exposes the <code>FacesBean.Type</code> for this class through a protected * method. This method is called but the <code>UIComponentBase</code> superclass * to setup the components <code>ValueMap</code> which is the container for the * <code>attributes</code> collection.</p> * * @return <code>TagPane.TYPE</code> static property */ @Override protected FacesBean.Type getBeanType() { return TYPE; } Refer to the ADF Faces JavaDoc for more information about the class your component extends, and the methods you may need to override. For the tagPane component, the component must act on the event fired from the client component. A reference to the source component is passed as a parameter to the event's constructor. For the tagPane component, the broadcast method checks if the event passed in using the formal parameter is a TagSelectEvent. If it is, the broadcast method invokes the method expression held by the tagSelectListener attribute. Most events have an immediate boolean property that specifies the lifecycle phase in which the event should be invoked. If the immediate attribute is true, the event is processed in the Apply Values phase; otherwise, the event is processed in the Invoke Application phase. For more information, see Chapter 4, "Using the JSF Lifecycle with ADF Faces." Example 30-14 shows the overwritten broadcast method for the tagPane component. Example 30-14 The broadcast Method in the tagPane Component /** * @param facesEvent faces event * @throws AbortProcessingException exception during processing */ @Override public void broadcast(FacesEvent facesEvent) throws AbortProcessingException { // notify the bound TagSelectListener if (facesEvent instanceof TagSelectEvent) { TagSelectEvent event = (TagSelectEvent) facesEvent; // utility method found in UIXComponentBase for invoking method event // expressions broadcastToMethodExpression(event, getTagSelectListener()); } super.broadcast(facesEvent); } After creating the component class, register the component by adding it to the /META-INF/faces-config.xml file. By defining the component in the faces configuration file packaged with the JAR project, you ensure that component is automatically recognized by the JSF runtime during web application startup. To register the component, enter the component type, which is a logical name used by the applications factory to instantiate an instance of the component. For example, the tagPane component's type is oracle.adfdemo.acme.TagPane. You also need to add the fully qualified class path for the component, for example oracle.adfdemo.acme.faces.component.TagPane. To register a custom component: In the Application Navigator, double-click the faces-config.xml file. Click the Overview tab and click the Components navigation tab. Click the Add icon and enter the type and class for the component. Optionally, add any attributes, properties, or facets. Example 30-15 shows the tagPane component defined within a faces-config.xml file. Example 30-15 tagPane Component Added to the faces-config.xml File <?xml version="1.0" encoding="UTF-8" ?> <faces-config <application> </application> <component> <component-type>oracle.adfdemo.acme.TagPane</component-type> <component-class>oracle.adfdemo.acme.faces.component.TagPane </component-class> </component> Resource bundles are used to store information for the component, such as text for labels and messages, as well as translated text used if the application allows locale switching. Skins also use resource bundles to hold text for components. Because your custom component must use at least the simple skin, you must create at least a resource bundle for that skin. For a custom component, create a Java file for the resource bundle. For more information about resource bundle classes, see Section 20.3, "Defining Skin Style Properties." Tip:You can also use a properties file for your resources. To create the resource bundle class: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General and select Java. Select Java Class and click OK. In the Create Java Class File dialog, do the following: Name: Enter a resource bundle name. The name should reflect the skin with which it will be used. For example, for the sample component, you might enter AcmeSimpleDesktopBundle. Package: Enter a name for the package. For example, for the sample component, you might enter oracle.adfdemo.acme.faces.resource. Extends: For resource bundles, you must enter java.util.ListResourceBundle. In the Optional Attributes section, select the following:. In the Access Modifiers section, select public. At the bottom, select Constructors from Superclass and Implement Abstract Methods. Add any keys and define the text as needed. For more information about creating resource bundles for skins, see Section 20.3.1, "How to Apply Skins to Text." Example 30-16 shows the resource bundle code for the tagPane component. Example 30-16 tagPane Resource Bundle Java Code package oracle.adfdemo.acme.faces.resource; import java.util.ListResourceBundle; /** * <p>Holds properties used by the components bundled in the jar project. * This bundle is part of the trinidad component skin that is configured * in the "/META-INF/trinidad-skins.xml" file. Component Renderers * will use the <code>RenderingContext</code> to lookup a key by calling * the <code>getTranslatedString(key)</code> method.</p> */ public class AcmeSimpleDesktopBundle extends ListResourceBundle { /** * <p>Returns a two dimensional object array that represents a resource bundle . * The first * element of each pair is the key and the second the value.</p> * * @return an array of value pairs */ protected Object[][] getContents() { return new Object[][] { {"AcmeTagPane_tag_title","Tag Weight: {0}"} }; } } To register the resource bundle for the simple desktop skin and any other desired skins, double-click the /META-INF/trinidad-skins.xml file to open it and do the following: In the Structure window, select skin-addition. In the Property Inspector, enter a skin ID. For the simple skin ID, enter simple.desktop. In the Structure window, right-click skin-addition and choose Insert inside skin-addition > bundle-name. In the Property Inspector, enter the fully qualified name of the resource bundle just created. Note:JDeveloper adds translation-sourceand bundle-nameelements as comments. Instead of declaratively creating another bundle-nameelement, you can manually enter the bundle-namevalue in the generated element, and then remove the comment tag. Example 30-17 shows the code for registering the tagPane resource bundle with the simple skin (you will add the style-sheet-name element value in a later step). Example 30-17 Registering a Resource Bundle with a Skin <skins xmlns=""> <skin-addition> <skin-id>simple.desktop</skin-id> <style-sheet-name></style-sheet-name> <bundle-name> oracle.adfdemo.acme.faces.resource.AcmeSimpleDesktopBundle </bundle-name> </skin-addition> </skins> ADF Faces components delegate the functionality of the component to a component class, and when the consuming application uses JSPs, devices. Renderers are qualified in a render kit by family and renderer type. The family is a general categorization for a component, and should be the same as the family defined in the superclass. You do not have to override the getFamily() method in the component because the component will have the method through inheritance. To create the renderer class: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General then select Java. Select Java Class and click OK. In the Create Java Class File dialog, do the following: Name: Enter a renderer name. For example, for the tagPane component, you might enter TagPaneRenderer. Package: Enter a name for the package. For example, for the tagPane component, you might enter oracle.adfdemo.acme.faces.render. Extends: Enter oracle.adf.view.rich.render.RichRenderer. In the Optional Attributes section, select the following:. In the Access Modifiers section, select public. At the bottom, select Constructors from Superclass and Implement Abstract Methods. Add any needed functionality. For example, the skinning functionality provides an API you can use to get the CSS style properties for a given CSS selector during rendering of the component. This API is useful if you need to do conditional rendering based on what styling is set. For more information, see RenderingContext#getStyles and Styles#getSelectorStyleMap in the MyFaces Trinidad Javadoc at. After you create the renderer, register it using the faces-config.xml configuration file. If you want the custom component to work with the other ADF Faces components, you must use the same render kit ID that ADF Faces components use. Tip:The most granular level that JSF allows for defining a render kit is at the view root. To register the render kit and renderer: In the Application Navigator, double-click the faces-config.xml file to open it in the editor. Select the Overview tab and then select the Render Kits navigation tab. Click the Add icon for the Render Kits and enter oracle.adf.rich for the render kit ID. Register your renderer by clicking the Add icon for Renderers and doing the following: Family: Enter the class that the component extends. For example, for the tagPane component, you would enter org.apache.myfaces.trinidad.Object. Type: Enter the type for the component. For example, for the tagPane component, you would enter oracle.adfdemo.acme.TagPane. This must match the renderer type. Class: Enter the fully qualified class path to the renderer created in Section 30.4.7, "How to Create a Class for a Renderer." For example, for the tagPane component, you would enter oracle.adfdemo.acme.faces.render.TagPaneRenderer. Example 30-18 shows the registration of the tagPane component render kit and renderer. Example 30-18 tagPane Renderer Added to the faces-config.xml File <render-kit> <render-kit-id>oracle.adf.rich</render-kit-id> <renderer> <component-family>org.apache.myfaces.trinidad.Object</component-family> <renderer-type>oracle.adfdemo.acme.TagPane</renderer-type> <renderer-class>oracle.adfdemo.acme.faces.render.TagPaneRenderer </renderer-class> </renderer> </render-kit> To use the component on a JSP page, you create a custom tag that will instantiate the custom component. The JSP tag has nothing to do with rendering because the component's renderer will actually perform that task. In JSF 1.1, the JSP tag would invoke rendering on the component after creating and adding it to the component tree. This caused problems because the non-JSF/JSP tags were writing to the same response writer. The timing of the interleaving did not work out for components that rendered their own child components. Note:(An application that uses Facelets uses a handler to instantiate the component. For more information, see Section 30.2.8, "How to Add a Facelets Tag Library Configuration File") In JSF 1.2, the target for Java EE 5 (Servlet 2.5, JSP 2.1), most of the JSP problems were fixed. The JSF/JSP component acts as a component factory that is responsible only for creating components. This means that the rendering response phase is divided into two steps. First the component tree is created, and then the tree is rendered, instead of rendering the components as the component tree was being built. This functionality was made possible by insisting that the entire view be represented by JSF components. The non-JSF/JSP generates markup that implicitly becomes a JSF verbatim component. As a result of changing these mechanics, in JSF 1.2, custom JSP tags extend the javax.faces.webapp.UIComponentELTag class. The encodeBegin, encodeChildren, and encodeEnd methods in the JSP tag have been deprecated. These methods once made corresponding calls to the component. Because the view root in JSF 1.2 does the rendering, all the work can be done in the doStartTag and doEndTag methods. MyFaces Trinidad has its own version of this base class that you will use. The org.apache.myfaces.Trinidad.webapp.UIComponentELTag hooks into the components property bag and makes coding JSPs simpler. The tag class includes the creation of the component's properties. You must choose tag properties carefully. There are some properties that you can ignore for tag implementation, but they may be required as TLD attributes. The following three attributes are implemented by superclasses and shared by many components through Java inheritance: id binding rendered Do not implement the id attribute because the id attribute is implemented by the superclass javax.faces.webapp.UIComponentTagBase. The superclass javax.faces.webapp.UIComponentELTag implements the other two attributes, binding and rendered. Therefore, you do not need to add these to your tag class. In the Application Navigator, right-click the project and select New. In the New Gallery, expand General then select Java. Select Java Class and click OK. In the Create Java Class File dialog, do the following: Name: Enter a tag name. For example, for the tagPane component, you might enter TagPaneTag. Package: Enter a name for the package. For example, for the tagPane component, you might enter oracle.adfdemo.acme.faces.taglib. Class: Enter org.apache.myfaces.trinidad.webapp.UIXComponentELTag. In the Optional Attributes section, select the following:. In the Access Modifiers section, select public. At the bottom, select Constructors from Superclass and Implement Abstract Methods. In the source editor, add all the attributes to the file. Example 30-19 shows the code for the attributes for the TagPaneTag class. Example 30-19 Attributes in the TagPaneTag Class public class TagPaneTag extends UIXComponentELTag { private ValueExpression _partialTriggers = null; private ValueExpression _visible = null; private ValueExpression _inlineStyle = null; private ValueExpression _styleClass = null; private ValueExpression _tags = null; private ValueExpression _orderBy = null; private MethodExpression _tagSelectListener = null; To declaratively generate the accessor methods for the attributes, right-click the file in the source editor and choose Generate Accessors. In the Generate Accessors dialog, click Select All, set the Scope to public and click OK. Add the render type and component type to the class. The component type will be used by the superclass to instantiate the component using the application's factory method, createComponent(componentType). Example 30-20 shows the code for the TagPane Tag class, where both the component type and render type are oracle.adfdemo.acme.TagPane. Example 30-20 Component Type and Render Type for the TagPaneTag Class public String getComponentType() { return COMPONENT_TYPE; } public String getRendererType() { return RENDERER_TYPE; } /** * <p>This component's type, <code>oracle.adfdemo.acme.TagPane</code></p> */ static public final String COMPONENT_TYPE = "oracle.adfdemo.acme.TagPane"; /** * <p>Logical name given to the registered renderer for this component.</p> */ static public final String RENDERER_TYPE = "oracle.adfdemo.acme.TagPane"; Override the setProperties method from the superclass that has a single formal parameter of type FacesBean. This is a MyFaces Trinidad version on the base UIComponentELTag, but it is passed the components state holder instead of the component reference. The job of the setProperties method is to push the JSP tag attribute values to the component. Example 30-21 shows the overridden method for the tagPane Tag class. Example 30-21 Overridden setProperties Method in the TagPaneTag Class @Override protected void setProperties(FacesBean facesBean) { super.setProperties(facesBean); setStringArrayProperty(facesBean, TagPane.PARTIAL_TRIGGERS_KEY, _partialTriggers); setProperty(facesBean, TagPane.VISIBLE_KEY, _visible); setProperty(facesBean, TagPane.INLINE_STYLE_KEY, _inlineStyle); setProperty(facesBean, TagPane.STYLE_CLASS_KEY, _styleClass); setProperty(facesBean, TagPane.TAGS_KEY, _tags); setProperty(facesBean, TagPane.ORDER_BY_KEY, _orderBy); facesBean.setProperty(TagPane.TAG_SELECT_LISTENER_KEY, _tagSelectListener); } A tag library descriptor (TLD) provides more information on the Java Class to the JSP compilation engine and IDE tools (TLDs are not used in applications that use Facelets). Associate the tag library with a URI, assign a version, and give it a name. You should have already performed this step when you created the tag library stub file in Section 30.2.6, "How to Add a JavaServer Pages Tag Library Descriptor File." Open the skeleton TLD file. In the Component Palette, drag and drop a tag element. In the Insert tag dialog, do the following: name: Enter the name of the component. For example, for the tagPane component, you might enter tagPane. body-content: Enter JSP. tag-class: Click the ellipses button and navigate to the components tag class file. Define each of the attributes as follows. For each attribute: In the Structure window, right-click the tag element and choose Insert inside tag > attribute. In the Insert Attribute dialog, enter a value for the name. This should be the same as the name given in the tag class. In the Structure window, select the attribute and in the Property Inspector, set any attribute values. There are three types of elements to define for each attribute. The <id> element is a simple string. Additionally attributes can be either deferred-value or deferred-method attributes. These allow late (deferred) evaluation of the expression. Now that JSP and JSF share the same EL engine, the compiled EL can be passed directly to the component. Example 30-22 shows the TLD for the tagPane component. Example 30-22 tagPane acme.tld Tag Library Descriptor Code <?xml version = '1.0' encoding = 'windows-1252'?> <taglib xmlns: <description>Acme Corporation JSF components</description> <display-name>acme</display-name> <tlib-version>1.0</tlib-version> <short-name>acme</short-name> <uri></uri> <tag> <description> </description> <name>tagPane</name> <tag-class>oracle.adfdemo.acme.faces.taglib.TagPaneTag</tag-class> <body-content>JSP</body-content> <attribute> <name>id</name> <rtexprvalue>true</rtexprvalue> </attribute> <attribute> <name>rendered</name> <deferred-value> <type>boolean</type> </deferred-value> </attribute> <attribute> <name>tagSelectListener</name> <deferred-method> <method-signature>void </method-signature> myMethod(oracle.adfdemo.acme.faces.event.TagSelectEvent) </deferred-method> </attribute> <attribute> <name>visible</name> <deferred-value> <type>boolean</type> </deferred-value> </attribute> <attribute> <name>partialTriggers</name> <deferred-value> </deferred-value> </attribute> <attribute> <name>inlineStyle</name> <deferred-value/> </attribute> <attribute> <name>inlineClass</name> <deferred-value/> </attribute> <attribute> <name>tags</name> <deferred-value/> </attribute> <attribute> <name>binding</name> <deferred-value/> </attribute> <attribute> <name>orderBy</name> <deferred-value/> </attribute> </tag> </taglib> A resource loader is required only if the custom component has image files needed for the component's skinning. The images files are packaged into the JAR project so that the consumer of the component library will need to include the JAR into the class path of their web project and add a few entries into their web deployment descriptor file ( web.xml). The rich client framework uses a resource servlet to deliver images. You need to register this servlet in the web.xml file and then create the resource loader class. A component library requires a resource loader that is auto-loaded by the resource servlet. You create a URL pattern folder mapping for the servlet, which will be used to locate and identify resources within your custom component library. To create a resource loader class: In the Application Navigator, right-click the project and select New. In the New Gallery, expand General and select Java. Select Java Class and click OK. In the Create Java Class File dialog, do the following: Name: Enter a resource loader name. For example, for the tagPane component, you might enter AcmeResourceLoader. Package: Enter a name for the package. For example, for the tagPane component, you might enter oracle.adfdemo.acme.faces.resources. Extends: Enter a name for the class that the tag extends. For example, for the tagPane component, you would enter org.apache.myfaces.trinidad.resource.RegexResourceLoader. In the Optional Attributes section, select the following:. In the Access Modifiers section, select public. At the bottom, select Constructors from Superclass and Implement Abstract Methods. In the source editor, register regular expressions that map to more specific resource loaders. For example, you might create an expression that maps image resources located under an images directory. Example 30-23 shows the expression for the tagPane component that maps the /acme/images/ directory located relative to the /META-INF folder of the custom component JAR. As a result of the registration, the custom component images should be located under /META-INF/acme/images. Example 30-23 Resource Loader for the tagPane Component public class AcmeResourceLoader extends RegexResourceLoader { public AcmeResourceLoader() { // any resource in "/acme/" with the following suffixes will be // loaded from the base folder of "META-INF". // The servlet pattern match "/acme/*" should exist under "META-INF". // For example URL : context-root/acme/images/type1.gif // map to: META-INF/acme/images/type1.gif register("(/.*\\.(jpg|gif|png|jpeg))", new ClassLoaderResourceLoader("META-INF")); Register the Libraries Resource Loader by opening the /META-INF/servlet/resources/ name .resources file and adding the fully qualified name of the resource loader class bound to the URI pattern. The MyFaces Trinidad ResourceServlet uses the servlet context to scan across all JAR files within the class path of the web application. The servlet looks at its own URI mappings in the web deployment descriptor to formulate the location of this resource file. This file must contain the fully qualified name of the Java class bound to the URI pattern. During startup, the ResourceServlet will locate and use this file in a manner similar to how FacesServlet locates and uses the faces-config.xml files. For the tagPane component, the acme.resources file would contain this entry for the composite resource loader: oracle.adfdemo.acme.faces.resource.AcmeResourceLoader A skin is a style sheet based on the CSS 3.0 syntax specified in one place for an entire application. Instead of inserting a style sheet on each page, you use one or more skins for the entire application. Every component automatically uses the styles as described by the skin. No design time code changes are required. Oracle ADF Faces provides three skins for use in your applications: blafplus-rich: Defines the default styles for ADF Faces components. This skin extends the blafplus-medium skin. blafplus-medium: Provides a modest amount of styling. This style extends the simple skin. simple: Contains only minimal formatting. Skins provide more options than setting standard CSS styles and layouts. The skin's CSS file is processed by the skin framework to extract skin properties and icons and register them with the Skin object. Style sheet rules include a style selector, which identifies an element, and a set of style properties, which describes the appearance of the components. All ADF Faces components use skins. The default skin is the simple skin. Because your custom components will be used in conjunction with other ADF Faces components, you add style selectors to an existing ADF Faces skin. Because the rich and medium skins inherit styles from the simple skin, you can simply add your selectors to the simple skin, and it will be available in all skins. However, you may want to style the selector differently for each skin. You set these styles in the CSS file you created. This file will be merged with other CSS styles in the application in which the component is used. The text used in a skin is defined in a resource bundle. Create the text by creating a custom resource bundle and declaring the text you want to display. After you create your custom resource bundle, you register it with the skin. Coupling resource bundles with your CSS provides a method to make your components support multiple locales. The /META-INF/trinidad-skins.xml file you created is used to register your CSS file and your resource bundle with an ADF Faces skin. To create styles for your component: Open the CSS file you created in Section 30.2.4, "How to Add a Cascading Style Sheet." Define a root style selector for the component. This style will be associated with the <DIV> element that establishes the component. Add other style selectors as needed. Example 30-24 shows the CSS file for the tagPane component. Example 30-24 CSS File for the tagPane component acme|tagPane - root element acme|tagPane::content - container for the links acme|tagPane::tag - tag hyperlink For more information about creating CSS for components to be used by skins, see Section 20.3, "Defining Skin Style Properties." Create any needed resource bundle for your component. To register your CSS with an ADF Faces skin, open the /META-INF/trinidad-skins.xml file. In the Structure window, select the skin-addition element, and in the Property Inspector, do the following: skin-id: Enter the ADF Faces skin to which you want to add the custom component selectors. You must register the selectors at least to the simple.desktop skin in order for them to be compatible with ADF Faces components. Note:If there is a possibility that the component will be used in an Oracle WebCenter application, then you must also register the selectors with the simple.portletskin. Skins are also available for PDAs (for example, simple.pda). For more information, see Chapter 20, "Customizing the Appearance Using Styles and Skins." style-sheet-name: Use the dropdown menu to choose Edit, and navigate to the CSS file you created. If you created a resource bundle, add the fully qualified path to the bundle as the value for the <bundle-name> element. Example 30-25 show the code for the tagPane component. Example 30-25 tagPane trinidad-skins.xml Code <?xml version="1.0" encoding="UTF-8" ?> <skins xmlns=""> <skin-addition> <skin-id>simple.desktop</skin-id> <style-sheet-name>acme/styles/acme-simple-desktop.css</style-sheet-name> <bundle-name>oracle.adfdemo.acme.faces.resource.AcmeSimpleDesktopBundle </bundle-name> </skin-addition> </skins> Add an image folder for the images used for the custom component. This folder should be under the META-INF directory. Place any images used by the custom component into this folder. For tagPane, the image folder is /META-INF/acme/images. After creating the custom component library, you must create a deployable artifact that can be used by a web application. Before you can build a Java archive (JAR) file, update the project's deployment profile by adding the many resources you created. To create the JAR file for deployment: In the Application Navigator, double-click the project to open the Project Properties dialog. In the left pane, select Compiler. On the right, ensure that all file types to be deployed are listed in the Copy File Types to Output Directory text field. Note:Some file types, such as .cssand .jsare not included by default. You will need to add these. In the left pane, select Deployment. On the right, under Deployment Profiles, select the ADF Library JAR file, and click Edit. In the left pane, select JAR Options. Verify the default directory path or enter a new path to store your ADF Library JAR file. Ensure that Include Manifest File is selected, and click OK. To deploy, right-click the project and select Deploy >Project_name from the context menu. By default, the JAR file will be deployed to a deployment directory within the project directory. After the component has been created and you have created an ADF Library, you can proceed to import it and use it in another application. However, before using it in an application under development, you should use it in a test application to ensure it works as expected. To do so, import the custom library into your test application. For procedures, see the "Adding ADF Library Components into Projects" section of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework. After you add the library, you configure the web deployment descriptor to add a resource servlet mapping. When you use the component and run your test application, you may find you need to debug the component. Therefore, it helps to have logging and assertions enabled for the project. Tip:Importing a library into an application allows the custom component to appear in JDeveloper's Component Palette. You configured the component resource loader to assume a servlet resource mapping (for example, for the tagPane component, the mapping was acme). Therefore, you must add the expected resource servlet mappings to the consuming application's web.xml file. By default, MyFaces Trinidad skinning compresses the CSS classes when it normalizes CSS 3 into CSS 2. Turn off this compression while you are debugging the component. For a production deployment, toggle off this setting. To configure the web.xml file: In the Application Navigator, double-click the web.xml file to open it. In the overview editor, select the Servlets navigation tab, and click the Add icon to add a new servlet. In the Servlets table, do the following: Name: Enter resources. Servlet Class: Enter org.apache.myfaces.trinidad.webapp.ResourceServlet. Below the table, click the Servlet Mappings tab, then click the Add icon. Enter a URI prefix. Resources beginning with this prefix will be handled by the servlet. For example, for the tagPane component, you might enter the prefix /acme/*. To disable compression of the style sheet: Select Application. Click the Add icon for Context Initialization Parameters. For Name, enter org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION and for Value enter true. JavaScript debugging can be a difficult task. To help debug this dynamic language with no type checking, the rich client JavaScript libraries provide a logging mechanism similar to Java logging. There is also an assertion strategy to make the client scripts more type safe. Both of these features are turned on using configuration parameters in the web.xml file. The logging and assertion routines are browser specific. The client JavaScript libraries will support Gecko, Internet Explorer, Opera, and Safari versions of browser agents. For more information, see Section A.2.3.4, "Resource Debug Mode." To turn on logging and assertion: In the Application Navigator, double-click the web.xml file. In the overview editor, click the Application navigation tab. On the Application page, click the Add icon for the Context Initialization Parameters. Add the following parameter to turn on debugging: Name: org.apache.myfaces.trinidad.resource.DEBUG Value: true This setting prevents MyFaces Trinidad from setting the cache headers for resources like JavaScript. It prevents the browser from caching resources. Add the following parameter to set the debug level for client side JavaScript. Name: oracle.adf.view.rich.LOGGER_LEVEL Value: ALL The valid values are OFF, SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST and ALL. The default is OFF. Add the following parameter to turn on client-side script assertions: Name: oracle.adf.view.rich.ASSERT_ENABLED Value: true This setting works together with logging. Toggling this switch to on will make debug information available to the browser. The assertions and logging are displayed differently, depending on the browser. For Internet Explorer, a child browser window will appear beside the active window. For FireFox with the Fire Bug plugin, the debug information will be available through the Fire Bug console. To add the custom component to a JSF page: Open the jspx page in the source editor. Add the TLD namespace to the root tag. For example, for the tagPane component, because the tag library's URI is:, you would add: xmlns:acme="" Use the Component Palette to add the component to the page. Use the Property Inspector to set any attributes. Tip:If you are developing the application outside of JDeveloper, then on the page, use TLD short name and the component name. Also, add any values for attributes. For example, for the tagPane, you might add: <acme:tagPane> < < <tagSelectionListener=#(tagBean.onTagSelect) </tagPane> If you wish to create the tagPane component as described in this chapter, and use it in an application, you will need to use backing beans to bind the custom component to the application components. Example 30-26 shows the backing bean code that is used to bind the tagPane component to the File Explorer application. Example 30-26 Backing Bean Logic for the tagPane Custom Component public Map<String, Number> getTags() { if (_tags == null) { _tags = new TreeMap<String, Number>(); List<FileItem> nameToFileItems = feBean.getDataFactory().getFileItemList(); _doDeepTagCollection(_tags, nameToFileItems); } return _tags; } public void onTagSelect(TagSelectEvent event) { _selectedTag = event.getTag(); CriteriaFileItemFilter criteria = new CriteriaFileItemFilter(_selectedTag); List<FileItem> nameToFileItems = _feBean.getDataFactory().getFileItemList(); if (_selectedTagFileItemList == null) { _selectedTagFileItemList = new ArrayList<FileItem>(); else { _selectedTagFileItemList.clear(); } _doDeepTagSearch(criteria, _selectedTagFileItemList, nameToFileItems); _selectedTagResultsTableModel = new SortableModel(_selectedTagFileItemList); }
https://docs.oracle.com/cd/E15523_01/web.1111/b31973/ad_custom.htm
CC-MAIN-2021-39
refinedweb
12,251
50.12
Analysis of Algorithms (Recurrences) What is the value of following recurrence.Analysis of Algorithms (Recurrences) T(n) = T(n/4) + T(n/2) + cn2 T(1) = c T(0) = 0Where c is a positive constant Discuss it Question. What is the value of following recurrence. T(n) = 5T(n/5) +Analysis of Algorithms (Recurrences) , T(1) = 1, T(0) = 0 Discuss it Question 2 Explanation: The given solution can be solved using Master Method. It falls in Case 1. What is the worst case time complexity of following implementation of subset sum problem.Analysis of Algorithms (Recurrences) // Returns true if there is a subset of set[] with sun equal to given sum bool isSubsetSum(int set[], int n, int sum) { // Base Cases if (sum == 0) return true; if (n == 0 && sum != 0) return false; // If last element is greater than sum, then ignore it if (set[n-1] > sum) return isSubsetSum(set, n-1, sum); /* else, check if sum can be obtained by any of the following (a) including the last element (b) excluding the last element */ return isSubsetSum(set, n-1, sum) || isSubsetSum(set, n-1, sum-set[n-1]); } Discuss it Question 3 Explanation: Following is the recurrence for given implementation of subset sum problem T(n) = 2T(n-1) + C1 T(0) = C1 Where C1 and C2 are some machine specific constants. The solution of recurrence is O(2^n) We can see it with the help of recurrence tree method C1 / \ T(n-1) T(n-1) C1 / \ C1 C1 / \ / \ T(n-2) T(n-2) T(n-2) T(n-2) C1 / \ C1 C1 / \ / \ C1 C1 C1 C1 / \ / \ / \ / \ If we sum the above tree level by level, we get the following series T(n) = C1 + 2C1 + 4C1 + 8C1 + ... The above series is Geometrical progression and there will be n terms in it. So T(n) = O(2^n) Suppose T(n) = 2T(n/2) + n, T(0) = T(1) = 1 Which one of the following is false. ( GATE CS 2005)Analysis of Algorithms (Recurrences) a) T(n) = O(n^2) b) T(n) = (nLogn) c) T(n) = (n^2) d) T(n) = O(nLogn) a) T(n) = O(n^2) b) T(n) = c) T(n) = d) T(n) = O(nLogn) Discuss it Question 4 Explanation: See question 4 of for explanation. Consider the following recurrence:Analysis of Algorithms (Recurrences) Which one of the following is true? (A) T(n) = (loglogn) (B) T(n) = (logn) (C) T(n) = (sqrt(n)) (D) T(n) = (n) (A) T(n) = (B) T(n) = (C) T(n) = (D) T(n) = Discuss it Question 5 Explanation: This question can be solved by first change of variable and then Master Method. (m). You can also prove using Master theorem. Let n = 2^m T(2^m) = T(2^(m/2)) + 1 Let T(2^m) = S(m) S(m) = 2S(m/2) + 1Above expression is a binary tree traversal recursion whose time complexity is S(m) =Now, let us go back to the original recursive function T(n) (m) =(m) = (logn) /* Since n = 2^m */(logn) /* Since n = 2^m */ T(n) = T(2^m) = S(m) = (Logn)(Logn) The running time of an algorithm is represented by the following recurrence relation:Analysis of Algorithms (Recurrences) (A) (n) (B) (n log n) (C) (n^2) (D) (n^2log n) if n <= 3 then T(n) = n else T(n) = T(n/3) + cnWhich one of the following represents the time complexity of the algorithm? (A) (B) (C) (D) Discuss it Question 6 Explanation:) =This can also be solved using Master Theorem for solving recurrences. The given expression lies in Case 3 of the theorem. (n)(n) The running time of the following algorithmAnalysis of Algorithms (Recurrences) Procedure A(n) If n <= 2 return(1) else return A(is best described by );); Discuss it What is the time complexity of the following recursive function:Analysis of Algorithms (Recurrences) (n) (B) (nlogn) (C) (logn) (D) (loglogn) int DoSomething (int n) { if (n <= 2) return 1; else return (DoSomething (floor(sqrt(n))) + n); }(A) (B) (C) (D) Discuss it Question 8 Explanation: Recursive relation for the DoSomething() is T(n) = T(We have ignored the floor() part as it doesn't matter here if it's a floor or ceiling. ) + C1 if n > 2) + C1 if n > 2) The time complexity of the following C function is (assume n > 0 (GATE CS 2004)Analysis of Algorithms (Recurrences) int recursive (mt n) { if (n == 1) return (1); else return (recursive (n-1) + recursive (n-1)); } Discuss it Question 9 Explanation: Recursive expression for the above program will be. T(n) = 2T(n-1) + c T(1) = c1.Let us solve it. T(n) = 2(2T(n-2) + c) + c = 4T(n-2) + 3c T(n) = 8T(n-3) + 6c + c = 8T(n-3) + 7c T(n) = 16T(n-4) + 14c + c = 16T(n-4) + 15c ............................................................ ............................................................. T(n) = (2^(n-1))T(1) + (2^(n-1) - 1)c T(n) = O(2^n) Consider the following recurrence T(n) = 3T(n/5) + lgn * lgn What is the value of T(n)?Analysis of Algorithms (Recurrences) (A) (B) (c) (D) (A) (B) (c) (D) Discuss it Question 10 Explanation: By Case 1 of the Master Method, we have T(n) = Theta(n ^ (log5(3)) ). [^ is for power] There are 30 questions to complete. My Personal Notes arrow_drop_up
https://www.geeksforgeeks.org/algorithms-gq/analysis-of-algorithms-recurrences-gq/
CC-MAIN-2020-10
refinedweb
900
52.12
Select the pair that does not expresses a relationship similar to that expressed in the pair: Wheel: spokes The spokes are units which radiate of a wheel and make the wheel a complete entity. Similarly, fingers, tentacles and petals are integral units of hand, octopus and flower respectively. On the other hand, roots and leaves do not share this kind of relationship. Thus option 4 does not expresses a relationship similar to that expressed in the given pair. If a, b and c are three positive integers such that a and b are in the ratio 3:4 while b and c are in the ratio 2:1, then minimum integer value of a + b + c is _________ Let a = 3x and b = 4x Similarly b = 2y and c = y ∴ 4x = 2y ⇒ y = 2x ∴ c = 2x Now a + b + c = 3x + 4x + 2x = 9x So, the minimum integer value = 9 Reaching a place of appointment of Friday. I found that I was two days earlier than the scheduled day. If I had reached on the following Wednesday then how many days late would I have been? Friday → 2 days earlier Therefore, scheduled day = Friday + 2 = Sunday Sunday + 3 = Wednesday Therefore, I would have been late by 3 days Which one of the following options is the closest in meaning to the word 'mitigate'? Choose the most appropriate word(s) from the options given below to complete the following sentence. It was hoped at the time that that place would become the centre from which the civilization of Africa would proceed; but this ________ was not fulfilled. The sentence implies that it was hoped that that place would become the point from where the African civilization would proceed, but the belief that it would happen was not fulfilled. Therefore, the correct word to fill in the blank is expectation as it means a strong belief that something will happen or be the case. Consider a random walk on an infinite two-dimensional triangular lattice, a part of which is shown in the figure below. If the probabilities of moving to any of the nearest neighbour sites are equal. What is the probability that the walker returns to the starting position at the end of exactly three steps? A person can take 1st step in any direction independently Suppose he moves to A Now to return to O ( initial point) in 2 steps he can move in 2 directions. Either B or f Thus probability he will move to either B or F will be = 2/6 Suppose he moves to B Now to return back to O he has only 1 option ⇒ to move in BO Probability he will move in BO direction = 1/6 Total probability he will return Twelve straight lines are drawn in a plane such that no two of them are parallel and no three of them are concurrent. A circle is now drawn in the same plane such that all the points of intersection of all the lines lie inside the circle. What is the number of non-overlapping regions into which the circle is divided? The nth line drawn will add n more regions to the circle. 1st line adds 1 region to the circle for a total of 2 regions. 2nd line adds 2 more regions to the circle, bringing the total number of regions to 2 + 2 = 4. 3rd line adds 3 more regions to the circle, bringing the total number of regions to 4 + 3 = 7. 4th line adds 4 more regions to the circle, bringing the total number of regions to 7 + 4 = 11. 5th line adds 5 more regions to the circle, bringing the total number of regions to 11 + 5 = 16. 6th line adds 6 more regions to the circle, bringing the total number of regions to 16 + 6 = 22. 7th line adds 7 more regions to the circle, bringing the total number of regions to 22 + 7 = 29. 8th line adds 8 more regions to the circle, bringing the total number of regions to 29 + 8 = 37. 9th line adds 9 more regions to the circle, bringing the total number of regions to 37 + 9 = 46. 10th line adds 10 more regions to the circle, bringing the total number of regions to 46 + 10 = 56. 11th line adds 11 more regions to the circle, bringing the total number of regions to 56 + 11 = 67. 12th line adds 12 more regions to the circle, bringing the total number of regions to 67 + 12 = 79. Electromagnetic radiation is an insidious culprit. Once upon a time, the major concern around electromagnetic radiation was due to high tension wires which carry huge amounts of electricity to cities. Now, we even carry sources of this radiation with us as cell phones, laptops, tablets and other wireless devices. While the most acute exposures to harmful levels of electromagnetic radiation are immediately realized as burns, the health effects due to chronic or occupational exposure may not manifest effects for months or years. Which of the following can be the viable solution for electromagnetic radiation reduction? The correct answer is option 2 i.e. To implement hardware protocols to minimize risks and reduce electromagnetic radiation production significantly. The passage states about the electromagnetic radiations released from the devices and how they affect the individuals. Out of the given options, only option 2 states the possible solution which can be implemented to reduce electromagnetic radiation. In a mock exam, there were 3 sections. Out of all students, 60 students cleared the cut off in section 1, 50 students cleared the cutoff in section 2 and 56 students cleared the cut off in section 3. 20 students cleared the cutoff in section 1 and section 2, 16 cleared cut off in section 2 & section 3, 26 cleared the cut off in section 1 & section 3. The number of students who cleared cutoff of the only one section was equal & was 24 for each section. How many students cleared cut off all the three sections? Let the number of students be X who cleared cut off in all sections. The number of students who cleared the cutoff in section 1 and section 2, only = 20 – X The number of students who cleared the cutoff in section 2 and section 3, only = 16 – X The number of students who cleared the cutoff in section 1 and section 3, only = 26 – X Now, consider section 1: 24 + 20 – x + x + 26 – x = 60 70 – x = 60 x = 10 The regular expression which represents the set of strings in which every 0 is immediately followed by at least two 1's is ____________. If w is in L, then either (a) w does not contain any 0, or (b) it contains a 0 followed by 11. So, w can be written as w1w2.........wn, where each wi is either 0 or 011. So, L is represented by the regular expression (1 + 011)*. Therefore, option 2 is the correct answer. A and B are the only two host on a LAN which uses CSMA/CD protocol. A minimum time required by ‘A’ to detect collision is 600 μs. Find the time taken by the packet to travel from host A to host B. Concept: Contention period is minimum time a host must transmit for before it can be sure that no other host's packet has collided with its transmission. It takes minimum of RTT to detect collision. Calculation: Contention Period = RTT = 2 × Tp 600 = 2 × Tp Tp = 300 μs What is the minterm that equals 1 if x1 = x3 = 0 and x2 = x4 = x5 = 1, and equals 0 otherwise? The minterm y1y2 ...yn is 1 if and only if each yi is 1, and this occurs if and only if xi = 1 when yi = xi and xi=0 when yi = xi. Therefore, the minterm will be x1'x2x3'x4x5 i.e. 01011. The answer will be 11. Consider a 1024 MB free partition and the following memory request. R1 requests 120 MB R2 requests 250 MB R3 requests 480 MB R4 requests 40 MB Which of the following allocation technique will merge the partition into the original 1024 KB segment, when R4 finishes? The buddy system allocates memory from a fixed-size segment consisting of physically contiguous pages. Memory is allocated from this segment using a power-of-2 allocator, which satisfies requests in units sized as a power of 2. Consider a double-ended queue with elements 31, 17, 4, 22, 19, 8. What is the time complexity of deletion of last element i.e. 8 and insertion of new element 10 at the rear? In a double-ended queue, elements can be inserted and deleted from both the front and back of the queue. Time complexity of all operations like insert element at front, insert at rear, delete element from front, and delete element from rear is O(1). The column vector is a simultaneous Eigen vector of For all values of a and b, is an Eigen vector of. Consider a processor that includes a base with indexing addressing mode. Suppose an instruction is encountered that employs this addressing mode and specifies a displacement of 1500. Base register contains the value 3456 and index register contains the value 4. What is the address of the operand? In indexing addressing mode, the address field references a main memory address, and the referenced register contains a positive displacement from that address. Address of operand = base register + index register + displacement Address of operand = 3456 + 4 + 1500 = 4960 The address of a class C host is to be split into subnets with a n-bit subnet number. The maximum number of hosts in each subnet is 14. Find the value of n. The number of available hosts on a subnet is 2h−2, where h is the number of bits used for the host portion of the address. h2 = 14 h = 4 Which of the following options is INCORRECT? Option 4 is incorrect. It can be corrected as: The cartesian product of two countable sets is countable. Proof: There are three cases to consider: Case 1: If both A and B are finite with |A|=m and |B|=n, then it is easy to show that |A×B|=mn and hence A×B is finite and so it is countable. Case 2: If A is finite with |A|=n and B is countably infinite, there exist bijective functions f: A→{1,2,...,n} and g: B→N. We can define h: A×B→N for all (a,b)∈A×B. Then clearly h is injective. So A×B is countable. Case 3: If A and B are both countable, there exist bijective functions f: A→N and g: B→N and defining h: A×B→N gives us an injective function, and so A×B is countable. Consider a following resource allocation graph with multiple instance of each resource type. Which of the following statement is true about above system? In resource allocation graph, a claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future. An assignment edge Rj → Pi indicates that process Pi is holding resource Rj. The above system is deadlock-free although there is a cycle in a graph because there are more than one instance per resource type. Which of the following is the correct way to allocate a memory for the following structure? struct Student { int sid; int age; char grade; }; The sizeof operator returns the size of the struct in bytes. The size of a struct is not always the sum of the size of the individual members, hence use sizeof rather than a literal. struct Student *sptr; sptr = (struct Student*)malloc( sizeof(struct Student) ); Solve the integral We know that, In TCP connection, When SYN segment is sent, the value of retransmission-timeout is set to 11 sec. and when the SYN + ACK segment is received at the sender side the time required for the segment to reach the destination (i.e. at the sender side) and be acknowledged is equal to 3.2 sec. Find the retransmission – timeout. Concept: The measured round-trip time for a segment is the time required for the segment to reach the destination and be acknowledged, although the acknowledgment may include other segments. In TCP, there can be only one RTT measurement in progress at any time. Calculation: RTTM = 3.2 RTTS = 3.2 RTTD = RTTS/2 = 1.6 RTO = RTTS + 4 × RTTD = 9.6 Consider the following syntax directed definition: The above SDD is The first rule defines the inherited attribute T’.val using F’.val and F appears to the left of T’ in the production. The second rule define T’1.val using the inherited attribute T’.val associated with head and F.val, where F appears to the left of T’ in the production There are two boxes each containing two components. Each component is defective with probability ¼, independent of all other components. The probability that exactly one box contains exactly one defective component equals? Probability (1 faulty in box) The chances of box having other than 1 defective is 5/8 The probability exactly 1 one fault in exactly 1 box Alternate: The given situation is possible if (1) 1 faulty among 4 components (2) 3 faulty among 4 components For case (1) For case (2) Which of the following statements is false? Out of all the options, option 3 is false. It can be corrected as: It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time. A data is sent to UDP along with a pair of socket address and the length of data. After receiving data, UDP adds the header and passes the user datagram to IP with the socket address. What is the maximum size of the data that can be encapsulated in a UDP datagram? In IPv4, the maximum length of packet size is 65,536 and length of IP header is 20 bytes. So, maximum data size (including UDP header) = 65,535 – 20 = 65515 Size of UDP header = 8 bytes Maximum data size = 65,515 – 8 = 65507 What is the output of following C code: #include <stdio.h> int main(void) { char *p; p = "Programming"; p++ ; ++p; --p; p--; printf( p ); return 0; } f the number of balanced parenthesis possible with 'n' pair of parenthesis is 14, what is the value of 'n'? Number of balanced parenthesis = Number of binary search trees Number of binary search trees = When n = 4 Number of binary search trees = Therefore, value of n = 4 Consider the following statements: S1: The identifying relationship is many-to-one from the weak entity set to the identifying entity set, and the participation of the weak entity set in the relationship is total. S2: It is possible to have a weak entity set with more than one identifying entity set. The number of correct statements are ______ Both the given statements are true. It is possible to have a weak entity set with more than one identifying entity set. A particular weak entity would then be identified by a combination of entities, one from each identifying entity set. The primary key of the weak entity set would consist of the union of the primary keys of the identifying entity sets, plus the discriminator of the weak entity set. Consider the following statements: A: The set of all reflexive relations are closed under the operation of set union. B: The set of all irreflexive relations are not closed under the operation of set union. C: The set of all reflexive relations are not closed under set difference. D: The set difference of all reflexive relations is not irreflexive. Which of the given statements are false? Statements B and D are incorrect. They can be corrected as: Hence, option 1 is correct. Match the following: Flags: These are needed by the control unit to determine the status of the processor and the outcome of previous ALU operations. Instruction register: The opcode and addressing mode of the current instruction are used to determine which micro-operations to perform during the execute cycle. Data paths: The control unit controls the internal flow of data. For example, on instruction fetch, the contents of the memory buffer register are transferred to the instruction register. For each path to be controlled, there is a switch. Which of the following relations gives chromatic partition between vertices of the same colour in a connected graph? Equivalence relation between vertices of the same colour in a connected graph gives the chromatic partition. Consider the minterm list form of a Boolean function F given below: F(P, Q, R, S) = Ʃm(0, 1, 2, 4, 6, 8, 9, 10) + d(3, 11, 15) Here, m denotes a minterm and d denotes a don't care term. The number of essential prime implicants of the function F is ___________. Therefore, the number of essential prime implicants is 2. Consider a relational schema R(A, B, C, D, E, F, G) with functional dependencies: A → BC C → DG C → E D → FG The number of superkeys possible is _____. The candidate key for the given schema is A because (A)+ = {A, B, C, D, E, F, G, H} Note: When there is only candidate key for a relation with n-attributes, there are 2n-1 super keys possible. In the given relation, there are 7 attributes. Therefore, the total number of super keys possible are 26 = 64. Match the RAID levels with the number of disks required. The different RAID levels with the number of disks required are as follows: where N = number of data disks and m is proportional to logN. Consider a software program that is artificially seeded with 100 faults. While testing this program, 159 faults are detected, out of which 75 faults are from those artificially seeded faults. Assuming that both real and seeded faults are of same nature and have same distribution, the estimated number of undetected real faults is________. As both real and artificial faults have the same distribution, they will be detected in same proportions. ≠ A.F. = 100 ≠ R.F. = x A fraction of faults detected Consider the following C function: int foo() { int i; int sum = 0; for(i = 1; i < = 20; i++) sum = sum + i; return sum; } Calculate the least number of temporary variables required to create an intermediate code for above C function. The intermediate code for above code: 1. sum = 0; 2. i = 0; 3. if(i > 10) goto 4. t1 = sum + i; 5. sum = t1 6. t2 = i + 1; 7. i = t2; 8. goto 3 9. goto calling program Which of the following strings cannot be generated using the following grammar? (1) aabbaa (2) abaab (3) aaababbaa Only (2) can not be derived from the given grammar. (1) can be derived in the following way: S → aAS S → aSbAS S → aabAS S → aabbaA S → aabbaa (3) can be derived in the following way: S → aAS S → aASS S → aaSbAS S → aaabSbAS S → aaababAS S → aaababbaS S → aaababbaa A host machine uses the token bucket for congestion control with a capacity of 9 GB and the maximum output rate is 250 MBps. The minimum time required to transmit the data is 50 seconds. Tokens arrive to sustain output at a rate of x MBps per second. Find the value of x. C = capacity of bucket = 1 GB M = maximum output rate = 250 MBps e = Input rate S = minimum time required to transmit the data The output rate is 250 MBps. Tokens arrive at a rate to sustain output at a rate of x MBps. M – e = 250 – 70 = 180 MBps Let , then the rank of M is equal to Solution: R2 → R2 + R1 R4 → R4 + R1 There are 15 printers. The current allocation and maximum requirement of tape drives for four processes are shown below: Which of the following is true as per the current state of the system? The system is in safe and non deadlocked state. The safe sequence is P2 -> P3 -> P4 -> P1. In a class of 15 students a quiz is held. The sum of their scores is 100. How many students atleast must have the same score? Suppose the scores are all different. Therefore, we can arrange them in order s1< s2< ...< s15. The smallest possible values here would be s1=0, s2=1, s3=2, ..., s15=14. If we add these scores, we get (14)(15)/2 = 105. This is a contradiction since the scores are supposed to sum to 100. Thus, the scores cannot be all different. Atleast 2 students must have the same score. Consider the following undirected weighted graph. Find the minimum possible weight of the spanning tree if Kruskal’s algorithm is implemented on the above graph. Above graph contains 6 vertices, so minimum spanning tree will be having 6 – 1 = 7 edges. Sort the edges in increasing order of their weight. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If the cycle is not formed, include this edge. Else, discard it. Repeat these steps until there are 7 edges in the MST. MST will contain edge (P,S), (S,Q), (S,U), (U,T), (T,R). Hence, minimum possible weight = 1 + 1 + 2 + 3 + 3 = 10 What is the count of the non-zero entries in a triangular matrix? We know that in a lower triangular matrix, the non-zero entries occur only on or below the main diagonal. Therefore, the elements are stored in a linear array M in the following way: M[1] = a11 M[2] = a21 M[3] = a22 M[4] = a31 and so on. It can be observed that A contains 1 element in the first row, 2 elements in the second row, 3 elements in the third row and likewise n elements in the nth row. Therefore, the total number of elements that M will contain = 1+2+3+...+n=n(n+1)/2 What is the predicted value of the fifth CPU burst (in µsec) if exponential averaging is used in the case of the shortest job first scheduling algorithm? The length of CPU bursts µsec is {t1, t2, t3, t4} = {4, 5, 8, 7}, smoothening factor is 0.6 and predicted value of the first CPU burst is 8 µsec. Consider the following Turing machine: Note: (p, q, r) represents that by reading input 'p', it replaces 'p' by 'q' and moves to 'r' direction. Which of the following languages is accepted by the above Turing machine? The given Turing machine performs the following: (a) If the leftmost symbol in the given input string w is 0, it is replaced by x and moved right till a leftmost 1 is encountered in w. It is changed to 'y' and moved backwards. (b) Step (a) is repeated with the leftmost 0. When no 0 or 1 is left, it is moved to a final state. Therefore, the sequence which is printed is of the form 0n1n. Hence, option 1 is the correct answer. Consider the following x86 machine instruction sequence: ADD EAX, EBX SUB ECX, EAX ADD EBX, ECX The first instruction adds the contents of the 32-bit registers EAX and EBX and stores the result in EAX. The second instruction subtracts the contents of EAX from ECX and stores the result in ECX. These instructions are to be executed in a pipelined instruction processor with the following 4 stages: fetch instruction (FI), decode instruction and calculate addresses (DA), fetch operand (FO), and execute (EX). The FI, DA and EX stages take 1 clock cycle each for any instruction. The FO stage takes 2 clock cycle for ADD and 3 clock cycle for SUB instruction. The pipelined processor uses operand forwarding from the FO stage to the DA stage. Calculate the number of clock cycles required for the execution of the above instructions. The minimum number of states in the NFA for the regular expression (a + a(b + aa)*b)* a(b + aa)* a is ______. The NFA for the given regular expression is: Therefore, the number of states = 3 Assume that the main memory with only 4 page frames which are initially empty. If the page reference string is a b c d a e f b c d c e d b f, the number of page faults using the optimal page replacement policy is ______. Consider a simple graph with 10 vertices. The graph will be a connected graph if it has atleast _______ edges Note: The simple graph with n-vertices is connected if it has atleast Therefore, number of edges = 9*8/2 = 36 Consider the following graph. If DFS is implemented on the following graph then which of the following node will not be marked as visited at the end of the traversing if search is started at node A? Some of the sequences of the node in the order they are first visited: ABCDEFG ACEGFDB ABDFGEC ACBDEGF ACBDFGE ABCEGFD Find the subnet address for the IP address 165.81.35.120 and subnet mask is 255.255.192.0. IP address → 165.81.35.120 → 10100101 01010001 00100011 01111000 Subnet mask → 255.255.192.0 → 11111111 11111111 11000000 00000000 IP address 10100101 01010001 00100011 01111000 Subnet mask 11111111 11111111 11000000 00000000 Subnet address 10100101 01010001 00000000 00000000 Subnet address → 10100101 01010001 00000000 00000000 → 165.81.0.0 Consider the following statements about the dining philosopher problem I. There should be at least 6 chopsticks to avoid deadlock for 6 philosophers. II. If the asymmetric solution is implemented then 1stphilosopher picks up her right chopstick first while 6thphilosopher picks up her left chopstick first. Which of the above statement is correct? I. There should be at least n + 1 chopsticks to avoid deadlock for n philosophers. II. Asymmetric solution: an odd philosopher picks up first her left chopstick and then her right chopstick, whereas an even philosopher picks up her right chopstick and then her left chopstick. Consider the new order traversal of a binary tree: The post-order traversal of a binary tree is 3, 2, 3, 5, +, ↑, *, 1, -. What is the new order traversal of the same tree? In post-order traversal, the left subtree is traversed first, then the right subtree and finally the root node. According to the given post-order traversal, the root node should be '-' because it comes at the last in the expression. Similarly, 3 should be the leftmost node of the tree because it is at the starting of the expression. This process is continued and the resulting tree is obtained. The equivalent expression tree for the given post-order traversal is: Therefore, the new order traversal is: 1, -, 5, +, 3, ↑, 2, *, 3. Consider the following tuple relational calculus: What does the given expression perform? The correct answer is option 2. There are two 'there exists' clauses in the given tuple relational calculus which are connected by and (ᴧ). The tuple variable u is restricted to departments that are located in the Taylor building, while tuple variable s is restricted to instructors whose dept name matches that of tuple variable u. Therefore, the given expression finds the names of all instructors whose department is in the Taylor building. What is the output of the following C code: #include<stdio.h> void fun1() { auto int x = 1; static int y; register char z = 'F'; printf("%d %d %d", x, y, ++z); } int main() { fun1(); return 0; } Static variable: The value of a static variable persists until the end of the program. Automatic variable: The variables declared inside the function are automatic or local variables. External variable: Variables that are declared outside of all functions are known as external variables. External or global variables are accessible to any function. Register variable: The register storage class is used to define local variables that should be stored in a register instead of RAM. If the expression has k variables and n operator occurrences, what is the running time complexity to check whether the expression is tautology or not? If the expression has k variables and n operator occurrences, the table has 2k rows, and there are n columns that need to be filled out. Therefore, the implementation of this algorithm takes O(2kn) time Consider the following function. What is the output of calc(10, 1, 2) if pow(y, z) returns value of yz int calc(int x, int y, int z) { int temp = pow(y, z); if(temp == x) return 1; if(temp > x) return 0; return calc(x, y+1, z) + calc(x - temp, y+1, z); } Suppose a set S ={a1, a2, a3, …} of n proposed activities used by a person. Each activity ai has a start time si and a finish time fi , where 0 <= si < fi < ∞∞. Find the maximum number of activities that can be performed by a single person, assuming that a person can only work on a single activity at a time. ort the activities according to their finishing time. Select the first activity from the sorted array If the start time of this activity is greater than or equal to the finish time of a previously selected activity then select this activity. Possible set of activities: (1,4), (5,8) (2,5), (5,8) (0,6), (6,9) Consider the following set of statements: A: The write-back policy increases memory writes. B: Direct mapping technique faces the problem of thrashing. C: Temporal locality refers to the tendency of execution to involve a number of memory locations that are clustered. Which of the given statements is/are incorrect? Write back, minimizes memory writes. With write back, updates are made only in the cache. When an update occurs, a dirty bit, or use bit, associated with the line is set. Then, when a block is replaced, it is written back to main memory if and only if the dirty bit is set. The main disadvantage of direct mapping is that there is a fixed cache location for any given block. Thus, if a program happens to reference words repeatedly from two different blocks that map into the same line, then the blocks will be continually swapped in the cache, and the hit ratio will be low (a phenomenon known as thrashing). Spatial locality refers to the tendency of execution to involve a number of memory locations that are clustered. This reflects the tendency of a processor to access instructions sequentially. The spatial location also reflects the tendency of a program to access data locations sequentially, such as when processing a table of data. Therefore, option 1 is the correct answer. Consider the following relations: student(ID, name, dept_name, credits) course(ID, course_ID, sec_ID, semester, year, grade) Which of the following is the correct SQL for "For each course section offered in 2009, find the average total credits of all students enrolled in the section, if the section had at least 2 students". The correct answer is option 2. The sequence of operations is as follows: Satisfying the given sequence of operations, only option 2 performs the desired operation. Consider the following graph: What is the number of topological orders for the above graph? Topological sorting is a linear ordering of vertices such that for every directed edge uv, vertex u comes before v in the ordering. Topological Sorting for a graph is not possible if the graph is not a DAG i.e. if a cycle is present in the graph. In the given graph, cycle is present. Therefore, no topological order is possible for the given graph. Let A, B, C, and D be four matrices of dimensions 10 x 9, 9 x 12, 12 x 10, and 10 x 15, respectively. The minimum number of scalar multiplications required to find the product ABCD using the basic matrix multiplication method is ______. ((AB)C)D = (10 x 9, 9 x 12), 12 x 10, 10 x 15 = 10 x 9 x 12 + 10 x 12 x 10 + 10 x 10 x 15 = 3780 A(BC)D =9 x 12 x 10 + 10 x 9 x 10 + 10 x 10 x 15 = 3480 A(B(CD)) = 12 x 10 x 15 + 9 x 12 x 15 + 10 x 9 x 15 = 4770 (AB)(CD) = 10 x 9 x 12 + 12 x 10 x 15 + 10 x 12 x 15 = 4680 What is the number of AND gates required in carry circuit for 10- bit look ahead carry adder? The number of AND gates required in carry circuit for a n-bit look ahead carry adder is given by Therefore, number of AND gates = 10*11/2 = 55 Video | 94:34 min Doc | 28 Pages Doc | 1 Page Doc | 1 Page Test | 65 questions | 180 min Test | 65 questions | 180 min Test | 65 questions | 180 min Test | 65 questions | 180 min Test | 65 questions | 180 min
https://edurev.in/course/quiz/attempt/137_Computer-Science-And-IT--CSIT--Mock-Test-6-For-Gat/74ef3862-17be-4373-95ab-e921e2cb4e04
CC-MAIN-2020-45
refinedweb
5,508
61.67
Manhole is in-process service that will accept unix domain socket connections and present the Project description) Note: on eventlet you might need to setup the hub first to prevent circular import problems: import eventlet eventlet.hubs.get_hub() # do this first desirable in case you don’t want the thread active all the time. - thread - Set to True to start the always-on ManholeThread. Default: True. Automatically switched to False if oneshot_on or activate_on are used. - because Python will force all the signal handling to be run in the main thread but signalfd doesn’t. - socket_path - Use a specific path for the unix domain socket (instead of /tmp/manhole-<pid>). This disables patch_fork as children cannot reuse) Using Manhole with uWSGI. Requirements Similar projects - Twisted’s manhole - it has colors and server-side.6.0 (2019-01-19) - Testing improvements (changed some skips to xfail, added osx in Travis). - Fixed long standing Python 2.7 bug where sys.getfilesystemencoding() would be broken after installing a threaded manhole. See #51. - Dropped support for Python 2.6, 3.3 and 3.4. - Fixed handling when socket.setdefaulttimeout() is used. Contributed by “honnix” in #53. - Fixed some typos. Contributed by Jesús Cea in #43. - Fixed handling in manhole-cli so that timeout is actually seconds and not milliseconds. Contributed by Nir Soffer in #45. - Cleaned up useless polling options in manhole-cli. Contributed by Nir Soffer in #46. - Documented and implemented a solution for using Manhole with Eventlet. See #49. 1.5.0 (2017-08-31) - Added two string aliases for connection_handler option. Now you can conveniently use connection_handler="exec". - Improved handle_connection_exec. It now has a clean way to exit (exit()) and properly closes the socket. 1.4.0 (2017-08-29) - Added the connection_handler install option. Default value is manhole.handle_connection_repl, and alternate manhole.handle_connection_exec will wait 1 second for output from manhole (you can customize this with the --timeout option). - Fixed issues with newer PyPy (caused by gevent/eventlet socket unwrapping). 1.3.0 (2015-09-03) - Allowed Manhole to be configured without any thread or activation (in case you want to manually activate). - Added an example and tests for using Manhole with uWSGi. - Fixed error handling in manhole-cli on Python 3 (exc vars don’t leak anymore). - Fixed support for running in gevent/eventlet-using apps on Python 3 (now that they support Python 3). - Allowed reinstalling the manhole (in non-strict mode). Previous install is undone.). Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/manhole/
CC-MAIN-2019-18
refinedweb
435
61.53
> Me and a team of 5 have been working on an app game for about 2 months but out of no where our coder up and disappeared, its been close to like 5 months now and what we can manage on our own is no comparison to what he could do. Is there a way that we could run those files on the project editor and continue work from a build game file and data or are we just screwed You won't be able to recover the source code from a built release version of a project. You had no source control management? No Backups? No access to the unity project? And if you had access to the unity project, why isn't the source code there? we sent him all the objects and art to work into it and he'd send us built versions for consulting and one day just stopped talking to us, thanks for all the comments but we've just moved on. The source could should be available unless he stripped everything and took it with him. Otherwise the ONLY way to reverse engineer anything is to look at the machine code and rebuild the code by translating that. This is an extremely high level topic that people will get paid a hole bunch of $ to do. So unless you can get your hands on that original source code.....yes your screwed. There is a reason why you cant do this, you know... for hacking and reselling a game etc Answer by nullgobz · Jul 31, 2015 at 12:28 AM Well there is one way to acually get the sourcecode but its not pretty. You could decompile the game dll file. Look in your game folder: [gamename]_Data/Managed/ For these dlls: Assembly-CSharp.dll -> for c# scripts Assembly-UnityScript -> for js scripts Open that file/files with a decomilation software like: dotPeek. If you use dotPeek, select the dll, namespace or "root namespace" right click a class and "decompile source" Also use source control in the future if you care about your project :) Best of luck! Yeah, it kinda sucks when a programmer ups and dissapears. :/ Answer by Bunny83 · Jul 31, 2015 at 12:40 AM The script source code uaually can be restored without much problems except it has been manually obfuscated. A Unity build doesn't compile to machine code as we use Mono. All your scripts are compiled to CIL code (common intermediate language). The best decompiler i know is ILSpy. It can view the classes in IL and can also decompile to C# or VB.NET. Asset files can partially be restored but you never get back to the original files. It also depends what kind of "build" you have. In standalong builds you can access the assemblies directly. An android apk file is actually just a zip file. So you can open it with most zip tools and extract the dlls. There are even solutions to "unpack" a webplayer create an agenda or timetable system? 1 Answer Facebook cannot find my Unity data file. 1 Answer Developing a 3d data editor 3 Answers PlayerPrefs.GetString returns wrong value 3 Answers How do you copy PersistentDataFolder between devices? 2 Answers
https://answers.unity.com/questions/1018427/is-there-a-way-to-reverse-engineer-a-built-project.html?sort=oldest
CC-MAIN-2019-22
refinedweb
541
71.75
The Person business type is a new LightSwitch feature introduced in Visual Studio 2013. Its goal is to make it easy to add and manage people-related data in your application. In this post we will show you how to use the Person type and what it can do for you.! Karol Zadora-Przylecki, Lead Developer, Cloud Business Apps Ravi Eda, Software Development Engineer in Test, Cloud Business Apps Join the conversationAdd Comment Hi Ravi Thank you for this nice post. In my on going project I have one requirement to filter data based on current user and manager. So is it possible to filter the database using person data type for this requirement. If yes then how? e.g – current user can see only his/her data. If the current user is manager for some other users then the current user can see his/her data including the other users data related to him. Thanks Thanks Rashmi. Yes, you should be able to filter out data based on the current user. For example, say you have a table named 'Project' with three properties – 1. ProjectName – name of the project. A string type. 2. ContactPerson – primary contact for the project. A Person data type. 3. Manager – the manager of the ContactPerson. A Person data type. Add a query under ‘Project’. Add two single filters – “Where Manager = Current User” Or “ContactPerson = Current User”. Add a Browse screen with this query as a data source. Now each user should be able to see only projects where he/she is listed as the Manager or as the ContactPerson. Hope this clarifies. Thanks, Ravi Hi Ravi Thank you. Its working as expected. Is there any person viewer/picker control for Silverlight client. If not then is there any future road map to integrate person viewer/picker control for SL client? Thanks Glad it helped. Person Picker and Viewer controls are not available for Silverlight client. Please submit a request for these controls at connect.microsoft.com/visualstudio Hi, We are using VS 2013 RC and Sharepoint 2013 for developing Lightswitch HTMLCLient application. We enabled sharepoint in our application. We are exited to see the People picker / viewver control. Our expectation is that all the column of type Person in Sharepoint will map to Type Person in Lightswitch application . But unfortunately Colum of type Person is defined as UserInformationlist in the Lightswitch HTML application. So, these fields are not having People picker control. Is there any way to get people picker control for all the sharepoint columns of type Person ( Created By, ModifiedBy,etc) . We are looking for similar functionality as in the sharepoint for lightswithc HTML. Thanks in advance, Abdul Raheem It's a greatest advance in Lighswitch, congratulations! Since 2011 version, Lighswitch technology becomes esier to use and more powerful. Managing persons in Database gives a lot of uses and the option to create social applications. Is the Email property of the PersonInfo type only available in Sharepoint enabled applications? I'm trying out the Person data type in a Silverlight desktop app that uses Windows authentication. I don't see the Email property in intellisense at all. And some reference pages on some of these Lightswitch namespaces (like is available on MSDN for other .Net namespaces) sure would be nice, essential even. Am I just not looking in the right place? "Is the Email property of the PersonInfo type only available in Sharepoint enabled applications?" Yes, it has to be a SharePoint enabled app. Person Viewer and Picker controls are available in HTML client only. Thanks, Ravi. Bad news for me, but I appreciate the quick response. It might help others if you make it a little clearer in the "Rich Information" code example that the Email property is only available in Sharepoint enabled apps. I based my entire data design around the idea that a single column with the Person data type could contain a network id, full name, and email address. I probably should have know that was too good to be true. And since Windows authentication is going to be very common in internal LOB apps, that information could affect a large number of people. You noted: "you need to be logged in using the same credentials that were used to log into SharePoint." How does this work when SharePoint is authenticating via Active Directory Federation Services (OnPrem – Not Office 365) Using a TrustedIdentityProvider, and Lync is authenticating to a DirSync'd Office 365 Deployment (Not OnPrem). We are using a Hybrid Environment and I'm not seeing people in my drop down lists. I tried to use fiddler to see any web service calls between the App and SharePoint but I didn't see any traffic at all. Thanks; Jon
https://blogs.msdn.microsoft.com/lightswitch/2013/11/05/using-the-person-business-type-karol-zadora-przylecki-ravi-eda/
CC-MAIN-2017-13
refinedweb
792
66.64
I have a situation where I have a lot of <b> tags: <b>12</b> <b>13</b> <b>14</b> <b></b> <b>121</b> As you can see, the second last tag is empty. When I call: sel.xpath('b/text()').extract() Which gives me: ['12', '13', '14', '121'] I would like to have: ['12', '13', '14', '', '121'] Is there a way to get the empty value? My current work around is to call: sel.xpath('b').extract() And then parsing through each html tag myself (the empty tags are here, which is what I want). This is where it is okay to manually strip the tags and get the text. You can use remove_tags() function provided by w3lib: >>> from w3lib.html import remove_tags >>> map(remove_tags, sel.xpath('//b').extract()) [u'12', u'13', u'14', u'', u'121'] Note that w3lib is a Scrapy dependency and is used internally. No need to install it separately. Also, it would be better to use Scrapy Input and Output Processors here. Continue using sel.xpath('b') and define an input processor. For example, you can define it for specific Fields for the Item class: from scrapy.contrib.loader.processor import MapCompose from scrapy.item import Item, Field from w3lib.html import remove_tags class MyItem(Item): my_field = Field(input_processor=MapCompose(remove_tags))
http://www.dlxedu.com/askdetail/3/4a4c8536def4024e595d9302b02d615d.html
CC-MAIN-2018-47
refinedweb
220
68.97
Lab 7: Recursive Objects Due at 11:59pm on Friday, 10/14/2016. Starter Files Download lab. - To receive credit for this lab, you must complete Questions 1, 2, 3, 4, and 5 in lab07.py and submit through OK. - Questions 6 through 9 are extra practice. They can be found in the lab07_extra.py file. It is recommended that you complete these problems on your own time. Linked Lists A linked list is either an empty linked list ( Link.empty) or a Link object containing a first value and the rest of the linked list. Here is a part of the Link class. The entire class can be found in link.py: class Link: """A linked list. >>> s = Link(1, Link(2, Link(3))) >>> s.first 1 >>> s.rest Link(2, Link(3)) """ empty = () def __init__(self, first, rest=empty): assert rest is Link.empty or isinstance(rest, Link) self.first = first self.rest = rest def __repr__(self): if self.rest is Link.empty: return 'Link({})'.format(self.first) else: return 'Link({}, {})'.format(self.first, repr(selfWPD: Linked Lists Use OK to test your knowledge with the following "What Would Python Display?" questions: python3 ok -q link -u If you get stuck, try drawing out the diagram for the linked list on a piece of paper, or loading the Linkclass into the interpreter with python3 -i lab07.py. >>> from link link.py______<1 2 3 4> Question 2: 4: Remove All Implement a function remove_all that takes a Link, and a value, and remove any. Assume there exists some nodes to be removed and the first element is never removed. >>> l1 = Link(0, Link(2, Link(2, Link(3, Link(1, Link(2, Link(3))))))) >>> print_link(l1) <0 2 2 3 1 2 3> >>> remove_all(l1, 2) >>> print_link(l1) <0 3 1 3> >>> remove_all(l1, 3) >>> print_link(l1) <0 1> """"*** YOUR CODE HERE ***"if link is Link.empty or link.rest is Link.empty: return if link.rest.first == value: link.rest = link.rest.rest remove_all(link, value) else: remove_all(link.rest, value) # alternate solution if link is not Link.empty and link.rest is not Link.empty: remove_all(link.rest, value) if link.rest.first == value: link.rest = link.rest.rest Use OK to test your code: python3 ok -q remove_all Trees (Again) Just like Linked Lists, we can also represent Trees as objects. class Tree: def __init__(self, root, branches=[]): for c in branches: assert isinstance(c, Tree) self.root = root self.branches = branches def __repr__(self): if self.branches: branches_str = ', ' + repr(self.branches) else: branches_str = '' return 'Tree({0}{1})'.format(self.root, branches_str) def is_leaf(self): return not self.branches def print_tree(t, indent=0): """Print a representation of this tree in which each node is indented by two spaces times its depth from the entry. >>> print_tree(Tree(1)) 1 >>> print_tree(Tree(1, [Tree(2)])) 1 2 >>> numbers = Tree(1, [Tree(2), Tree(3, [Tree(4), Tree(5)]), Tree(6, [Tree(7)])]) >>> print_tree(numbers) 1 2 3 4 5 6 7 """ print(' ' * indent + str(t.root)) for b in t.branches: print_tree(b, indent + 1) Question 5: WWPD: Trees Use OK to test your knowledge with the following "What Would Python Display?" questions: python3 ok -q trees -u Hint: Remember for all WWPD questions, enter Functionif you believe the answer is <function ...>and Errorif it errors. >>> from lab07 import * >>> t = Tree(1, Tree(2))______Error>>> t = Tree(1, [Tree(2)]) >>> t.root______1>>> t.branches[0]______Tree(2)>>> t.branches[0].root______2>>> t.root = t.branches[0].root >>> t______Tree(2, [Tree(2)])>>> t.branches.append(Tree(4, [Tree(8)])) >>> len(t.branches)______2>>> t.branches[0]______Tree(2)>>> t.branches[1]______Tree(4, [Tree(8)]) Extra Questions The following questions are for extra practice -- they can be found in the the lab07_extra.py file. It is recommended that you complete these problems on your own time. Motivation: Why linked lists. During lecture, you know that certain operations are faster with linked lists comparing to python-lists. Here you will test out those theories in practice.). Question 6: Reverse Other Write a function reverse_other that mutates the tree such that every other (odd_indexed) level of the tree's roots are all reversed. For example Tree(1,[Tree(2), Tree(3)]) becomes Tree(1,[Tree(3), Tree(2)]) def reverse_other(t): """Reverse the roots of every other level of the tree using mutation. >>> t = Tree(1, [Tree(2), Tree(3), Tree(4)]) >>> reverse_other(t) >>> t Tree(1, [Tree(4), Tree(3), Tree(2)]) >>> t = Tree(1, [Tree(2, [Tree(5, [Tree(7), Tree(8)]), Tree(6)]), Tree(3)]) >>> reverse_other(t) >>> t Tree(1, [Tree(3, [Tree(5, [Tree(8), Tree(7)]), Tree(6)]), Tree(2)]) """"*** YOUR CODE HERE ***"def reverse_helper(t, need_reverse): if t.is_leaf(): return new_labs = [child.root for child in t.branches][::-1] for i in range(len(t.branches)): child = t.branches[i] reverse_helper(child, not need_reverse) if need_reverse: child.root = new_labs[i] reverse_helper(t, True) Use OK to test your code: python3 ok -q reverse_other Question 7: Cumulative Sum Write a function cumulative_sum that mutates the Tree t, where each node's root becomes the sum of all entries in the subtree rooted at the node. def cumulative_sum(t): """Mutates t where each node's root becomes the sum of all entries in the corresponding subtree rooted at t. >>> t = Tree(1, [Tree(3, [Tree(5)]), Tree(7)]) >>> cumulative_sum(t) >>> t Tree(16, [Tree(8, [Tree(5)]), Tree(7)]) """"*** YOUR CODE HERE ***"for st in t.branches: cumulative_sum(st) t.root = sum([st.root for st in t.branches]) + t.root Use OK to test your code: python3 ok -q cumulative_sum Question 8: ***"links = [] while link is not Link.empty: if link in links: return True links.append(link) link = link.rest return False Use OK to test your code: python3 ok -q has_cycle Extra question: This question is not worth extra credit and is entirely optional. It is designed to challenge you to think creatively! Implement has_cycle_constant
http://inst.eecs.berkeley.edu/~cs61a/fa16/lab/lab07/
CC-MAIN-2018-05
refinedweb
1,003
77.74
On Windows/Unix in IntelliJ IDEA it is possible to choose a button on an open dialogue box using the keyboard shortcut ALT + [UNDERLINED_LETTER] where different buttons may have different letters underlined (e.g 2 buttons such as [C]lose and [N]ext c I have problem with libGDX Project setup. It is still failing. ... I've tried it with Android studio and Intellij-Idea too. There is report from console: Generating app in C:\Users\Michal\AndroidStudioProjects\my-gdx-game Executing 'C:\Users\Michal\A I am working on a wsdl file. The schemas that are used by the wsdl reside on a web server. Intellij complains that it cannot find the uri namespace because it cannot reference the schema that is published remotely. If I press alt enter then it downlo I have a query: Date dDateFrom; ... String sql = "select a from tblA where timestamp > ?"; ps = this.connection.prepareStatement(sql); ps.setTimestamp(1, java.sql.Timestamp(dDateFrom.getTime())); And the IntelliJ IDE warn an Error: cannot res I This code works on Netbeans but gives an error on IntelliJ: public static List<String> readFile(String file) throws IOException { Path path = Paths.get("src", file); Stream<String> lines = Files.lines(path); return lines.collect(Coll Calling getClass().getResource("./"); // or getClass().getClassLoader().getResource("./"); from within my JUnit test has different results when executed in Eclipse and IntelliJ IDEA: Eclipse: C:/project/war/WEB-INF/classes/ IntelliJ: C After setting up the project in intellij as a maven project like so .. I set up my pom.xml file with a basic structure ... <project xmlns="" xmlns:xsi="" xsi: I'm having a big problem with android/intellij/gradle that I'm hoping someone can help me with. I have an external library that I wrote and I have it imported into an android project. The external library was first compiled with java 1.8 and I got a How do I build, deploy and debug standalone java app on remote machine with IDEA ? I have remote machine with certain hardware device connected to it. I want to develop standalone Java app on my PC, build it locally but the app should be deployed and I've been working on a project for a little while called (for example) work-example in my work directory. I'm refactoring a bit and want to move the project from ~/code/projects/work/work-example to ~/work/example. However, I've found that I can't mo I do to decrypt and encrypt RSA, I use Cipher.getInstance("RSA/NONE/PKCS1Padding"); for it, and I added Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider()); and compile 'org.bouncycastle:bcprov-jdk16:1.45' to gradle-fi I'm trying to create my first fxml java project and at the initialising stage I'm trying to set cell value factory for the table columnt, for example @FXML private void initialize() { agentId.setCellValueFactory(cellData -> cellData.getValue().getIdP I'm making the switch over from ST3 and I would like to replicate one of the search behaviours that I frequently used in ST3. When I used Sublime Text 3, I would frequently press Ctrl + P to bring up the "Goto Anything" panel. It looks like this In my project, I am extracting the test data to some location which needs to be used by junit tests. When I run my test from eclipse they run fine but when I run them from idea they fail because they could not locate the data. I am extracting data at In a simple IntelliJ module, I just want to generate a .jar file with my .class files, via IntelliJ IDE commands. Please be careful before marking this as a "duplicate": Although I've seen Google and Stack hits with promising titles, I'm not finding I downloaded project from Subversion for 10+ times, imported&reimported maven repositories, downloaded sources, deleted repository dir. org.apache.maven.plugins:maven-install-plugin:2.4:install: org/codehaus/plexus/digest/DigesterException org.co I'm trying to migrate from Goclipse dev environment to IDEA. Could not find a killing feature in golang-idea-plugin, that exists in Goclipse - each GO project automatically adds itself as a GOPATH item for Eclipse session, so I don't have to put it t I modified a file in IntelliJ IDEA that modification not saved on that day since I lost that changes I try to get back that code but its doesn't show any changes on that day and time how to solve this? --------------Solutions------------- What you de I've a multi-module Maven setup with several Flex projects using the Flexmojos Maven plugin. These Flex modules are added as Maven projects to a Intellij project and I want the configuration generated by the flexmojos maven plugin to be used instead
http://www.dskims.com/tag/intellij-idea/
CC-MAIN-2018-22
refinedweb
802
53.81
HTML5 and CSS3: Level Up with Today's Web Technologies 2nd Ed. in print, November PragPub on sale November 06, 2013 Happy birthday to Adolphe Sax, born in 1814 and inventor of—you guessed it—the saxophone. Good thing his last name wasn't Eye. HTML5 and CSS3 aren't really optional these days, but did you get the memo on all the latest features and how to use them correctly? Brian Hogan is here to help with HTML5 and CSS3: Level Up with Today's Web Technologies, 2nd Ed., now in print and shipping from pragprog.com/book/bhh52e. And a new month brings a new issue of PragPub magazine. Get your copy or subscription now from. HTML5 and CSS3: Level Up with Today’s Web Technologies, 2nd Ed. >>IMAGE. Now in print and shipping from pragprog.com/book/bhh52e. PragPub This month PragPub dives into languages old and new, and offers some career and productivity wisdom from experts. The "old" language is AWK, which was created in the 1970s by Alfred Aho, Peter Weinberger, and Brian Kernighan (the name is their initials). So it's not only old but has a heck of a pedigree. Wikipedia will try to tell you that it has been largely supplanted by Perl, but don't be fooled. It may be the most widely available Turing-complete language in existence, and it's actually fun to use. Derrick Schneider recently immersed himself in a study of AWK and shares some nifty AWK tricks for tasks as diverse as analyzing data from an iPhone app, managing podcasts, and checking for null parameters on public methods. The "new" language is Clojure. Michael Bevilaqua-Linn continues his series on the Clojure language with an insightful exploration of Clojure namespaces this month. Henrik Kniberg has some advice on managing technical debt. You start out by realizing that not all technical debt is bad, he says. Portia Tung, author of The Dream Team Nightmare, has something to say to coaches. And Rothman and Lester are back this month with some career advice on when to stay and when to go. Also, your editor shares another excerpt this month from his and Paul Freiberger's upcoming third edition of their seminal history of the personal computer, Fire in the Valley. On the delivery front, you can now subscribe or purchase single issues without going through PayPal, via the Stripe payment infrastructure. Oh, and we have a magazine forum at, but it isn't seeing much use. We have some ideas about how to change that, one of cleverest of which is to invite you to drop in. Please consider yourself invited. Subscriptions and single copies on sale now at.: - Modern C++ Programming with Test-Driven Development - 3D Game Programming for Kids - iPad and iPhone Kung Fu - The Coding Dojo Handbook Thanks for your continued support, Dave & Andy The Pragmatic Programmers Books • eBooks • PragPub Magazine • Audiobooks and Screencasts PragProg.com
https://pragprog.com/news/html5-and-css3-level-up-with-todays-web-technologies-2nd-ed-in-print-november-pragpub-on-sale
CC-MAIN-2017-47
refinedweb
491
63.49
Creating a Data-Bound Grid in C# with ADO.NET Introduction Data access is a core of most applications and an ability to efficiently access and modify a database is required for developers on a regular basis. In this article, you will look at accessing SQL-based data utilizing C# and ADO.NET and displaying the data in a data-bound grid control. ADO.NET Architecture ADO.NET is a framework of classes that allows you to access data and get the necessary data for .NET-based applications. ADO.NET is similar to its predecessor, ADO; however, there are some very important differences in its architecture. ADO.NET is based on XML, is more flexible than ADO, and allows working without maintaining a connection and switching between data sources with little code. - Connection: A starting point to data access; determines how you connect to the data store; requires setting up properties, like ConnectionString, to establish communications to the data store. - Command: Used with stored procedures and running SQL statements - DataReader: Provides a forward-only, read-only stream of data from a given data source. - DataAdapter: Provides a bridge between the source data and the DataSet object to allow retrieving and updating data. Data Access Basics Working with ADO.NET in.NET framework requires using one of the two System.Data namespaces: System.Data.SQLClient or System.Data.OleDB. The choice of the namespace you use will depend on the database you are trying to access. When working with SQL server, System.Data.SQLClient namespace is the best choice. For other database types, you have to use the System.Data.OleDB namespace. Core ADO.NET Namespaces - System.Data: Serves as a basis for others and makes up objects such as DataTable, DataColumn, DataView, and Constraints. - System.Data.Common: Defines generic objects shared by the different data providers that include DataAdapter, DataColumnMapping, and DataTableMapping. It is used by the data providers and contains the collections that are useful for accessing data sources. - System.Data.OleDb: Defines objects that can be used to connect to the data sources and to modify the data in the various data sources. It is written as the generic data provider, and the implementation provided by the .NET Framework in contains the drivers for Microsoft SQL Server, the Microsoft OLE DB Provider for Oracle, and Microsoft Provider for Jet 4.0. The namespace is useful when you need to connect to many different data sources and you want to achieve a better performance than a provider. - System.Data.SqlClient: A data provider namespace created specifically for Microsoft SQL Server version 7.0 and up. The namespace takes advantage of the Microsoft SQL Server APIs directly and offers a better performance than the more generic System.Data.OleDb namespace. - System.Data.SqlTypes: Provides classes for data types specific to Microsoft SQL Server. The namespace is designed specifically for SQL Server and offers better performance. - System.Data.Odbc: Is intended to work with all compliant ODBC drivers. It is available for download from the Microsoft's web site. Start Coding To create an example accessing the data and displaying it in a grid control, first add a data grid control to the form, dataGrid1.Add the following namespaces to your code. using System.Data; using System.Data.OleDb; <code> private void Form1_Load(object sender, System.EventArgs e) { string strConn, strSQL; strConn = "Provider=Microsoft.JET.OLEDB.4.0;" + "data source=" + "C:\DataAccess\Northwind.mdb"; strSQL = "SELECT CompanyName, ContactName, ContactTitle, " ; strSQL = strSQL + "Address, City, Country FROM Customers"; OleDbDataAdapter da = new OleDbDataAdapter(strSQL, strConn); DataSet ds = new DataSet(); da.Fill(ds, "Customers"); dataGrid1.DataMember = "Customers"; dataGrid1.DataSource=ds; } The Result Running the code above will produce the following result: In this example, you display the data from the C:\DataAccess\Northwind.mdb database. You will see only the columns you listed in the select statement. The grid control will provide scrollbars as necessary. How It Works The code example above provides a simple demonstration of getting the data from an Access database onto the data grid using C# and ASP.NET. You define two string variables, strConn (a connection string variable ) and strSQL(a query statement variable). You proceed defining an OleDBDataAdapter object da and passing to it both the query statement (strSQL) and the connection string (strConn). Notice that you are not creating a Connection object in your example because ADO.NET doesn't force you to create one. Then, a dataset ds is defined; it is used to get the actual data from the Customers table onto the form's Data Grid control. You specify dataset's DataMember Property point to the table from which you are getting the data and set the Data Grid control's DataSource property to the DataSet ds. The result is displayed in the Data Grid control on the form. Summary In this example, you looked at the basics of using ADO.NET in C# applications and utilized a Data Grid control to display the data returned by the query. About the Author Irina Medvinskaya has been involved in technology since 1996. Throughout her career, she as developed many client/server and web applications mainly for financial services companies. She works as a Development Manager at Citigroup.
http://www.developer.com/net/csharp/article.php/3670531/Creating-a-Data-Bound-Grid-in-C-with-ADONET.htm
CC-MAIN-2016-22
refinedweb
871
57.77
Details Description I have an entity with a map element collection where the map value is an Embeddable. @Embeddable public class LocalizedString @Entity public class Multilingual. Activity - All - Work Log - History - Activity - Transitions Change in persistent map are detected by OpenJPA when the the put/remove method is called. In the example, the get method is called to retrieve the value. Currently there is no way for the persistent map to know whether the value is modified or not. A new mechanism needs to be added to detect the change. Closing issue which has been resolved for some time. If you believe the issue is not resolved please reopen or open a new issue. The attachment MapUpdate.patch contains a unit test exhibiting the problem. A test which updates a map key passes. Another test which only update the map value fails.
https://issues.apache.org/jira/browse/OPENJPA-1784
CC-MAIN-2017-13
refinedweb
142
59.6
So I'm starting to learn Python (3.43) code on my own time and my friend who is a major in Comp Sci made up a little assignment for me to try to complete although I have no class experience. So what I need to do is make some code where the user is prompted to input how many scores to be entered and for hose to be inputed on each line which i have successfully completed. The hard part is figuring out how to make the min, max, and avg work into this. This is my cide thus far: def scores(): print('we are starting') count = int(input('Enter amount of scores: ')) print('Each will be entered one per line') scoreList = [] for i in range(1, count+1): scoreList.append(int(input('Enter score: '))) print(scoreList) print(scoreList) print('thank you the results are:') mysum = sum(count) # mysum needs to be a float average = 1.0*mysum / count print ('Total: ', str(count)) print ('Average: ', str(average)) print ('Minimum: ', str(min(count))) print ('Maximum: ', str(max(count))) scores() While a direct answer is helpful, I'd really like to understand what it is I may be doing wrong or some pointers on how to make it better. I understand my code skills are elementary but I'm trying to grasp the basics without paying a large amount for a class. I do however plan on taking a course once I can grasp the basics. thank you!
https://www.daniweb.com/programming/software-development/threads/502928/new-to-python-calculating-minimum-maximum-and-average-of-a-input-list
CC-MAIN-2017-30
refinedweb
247
63.43
Greg Ewing wrote: > Zooko <zooko@zooko.com>: > > >>Now what I would *like* is that instead of doing "import os" to load code, >>instead the caller provides, or doesn't provide the os module as part of the >>construction/invocation of A. >> >>I don't have a clear idea yet of how that could be implemented in a >>Pythonic, compatible way. > > > Maybe, instead of there being one ultra-global namespace for importing > modules from, it should be part of a function's environment. By > default a function invocation would inherit the "import environment" > of it's caller, but the caller could override this to provide a more > restricted environment. Inheriting things is not the capability way. Passing capabilities that allow imports is, of course, but isn't very Pythonic. I'm not sure there's a neat way to fix this that keeps both camps happy. > This would be equivalent to passing in a set of allowable > modules as an implicit parameter to every call. Making it explicit would make me happy. Can you pass parameters to an import? Cheers, Ben. -- "There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit." - Robert Woodruff
https://mail.python.org/pipermail/python-dev/2003-March/034171.html
CC-MAIN-2019-18
refinedweb
207
63.39
21 October 2008 15:43 [Source: ICIS news] LONDON (ICIS news)--NYMEX light sweet crude futures fell more than $3/bbl on Tuesday to take the front month contract down to $71/bbl on the back of gains in the US dollar.?xml:namespace> By 14:15 GMT, November NYMEX crude, which is set to expire at the end of the day, had hit a low of $71.00/bbl, a loss of $3.25/bbl from Monday’s close of $74.25/bbl, before recovering to around $72.05/bbl. At the same time, December Brent crude on ?xml:namespace> This latest dip was also due to the stock markets trading in negative territory. Oil futures lately have been tracking the equity markets, which have been seen to represent the degree of confidence in the global economy, or lack of it. Meanwhile, OPEC was scheduled to discuss the impact of the global economic weakness in oil demand later this week, with various members calling for a substantial cut in output without destabilising the
http://www.icis.com/Articles/2008/10/21/9165286/US-crude-falls-3bbl-as-dollar-gains.html
CC-MAIN-2015-14
refinedweb
174
62.27
jawtheshark's Journal: C advice needed 12 As readers of my troll account may have noticed, I'm currently doing some C on Linux for a client. Now, that's not really a problem, even though I haven't done C in ages. After a week doing just that, I feel quite comfy and my vi, a few terminals and the man pages make me happy. One thing that worries me are memory leaks. Now, I have been quite diligent in avoiding them but I #include <stdio.h> #include <stdlib.h> #include <strings.h> int main( int argc, char **argv ) { char* leak = malloc( 255 ); leak[0] = (char)NULL; if ( ( argv[1] != NULL ) && ( strcmp( argv[1], "-t" ) ) ) { printf( "FAILURE\n" ); return ( EXIT_FAILURE ); } free( leak ); printf( "SUCCESS\n" ); return ( EXIT_SUCCESS ); } Evidently, this is just to illustrate something. (Never mind that I don't check if malloc returns NULL for example, I normally do check that stuff) As you can see, a memory leak would occur at return( EXIT_FAILURE ); if this were a normal function. However, it isn't: it's main and doing a return( EXIT_FAILURE ); means the program will stop and to my understanding, the operating system will reclaim all used memory in the first place. What do you think about this construct? Bad? Acceptable? Just plain wrong? Perhaps, I'm not even right that the operating system will reclaim all memory (I doubt that though). As said, I've been doing Java for ages. Things like this escape the mind if you haven't done it in a long time. Good practice to clean up your own memory (Score:3, Informative) Even though the OS will clean up after you, it's still a good idea to clean up after yourself. There is no absolute guarantee that subsequent versions of the kernel (or new operating systems) will do garbage collection, so the developer should either clean up the memory allocated or use a C/C++ garbage collector library to take care of such things. HTH. Re: (Score:1) It's just so much more verb Re: (Score:2) Not necessarily... Here is a link to a garbage collection library for C [hp.com]. I have not used it outside of Digital Mars C/C++, but it had a decent reputation and it's free. Hopefully this will save you a bunch of work. Re: (Score:1) Re: (Score:2) False. An OS that doesn't clean up after a dead process won't last long. I'm not saying that cleaning up after yourself isn't a good idea anyway, but if you're exiting the program, you can rely on the OS to do it for you. Oh, and Jorg... use Valgrind [valgrind.org]. It's an invaluable tool when writing code in C. Re: (Score:1) Well, I did look up on wikipedia and AmigaOS didn't clean up. ;-) I knew about Valgrind, I just don't know if I'll have enough time to learn it and investigate problems. I like to be very thorough, but this client has very strict deadlines. Re: (Score:2) Link please. I'm fairly sure it did... I knew about Valgrind, I just don't know if I'll have enough time to learn it and investigate problems. The only thing you need to know is valgrind --leak-check=full your_application. Then just fix any problems it tells you about. Re: (Score:1) Re: (Score:2) You'd be surprised, actually. And, there's a good number of DOS machines stilol doing embedded stuff out there, and these are not very robust for stuff like that. While I agree on a general level, my point is that you really cannot depend on the OS all the time. You should be able to, especially with modern OSes, but that's just not always the case. Re: (Score:1) Re: (Score:2) Re: (Score:1) Because, as always, when I came there, no information was available on the project except it would be "Unix". So, I just picked one and hoped for the best. If I've got time spare, I'll install Solaris. Currently, I run Windows and have a Linux install in a VMWare. It's the company machine, I haven't as much freedoms as at home.
http://slashdot.org/~jawtheshark/journal/196276
CC-MAIN-2014-52
refinedweb
717
82.04
Setting up the Reveal SDK Server Step 1 - Create a New ASP.NET Core Web API The steps below describe how to create a new ASP.NET Core Web API project. If you want to add the Reveal SDK to an existing application, go to Step 2. 1 - Start Visual Studio 2019 and click Create a new project on the start page, select the ASP.NET Core Web API template, and click Next. 2 - Provide a project name and set the location to the server directory we created earlier, and click Next. 3 - Choose your framework, authentication type, and Docker options, and then click Create. Step 2 - Add Reveal SDK 1 - Right click the Solution, or Project, and select Manage NuGet Packages for Solution. 2 - In the package manager dialog, open the Browse tab, select the Infragistics (Local) package source, and install the Reveal.Sdk.Web.AspNetCore NuGet package into the project. Note If you are a trial user, you can install the Reveal.Sdk.Web.AspNetCore.Trial NuGet package found on NuGet.org. 3 - Open and modify the Program.cs file to add the namespace using Reveal.Sdk;. Then, add the call to IMcvBuilder.AddReveal() to the existing builder.Services.AddControllers() method as follows: using Reveal.Sdk; builder.Services.AddControllers().AddReveal(); Step 3 - Create the Dashboards Folder 1 - Right-click the project and select Add -> New Folder. The folder MUST be named Dashboards . By default, the Reveal SDK uses a convention that will load all dashboards from the Dashboards folder. You can change this convention by creating a custom IRVDashboardProvider. You can learn more about this in the Loading Dashboards topic. Step 4 - Setup CORs Policy (Debugging) While developing and debugging your application, it is common to host the server and client app on different URLs. For example; your Server my be running on, while your Angular app may be running on. If you were to try and load a dashboard from the client application, it would fail because of ASP.NET Core's Cross-Origin Requests (CORs) security policy. To enable this scenario, you must create a CORs policy and enable it in the server project. 1 - Open and modify the Program.cs file to create a CORs policy which will allow any origin (url) access to any headers and methods. builder.Services.AddCors(options => { options.AddPolicy("AllowAll", builder => builder.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod() ); }); 2 - Apply the policy only while in debug mode. If you have a production application then you would apply the appropriate policy for your production builds. if (app.Environment.IsDevelopment()) { app.UseCors("AllowAll"); } It's important to understand the order in which the middleware executes. The UseCors must be called in a specific order. In this example after UseHttpsRedirection() and before UseAuthorization(). For more information, please refer to this Microsoft help topic Next Steps:
https://help.revealbi.io/en/web/getting-started-server.html
CC-MAIN-2022-33
refinedweb
469
51.24
. That is more or less our current mission. If this mission leads to QEMU creating a non-libvirt based API & telling people to use that instead, then I'd say libvirt's mission needs to change to avoid that scenario ! I strongly believe that libvirt's strategy is good for application developers over the medium to long term. We need to figure out how to get rid of the short term pain from the feature timelag, rather than inventing a new library API for them to use. >. Stepping back a bit first, there are the two core areas in which people can be limited by libvirt currently. 1. Monitor commands 2. Command line flags Ultimately, IIUC, you are suggesting we need to allow arbitrary passthrough for both of these in libvirt. At the libvirt level, we have 3 core requirements 1. The XML format is extend only (new elements allowed, or add attributes or children to existing elements) 2. The C library API is append only (new symbols only) 3. The RPC wire protocol is append only (maps 1-1 to the C API generally) The core question for us as libvirt developers is how we could support QEMU specific features that may change arbitrarily, without it impacting on our ability to maintain these 3 requirements for the non-hypervisor specific APIs. We don't ever want to be in a situation where a QEMU specific API will require us to change the soname of the main libvirt library, or introduce incompatible wire protocol changes. If we were to introduce QEMU specific APIs, we also need a way to easily remove those over time, as & when we have them available as generic APIs. At the C API level, this to me suggests that we'd want to introduce a separate libvirt-qemu.so library for the QEMU specific APIs. This library would not have the same requirements of fixed long term ABI that the main libvirt.so did. We'd add QEMU APIs to libvirt-qemu.so any time needed, but remove them when the equivalent functionality were in libvirt.so, and increment the soname of libvirt-qemu.so at that point. At the wire protocol level, the protocol allows us to support multiple versioned protocols in parallel over the same data stream. So again there, we could define a sub-protocol for QEMU specific features for which we don't provide the indefinite ABI compatability. Finally the XML format is "easy" - just have a versioned XML namespace for extra pieces, that's distinct from the default namespace, again without the permanent long term compatability guarentees. There are, however, some bits that are unlikely to work when QEMU is under libvirt. Specifically any of the device backends that use stdio (eg, -serial stdio, or the ncurses graphics), simply because all libvirt spawned VMs are fully daemonized & so stdio is /dev/null Other items are hard, but not entirely impossible to solve. eg, any use of the 'script=' arg for -net devices doesn't work, because libvirt clears all capabilities from the QEMU process so it'll be lacking CAP_NET_ADMIN which most TAP device setup scripts in fact need. Some parts of the C library/wire protocol here are related to another feature I'd like to introduce for libvirt, namely a administrative library. eg a API to configure and manage the libvirtd daemon itself on the fly. This could easily hook into the wire protocol, but live as a separate libvirt-daemon.so library API in similar way to what I suggest for QEMU specific API > :|
https://www.redhat.com/archives/libvir-list/2010-March/msg00924.html
CC-MAIN-2015-11
refinedweb
596
60.65
Contents: UNIX Manual Page Gateway Mail Gateway Relational Databases Search/Index Gateway Imagine a situation where you have an enormous amount of data stored in a format that is foreign to a typical web browser. And you need to find a way to present this information on the Web, as well as allowing potential users to search through the information. How would you accomplish such a task? Many information providers on the Web find themselves in situations like this. Such a problem can be solved by writing a CGI program that acts as a gateway between the data and the Web. A simple gateway program was presented in Chapter 7, Advanced Form Applications. The pie graph program can read the ice cream data file and produce a graph illustrating the information contained within it. In this chapter, we will discuss gateways to UNIX programs, relational databases, and search engines. Manual pages on a UNIX operating system provide documentation on the various software and utilities installed on the system. In this section, I will write a gateway that reads the requested manual page, converts it to HTML, and displays it (see Figure 9.1). We will let the standard utility for formatting manual pages, nroff, do most of the work. But this example is useful for showing what a little HTML can do to spruce up a document. The key technique you need is to examine the input expected by a program and the output that it generates, so that you can communicate with it. Here is the form that is presented to the user: <HTML> <HEAD><TITLE>UNIX Manual Page Gateway</TITLE></HEAD> <BODY> <H1>UNIX Manual Page Gateway</H1> <HR> <FORM ACTION="/cgi-bin/manpage.pl" METHOD="POST"> <EM>What manual page would you like to see?</EM> <BR> <INPUT TYPE="text" NAME="manpage" SIZE=40> <P> <EM>What section is that manual page located in?</EM> <BR> <SELECT NAME="section" SIZE=1> <OPTION SELECTED>1 <OPTION>2 <OPTION>3 <OPTION>4 <OPTION>5 <OPTION>6 <OPTION>7 <OPTION>8 <OPTION>Don't Know </SELECT> <P> <INPUT TYPE="submit" VALUE="Submit the form"> <INPUT TYPE="reset" VALUE="Clear all fields"> </FORM> <HR> </BODY></HTML> This form will be rendered as shown in Figure 9.2. On nearly all UNIX systems, manual pages are divided into eight or more sections (or subdirectories), located under one main directory-usually /usr/local/man or /usr/man. This form asks the user to provide the section number for the desired manual page. The CGI program follows. The main program is devoted entirely to finding the right section, and the particular manual page. A subroutine invokes nroff on the page to handle the internal nroff codes that all manual pages are formatted in, then converts the nroff output to HTML. #!/usr/local/bin/perl $webmaster = "Shishir Gundavaram (shishir\@bu\.edu)"; $script = $ENV{'SCRIPT_NAME'}; $man_path = "/usr/local/man"; $nroff = "/usr/bin/nroff -man"; The program assumes that the manual pages are stored in the /usr/local/man directory. The nroff utility formats the manual page according to the directives found within the document. A typical unformatted manual page looks like this: .TH EMACS 1 "1994 April 19" .UC 4 .SH NAME emacs \- GNU project Emacs .SH SYNOPSIS .B emacs [ .I command-line switches ] [ .I files ... ] .br .SH DESCRIPTION .I GNU Emacs is a version of .I Emacs, written by the author of the original (PDP-10) .I Emacs, Richard Stallman. .br . . . Once it is formatted by nroff, it looks like this: EMACS(1) USER COMMANDS EMACS(1) NAME emacs - GNU project Emacs SYNOPSIS emacs [ command-line switches ] [ files ... ] DESCRIPTION GNU Emacs is a version of Emacs, written by the author of the original (PDP-10) Emacs, Richard Stallman. . . . Sun Release 4.1 Last change: 1994 April 19 1 Now, let's continue with the program to see how this information can be further formatted for display on a web browser. $last_line = "Last change:"; The $last_line variable contains the text that is found on the last line of each page in a manual. This variable is used to remove that line when formatting for the Web. &parse_form_data (*FORM); ($manpage = $FORM{'manpage'}) =~ s/^\s*(.*)\b\s*$/$1/; $section = $FORM{'section'}; The data in the form is parsed and stored. The parse_form_data subroutine is the one used initially in the last chapter. Leading and trailing spaces are removed from the information in the manpage field. The reason for doing this is so that the specified page can be found. if ( (!$manpage) || ($manpage !~ /^[\w\+\-]+$/) ) { &return_error (500, "UNIX Manual Page Gateway Error", "Invalid manual page specification."); This block is very important! If a manual page was not specified, or if the information contains characters other than (A-Z, a-z, 0-9, _, +, -), an error message is returned. As discussed in Chapter 7, Advanced Form Applications, it is always important to check for shell metacharacters for security reasons. } else { if ($section !~ /^\d+$/) { $section = &find_section (); } else { $section = &check_section (); } If the section field consists of a number, the check_section subroutine is called to check the specified section for the particular manual page. If non-numerical information was passed, such as "Don't Know," the find_section subroutine iterates through all of the sections to determine the appropriate one. In the regular expression, "\d" stands for digit, "+" allows for one or more of them, and the "^" and "$" ensure that nothing but digits are in the string. To simplify this part of the search, we do not allow the "nonstandard" subsections some systems offer, such as 2v or 3m. Both of these search subroutines return values upon termination. These return values are used by the code below to make sure that there are no errors. if ( ($section >= 1) && ($section <= 8) ) { &display_manpage (); } else { &return_error (500, "UNIX Manual Page Gateway Error", "Could not find the requested document."); } } exit (0); The find_section and check_section subroutines called above return a value of zero (0) if the specified manual page does not exist. This return value is stored in the section variable. If the information contained in section is in the range of 1 through 8, the display_manpage subroutine is called to display the manual page. Otherwise, an error is returned. The find_section subroutine searches for a particular manual page in all the sections (from 1 through 8). sub find_section { local ($temp_section, $loop, $temp_dir, $temp_file); $temp_section = 0; for ($loop=1; $loop <= 8; $loop++) { $temp_dir = join("", $man_path, "/man", $loop); $temp_file = join("", $temp_dir, "/", $manpage, ".", $loop); find_section searches in the subdirectories called "man1," "man2," "man3," etc. And each manual page in the subdirectory is suffixed with the section number, such as "zmore.1," and "emacs.1." Thus, the first pass through the loop might join "/usr/local/man" with "man1" and "zmore.1" to make "/usr/local/man/ man1/zmore.1", which is stored in the $temp_file variable. if (-e $temp_file) { $temp_section = $loop; } } The -e switch returns TRUE if the file exists. If the manual page is found, the temp_section variable contains the section number. return ($temp_section); } The subroutine returns the value stored in $temp_section. If the specified manual page is not found, it returns zero. The check_section subroutine checks the specified section for the particular manual page. If it exists, the section number passed to the subroutine is returned. Otherwise, the subroutine returns zero to indicate failure. Remember that you may have to modify this program to reflect the directories and filenames of manual pages on your system. sub check_section { local ($temp_section, $temp_file); $temp_section = 0; $temp_file = join ("", $man_path, "/man", $section, "/", $manpage, ".", $section); if (-e $temp_file) { $temp_section = $section; } return ($temp_section); } The heart of this gateway is the display_manpage subroutine. It does not try to interpret the nroff codes in the manual page. Manual page style is complex enough that our best bet is to invoke nroff, which has always been used to format the pages. But there are big differences between the output generated by nroff and what we want to see on a web browser. The nroff utility produces output suitable for an old-fashioned line printer, which produced bold and underlined text by backspacing and reprinting. nroff also puts a header at the top of each page and a footer at the bottom, which we have to remove. Finally, we can ignore a lot of the blank space generated by nroff, both at the beginning of each line and in between lines. The display_manpage subroutine starts by running the page through nroff. Then, the subroutine performs a few substitutions to make the page look good on a web browser. sub display_manpage { local ($file, $blank, $heading); $file = join ("", $man_path, "/man", $section, "/", $manpage, ".", $section); print "Content-type: text/html", "\n\n"; print "<HTML>", "\n"; print "<HEAD><TITLE>UNIX Manual Page Gateway</TITLE></HEAD>", "\n"; print "<BODY>", "\n"; print "<H1>UNIX Manual Page Gateway</H1>", "\n"; print "<HR><PRE>"; The usual MIME header and HTML text are displayed. open (MANUAL, "$nroff $file |"); A pipe to the nroff program is opened for output. Whenever you open a pipe, it is critical to check that there are no shell metacharacters on the command line. Otherwise, a malicious user can execute commands on your machine! This is why we performed the check at the beginning of this program. $blank = 0; The blank variable keeps track of the number of consecutive empty lines in the document. If there is more than one consecutive blank line, it is ignored. while (<MANUAL>) { next if ( (/^$manpage\(\w+\)/i) || (/\b$last_line/o) ); The while loop iterates through each line in the manual page. The next construct ignores the first and last lines of each page. For example, the first and last lines of each page of the emacs manual page look like this: EMACS(1) USER COMMANDS EMACS(1) . . . Sun Release 4.1 Last change: 1994 April 19 1 This is unnecessary information, and therefore we skip over it. The if statement checks for a string that does not contain any spaces. The previous while statement stores the current line in Perl's default variable, $_. A regular expression without a corresponding variable name matches against the value stored in $_. if (/^([A-Z0-9_ ]+)$/) { $heading = $1; print "<H2>", $heading, "</H2>", "\n"; All manual pages consist of distinct headings such as "NAME," "SYNOPSIS," "DESCRIPTION," and "SEE ALSO," which are displayed as all capital letters. This conditional checks for such headings, stores them in the variable heading, and displays them as HTML level 2 headers. The heading is stored to be used later on. } elsif (/^\s*$/) { $blank++; if ($blank < 2) { print; } If the line consists entirely of whitespace, the subroutine increments the $blank variable. If the value of that variable is greater than two, the line is ignored. In other words, consecutive blank lines are ignored. } else { $blank = 0; s//&/g if (/&/); s//</g if (/</); s//>/g if (/>/); The blank variable is initialized to zero, since this block is executed only if the line contains non-whitespace characters. The regular expressions replace the "&", "<", and ">" characters with their HTML equivalents, since these characters have a special meaning to the browser. if (/((_\010\S)+)/) { s//<B>$1<\/B>/g; s/_\010//g; } All manual pages have text strings that are underlined for emphasis. The nroff utility creates an underlined effect by using the "_" and the "^H" (Control-H or \010) characters. Here is how the word "options" would be underlined: _^Ho_^Hp_^Ht_^Hi_^Ho_^Hn_^Hs The regular expression in the if statement searches for an underlined word and stores it in $1, as illustrated below. This first substitution statement adds the <B> .. </B> tags to the string: <B>_^Ho_^Hp_^Ht_^Hi_^Ho_^Hn_^Hs</B> Finally, the "_^H" characters are removed to create: <B>options</B> Let's modify the file in one more way before we start to display the information: if ($heading =~ /ALSO/) { if (/([\w\+\-]+)\((\w+)\)/) { s//<A HREF="$script\?manpage=$1§ion=$2">$1($2)<\/A>/g; } } Most manual pages contain a "SEE ALSO" heading under which related software applications are listed. Here is an example: SEE ALSO X(1), xlsfonts(1), xterm(1), xrdb(1) The regular expression stores the command name in $1 and the manpage section number in $2, as seen below. Using this regular expression, we add a hypertext link to this program for each one of the listed applications. The query string contains the manual page title, as well as the section number. The program continues as follows: print; } } print "</PRE><HR>", "\n"; print "</BODY></HTML>", "\n"; close (MANUAL); } Finally, the modified line is displayed. After all the lines in the file-or pipe-are read, it is closed. Figure 9.3 shows the output produced by this application. This particular gateway program concerned itself mostly with the output of the program it invoked (nroff). You will see in this chapter that you often have to expend equal effort (or even more effort) fashioning input in the way the existing program expects it. Those are the general tasks of gateways.
http://doc.novsu.ac.ru/oreilly/web/cgi/ch09_01.htm
CC-MAIN-2018-05
refinedweb
2,162
55.03
David Woodhouse wrote:> > prumpf@mandrakesoft.com said:> > Why bother ? It looks like a leftover debugging message which> > doesn't make a lot of sense once the code is stable (what might make> > sense is checking keventd is still around, but that's not what the> > code is doing).keventd *must* still be around.And the code obviously isn't completely stable, and this debugmessage has found something rather unpleasant.I don't think we should run the init tasks when keventd may, ormay not be running. Sure, the current code does, by happenstance,all work correctly when keventd hasn't yet started running, andwhen it's starting up. But it's safer, saner and surer justto crank the damn thing up before proceeding.> > Proposed patch:> > > dwmw2 ?> > Don't look at me. I didn't like the current_is_keventd stuff very much> in the first place. akpm?Heh. Tell that to wakeup_bdflush().The all-singing fully-async + fully-sync + async with callbackpatch was dropped, and until we can demonstrate that itfixes a bug, it can stay dropped. I think it _will_ fixa bug, but the development of userspace hotplug infrastructurehasn't reached the stage where the need for a kernel fixhas been proven.I believe the right thing to do here is the RMK approach.Here's a faintly tested patch.--- linux-2.4.2-pre4/kernel/context.c Sat Jan 13 04:52:41 2001+++ lk/kernel/context.c Mon Feb 19 23:33:38 2001@@ -19,6 +19,7 @@ #include <linux/init.h> #include <linux/unistd.h> #include <linux/signal.h>+#include <asm/semaphore.h> static DECLARE_TASK_QUEUE(tq_context); static DECLARE_WAIT_QUEUE_HEAD(context_task_wq);@@ -26,19 +27,9 @@ static int keventd_running; static struct task_struct *keventd_task; -static int need_keventd(const char *who)-{- if (keventd_running == 0)- printk(KERN_ERR "%s(): keventd has not started\n", who);- return keventd_running;-}- int current_is_keventd(void) {- int ret = 0;- if (need_keventd(__FUNCTION__))- ret = (current == keventd_task);- return ret;+ return (current == keventd_task); } /**@@ -57,13 +48,12 @@ int schedule_task(struct tq_struct *task) { int ret;- need_keventd(__FUNCTION__); ret = queue_task(task, &tq_context); wake_up(&context_task_wq); return ret; } -static int context_thread(void *dummy)+static int context_thread(void *sem) { struct task_struct *curtask = current; DECLARE_WAITQUEUE(wait, curtask);@@ -85,6 +75,7 @@ siginitset(&sa.sa.sa_mask, sigmask(SIGCHLD)); do_sigaction(SIGCHLD, &sa, (struct k_sigaction *)0); + up((struct semaphore *)sem); /* * If one of the functions on a task queue re-adds itself * to the task queue we call schedule() in state TASK_RUNNING@@ -146,9 +137,11 @@ remove_wait_queue(&context_task_done, &wait); } -int start_context_thread(void)+int __init start_context_thread(void) {- kernel_thread(context_thread, NULL, CLONE_FS | CLONE_FILES);+ DECLARE_MUTEX_LOCKED(sem);+ kernel_thread(context_thread, &sem, CLONE_FS | CLONE_FILES);+ down(&sem); return 0; } --To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
https://lkml.org/lkml/2001/2/19/21
CC-MAIN-2016-30
refinedweb
457
55.34
Overview Search is complex. Nutch makes it easier. So you start off by installing Nutch. Now you are a pretty good developer. You get Nutch up and running without problems and you think it is a pretty neat piece of software. In fact you like it so much that you want to start adding to it. You want to develop new features and contribute them back to the community. But then it dawns on you. How does one contribute to an open source project like Nutch. You have never worked on an open source project before and don't really know how the entire process works. That is where this document comes in. The purpose of this document is to help you as developers take the next step in becoming contributing members of the Nutch community. We will cover a general overview of the Nutch development process including the different pieces and how they fit together. We will cover how the community works and interacts, using the mailing lists to search for information and how to ask questions to ensure that they get answered. We will also cover how to go about learning the internals of the Nutch codebase. We will cover how to use the JIRA for change requests and how to start developing for Nutch. And finally we will cover contributing back to the Nutch community. When we are finished you should have a good understanding of how the community works and how you can go about becoming a bigger part of that community. Table of Contents Contents - Overview - Table of Contents - The Nutch Community - Becoming a Nutch Developer - Becoming a Nutch Committer - Conclusion The Nutch Community Nutch Development Roles There are three main roles that a person can play in the Nutch community. The first role is that of user. This is someone who uses the Nutch software but is not active in its development. People in this category range from the curious programmer who wants to learn more about search technology to corporations setting up search on their local intranet. If you only want to use the Nutch software and don't want to help develop it, you can still be a contributing member of the community. By using the software and pushing the limits of what it can do, by filing bug reports and feature requests (more about how to do this later), by working with developers to track down issues, or just giving your input to discussions that arise, you can help the Nutch project become better and better. The second role is that of developer. This is someone who has used the Nutch software and has taken the next step to help program the underlying software. Since you are reading this document, it is assumed that you want to be a developer. Helping to develop the Nutch codebase can come in the form of bug fixes or by developing completely new features from scratch. An important thing to remember is that unlike most software development at big companies, you don't need anyone's permission to start developing software for Nutch. If you think you have a good idea for a feature, or if you want to track down and fix bugs in the software, go for it. If you want a specific piece of functionality, don't wait for someone else to develop it. Take the time to learn and develop it yourself. Then when you are done give it back to the community. This is how the Nutch project has been developed so far and how it will continue to be developed in the future. The community is a do-ocrcy, meaning those who do the work get to help set the directions and make the decisions. Communication is essential but not limiting. Anybody can become a developer simply by taking the initiative, whether in the form of fixes or functionality, for inclusion in the project. The third role is that of committer. This is usually a developer who has been working with the project for some time. Someone who has developed new pieces of functionality, who has fixed bugs, and who has helped others through answering questions and providing guidance to others through the mailing lists and wiki. In other words this is a person who has proved their commitment and usefulness to the project and in return are given commit access to the source code repository, an apache email address, and the ability to help make strategic decisions for the project by determining what submissions and bug fixes make it into the source code repository and release versions of the software. How the Community Works The community works together through shared mailing lists, email, wikis, bug tracking systems, and source code repositories. These tools when used together provide a virtual meeting room and workspace for all members of the community. Mailing Lists and Email Most communication is done through email and the mailing lists. Because of this the first thing that any person should do to become part of the Nutch community is to join the appropriate mailing lists. There are four different mailing lists. First there is the users mailing list. Contrary to the name this list is not just for users. If you have questions about the Nutch software including installation, configuration, bugs, errors, or general information, this is the list for you. Second is the dev mailing list. This is where most development communication occurs including updates to request tracking systems. This is also where developers can pose ideas for new functionality to see if someone is already working on such features or just to get general feedback. The dev mailing list is important for tracking functionality that other developers may be working on and consensus by the community on desired direction of new features. The third list is the commit mailing lists. This list tracks commits to the source code repository and changes in the wiki pages. The fourth list is the agents list. This is where webmasters and other people can post comments or questions about the Nutch crawler. Users can get by with subscribing to only the users mailing list. Developers should subscribe to all four mailing lists. Anybody doing internet crawls needs to be subscribed to the agents list. In order to post to any list, for example to ask a questions, it is necessary to first be subscribed to that list. Below are links for subscribing to the different mailing lists. JIRA and Issue/Request Tracking If mailing lists provide the ongoing conversation for the community, the issue/request tracking system provides a repository for the current state of the project. The request tracking system is JIRA system and it can be accessed at this address. The JIRA system is the central repository for all work wanting to be included int the Nutch source code base. The system tracks issues and feature requests by component, by version, and by status. You can view what requests are assigned to what person, what requests are currently being worked on, and which ones haven't been scheduled. You can search all requests by keyword or by various categories and filters. We will go into detail later on how to use the JIRA system to propose new functionality and submit bug fixes. For now understand this: If you are going to be a developer you will need to understand how to use the JIRA system as this is where you will propose new functionality, submit bug fixes, give you input on features other developers may be working on, and coordinate actions with other developers on specific pieces of functionality. The address to signup for JIRA was given above. Once you have signed up you will have access to all of the Apache JIRA repository, not just the Nutch project. Source Code Control through Subversion Source code control is very important to open source projects. Nutch uses the apache subversion repository for it source control. As a developer you will want to get into the habit of downloading and updated your development environment directly from the subversion repository. We will go into detail about how to do this later. There are two types of logins to the repository, users and committers. Users can download the repository but cannot make changes directly to the repository. You can make changes on your local system and those changes can be submitted to the JIRA system. Committers hold the committer role that we discussed previously. These individuals can make changes directly to the subversion repository and are responsible for take patches from the JIRA system and applying them to subversion where they then become available to all users. Wiki and Documentation The weakest part of most open source projects is their documentation and Nutch is no exception. Wikis are special web pages like the one that you are reading that allows users to directly edit text on the page and to create new pages. The wiki provides various tutorials and documentation for Nutch. Links to view the Nutch wiki and to register for the wiki are provided below. As a developer one of the ways you can contribute back to the community is by documenting your hard won experience on the wiki. You can do this in the form of tutorials, articles, or simple notes and instructions. Anything that you have learned may be of use to other developers. The wiki is also used as a virtual white board to help document general themes and directions for the project. These four tools, mailing lists and email, JIRA, Wikis, and Subversion together provide the community ways to coordinate their actions and conversations. As a developer you will need to understand each of these tools and how you will use them in the development process. Next Steps The rest of this document will cover four steps in becoming a Nutch developer. First is using the mailing lists to communicate, find information, and get solutions to problems. Second is how to go about learning the different parts of the Nutch codebase. Third is how to use the JIRA to search for bugs and coordinate development efforts. And four is how to use the wiki to help grow the community knowledge base. Becoming a Nutch Developer Step One: Using the Mailing Lists The most used tool in Nutch development is by far the mailing lists. We have already explained the various mailing lists and their uses. This section will provide general guidance on using the mailing lists to find information and get questions answered. First and foremost, any person wanting to become a Nutch developer should start reading the user, dev, and commits lists on a daily basis. To start out with simply read the questions that other users ask on the users list. As you begin delving into the Nutch codebase more and more you will be able to answer some of the questions that other users have. One of the best ways to learn Nutch is by daily taking one question that is asked on the users list and seeing if you can find the answer. As a developer you will want to keep up with the current state of development on the project and this is where the dev list comes in. The dev list is where JIRA messages will be delivered every time a JIRA request is updated. By following this list you will see discussions between other developers about various bugs and feature requests. You may also see an information voting occuring. If you feel you can contribute to one of the discussions, by all means add your input. You will also want to keep up to date on the commits list to see what code has been committed to the subversion repository and what updates have been made to the wiki. Don't think that you have to be an expert on Nutch to begin answering questions. If you think you can help another user on the mailing list, don't be afraid to go and do it. I think that it was Eric Raymond that said "Given enough eyeballs, all bugs are shallow". What this means is that you bring a unique perspective to the world as does everyone else and you may find bugs that no one else can find. The more people we have looking at the code and improving upon it, the more stable and robust it will become. The more people we have in the community constantly communicating, asking questions and helping each other, the better we all become. As you start to have questions about the configuration and operation of Nutch or about errors that you have recieved, go ahead and ask these questions on the user list. When asking questions it is best to provide a descritive subject and detailed information about the problem or question. Detailed information would include snippets of log or configuration files and a good description of the problem. In general the more specific information you can provide, the easier it is for other users and developers to help. General or abstract questions tend to be ignored. For example I have seen messages on the list like this before that were completely ignored. A Bad Email Subject: Problem with crawling I am having a problem with crawling the internet. It just seems that it is taking a long time. Does anyone know why crawling takes so long. Me With this type of question other users would have no idea what the problem is or how to help and therefore most simply ignore the question and move on. On the other hand here is better example of asking questions. A Good Email Subject: Crawl on 20K pages taking 4 hours I am using Nutch .8 branch over a cluster of 3 machines each running redhat linux and java 1.5_10 with 500G hard drives, 2.8 Ghz processors, and 2G of ram. I am trying to fetch 20K urls and the fetch process completes fine but when it gets to the reduce process, the cpus go to 100% and the process seems to spin indefinitely. I did a kill -SIGQUIT on the process and it seems to be stuck on the Regex normalizer class. Has anyone experienced similar problems or know what might be causing this problem. Me In the second case, much more detailed information such as operating system, java version, component in which the error is occurring, and a detailed description of the problem, is provided and developers are much better equipped to provide assistance. Before asking questions on the list you will want to search the list to see if your problem has come up in the past. There may already be a solution to your problem out there. I have seen times when questions went unanswered on the list because the same question had been answered only a few days before and the person asking never bothered to search the archives of the list. Below are two different web based locations from which you can search the Nutch mailing lists. When searching the list for errors you have received it is good to search both by component, for example fetcher, and by the actual error received. If you are not finding the answers you are looking for on the list, you may want to move to the JIRA and search there for answers. Here are some other important things to remember about the mailing lists. First, do not cross post questions. Find the best list for your question and post it to that list only. Posting the same question to multiple lists (i.e. user and dev) tends to annoy the very people you want help from. Second, remember that developers and committers have day jobs and deadlines also and that being rude, offensive, or aggressive is a sure way to get your posting ignored if not flamed. Most questions on the lists are answered within a day. If you ask a question and it is not answered for a couple of days, do not repost the same question. Instead you may need to reword your question, provide more information, or give a better description in the subject. Step Two: Learning the Nutch Source Code I have found that when teaching new developers the basics of the Nutch source code it is easiest to first start with learning the operations of a full crawl from start to finish. A word about Hadoop. As soon as you start looking into Nutch code (versions .8 or higher) you will be looking at code that uses and extends Hadoop APIs. Learning the Hadoop source code base is as big an endeavor as learning the Nutch codebase, but because of how much Nutch relies on Hadoop, anyone serious about Nutch develop will also need to learn the Hadoop codebase. First start by getting Nutch up and running and having completed a full process of fetching through indexing. There are tutorials on this wiki that show how to do this. Second get Nutch setup to run in an integrated development environment such as Eclipse. There are also tutorials that show how to accomplish this. Once this is done you should be able to run individual Nutch components inside of a debugger. This is essential because probably the fastest way to learn the Nutch codebase is to step through different components in a debugger. Start by looking at the crawl package, specifically the Injector, Generator, and CrawlDb classes. Start with Injector, read over the source code for it and classes that it references. Run the class as a main program inside of a debugger. You will want to do this in local mode on the local file system. You will also want to take a look at the junit test classes for the same package. These can be found under the src/test folder. There should be test classes for all of the main components. Again read the source code for the test classes and execute the junit test cases to get a deeper understand of how each component works. Follow this pattern of reading the source code and and classes it references, running through the source in a debugger if possible, and reading and executing the junit test cases for each component in the crawl process. In order they are Injector, Generator, Fetcher, ParseSegment, CrawlDb, LinkDb, Indexer, DeleteDuplicates. Then you will want to look at other components including SegmentMerger, CrawlDbMerger, and DistributedSearch. It is usually best to get through one or two components before beginning to look at he Hadoop code and then switch back and forth between the Nutch and Hadoop code to understand where and how Nutch uses Hadoop. In Hadoop it is better to take the packages one at a time, for example mapred or dfs, then to take the strategy of running components as Hadoop is server based. You can still follow the pattern of reviewing junit tests to get an understanding of the Hadoop source code. Once you feel you have a grasp of various parts of the source code in Nutch or Hadoop I would recommend creating small junit test cases that use your newfound knowledge. For example you can create a small test case that fetches a few urls and verifies that they were fetched correctly. If you get through all of this then you will have a good foundation of knowledge in the Nutch and Hadoop source code bases and you should fine in starting to develop software for both Nutch and Hadoop. Step Three: Using the JIRA and Developing Ok, so you have gone through the source code and have a good understand of the different components. Now you want to start developing or fixing bugs. Where do you start. First if you haven't already signed up for the JIRA, do so now. Instructions were provided earlier for this. Now it is time to start browsing. JIRA provides a lot of search facilities. On the top of the main JIRA page there is a free text search. On the right hand side of the main JIRA page there are preset filters. You can search by status of the issue, by priority, or by assignee. You will want to try out each of the different search options to get familiar with the capabilities of JIRA. When you do a search in JIRA you are presented with a listing of issues that match your query. The results listing will show you the JIRA id for the Nutch issue. This is in the form of NUTCH-XXX. It is important to remember the JIRA id numbers as this is how you will reference issues that you are working on both through the JIRA and in communicating with other developers on the list. The listing also shows a brief summary of the issue, who it is assigned to, who reported it, the priority and status of the issue, and if it is resolved. Clicking on the issue number will bring you to the main page for that issue. The main issue page is where you will communicate with other developers about this issue and where you will attach your code patches for bug fixes and new feature requests. Whenever changes are made to JIRA issues an email is automatically generated and sent to the dev mailing list. You will have to be logged in to leave comments or to attach documents to issues. Again it is important to become familiar with the interface. Once you have become familiar with the JIRA interface it is time to pick something to work on. If you already have something that you wish to work on, either a bug fix or a new piece of functionality then the first step is to send a message to the dev mailing list detailing the issue. By doing this you can get feedback from other developers. You may find that someone is already working on the issue or that the functionality is handled elsewhere. Either way first notifying the list, especially if it a major piece of new functionality, is the polite thing to do. On a side not, before you start creating issues in the JIRA that you are going to work on yourself you need to send an email to the developers list asking to be added to the nutch-developers group. Then when you create issues later in the JIRA you can have the issues you create assigned to you. This helps other developers know what is being worked on at any given point in time and avoids duplication of effort. Once you have gotten feedback from other developers and no one has objected then you will need to create an issue in the JIRA. In the JIRA issue please give as much detail and description as possible. Once the issue is created assign the issue to yourself or if you don't have permissions to do so then send a message to the dev mailing list asking that the issue be assigned to you. If you are not creating an new issue but instead want to begin working on an existing issue then here are the steps. First find the issue that you want to work on. If it is assigned to someone else then send a message to that person to see if they are working on it and where they are at in their process. It often happens that issues get assigned to developers but the developers are too busy to work on them. Or it may be that the person is in the process of working on the issue and would welcome your help. Either way, you should always contact that person and coordinate the efforts. That's only polite and sensible. Regarding the picking of the work to be done - natural ordering in JIRA should be followed. Issues marked critical are more important than "major", and the ones with a lot of votes are more important than those without any. Once the JIRA is created and has been assigned to you then it is time to start coding. Remember to follow a few simple guidelines while coding. - All public classes and methods should have informative Javadoc comments. Code should be formatted according to Sun's conventions, with one exception indent two spaces per level, not four. For Eclipse the code style formatter profile defined in eclipse-codeformat.xml can be used to automatically format (only, see below) the contributed code. - Contributions should pass existing unit tests. - New junit test cases should be provided to demonstrate bugs fixes and new features. You will also want to perform functional testing of your new code within your own environment as well as make sure that the and build and javadoc are successful with your new code. Once your code has been completed and tested then it is time to create a patch. Start by checking to see what files you have modified with: svn stat Keep this list for later because you will want to make sure that only code that you have changed is included in your patch. In order to create a patch, just type: svn diff > yourPatchName.patch This will report all modifications done on the Nutch sources on your local disk and save them into the yourPatchName.patch file. Read the patch file. Make sure it includes ONLY the modifications required to fix a single JIRA.) Finally, patches should be attached to the JIRA issue. You can do this by logging into the JIRA issue and clicking the attach file to this issue link on the left hand side of the JIRA issue page. Then please be patient and in the mean time start working on another issue. Committers are busy people too. If no one responds to your patch after a few days, please make friendly reminders to the dev mailing list. Please incorporate other's suggestions into into your patch if you think they're reasonable. Now here is the hard part. Even if you have completed your patch it may not make it into the final Nutch codebase. This could be for any number of reason but most often it is because the piece of functionality is not in lines with the strategic goals of Nutch. Of course if you had sent an email to the list before starting development on the issue then this would have already been addressed. Remember though that all developers have access to your functionality through the JIRA and they can and will use your patch even if it does not make it into release code. Every patch is useful to the community. Step Four: Contributing This is the easy step. As you get more and more understanding in the Nutch code base. It is useful to take your hard earned knowledge and start helping others in the community. You can do this by creating tutorials, articles, and notes on the wiki or by answering questions on the mailing lists. Remember that the project is a circle. The more people you help the better they become and better functionality they develop that in turn helps you. Together we can all life each other higher. Becoming a Nutch Committer So you have developed some very useful functionality and contributed it back to the community. You consistently fix bugs. You answer questions for other users and developers on the mailing lists. All in all you are an asset to the community. At this point you may be invited to become a committer. At this point you would get an apache email address and direct access to the subversion source code repository and you would be responsible for helping set the technical direction of the Nutch project. Conclusion So I hope this tutorial has helped to guide you in the direction of becoming a Nutch developer. Nutch is an awesome piece of software that has tremendous potential for changing search as we know it. If you desire to work on a piece of software that has the potential to affect millions of people around the world, then this is the project for you. Get started today and in a year or two you will look back and be amazed at just how much you have accomplished. I would like to thank Andrzej Bialecki, Chris Mattmann, and Doug Cutting for providing assistance in developing this tutorial. I hope to meet, as we say it in Texas, ya'll in person one day.
http://wiki.apache.org/nutch/Becoming_A_Nutch_Developer?highlight=SegmentMerger
CC-MAIN-2013-48
refinedweb
4,755
70.23
Day 1 Homework Assignment Solution - Posted: Nov 11, 2010 at 5:45PM - 28,152 views - 12 comments ![if gt IE 8]> <![endif]> Something went wrong getting user information from Channel 9 Something went wrong getting user information from MSDN Something went wrong getting the Visual Studio Achievements Right click “Save as…” Got stuck with the homework assignment? No worries, simply watch enough of the solution video to get un-stuck, OR compare your solution with Bob's version. could ofcourse use the string message = (textBox1.Text == "20 10 30") ? "You saved the world" : "You die!"; Which would be the easiest way here ;) @Thee -- Yes, of course. Please keep in mind that I'm assuming the viewer has only one day of experience. There will be many ways to re-write these silly examples to make them more compact. At this point (again, target audience is the absolute beginner), I'm just happy people are able to write this much!!! But your point is well taken! Hi Clint and Bob, I mashed some up here :) No story, I will take a repeat on the previous videos. Can you tell me how to remove safely an unintentional namespace for let's say a button.click event, seems helpful to me. (English is not my first language, and my C is not that sharp yet, so excuse me on the occasion.) Hi Bob, I was wondering if it would be possible to create a count down timer for the '108 minutes' in the header? Thanks Hi Bob, good video, but at the end you didnt explain how to do the extra bit of the assignment where you get the title page to change (where currently it says 108). I managed to do it, I set it to have a message saying Saved world or End of the World depending on what you typed, but would be nice to add that to the end of the video for people who managed to do that. Cheers Like to add two requests/remarks: In the phone emulator one can go back (<-) to see a page with a choice list (with e.g. the choice Settings). To make the name of this app equal Brahma Initiative one (after some trials I discover) finds this name to change hidden in the file WMAppManifest.xml under tag <App xmlns...< ....>. It also takes a restart of the phone emulator to see it appear. Secondly: I like to know after setting the focus on the textbox how one can set (programmatically) the keyboard to the numeric keyboard on the top so the user can immediately startting to type the numbers. By the way: nice to follow these sequences of steps so far in the serie. Thanks. @PeterNL: Reference this: Here is the code I used... The reason I bring it up is the I ommitted, and it worked just fine, the string message = ""'; or the string userValue = myTextBox.Text Can someone explain what the purpose of doing it the other two ways above would ever have been for in the first place? I am a super noob so all the help the better. This works too: textBlock.Text=(textBox.Text=="4 8 15 16 23 42")?"OK":"KO"; very easy! Thanks! Thanks for this! Other than forgetting how to set the input scope focus, I think I'm quite on track here. did we have to include a countdown to count down to 0 fdorm 180 if they eneter in the wrong code cux i thought we had to and that's what i got sstuck on Remove this comment Remove this threadClose
https://channel9.msdn.com/Series/Windows-Phone-7-Development-for-Absolute-Beginners/Day-1-Homework-Assignment-Solution?format=auto
CC-MAIN-2016-30
refinedweb
602
80.51
I need help with a flowchart in a question I'm doing. I've completed the programming code but just have no idea on flowcharts if anyone can give me an idea on how to do this then I'll appreciate it. This is the question: Write a C++ program that reads from keyboard 3 integers, with proper input prompt, and then displays the maximum sum of any pair of numbers from these three. If the 3 numbers are 5, 6 and 7 for instance, then the maximum sum comes from 6+7=13. Draw the flowchart of this C++ program, and also desk check the program for the three input integers 12, 3 and 7, or a different set of 3 numbers which will make the desk checking less trivial within your program design. I've completed the code which is: Code:#include <iostream> using namespace std; int main() { int num1, num2, num3, sum; cout << "Enter number "; cin >> num1; cout << "Enter number "; cin >> num2; cout << "Enter number "; cin >> num3; /* if(num1 >= num2) { }*/ if(num1 > num2 && num3 > num2) { sum = num1 + num3; cout << "The sum of " << num1 << " and " << num3 << " = " << sum << endl; } else if(num1 >= num3 && num2 >= num3) { sum = num1 + num2; cout << "The sum of " << num1 << " and " << num2 << " = " << sum << endl; } else if (num2 >= num1 && num3 >= num1) { sum = num2 + num3; cout << "The sum of " << num2 << " and " << num3 << " = " << sum << endl; } //else //{ // cout << "all three numbers must be diffrent" <<endl; // } system("PAUSE"); return 0;
http://cboard.cprogramming.com/cplusplus-programming/136957-need-help-flowchart.html
CC-MAIN-2016-40
refinedweb
239
58.29
home › Forums › # Technical Support › fuzzylie error – use of undefined type 'T' - This topic has 2 replies, 2 voices, and was last updated 7 years, 6 months ago by Unknown. - AuthorPosts Hi I tried to compile a C++ program with fuzzylite libraries. When compiling it gives following errors fl/Operation.h(39): error C2027: use of undefined type ‘T’ fl/Operation.h(39): error C2226: syntax error : unexpected type ‘T’ fl/Operation.h(39): error C2334: unexpected token(s) preceding ‘:’; skipping apparent function body fl/Operation.h(42): error C2760: syntax error : expected ‘{‘ not ‘;’ fl/Operation.h(44): error C2143: syntax error : missing ‘}’ before ”template<” fl/Operation.h(45): error C2143: syntax error : missing ‘,’ before ‘<end Parse>’ fl/Operation.h(48): error C2143: syntax error : missing ‘)’ before ‘;’ fl/Operation.h(48): error C2238: unexpected token(s) preceding ‘;’ I am using Visual studio 2010 express edition and i tried to compile it in release mode. What would be the reason for this ? Juan Rada-Vilela (admin)Keymaster Hi, I think there is a problem in the type of the Visual Studio project that you have created. I am not a regular user of VS, but perhaps make sure you have selected a Console Project. Have you successfully built fuzzylite from the console? If so, then you need to add the INCLUDE_PATH, LIBRARY_PATH and library to your project, that is, “-I/path/to/fuzzylite”, “-L/path/to/fuzzylite/bin”, “-lfuzzylite”, respectively. These are in Unix mode, and you would need to find out how to correctly add them in your project properties. Hi, I already used this library for some other programs. However in this case I have linked some other 3rd party libraries along side with fuzzylite. I found a solution for this but I don’t know the exact theory behind that. In earlier case I put “#include “fl\Headers.h” in bottom of the header file list (look at the first code segment). However when I put it at the top of the header file list (see the 2nd code segment) it works fine :). This didn’t work: #include “stdafx.h” #include “Aria.h” #include “ArNetworking.h” #include <iostream> #include <stdio.h> #include “fl\Headers.h” // for fuzzylite This works well: #include “fl\Headers.h” // for fuzzylite #include “stdafx.h” #include “Aria.h” #include “ArNetworking.h” #include <iostream> #include <stdio.h> Could you please explain the possible reason for this ? - AuthorPosts - You must be logged in to reply to this topic.
https://www.fuzzylite.com/forums/topic/fuzzylie-error-use-of-undefined-type-t/
CC-MAIN-2022-33
refinedweb
411
61.02
Reply to a client with a message, endian-swapping if required #include <sys/iofunc.h> #include <sys/resmgr.h> int resmgr_msgreplyv( resmgr_context_t *ctp, struct iovec *iov, int parts ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The resmgr_msgreplyv()() function is similar to resmgr_msgreplyv() but takes a single message buffer.. The data is taken from the array of message buffers pointed to by iov. The number of elements in this array is given by parts. The size of the message is the sum of the sizes of each buffer.. It's quite common to reply with two-part messages consisting of a fixed header and a buffer of data. The resmgr.(), resmgr_msgwrite(), resmgr_msgwritev() “Layers in a resource manager” in the Bones of a Resource Manager chapter of Writing a Resource Manager Resource Managers chapter of Getting Started with QNX Neutrino
http://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx.doc.neutrino_lib_ref/r/resmgr_msgreplyv.html
CC-MAIN-2020-34
refinedweb
150
56.96
> From: Richard Copley <address@hidden> > Date: Sat, 17 Sep 2016 12:31:36 +0100 > Cc: Paul Eggert <address@hidden>, Bob Halley <address@hidden>, > Emacs Development <address@hidden> > > >>> >> +#ifndef _GNU_SOURCE > >>> >> +#define _GNU_SOURCE 1 > >>> >> +#endif > >>> > > >>> > > >>> > Thanks, I installed that into Emacs master. > >>> > >>> The Windows build is broken too (with MSYS2), presumably for a similar > >>> reason. > >> > >> It isn't broken here. Can you show the error messages? > > > > Sure, give me a few minutes. Did you reconfigure? You need the > > generated limits.h. > > Also, this is with 64-bit GCC 6.1.0. > > In file included from G:/emacs/repo/emacs/src/w32.c:87:0: > G:/emacs/repo/emacs/src/lisp.h:93:26: error: 'LLONG_WIDTH' undeclared > here (not in a function) > enum { EMACS_INT_WIDTH = LLONG_WIDTH }; > ^~~~~~~~~~~ > > In file included from G:/emacs/repo/emacs/src/w32proc.c:54:0: > G:/emacs/repo/emacs/src/lisp.h:93:26: error: 'LLONG_WIDTH' undeclared > here (not in a function) > enum { EMACS_INT_WIDTH = LLONG_WIDTH }; > ^~~~~~~~~~~ I don't understand how this could happen. Take w32proc.c, for example: it includes config.h _before_ lisp.h, and on my system config.h has these: /* Enable GNU extensions on systems that have them. */ #ifndef _GNU_SOURCE # define _GNU_SOURCE 1 #endif [...] /* Enable extensions specified by ISO/IEC TS 18661-1:2014. */ #ifndef __STDC_WANT_IEC_60559_BFP_EXT__ # define __STDC_WANT_IEC_60559_BFP_EXT__ 1 #endif So by the time limits.h gets included (via lib/stdint.h, which is included by nt/inc/stdint.h, which is included by nt/inc/ms-w32.h), both _GNU_SOURCE and __STDC_WANT_IEC_60559_BFP_EXT__ are already defined, and the definitions of LLONG_WIDTH in lib/limits.h should have been processed. Which part(s) of this don't work on your system, and why? To find out what happens during preprocessing, I did this: cd src make w32proc.o -W w32proc.c V=1 Then I copied the command displayed by Make, replaced -c with -E, appended "-o w32proc.ii", ran the command, and looked inside w32proc.ii to see how the preprocessor included the various headers and defined the macros. (My GCC version is 5.3.0, and this is a 32-bit build with wide ints.)
https://lists.gnu.org/archive/html/emacs-devel/2016-09/msg00438.html
CC-MAIN-2020-16
refinedweb
349
62.44
Webapp Debugging How can I debug my webapp in Pythonista? In the python learning book they use this code: app.run(debug=True) It doesn't function in Pythonista and I get these messages: - Running on - Restarting with reloader and some error messages. Here is my webapp code: from flask import Flask, render_template app = Flask(__name__) @app.route('/') def hello() -> str: return 'Hello world from Flask!' @app.route('/search4', methods=['POST']) def do_search() -> str: return str(search4letters('life, the universe and everything','eiru,!')) @app.route('/entry') def entry_page() -> 'html': return render_template('entry.html', the_title='Welcome to search for letters on the web!') app.run(debug=True) Flask's auto-reloader doesn't work in Pythonista because it would require spawning subprocesses which isn't possible on iOS. You can pass use_reloader=Falseto app.run()though, and debugging should still work.
https://forum.omz-software.com/topic/3879/webapp-debugging
CC-MAIN-2017-26
refinedweb
140
52.56
## *ctapipe* MAGIC event source This module implements the *ctapipe* class, needed to read the calibrated data of the MAGIC telescope system. It requires the [*ctapipe*]() and [*uproot*]() packages to run. #### Installation Provided *ctapipe* is already installed, the installation can be done like so: ```bash git clone pip install ./ctapipe_io_magic/ ``` This installation via pip (provided, pip is installed) has the advantage to be nicely controlled for belonging to a given conda environment (and to be uninstalled). Alternatively, do ```bash git clone cd ctapipe_io_magic python setup.py install --user ``` #### Usage ```python import ctapipe from ctapipe_io_magic import MAGICEventSource with MAGICEventSource(input_url=file_name) as source: for event in source: ...some processing... ``` The reader also works with multiple files parsed as wildcards, e.g., ```python MAGICEventSource(input_url=data_dir/*.root) ``` This is necessary to load and match stereo events, which are automatically created if data files from M1 and M2 for the same run are loaded. The reader is able to handle data or Monte Carlo files, which are automatically recognized. Note that the file names have to follow the convention: - `*_M[1-2]_RUNNUMBER.SUBRUNNR_Y_*.root` for data - `*_M[1-2]_za??to??_?_RUNNUMBER_Y_*.root` for Monte Carlos. Note that currently, when loading multiple runs at once, the event ID is not unique. #### Changelog - v0.1: Initial version - v0.2.0: Unification of data and MC reading
https://gitlab.mpcdf.mpg.de/ievo/ctapipe_io_magic/-/blame/5a7f78a0223db37a9a3b587cb044342d95c38233/README.md
CC-MAIN-2022-40
refinedweb
223
57.87
I have created new ASP MVC 5 app (with asp identity). I use IdentityDbContext class to put all my domain objects inside. public class SecurityContext : IdentityDbContext{ ... public DbSet<Country> Countries { get; set; } When I first time created user all database tables were created (identity tables) including my Country table. But now I added new table in database (created from Management Studio) and added in IdentityDbContext: public DbSet<City> Cities { get; set; } But when I run the app now I get error: The model backing the 'SecurityContext' context has changed since the database was created. Consider using Code First Migrations to update the database. If I delete all tables and run app again City will be created but I wan't to be more flexible. So what I want to do is that I can create all database tables and to create my POCO classes in project later so I can use EF for data access. I don't want EF to create classes for me (I feel more secure when connect those things alone). Also (I know that it is possible but just don't know how) can I prevent asp Identity to create tables alone? To sum whole question. I want to use EF but I want to create my database myself including Identity tables. How can I do this? You can turn off the EF initializer and manage the tables and mappings yourself. Disable the database initializer in the constructor like this... public SecurityContext() : base("DefaultConnection") { Database.SetInitializer<SecurityContext>(null); } Then you can just take the schema that EF has created thus far and manage it yourself (you can remove the migrations history table). I recommend using a SQL Server Database project so that you can source control your schema. It's now up to you to update the DbContext when the schema changes. You can also disable it through configuration <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="DatabaseInitializerForType YourNameSpace.SecurityContext, YourLibraryName" value="Disabled" /> </appSettings> </configuration>
https://entityframeworkcore.com/knowledge-base/24467721/i-want-to-use-entity-framework-plus-asp-identity-but-i-don-t-want-ef-to-generate-tables-for-me
CC-MAIN-2021-17
refinedweb
332
56.96
def hotel_cost(nights): return 140 * nights def plane_ride_cost(city): if "charlotte": return 183 elif "Tampa": return 220 elif "Pittsburgh": return 222 elif "Los Angeles": return 475 It looks like plane_ride_cost does not return 220 when the city is Tampa. idont understand can anyone help me? @muffens , You should format code when you post it, so we can see the indentation and other important details. See How do I format code in my posts?. You have ... elif "Tampa": "Tampa" will not suffice as a condition for the purposes of this program. Compare the value of city to "Tampa", as follows ... "Tampa" city elif city == "Tampa": The if and the elif headers all need to have their conditions revised. if elif All the city names must begin with an uppercase letter. Check them carefully. thank you that really helped
https://discuss.codecademy.com/t/3-getting-there/67505/2
CC-MAIN-2017-26
refinedweb
137
74.69
/bin/sh: 1: lsof: not found I'm trying to get an image from a webcam by using the SimpleCV shell on a Raspberry Pi. Here is my attempt and the results: python from SimpleCV import Shell Shell.main() SimpleCV:1> from SimpleCV import Camera, Display, Image SimpleCV:2> cam = Camera() /bin/sh: 1: lsof: not found VIDIOC-QUERYMENU: Invalid argument (7 times) select timeout SimpleCV:3> select timeout select timeout select timeout over and over again . . . My questions are: 1) Is "/bin/sh: 1: lsof: not found" the problem?, or 2) Is "select timeout" the problem? What can I do to solve the problem(s)? Thanks. /bin/sh lsof is certainly not a problem. Many people have encountered the similar problem in Raspberry Pi. It has very poor webcam support. Hopefully someone will come,up,with the answer. Like this one,my code accepts the webcam which works with other RPi software but fails exactly as above. So it doesn't seem to be a webcam support issue.
http://help.simplecv.org/question/1699/binsh-1-lsof-not-found/
CC-MAIN-2019-35
refinedweb
169
66.94
Hessian Add Example From Resin 4.0 Wiki Hessian Addition The [doc|hessian-1.0-spec.xtp Hessian 1.0 spec] and [doc|hessian-2.0-spec.xtp Hessian 2.0 spec] describe the full Hessian protocol. Files in this tutorial The. Hessian call c x01 x00 m x00 x03 add I x00 x00 x00 x02 I x00 x00 x00 x03 z Hessian reply. A Hessian Example Using Hessian generally uses three components: - A remote interface - The server implementation - The client (JSP or servlet) The remote interface is used by the Hessian proxy factory to create a proxy stub implementing the service's interface. Service ImplementationianMathService.java package example; import com.caucho.hessian.server.HessianServlet; public class HessianMathService extends HessianServlet { public int add(int a, int b) { return a + b; } } Remote Interface. MathService.java package example; public interface MathService { public int add(int a, int b); } Java Client RPC clients follow the following steps in using a remote object: - Determine the URL of the remote object. - Obtain a proxy stub from a proxy factory. - Call methods on the proxy stub. client.jsp <%@)); %>
http://wiki4.caucho.com/Hessian_Add_Example
CC-MAIN-2015-35
refinedweb
182
51.44
We will present examples by commenting each relevant source code block, and at the end, we will show the complete source code. In the first example, we will describe how to embed a Matplotlib Figure in a wxFrame. wxFrame is one of the most important widgets in wxWidgets. It's considered to be a container because it contains other widgets. wxFrame consists of a title bar, borders, and a center container area: the classic application window layout. The example code starts with: import wx This is the main wxPython module. It contains all the submodules, objects, and functions for the wxWidgets library. Every application that uses wxPython imports this module. from matplotlib.figure import Figure import numpy ... No credit card required
https://www.oreilly.com/library/view/matplotlib-for-python/9781847197900/ch07s02.html
CC-MAIN-2019-26
refinedweb
121
67.86
Start Learning Ruby with IronRuby – Setting up the Environment Recently I have decided to learn Ruby and for last few days I am playing with IronRuby. Learning a new thing is always been a fun and when it comes to adorable language like Ruby it becomes more entertaining. Like any other language, first we have to create the development environment. In order to run IronRuby we have to download the binaries form the IronRuby CodePlex project. IronRuby supports both .NET 2.0 and .NET 4, but .NET 4 is the recommended version, you can download either the installation or the zip file. If you download the zip file make sure you added the bin directory in the environment path variable. Once you are done, open up the command prompt and type : ir –v It should print message like: IronRuby 1.0.0.0 on .NET 4.0.30319.1 The ir is 32bit version of IronRuby, if you want to use 64bit you can try ir64. Next, we have to find a editor where we can write our Ruby code as there is currently no integration story of IronRuby with Visual Studio like its twin Iron Python. Among the free IDEs only SharpDevelop has the IronRuby support but it does not have auto complete or debugging built into it, only thing that it supports is the syntax highlighting, so using a text editor which has the same features is nothing different comparing to it. To play with the IronRuby we will be using Notepad++, which can be downloaded from its sourceforge download page. The Notepad++ does have a nice syntax highlighting support : I am using the Vibrant Ink with some little modification. The next thing we have to do is configure the Notepad++ that we can run the Ruby script in IronRuby inside the Notepad++. Lets create a batch(.bat) file in the IronRuby bin directory, which will have the following content: @echo off cls call ir %1 pause This will make sure that the console will be paused once we run the script. Now click Run->Run in the Notepad++, it will bring up the run dialog and put the following command in the textbox (The riir.bat is the batch file which we have saved in the above): riir.bat "$(FULL_CURRENT_PATH)" Click the save which will bring another dialog. Type Iron Ruby and assign the shortcut to ctrl + f5 (Same as Visual Studio Start without Debugging) and click ok. Once you are done you will find the IronRuby in the Run menu. Now press ctrl + f5, we will find the ruby script running in the IronRuby. Now there are one last thing that we would like to add which is poor man’s context sensitive help. First, download the ruby language help file from the Ruby Installer site and extract into a directory. Next we will have to install the Language Help Plug-in of Notepad++, click Plugins->Plugin Manger –>Show Plugin Manager and scroll down until you find the plug-in the list, now check the plug-in and click install. Once it is installed it will prompt you to restart the Notepad++, click yes. When the Notepad++ restarts, click the Plugins –> Language Help –> Options –> add and enter the following details and click ok: The chm file location can be different depending upon where you extracted it. Now when you put your in any of ruby keyword and press ctrl + f1 it will take you to the help topic of that keyword. For example, when my caret is in the each of the following code and I press ctrl + f1, it will take me to the each api doc of Array. def loop_demo (1..10).each{ |n| puts n} end loop_demo That’s it for today. Happy Ruby coding.
http://weblogs.asp.net/rashid/start-learning-ruby-with-ironruby-setting-up-the-environment
CC-MAIN-2015-32
refinedweb
632
69.52
: - 5:a70c0bce770d - Parent: - 4:02c7cd7b2183 - Child: - 6:cc35eb643e8f --- a/main.cpp Thu Jul 24 05:50:36 2014 +0000 +++ b/main.cpp Sun Jul 27 18:24:51 2014 +0000 @@ -1,3 +1,109 @@ +/*!). + + #include "mbed.h" #include "USBJoystick.h" #include "MMA8451Q.h" @@ -5,25 +111,35 @@ #include "FreescaleIAP.h" #include "crc32.h" -// customization of the joystick class to expose connect/suspend status -class MyUSBJoystick: public USBJoystick -{ -public: - MyUSBJoystick(uint16_t vendor_id, uint16_t product_id, uint16_t product_release) - : USBJoystick(vendor_id, product_id, product_release, false) - { - suspended_ = false; - } - - int isConnected() { return configured(); } - int isSuspended() const { return suspended_; } - -protected: - virtual void suspendStateChanged(unsigned int suspended) - { suspended_ = suspended; } + +// --------------------------------------------------------------------------- +// +// Configuration details +// - int suspended_; -}; +//. +const uint16_t USB_VENDOR_ID = 0xFAFA; +const uint16_t USB_PRODUCT_ID = 0x00F7; +const uint16_t USB_VERSION_NO = 0x0004; // On-board RGB LED elements - we use these for diagnostic displays. DigitalOut ledR(LED1), ledG(LED2), ledB(LED3); @@ -32,6 +148,24 @@ DigitalIn calBtn(PTE29); DigitalOut calBtnLed(PTE23); +// + + +// --------------------------------------------------------------------------- +// +// LedWiz emulation +// + static int pbaIdx = 0; // on/off state for each LedWiz output @@ -70,22 +204,14 @@ ledB = wizState(2); } -struct AccPrv -{ - AccPrv() : x(0), y(0) { } - float x; - float y; - - double dist(AccPrv &b) - { - float dx = x - b.x, dy = y - b.y; - return sqrt(dx*dx + dy*dy); - } -}; +// --------------------------------------------------------------------------- +// +// Non-volatile memory (NVM) +// -// Non-volatile memory structure. We store persistent a small +// Structure defining our NVM storage layout. We store a small // amount of persistent data in flash memory to retain calibration -// data between sessions. +// data when powered off. struct NVM { // checksum - we use this to determine if the flash record @@ -113,33 +239,211 @@ } d; }; -// Accelerometer handler -const int MMA8451_I2C_ADDRESS = (0x1d<. +// + +// point structure +struct FPoint +{ + float x, y; + + FPoint() { } + FPoint(float x, float y) { this->x = x; this->y = y; } + + void set(float x, float y) { this->x = x; this->y = y; } + void zero() { this->x = this->y = 0; } + +); } + + float distance(FPoint &b) + { + float dx = x - b.x; + float dy = y - b.y; + return sqrt(dx*dx + dy*dy); + } +}; + + +//() + { + // assume initially that the device is perfectly level + center_.zero(); + tCenter_.start(); + iAccPrv_ = nAccPrv_ = 0; + + // reset and initialize the MMA8451Q + mma_.init(); + // set the initial ball velocity to zero - vx_ = vy_ = 0; + v_.zero(); // set the initial raw acceleration reading to zero - xRaw_ = yRaw_ = 0; + araw_.zero(); + vsum_.zero(); // enable the interrupt - mma_.setInterruptMode(irqPin == PTA14 ? 1 : 2); + mma_.setInterruptMode(irqPin_ == PTA14 ? 1 : 2); // set up the interrupt handler intIn_.rise(this, &Accel::isr); // read the current registers to clear the data ready flag float z; - mma_.getAccXYZ(xRaw_, yRaw_, z); + mma_.getAccXYZ(araw_.x, araw_.y, z); // start our timers tGet_.start(); tInt_.start(); + tRest_.start(); } void get(float &x, float &y, float &rx, float &ry) @@ -148,11 +452,11 @@ __disable_irq(); // read the shared data and store locally for calculations - float vx = vx_, vy = vy_, xRaw = xRaw_, yRaw = yRaw_; + FPoint vsum = vsum_, araw = araw_; + + // reset the velocity sum + vsum_.zero(); - // reset the velocity - vx_ = vy_ = 0; - // get the time since the last get() sample float dt = tGet_.read_us()/1.0e6; tGet_.reset(); @@ -160,16 +464,178 @@ // done manipulating the shared data __enable_irq(); - // calculate the acceleration since the last get(): a = dv/dt - x = vx/dt; - y = vy/dt; + // check for auto-centering every so often + if (tCenter_.read_ms() > 1000) + { + // add the latest raw sample to the history list + accPrv_[iAccPrv_] = araw_; + + // commit the history entry + iAccPrv_ = (iAccPrv_ + 1) % maxAccPr); + } + } + else + { + // not enough samples yet; just up the count + ++nAccPr. - // return the raw accelerometer data in rx,ry - rx = xRaw; - ry = yRaw; +; + + // apply the velocity change for this interval + v_ += dv; + + // return the acceleration since the last update (change in velocity + // over time) in x,y + dv /= dt; + x = (v_.x - vprv.x) / dt; + y = (v_.y - vprv.y) / dt; + + // report the calibrated instantaneous acceleration in rx,ry + rx = araw.x - center_.x; + ry = araw.y - center_.y; } private: + // velocity damping function + float damping(float v) + { + // scale to -2048..2048 range, and get the absolute value + float a = fabs(v*2048.0); + + // damp out small velocities immediately + if (a < 20) + return v; + + // calculate the cube root of the scaled value + float r = exp(log(a)/3.0); + + // rescale + r /= 2048.0; + + // apply the sign and return the result + return (v < 0 ? -r : r); + } + // interrupt handler void isr() { @@ -178,39 +644,101 @@ // the "data ready" status bit in the accelerometer. The // interrupt only occurs when the "ready" bit transitions from // off to on, so we have to make sure it's off. - float z; - mma_.getAccXYZ(xRaw_, yRaw_, z); + float x, y, z; + mma_.getAccXYZ(x, y, z); + + // store the raw results + araw_.set(x, y); + zraw_ = z; // calculate the time since the last interrupt float dt = tInt_.read_us()/1.0e6; tInt_.reset(); - // Accelerate the model ball: v = a*dt. Assume that the raw - // data from the accelerometer reflects the average physical - // acceleration over the interval since the last sample. - vx_ += xRaw_ * dt; - vy_ += yRaw_ * dt; + //; } - // current modeled ball velocity - float vx_, vy_; - - // last raw axis readings - float xRaw_, yRaw_; - // underlying accelerometer object MMA8451Q mma_; - // interrupt router - InterruptIn intIn_; + // last raw acceleration readings + FPoint araw_; + float zraw_; + + // total velocity change since the last get() sample + FPoint vsum_; + + // current modeled ball velocity + FPoint v_; //_; + + // timer for atuo-centering + Timer tCenter_; + + // recent accelerometer readings, for auto centering + int iAccPrv_, nAccPrv_; + static const int maxAccPrv = 5; + FPoint @@ -218,6 +746,12 @@ ledG = 1; ledB = 1; + // clear the I2C bus for the accelerometer + clear_i2c(); + + // Create the joystick USB client + MyUSBJoystick js(USB_VENDOR_ID, USB_PRODUCT_ID, USB_VERSION_NO); + // set up a flash memory controller FreescaleIAP iap; @@ -234,11 +768,17 @@ //. VP doesn't seem to have very high - // resolution internally for the plunger, so it's probably not necessary - // to use the full resolution of the sensor - about 160 pixels seems - // perfectly adequate. We can read the sensor faster (and thus provide - // a higher refresh rate) if we read fewer pixels in each frame. + //; // if the flash is valid, load it; otherwise initialize to defaults @@ -271,34 +811,22 @@ // set up a timer for our heartbeat indicator Timer hbTimer; hbTimer.start(); - int t0Hb = hbTimer.read_ms(); int hb = 0; + uint16_t hbcnt = 0; // set a timer for accelerometer auto-centering Timer acTimer; acTimer.start(); - int t0ac = acTimer.read_ms(); - // Create the joystick USB client - MyUSBJoystick js(0xFAFA, 0x00F7, 0x0003); - // create the accelerometer object - Accel accel(PTE25, PTE24, MMA8451_I2C_ADDRESS, PTA15); +); - // recent accelerometer readings, for auto centering - int iAccPrv = 0, nAccPrv = 0; - const int maxAccPrv = 5; - AccPrv accPrv[maxAccPrv]; - // last accelerometer report, in mouse coordinates int x = 127, y = 127, z = 0; - // raw accelerator centerpoint, on the unit interval (-1.0 .. +1.0) - float xCenter = 0.0, yCenter = 0.0; - // start the first CCD integration cycle ccd.clear(); @@ -542,116 +1070,55 @@ float xa, ya, rxa, rya; accel.get(xa, ya, rxa, rya); - // check for auto-centering every so often - if (acTimer.read_ms() - t0ac > 1000) - { - // add the sample to the history list - accPrv[iAccPrv].x = xa; - accPrv[iAccPrv].y = ya; - - // store the slot - iAccPrv += 1; - iAccPrv %= maxAccPrv; - nAccPrv += 1; - - // If we have a full complement, check for stability. The - // raw accelerometer input is in the rnage -4096 to 4096, but - // the class cover normalizes to a unit interval (-1.0 .. +1.0). - const float accTol = .005; - if (nAccPrv >= maxAccPrv - && accPrv[0].dist(accPrv[1]) < accTol - && accPrv[0].dist(accPrv[2]) < accTol - && accPrv[0].dist(accPrv[3]) < accTol - && accPrv[0].dist(accPrv[4]) < accTol) - { - // figure the new center - xCenter = (accPrv[0].x + accPrv[1].x + accPrv[2].x + accPrv[3].x + accPrv[4].x)/5.0; - yCenter = (accPrv[0].y + accPrv[1].y + accPrv[2].y + accPrv[3].y + accPrv[4].y)/5.0; - } - - // reset the auto-center timer - acTimer.reset(); - t0ac = acTimer.read_ms(); - } - - // adjust for our auto centering - xa -= xCenter; - ya -= yCenter; - - // confine to the unit interval + // confine the accelerometer results to the unit interval if (xa < -1.0) xa = -1.0; if (xa > 1.0) xa = 1.0; if (ya < -1.0) ya = -1.0; if (ya > 1.0) ya = 1.0; - // figure the new mouse report data - int xnew = (int)(127 * xa); - int ynew = (int)(127 * ya); + // scale to our -127..127 reporting range + int xnew = int(127 * xa); + int ynew = int(127 * ya); // store the updated joystick coordinates x = xnew; y = ynew; z = znew; - // if we're in USB suspend or disconnect mode, spin - if (js.isSuspended() || !js.isConnected()) - { - // go dark (turn off the indicator LEDs) - ledG = 1; - ledB = 1; - ledR = 1; - - // wait until we're connected and come out of suspend mode - for (uint32_t n = 0 ; js.isSuspended() || !js.isConnected() ; ++n) - { - // spin for a bit - wait(1); - - // if we're suspended, do a brief red flash; otherwise do a long red flash - if (js.isSuspended()) - { - // suspended - flash briefly ever few seconds - if (n % 3 == 0) - { - ledR = 0; - wait(0.05); - ledR = 1; - } - } - else - { - // running, not connected - flash red - ledR = !ledR; - } - } - } - //. - js.update(x, -y, z, int(rxa*127), int(rya*127), 0); + // + // $$$ button updates are for diagnostics, so we can see that the + // device is sending data properly if the accelerometer gets stuck + js.update(x, -y, z, int(rxa*127), int(rya*127), hb ? 0x5500 : 0xAA00); // show a heartbeat flash in blue every so often if not in // calibration mode - if (calBtnState < 2 && hbTimer.read_ms() - t0Hb > 1000) + if (calBtnState < 2 && hbTimer.read_ms() > 1000) { - if (js.isSuspended()) + if (js.isSuspended() || !js.isConnected()) { - // suspended - turn off the LEDs entirely + // suspended - turn off the LED ledR = 1; ledG = 1; ledB = 1; - } - else if (!js.isConnected()) - { - // not connected - flash red - hb = !hb; - ledR = (hb ? 0 : 1); - ledG = 1; - ledB = 1; + + // show a status flash every so often + if (hbcnt % 3 == 0) + { + // disconnected = red flash; suspended = red-red + for (int n = js.isConnected() ? 1 : 2 ; n > 0 ; --n) + { + ledR = 0; + wait(0.05); + ledR = 1; + wait(0.25); + } + } } else if (flash_valid) { @@ -665,14 +1132,14 @@ { // connected, factory reset - flash yellow/green hb = !hb; - ledR = (hb ? 0 : 1); - ledG = 0; + //ledR = (hb ? 0 : 1); + //ledG = 0; ledB = 1; } // reset the heartbeat timer hbTimer.reset(); - t0Hb = hbTimer.read_ms(); + ++hbcnt; } } }
https://os.mbed.com/users/mjr/code/Pinscape_Controller_V2/diff/a70c0bce770d/main.cpp/
CC-MAIN-2021-25
refinedweb
1,639
58.28
Object Relationships Is-a Relationships Consider a Shape example where Circle, Square, and Star all inherit directly from Shape. This relationship is often referred to as an is-a relationship because a circle is a shape and Square is a shape. When a subclass inherits from a superclass, it can do anything that the superclass can do. Thus, Circle, Square, and Star are all extensions of Shape. In Figure 4, the name on each of the objects represents the Draw method for the Circle, Star, and Square objects, respectively. When we design this Shape system it would be very helpful to standardize how we use the various shapes. Thus, we could decide that if we want to draw a shape, no matter what shape, we will invoke a method called draw. If we adhere to this decision, whenever we want to draw a shape, only the Draw method needs to be called, regardless of what the shape is. Here lies the fundamental concept of polymorphism it is the individual object's responsibility, be it a Circle, Star, or Square, to draw itself. Figure 4 The shape hierarchy. Polymorphism Polymorphism literally means many shapes. Although polymorphism is tightly coupled to inheritance, it is often cited separately as one of the most powerful advantages to object-oriented technologies. When a message is sent to an object, the object must have a method defined to respond to that message. In an inheritance hierarchy, all subclasses inherit the interfaces from their superclass. However, because each subclass is a separate entity, each might require a separate response to the same message. For example, consider the Shape class and the behavior called Draw. When you tell somebody to draw a shape, the first question he asks is "What shape?" He cannot draw a shape, as it is an abstract concept (in fact, the Draw() method in the Shape code following contains no implementation). You must specify a concrete shape. To do this, you provide the actual implementation in Circle. Even though Shape has a Draw method, Circle overrides this method and provides its own Draw() method. Overriding basically means replacing an implementation of a parent with one from a child. For example, suppose you have an array of three shapes Circle, Square, and Star. Even though you treat them all as Shape objects, and send a Draw message to each Shape object, the end result is different for each because Circle, Square, and Star provide the actual implementations. In short, each class is able to respond differently to the same Draw method and draw itself. This is what is meant by polymorphism. Consider the following Shape class: public abstract class Shape{ private double area; public abstract double getArea(); } The Shape class has an attribute called area that holds the value for the area of the shape. The method getArea() includes an identifier called abstract. When a method is defined as abstract, a subclass must provide the implementation for this method; in this case, Shape requires subclasses to provide a getArea() implementation. Now, let's create a class called Circle that inherits from Shape (the extends keyword signifies that Circle inherits from Shape): public class Circle extends Shape{ double radius; public Circle(double r) { radius = r; } public double getArea() { area = 3.14*(radius*radius); return (area); }; } We introduce a new concept here called a constructor. The Circle class has a method with the same name, Circle. When the names are the same and no return type is provided, the method is a special method, called a constructor. Consider a constructor the entry point for the class, where the object is constructed; the constructor is a good place to perform initializations. The Circle constructor accepts a single parameter, representing the radius, and assigns it to the radius attribute of the Circle class. The Circle class also provides the implementation for the getArea method, originally defined as abstract in the Shape class. We can create a similar class, called Rectangle: public class Rectangle extends Shape{ double length; double width; public Rectangle(double l, double w){ length = l; width = w; } public double getArea() { area = length*width; return (area); }; } Now, we can create any number of Rectangles, Circles, and so forth and invoke their getArea() method because we know that all Rectangles and Circles inherit from Shape and all Shape classes have a getArea() method. If a subclass inherits an abstract method from a superclass, it must provide a concrete implementation of that method, or else it will be an abstract class itself (see Figure 5 for a UML diagram). Figure 5: Shape UML diagram. Thus, we can instantiate the Shape classes in this way: Circle circle = new Circle(5); Rectangle rectangle = new Rectangle(4,5); Then, using a construct such as a stack, we can add these Shape classes to the stack: stack.push(circle); stack.push(rectangle); Now comes the fun part. We can empty the stack, and we do not have to care about what kind of Shape classes are in it: while ( !stack.empty()) { Shape shape = (Shape) stack.pop(); System.out.println ("Area = " + shape.getArea()); } In reality, we are sending the same message to all the shapes: shape.getArea() However, the actual behavior that takes place depends on the type of shape. For example, Circle will calculate the area for a circle, and Rectangle will calculate the area of a rectangle. In effect (and here is the key concept), we are sending a message to the Shape classes and experiencing different behavior depending on what subclass of Shape is being used. Page 2 of 3
http://www.developer.com/lang/article.php/10924_3332401_2/Object-Relationships.htm
CC-MAIN-2014-49
refinedweb
928
61.26
In this tutorial, I'll be walking you through how to a write a Twitter widget for ASP.NET in the form of a reusable server control complete with nice things such as automatically turning URLs into links, and caching to speed up page load times. Step 1 Getting Started To follow this tutorial, all you need is Visual Studio (You can use MonoDevelop if you're not on Windows, although there's no guarantees there.) If you don't want to fork over cash for the full version of Visual Studio, you can grab the free Express Edition. You'll also need knowledge of C# 3.0, as this tutorial makes use of some of the newer features of the language, such as lambda expressions and the var keyword. Step 2 Creating the Control ASP.NET includes a handy feature known as Server Controls. These are custom tags that aim to help developers structure their code. When a page using a server control is requested, the ASP.NET runtime executes the Render() method and includes the output in the final page. Once you've created a new Web Application in Visual Studio, right click in the Solution Explorer and add a new item to the solution. Select ASP.NET Server Control, and give it a name. Here, I've called it Twidget.cs, but you're welcome to call it whatever you like. Paste the following code in, and don't worry if it all looks a bit foreign - I'll explain it all shortly. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.Script.Serialization; using System.Net; namespace WebApplication1 { public class Twidget : Control { public string Account { get; set; } public int Tweets { get; set; } protected override void Render(HtmlTextWriter writer) { writer.Write("<ul>"); foreach (var t in GetTweets().Take(Tweets)) writer.Write("<li>{0}</li>", HttpUtility.HtmlEncode(t)); writer.Write("</ul>"); } public List<string> GetTweets() { var ls = new List<string>(); var jss = new JavaScriptSerializer(); var d = jss.Deserialize<List<Dictionary<string, object>>>( new WebClient() .DownloadString("" + Account) ); foreach (var x in d) ls.Add((string)x["text"]); return ls; } } } This is about as basic as you can get for a Twitter widget. Here's how it works: When a user requests a page with this control on it, the Render() method gets executed with a HtmlTextWriter passed as a parameter. It writes out the <ul> tag, and then enters a loop which prints out each tweet as a list item. The magic here happens in the GetTweets() method. Notice how we are using the Take() extension method to make sure we only print the amount of tweets that we're asked to. Once execution passes to the GetTweets() method, we setup a List>string< to hold our tweets and a JavaScriptSerializer to parse the JSON from the Twitter API servers. The statement on lines 31 - 34 (split up for readability) retrives the user timeline in JSON format, then deserializes into .NET types we can work with. On line 36, we loop through all the tweets and add them one by one to the tweet list. We have to manually cast x["text"] to a string because we deserialized it as an object. We had to do this, because the JSON returned by the Twitter API uses a smorgasboard of different types - which is fine for JavaScript, but a little tricky with C#. Step 3 Using the Control Now we have the code for our Twitter widget; let's put it to use! Open your Default.aspx page (or whichever page you want to use this in) and put the following code immediately after the <%@ Page %> directive: <%@ Register TagPrefix="widgets" Namespace="WebApplication1" Assembly="WebApplication1" %> Feel free to change the TagPrefix to whatever you like, but make sure that the Namespace attribute is correctly set to whatever namespace you placed the widget code in, and ensure that the Assembly attribute is set to the name of your web application (in our case, WebApplication1). Once you've registered the proper tag prefix (and you'll need to do this for every page you want to use the control on), you can start using it. Paste the following code somewhere into your page, and once again, feel free to change the attributes to whatever you want: <widgets:Twidget If you've done everything properly, you should see a page similar to this when you run your web application: Step 4 Some Fancy Stuff... You've got to admit, the control we've got at the moment is pretty rudimentary. It doesn't have to be though, so let's spiffy it up a little by turning URLs into nice, clickable links for your visitors. Find the foreach loop in the Render() method and scrap it completely. Replace it with this: // you'll need to add this using directive to the top of the file: using System.Text.RegularExpressions; foreach (var t in GetTweets().Take(Tweets)) { string s = Regex.Replace( HttpUtility.HtmlEncode(t), @"[a-z]+://[^\s]+", x => "<a href='" + x.Value.Replace("'", """) + "'>" + x.Value + "</a>", RegexOptions.Compiled | RegexOptions.IgnoreCase ); writer.Write("<li>{0}</li>", s); } It's all pretty much the same code, except for the humongous call to Regex.Replace() on line 6. I'll explain what this does. The first parameter is the input, or the string that the Regex works on. In this case, it's just the tweet text after being passed through HttpUtility.HtmlEncode() so we don't fall victim to a vicious XSS attack. The input is then matched against the second parameter which is a regular expression designed to match a URL. The third parameter is where it gets a little involved. This is a lambda expression, a feature new to C# 3. It's basically a very short way of writing a method like this: public static string SomeFunction(Match x) { return "<a href='" + x.Value.Replace("'", """) + "'>" + x.Value + "</a>"; } All it does is wrap the URL with an <a> tag, of which all quotation marks in the URL are replace with the HTML entity ", which helps prevent XSS attacks. The fourth and final parameter is just an ORed together pair of flags adjusting the way our regex behaves. The output of the control after making this adjustment is somewhat similar to the screenshot below. Step 5 Caching There's a big problem with the code I've given to you above, and that is that it doesn't cache the response from the Twitter API. This means that every time someone loads your page, the server has to make a request to the Twitter API and wait for a response. This can slow down your page load time dramatically and can also leave you even more vulnerable to a Denial of Service attack. Thankfully, we can work around all this by implementing a cache. Although the basic structure of the control's code remains after implementing caching, there's too many small changes to list, so I'll give you the full source and then - as usual - explain how it works. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.Script.Serialization; using System.Net; using System.Threading; using System.Text.RegularExpressions; namespace WebApplication1 { public class Twidget : Control { public string Account { get; set; } public int Tweets { get; set; } public int CacheTTL { get; set; } static Dictionary<string, CachedData<List<string>>> Cache = new Dictionary<string, CachedData<List<string>>>(); protected override void Render(HtmlTextWriter writer) { writer.Write("<ul>"); foreach (var t in GetTweets().Take(Tweets)) { string s = Regex.Replace( HttpUtility.HtmlEncode(t), @"[a-z]+://[^\s]+", x => "<a href='" + x.Value.Replace("'", """) + "'>" + x.Value + "</a>", RegexOptions.Compiled | RegexOptions.IgnoreCase ); writer.Write("<li>{0}</li>", s); } writer.Write("</ul>"); } public List<string> GetTweets() { if (!Cache.Keys.Contains(Account) || (DateTime.Now - Cache[Account].Time).TotalSeconds > CacheTTL ) new Thread(Update).Start(Account); if (!Cache.Keys.Contains(Account)) return new List<string>(); return Cache[Account].Data; } public static void Update(object acc) { try { string Account = (string)acc; var ls = new List<string>(); var jss = new JavaScriptSerializer(); var d = jss.Deserialize<List<Dictionary<string, object>>>( new WebClient() .DownloadString("" + Account) ); foreach (var x in d) ls.Add((string)x["text"]); if (!Cache.Keys.Contains(Account)) Cache.Add(Account, new CachedData<List<string>>()); Cache[Account].Data = ls; } catch (Exception) { } } } class CachedData<T> { public DateTime Time { get; private set; } T data; public T Data { get { return data; } set { Time = DateTime.Now; data = value; } } } } As you can see, the Render() method remains unchanged, but there's some pretty drastic changes everywhere else. We've changed the GetTweets() method, added a new property (CacheTTL), added a private static field (Cache), and there's even a whole new class - CachedData. The GetTweets() method is no longer responsible for talking to the API. Instead, it just returns the data already sitting in the cache. If it detects that the requested Twitter account hasn't been cached yet, or is out of date (you can specify how long it takes for the cache to expire in the CacheTTL attribute of the control), it will spawn a seperate thread to asynchronously update the tweet cache. Note that the entire body of the Update() method is enclosed in a try/catch block, as although an exception in the Page thread just displays an error message to the user, if an exception occurs in another thread, it will unwind all the way back up the stack and eventually crash the entire worker process responsible for serving your web application. The tweet cache is implemented as a Dictionary<string, CachedData<string>>, where the username of the twitter account being cached is the key, and an instance of the CachedData<T> class is the value. The CachedData<T> class is a generic class and can therefore be used with any type (although in our case, we're only using it to cache a string.) It has two public properties, Data, which uses the data field behind the scenes, and Time, which is updated to the current time whenever the Data property is set. You can use the following code in your page to use this caching version of the widget. Note that the new CacheTTL attribute sets the expiry (in seconds) of the tweet cache. <widgets:Twidget Conclusion I hope this tutorial has not only taught you how to make a Twitter widget, but has given you an insight into how server controls work as well as some best practices when 'mashing up' data from external sources. I realise that the browser output of this control isn't exactly the prettiest, but I felt that styling it and making it look pretty was outside the scope of the article and has therefore been left as an exercise for the reader. Thanks for reading! Feel free to ask any questions that you might have in the comments section below.
http://code.tutsplus.com/articles/how-to-build-a-simple-twitter-widget-with-asp-net--net-11329
CC-MAIN-2014-10
refinedweb
1,821
64.3
Hi guys, I am doing a project for my mid term and I am held up with this doubt. I might have misunderstood something all together or it might be a small mistake. Please point it out: I have a baseentity class defining the features of a character. I have three entities TEnt, Male and Female which inherits from the base entity. Now what I wanted was to have a Entitymanager Class which manages all the entities. This is a code I wrote for my entity manager: namespace Sims{ /*! Forward declaration of the class*/ class TEnt; /*! class Entity manager inherits from the singleton_client class */ class EntityManagerrivate Singleton_client{ /*! \class Sims::EntityManager "include/EntityManager.h" <BR> \brief A Singleton manager entity that basically manages all the entities in the scene Inherited from the SEntityEyeDefs */ public: /*! A class that registers the entity to the base vector*/ /*! FIXME: This function needs to be defined for every class of entity*/ void REGISTER_ENTITY_INSTANCE(TEnt& instance); /*! Prints out a test message //TODO: needs to be removed after testing */ void print(); private: /*! A vector of Each of the classes */ //FIXME: This piece of code is rubbish.. i need to have a access to all the entities rather than a vector of every set of entity /*! A vector of TEnts*/ std::vector < TEnt* > m_ppTEntList; }; } As you can see here there needs to be a private member variable for every class that inherits from the base entity . Also Every function like the REGISTER which has a input or output of the class needs to be called for each and every class. I can understand that this is bad coding. Is there a better way to do this at all? | Originally
http://forums.codeguru.com/showthread.php?473208-A-question-on-Pointers-and-inheritance&mode=hybrid
CC-MAIN-2017-26
refinedweb
280
65.22
TypeScript is an open-source superset of JavaScript developed by Microsoft to add additional features without breaking existing programs. TypeScript is now widely used by front-end and full-stack developers for large-scale projects due to its unique benefits like static typing and many shorthand notations. Today, we'll help you prepare for your TypeScript coding interview by covering 50 of the top TypeScript interview questions and answers. Here’s what we’ll cover today: - General TypeScript Questions - TypeScript Syntax & Language Basics - TypeScript with JavaScript Questions - Advanced TypeScript Questions - 20 More TypeScript Questions to Practice - Tips for preparing for TypeScript interviews Brush up on your TypeScript the easy way. Get the hands-on practice you need to ace the TypeScript interview. By the end, you'll know how to use advanced TypeScript in professional projects. TypeScript for Front-End Developers 1. What are the main features of TypeScript? - Cross-Platform: The TypeScript compiler can be installed on any operating system such as Windows, macOS, and Linux. - ES6 Features: TypeScript includes most features of planned ECMAScript 2015 (ES6) such as Arrow functions. - Object-Oriented Language: TypeScript provides all the standard OOP features like classes, interfaces, and modules. - Static Type-Checking: TypeScript uses static typing and helps type checking at compile time. Thus, you can find compile-time errors while writing the code without running the script. - Optional Static Typing: TypeScript also allows optional static typing in case you are used to the dynamic typing of JavaScript. - DOM Manipulation: You can use TypeScript to manipulate the DOM for adding or removing client-side web page elements. 2. What are the benefits of using TypeScript? - TypeScript is more expressive, meaning it has less syntactical clutter. - Easy debugging due to advanced debugger that focuses on catching logical errors before compile-time - Static typing makes TypeScript easier to read and more structured than JavaScript's dynamic typing. - Usable across platforms, in both client and server-side projects due to versatile transpiling. 3. What are the built-in data types of TypeScript? Number type: It is used to represent number type values. All the numbers in TypeScript are stored as floating-point values. let identifier: number = value; String type: It represents a sequence of characters stored as Unicode UTF-16 code. Strings are enclosed in single or double quotation marks. let identifier: string = " "; Boolean type: a logical binary switch that holds either true or false let identifier: bool = Boolean value; Null type: Null represents a variable whose value is undefined. let num: number = null; Undefined type: an undefined literal that is the starting point of all variables. let num: number = undefined; Void type: The type assigned to methods that have no return value. let unusable: void = undefined; 4. What is the current stable version of TypeScript? The current stable version is 4.2.3. 5. What is an interface in TypeScript? Interfaces define a contract or structure for objects that use that interface. An interface is defined with the keyword interface and it can include properties and method declarations using a function or an arrow function. interface IEmployee { empCode: number; empName: string; getSalary: (number) => number; // arrow function getManagerName(number): string; } 6. What are modules in TypeScript? Modules in TypeScript are a collection of related variables, functions, classes, and interfaces. You can think of modules as containers that contain everything needed to execute a task. Modules can be imported to easily share code between projects. module module_name{ class xyz{ export sum(x, y){ return x+y; } } 7. How can you use TypeScript for the backend? You can use Node.js with TypeScript to bring the benefits of TypeScript to backend work. Simply install the TypeScript compiler into your Node.js by entering the following command: npm i -g typescript 8. What are Type assertions in TypeScript? Type assertion in TypeScript works like typecasting in other languages but without the type checking or restructuring of data possible in languages like C# and Java. Type assertion has no impact on runtime and is used purely by the compiler. Type assertion is essentially a soft version of typecasting that suggests the compiler see the variable as a certain type but does not force it into that mold if it's in a different form. TypeScript Syntax & Language Basics 9. How do you create a variable in TypeScript? You can create variables in three ways: var, let, and const. var is the old style of fiercely scoped variables. You should avoid using var whenever possible because it can cause issues in larger projects. var num:number = 1; let is the default way of declaring variables in TypeScript, Compared to var, let reduces the number of compile-time errors and increases code readability. let num:number = 1; const creates a constant variable whose value cannot change. It uses the same scoping rules as let and helps reduce overall program complexity. const num:number = 100; 10. How do you call a base class constructor from a child class in TypeScript? You can use the super() function to call the constructor of the base class.); } } 11. Explain how to use TypeScript Mixins. Mixins are essentially inheritance that works in the opposite direction. Mixins allow you to build new classes by combining simpler partial class setups from previous classes. Instead of class A extending class B to gain its functionality, class B takes from class A and returns a new class with additional functionality. 12. How do you check null and undefined in TypeScript? You can either use a juggle-check, which checks both null and undefined, and strict-check which returns true for values set to null and won't evaluate true for undefined variables. //juggle if (x == null) { } var a: number; var b: number = null; function check(x, name) { if (x == null) { console.log(name + ' == null'); } if (x === null) { console.log(name + ' === null'); } if (typeof x === 'undefined') { console.log(name + ' is undefined'); } } check(a, 'a'); check(b, 'b'); 13. What are getters/setters in TypeScript? How do you use them? Getters and setters are special types of methods that help you delegate different levels of access to private variables based on the needs of the program. Getters allow you to reference a value but cannot edit it. Setters allow you to change the value of a variable but not see its current value. These are essential to achieve encapsulation. For example, a new employer may be able to get the number of employees in the company but does not have permission to set the number of employees.); } 14. How do you allow classes defined in a module to be accessible outside of a module? You can use the export keyword to open modules up for use outside the module. module Admin { // use the export keyword in TypeScript to access the class outside export class Employee { constructor(name: string, email: string) { } } let alex = new Employee('alex', 'alex@gmail.com'); } // The Admin variable will allow you to access the Employee class outside the module with the help of the export keyword in TypeScript let nick = new Admin.Employee('nick', 'nick@yahoo.com'); 15. How do we convert string to a number using Typescript? Similar to JavaScript, You can use the parseInt or parseFloat functions to convert a string to an integer or float, respectively. You can also use the unary operator + to convert a string to the most fitting numeric type, "3" becomes the integer 3 while "3.14" becomes the float 3.14. var x = "32"; var y: number = +x; 16. What is a '.map' file, and why/how can you use it? A map file is a source map that shows how the original TypeScript code was interpreted into usable JavaScript code. They help simplify debugging because you can catch any odd compiler behavior. Debugging tools can also use these files to allow you to edit the underlying TypeScript rather than the emitted JavaScript file. 17. What are classes in TypeScript? How do you define them? Classes represent the shared behaviors and attributes of a group of related objects. For example, our class might be Student which all have the attendClass method. On the other hand, John is an individual instance of type Student and may have additional unique behaviors like attendExtracurricular. You declare classes using the keyword class: class Student { studCode: number; studName: string; constructor(code: number, name: string) { this.studName = name; this.studCode = code; } Keep practicing TypeScript. Prepare for your interview with expert lessons and 400 hands-on coding environments. Educative's text-based courses are easy to skim and focus on hirable skills to get you prepared in half the time. TypeScript for Front-End Developers TypeScript with JavaScript Questions 18. How does TypeScript relate to JavaScript? TypeScript is an open-source syntactic superset of JavaScript that compiles to JavaScript. All original JavaScript libraries and syntax still works but TypeScript adds additional syntax options and compiler features not found in JavaScript. TypeScript can also interface with most of the same technologies as JavaScript, such as Angular and jQuery. 19. What is JSX in TypeScript? JSX is an embeddable XML-like syntax that allows you to create HTML. TypeScript supports embedding, type checking, and compiling JSX directly to JavaScript. 20. What are the JSX modes TypeScript supports? TypeScript has built-in support for preserve, react, and react-native. preservekeeps the JSX intact for use in a subsequent transformation. reactdoes not go through a JSX transformation and instead emits react.createElementand outputs as a .jsfile extension. react-nativecombines preserveand reactin that it maintains all JSX and outputs as a .jsextension. 21. How do you compile a TypeScript file? You need to call the TypeScript compiler tsc to compile a file. You'll need to have the TypeScript compiler installed, which you can do using npm. npm install -g typescript tsc <TypeScript File Name> 22. What scopes are available in TypeScript? How does this compare to JS? - Global Scope: defined outside of any class and can be used anywhere in the program. - Function/Class Scope: variables defined in a function or class can be used anywhere within that scope. - Local Scope/Code Block: variables defined in the local scope can be used anywhere in that block. Advanced TypeScript Questions 23. What are Arrow/lambda functions in TypeScript? Fat arrow function is a shorthand syntax for defining function expressions of anonymous functions. It's similar to lambda functions in other languages. The arrow function lets you skip the function keyword and write more concise code. 24. Explain Rest parameters and the rules to declare Rest parameters. Rest parameters allow you to pass a varied number of arguments (zero or more) to a function. This is useful when you're unsure how many parameters a function will receive. All arguments after the rest symbol ... will be stored in an array. For example: function Greet(greeting: string, ...names: string[]) { return greeting + " " + names.join(", ") + "!"; } Greet("Hello", "Steve", "Bill"); // returns "Hello Steve, Bill!" Greet("Hello");// returns "Hello !" The rest parameter must be the last on parameter definition and you can only have 1 rest parameter per function. 25. What are Triple-Slash Directives? What are some of the triple-slash directives? Triple-slash Directives are single-line comments that contain an XML tag to use as compiler directives. Each directive signals what to load during the compilation process. Triple-slash Directives only work at the top of their file and will be treated as normal comments anywhere else in the file. /// <reference path="..." />is the most common directive and defines the dependency between files. /// <reference types="..." />is similar to pathbut defines a dependency for a package. /// <reference lib="..." />allows you to explicitly include the built-in libfile. 26. What does the Omit type do? Omit is a form of utility type, which facilitates common type transformations. Omit lets you construct a type by passing a current Type and selecting Keys to be omitted in the new type. Omit<Type, Keys> For example: interface Todo { title: string; description: string; completed: boolean; createdAt: number; } type TodoPreview = Omit<Todo, "description">; 27. How do you achieve function overloading in TypeScript? To overload, a function in TypeScript, simply create two functions of the same name but have different argument/return types. Both functions must accept the same number of arguments. This is an essential part of polymorphism in TypeScript. For example, you could make an add function that sums the two arguments if they're numbers and concatenates them if they're strings. function add(a:string, b:string):string; function add(a:number, b:number): number; function add(a: any, b:any): any { return a + b; } add("Hello ", "Steve"); // returns "Hello Steve" add(10, 20); // returns 30 28. How do you make all properties of an interface optional? You can use the partial mapped type to easily make all properties optional. 29. When should you use the ‘unknown’ keyword? You should use unknown if you don't know which type to expect upfront but want to assign it later on, and the any keyword will not work. 30. What are decorators, and what can they be applied to? A decorator is a special kind of declaration that lets you modify classes or class members all at once by marking them with the @<name> annotation. Each decorator must refer to a function that'll be evaluated at runtime. For example, the decorator @sealed would correspond to the sealed function. Anything marked with @sealed would be used to evaluate the sealed function. function sealed(target) { // do something with 'target' ... } They can be attached to: - Class declarations - Methods - Accessors - Properties - Parameters Decorators are not enabled by default. To enable them, you have to edit the experimentalDecoratorsfield in the compiler options from your tsconfig.jsonfile or the command line. 20 More TypeScript Questions to Practice - 31. What is the default access modifier for member variables and methods in TypeScript? - 32. When should you use the declarekeyword? - 33. What are generics in TypeScript? When would you use them? - 34. How and when would you use the enumcollection? - 35. What are namespaces and why would you use them? - 36. How would you implement optional parameters? - 37. Name 3 differences between TypeScript and JavaScript. - 38. Is TypeScript a functional programming language? - 39. What TypeScript features would be beneficial for a full-stack developer? - 40. What are the advantages of TypeScript Language Service (TSLS)? - 41. What features does TypeScript offer to help make reusable components? - 42. What is the difference between a tuple and an array in TypeScript? - 43. What is the difference between internal and external modules in TypeScript? - 44. What collections does TypeScript support? - 45. What is the Recordtype used for? - 46. What advantages does TypeScript bring to a tech stack? - 47. How do you generate a definition file using TypeScript? - 48. Does TypeScript support abstract classes? - 49. How can you set your TypeScript file to compile whenever there's a change? - 50. What are Ambientsin TypeScript and when should you use them? Tips for preparing for TypeScript interviews Preparing for interviews is always stressful, but proper preparation beforehand can help you build confidence. When preparing for your interview, remember to: - Get practice working with questions hands-on, not just reading them. - Break up your study material into sections and quiz yourself after each one. - Prepare for your behavioral interview, which is equally as important as the coding portion. To help you get more hands-on practice with TypeScript, Educative has created the TypeScript for Front-End Developers Path. This Path is full of our best TypeScript content from across our site. You'll revise the fundamentals, explore practical applications, and build React projects all using Educative's in-browser coding environments. By the end of the Path, you'll have the skills you need to ace your next TypeScript interview. Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/educative/top-50-typescript-interview-questions-explained-3mpc
CC-MAIN-2021-21
refinedweb
2,626
58.69
Prev Java Exception Experts Index Headers Your browser does not support iframes. Re: Customise Application Icon From: "Andrew Thompson" <u32984@uwe> Newsgroups: comp.lang.java.programmer Date: Fri, 11 Jan 2008 02:13:55 GMT Message-ID: <7e071d506cde0@uwe> Leigh wrote: I can't find where the main form's .. Form? There is no 'form' in the J2SE. It sounds like you are 'talking the language of your IDE'. I suggest learning Java instead. Thanks for your kind words Andrew. (dismissive) Puh! The only kindness I will generally offer to posters is to reply. Thereafter I usually aim for 'techically specific' - which although it is not 'unkind' it often ends up being 'direct and blunt'. ..I used the word "form" in a generic sense (perhaps influenced from prev experience with C++). I don't understand that comment. Surely the C++ compilers are just as finicky as the Java compilers, but if you're refering to the C++ usenet groups - I can only assume that it takes longer to get to a positive result that way. ..I made the basic NetBeans example Java Desktop application. this doesn't create a JFrame, rather a frameView, whatever that is, as it's top level container I think. You think? This is the sort of comment that inspires me to say 'understand it from the command line first'. The fact is that NetBeans (last time I checked) had relatively ways to see the 'inheritance tree' of any Java object, so it should be relatively easy to tell others what this 'frameView' actually is. Further, the lower case initial 'f' suggests this is a reference variable name, rather than a class. E.G. .. // FrameView is a class, frameView is a reference to an instance of FrameView. FrameView frameView = new FrameView("Frame Title"); Please try to wuote class names, rather than the name of the variable (and if the class name is truly 'frameView' - it seems whoever wrote it was a nonser and the code should probably be avoided completely). I am learning java, and I've always found not being afraid of asking questions is a helpful aid. Bye the way, one needs to learn an ide as well as java at the beginning. That is something I strongly disagree with. Ant and a simple editor is all that is needed to 'learn Java', and gives a good foundation for understanding any IDE. ..I don't want to learn java only using the console. Life is too short for that. I warrant you will waste a lot more time trying to 'figure the IDE' that way, and ultimately (over a long period) that will lead to lower productivity. So to put it back to you.. " Life is too short for 'learning' IDEs." ..I would like to use another icon(instead of the default coffee cup). How can I reference my own icon? <(java.awt.Image )> Gosh, it's not easy is it? There are some 'roundabout'* factors in creating an icon image for an app. ..I am finding it tricky so far to create an image object to put in the user code field in the IconImage property of a test JFrame I added to my project to try your suggestion. I can't yet find an Image class in the java.awt.Image package. Hmmm, I'll struggle on. Hmm.. well I realise you were not asking specifically for further help, but I will include a simplistic* example anyway. <sscce> import java.awt.*; import javax.swing.*; import java.net.URL; class FrameWithIcon { public static void main(String[] args) throws Exception { final URL url = new URL( ""); Runnable r = new Runnable() { public void run() { Image icon = Toolkit. getDefaultToolkit(). createImage(url); JFrame frame = new JFrame("Icon"); frame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE); frame.setIconImage(icon); frame.setSize(400,400); frame.setLocationRelativeTo(null); frame.setVisible(true); } }; EventQueue.invokeLater(r); } } </sscce> * The example avoids some 'where is my sh*t?' hassles by grabbing the image direct from the net, rather than from the jar archive that contains the application. The second situation might encounter some problems finding the URL to the image, especially if called from the main() method. -- Andrew Thompson Message posted via JavaKB.com Generated by PreciseInfo ™ "we must join with others to bring forth a new world order... Narrow notions of national sovereignty must not be permitted to curtail that obligation." -- A Declaration of Interdependence, written by historian Henry Steele Commager. Signed in US Congress by 32 Senators and 92 Representatives 1975
http://preciseinfo.org/Convert/Articles_Java/Exception_Experts/Java-Exception-Experts-080111041355.html
CC-MAIN-2021-49
refinedweb
744
64.91
Hello Everyone Thanks for the message of welcome in a previous post. I have many questions but i have sleected this one as it represents my favourite piece of not working code (hahaha) but it has been studied and altered so i got this bugger working .. mostly . Explain: I backup using 7zip, this works, and it works well, instead of the regular grab a dir and back it up, this does somethign more like .... seek out all the image types in the user folder of a windows machine then compress them up and store them. So i have a main menu done with tkinter and i press the button, this button leads to this sexy piece of code def picbackup(): source = ['-ir!"%USERPROFILE%\\*.jpg"', '-ir!"%USERPROFILE%\*.bmp"', '-ir!"%USERPROFILE%\*.tif"', '-ir!"%USERPROFILE%\\*.gif"'] target_dir = '.\\' # Remember to change this to what you will be using today = time.strftime('%d_%m_%Y') # The current day is the name of the subdirectory in the main directory target = '.\\Backup\Pictures_' + today + '.zip' zip_command = "D:\\Backup\\7Zip\\7z.exe a -bd -ssw {0} {1}".format(target, ' '.join(source)) # # if os.system(zip_command) == 0: print('Successful backup to', target) else: print('Backup FAILED') ok .. this is the issue, while this is not yet compiled, i can see the output of this in my intepreter window so i can see 7zip doing it's job, but my original intention was for a window to pop up and display 7zips output in there, once complete, you could press OK and be back at the main menu. This was proving difficult to research so i switched to a simplier idea using some code i borrowed from a fella who made a ping GUI - and i shuffled my main menu to include a small live action output window so once you click the button, the backup will take place, but inside the output window the action should take place The output window code looks like this outputLabel = Label(root,text="7Zip Output Window") outputLabel.place(x=500,y=425) outputWindow = Text(root) outputWindow.config(relief=SUNKEN, bg='beige',width=51,height=17) outputWindow.place(x=350,y=130) Problem is ... simply .... it doesn't work, i have tried to put certain combinations into the 7zip section like shellOutput = zip_command.read() zip_command.close() outputWindow.delete('1.0',END) outputWindow.insert('1.0',shellOutput) that was between the # # and then 7zip doesn't do the backup no more. Would love to hear some anyone about either opening a window to display the output with a OK when done or using my now created 7zip output window Thanks for reading this .. it was alot :)
https://www.daniweb.com/programming/software-development/threads/206511/i-want-output-in-a-window
CC-MAIN-2017-17
refinedweb
441
60.55