I’ve said before that one of the main benefits of a strong type system is automatic and compiler-enforced documentation.

Types as compiler-enforced documentation

An API with carefully chosen input and output types is easier to use because the types establish a kind of “upper bound” for the function’s behavior. Consider the following Swift function signature as a simple example:

```swift
func / (dividend: Int, divisor: Int) -> Int
```

Without knowing anything about the function’s implementation, you can deduce that it must perform integer division, because the return type is incapable of expressing fractional values. In contrast, if the function’s return type were NSNumber, which can express both integer and floating-point values, you’d have to trust that the behavior is adequately documented.

This technique of using types to document behavior becomes more and more useful as a type system’s expressiveness grows. If Swift had a NonZeroInt type [1] to express the concept of “any integer except zero”, the divide function might be declared like this:

```swift
func / (dividend: Int, divisor: NonZeroInt) -> Int
```

Because the type checker would no longer allow you to pass 0 as the divisor, you wouldn’t have to question how the function handles a division-by-zero error. Does it trap? Does it return a garbage value? This is something the first variant of the function must document separately.

Make illegal states impossible

We can turn this insight into a general rule: use types to make illegal states unrepresentable in your program. If you want to learn more about how to do this, check out Brandon Williams and Stephen Celis’s new video series Point-Free. They talk a lot about this and related topics. The first eight episodes have been great, and I highly recommend the subscription. You’ll learn a lot.
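The idea of a type that validates its invariant once, at construction time, can be sketched in any language. Here is a minimal, hypothetical Python version (the class name and API are my own, not from the article; as the footnote notes, without compiler support the check has to happen at runtime):

```python
class NonZeroInt:
    """An integer wrapper whose invariant is 'value != 0'.

    The check runs once, at construction time; every function that
    accepts a NonZeroInt may then rely on the invariant.
    """

    def __init__(self, value: int):
        if value == 0:
            raise ValueError("NonZeroInt cannot hold zero")
        self.value = value


def divide(dividend: int, divisor: NonZeroInt) -> int:
    # No zero check needed here: the parameter type already guarantees it.
    return dividend // divisor.value
```

Callers that try to construct `NonZeroInt(0)` fail immediately at the construction site, which is exactly where the documentation burden used to sit.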
In episode 4 on algebraic data types, Brandon and Stephen discuss how enums and structs (or tuples) can be combined to design types that precisely represent the desired states but no more (making all invalid states unrepresentable). Towards the end of the episode, they mention Apple’s URLSession API as a negative example of an API that doesn’t use types as well as it should, which brings me to this article’s subtitle.

URLSession

Swift’s type system is much more expressive than Objective-C’s. However, many of Apple’s APIs don’t yet take full advantage of it, be it for lack of resources for updating old APIs or to maintain Objective-C compatibility. Consider the commonly used method for making a network request on iOS:

```swift
class URLSession {
    func dataTask(with url: URL,
        completionHandler: @escaping (Data?, URLResponse?, Error?) -> Void)
        -> URLSessionDataTask
}
```

The completion handler receives three optional values: Data?, URLResponse?, and Error?. That makes 2 × 2 × 2 = 8 possible states [2], but how many of those are legal? To quote Brandon and Stephen, there are a lot of representable states here that don’t make sense. Some are obviously nonsensical, and we can probably rely on Apple’s code to never call the completion handler with all values being nil or all being non-nil.

Response and error can be non-nil at the same time

Other states are trickier, and here Brandon and Stephen made a small mistake: they assumed that the API will either return (a) a valid Data and URLResponse, or (b) an Error. After all, it shouldn’t be possible to get a non-nil response and an error at the same time. Makes sense, right?

It turns out that this is wrong. A URLResponse encapsulates the server’s HTTP response headers, and the URLSession API will always provide you with this value once it has received a valid response header, even if the request errors at a later stage (e.g. due to cancellation or a timeout).
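The 2 × 2 × 2 counting argument is easy to verify mechanically. This illustrative Python snippet (not from the article) enumerates every nil/non-nil combination of the three parameters:

```python
from itertools import product

# Each parameter is either absent ("nil") or present ("non-nil").
# Order of the flags: (data?, response?, error?)
states = list(product([False, True], repeat=3))
print(len(states))  # 8 combinations in total

# Only a few of these are states the API is expected to produce, e.g.:
expected = [
    (True, True, False),   # data + response, no error: success
    (False, True, True),   # response + error, no data: failed mid-request
    (False, False, True),  # error only: request never reached the server
]
```

The remaining five combinations are representable by the types but meaningless, which is the whole complaint.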
It’s thus expected behavior for the completion handler to receive a populated URLResponse and a non-nil error value (but no Data). If you’re familiar with URLSession’s delegate-based API, this may not be surprising to you, because there are separate delegate methods for didReceiveResponse and didReceiveData. And to be fair, the documentation for dataTask(with:completionHandler:) also calls this case out:

If a response from the server is received, regardless of whether the request completes successfully or fails, the response parameter contains that information.

Still, I bet this is a very popular misconception among Cocoa developers. Just in the past four weeks, I saw two blog posts whose authors made the same mistake (or at least didn’t acknowledge the subtlety). I absolutely love the irony in this: the fact that Brandon and Stephen, while pointing out a flaw in an API due to badly chosen types, made an honest mistake that could have been prevented if the original API had used better types, illustrates the point they were making beautifully: a more strictly typed API can prevent accidental misuse.

Sample code

If you want to check out URLSession’s behavior yourself, paste the following code into a Swift playground:

```swift
import Foundation
import PlaygroundSupport

// If this 404s, replace with a URL to any other large file
let bigFile = URL(string: "")!

let task = URLSession.shared.dataTask(with: bigFile) { (data, response, error) in
    print("data:", data as Any)
    print("response:", response as Any)
    print("error:", error as Any)
}
task.resume()

// Cancel download after a few seconds
DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
    task.cancel()
}

PlaygroundPage.current.needsIndefiniteExecution = true
```

The code starts downloading a large file and then cancels the request after a few seconds. As a result, the completion handler gets called with a non-nil response and error.
(This assumes that the specified timespan is long enough to receive the response headers from the server and too short for the download to complete. If you’re on a very slow or incredibly fast network, you may have to tweak the time parameter.)

What is the correct type?

Brandon and Stephen published their own follow-up on the issue as part of episode 9 of Point-Free. Their conclusion is that the “correct” parameter type for the completion handler is:

```swift
(URLResponse?, Result<Data, Error>)
```

I disagree, because getting valid data but no response seems impossible. I think it should be:

```swift
Result<(Data, URLResponse), (Error, URLResponse?)>
```

Translation: you’ll either get data and a response (which is guaranteed to not be nil), or an error and an optional response. Admittedly, my suggestion conflicts with the common definition of the Result type, which constrains the failure parameter to the Error protocol; (Error, URLResponse?) can’t conform to Error. It’s currently being discussed on the Swift forums whether the Error constraint is necessary.

The Result type

The URLSession API is particularly tricky due to the unintuitive behavior of the URLResponse parameter, but pretty much all of Apple’s callback-based asynchronous APIs exhibit the same anti-pattern: the provided types make illegal states representable. How can we fix this? The common approach in Swift is to define a Result type, an enum that can represent either a generic success value or an error.

Recently, there’s been another push (not the first one) to add Result to the standard library. If Result makes it into Swift 5 (big if), Apple might (even bigger if) be able to automatically import Cocoa APIs of the form completionHandler: (A?, Error?) -> Void as (Result<A>) -> Void, turning four representable states into two. Until then (if it ever happens), I encourage you to do the conversion yourself.

On a longer timescale, Swift will someday get proper language support for working with asynchronous APIs.
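The recommended conversion, collapsing an (A?, Error?) pair into a two-case Result, is language-neutral. This Python sketch is illustrative only (the Ok/Err names and to_result helper are my own, not from the article or any library):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    error: Exception

Result = Union[Ok[T], Err]

def to_result(value, error) -> Result:
    """Collapse the four representable (value?, error?) states into two."""
    if error is not None:
        return Err(error)
    if value is not None:
        return Ok(value)
    # Both absent: an illegal state the original API could not rule out.
    raise AssertionError("callback delivered neither a value nor an error")
```

Downstream code now pattern-matches on exactly two cases instead of defensively checking four combinations.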
It’s likely that whatever solution the community and the Swift team come up with will allow existing Cocoa APIs to be ported to the new system, similar to how NSError ** parameters in Objective-C are already imported into Swift as throwing functions. Don’t count on seeing this before Swift 6 at the earliest, though.

[1] Nothing’s stopping you from defining a NonZeroInt type yourself, but there is no way to tell the compiler “raise an error if someone tries to initialize this type with zero”. You’d have to rely on runtime checks. Still, introducing types like this is often a good idea because users of the type can rely on the stated invariants after initialization. I haven’t yet seen a NonZeroInt type in the wild; custom types for guaranteed-to-be-non-empty collections are somewhat more popular. ↩︎

[2] I’m only counting “nil” or “non-nil” as possible states here. Obviously, a non-nil Data value can have an infinite number of possible states, and the same is true for the other two parameters. But these states aren’t interesting to us here. ↩︎
https://oleb.net/blog/2018/03/making-illegal-states-unrepresentable/
This article explains the Open/Closed Principle, one of the SOLID design principles, with examples in Java.

Open/Closed Principle – by Bertrand Meyer

The Open/Closed Principle was originally defined by Bertrand Meyer in his book Object-Oriented Software Construction. Bertrand Meyer’s definition was divided into three statements. The first two statements defined the notions of open and closed modules (or classes).

1st statement – As per Meyer, a module is open when:

“A module will be said to be open if it is still available for extension. For example, it should be possible to add fields to the data structures it contains, or new elements to the set of functions it performs.”

What it means – If attributes or behavior can be added to a class, it can be said to be “open”.

2nd statement – As per Meyer, a module is closed when:

“A module will be said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding).”

What it means – If a class is reusable or specifically available for extending as a base class, then it is closed. One of the basic requirements for a class to be closed in this way is that its attributes and methods should be finalized, because if they change, all the classes which inherit the base class are affected.

Meyer’s third statement gave the final shape to the Open/Closed Principle which is so much in practice in object-oriented programming.

3rd statement – Meyer defined that a class adheres to the Open/Closed Principle when it is both open and closed at the same time.

What it means – A class can be open and closed at the same time. To elaborate on this further, we next need to consider the previous two statements by Meyer, where he describes the conditions for a class to be open and closed together.
Conditions for a class to be open and closed together (and satisfy the Open/Closed Principle)

- A class can be considered to be closed if its runtime or compiled class is available for use as a base class which can be extended by child classes. Baselining here refers to making sure that changes are guaranteed to not happen. In short – the said class is closed for modification.
- A class can be considered to be open if its functionality can be enhanced by sub-classing it. When a class is a sub-class, then by the Liskov Substitution rule it can be replaced by its sub-class. This sub-class behaves as its parent class but is an enhanced version of it. In short – the said class is open for extension (via sub-classing).

Example of the Open/Closed Principle in Java

Let’s say we need to calculate the areas of various shapes. We start by creating a class for our first shape, Rectangle, which has two attributes, length and width –

```java
public class Rectangle {
    public double length;
    public double width;
}
```

Next we create a class to calculate the area of this Rectangle, with a method calculateRectangleArea() which takes the Rectangle as an input parameter and calculates its area –

```java
public class AreaCalculator {
    public double calculateRectangleArea(Rectangle rectangle) {
        return rectangle.length * rectangle.width;
    }
}
```

Next, say we have a new shape, Circle, with a single attribute radius –

```java
public class Circle {
    public double radius;
}
```

We then modify the AreaCalculator class to add circle calculations through a new method calculateCircleArea() –

```java
public class AreaCalculator {
    public double calculateRectangleArea(Rectangle rectangle) {
        return rectangle.length * rectangle.width;
    }

    public double calculateCircleArea(Circle circle) {
        // Note: 22.0/7 (not 22/7) avoids Java's integer division, which
        // would truncate the approximation of pi to 3.
        return (22.0 / 7) * circle.radius * circle.radius;
    }
}
```

However, note that there were flaws in the way we designed our solution above. Let’s say we have a new shape, Pentagon, next. In that case we will again end up modifying the AreaCalculator class.
As the number of shape types grows, this becomes messier, as AreaCalculator keeps on changing and any consumers of this class will have to keep updating their libraries which contain AreaCalculator. As a result, the AreaCalculator class will never be baselined (finalized) with surety, as every time a new shape comes it will be modified. So, this design is not closed for modification.

Also, note that this design is not extensible, i.e. what if complicated shapes keep coming? AreaCalculator will need to keep adding their computation logic in newer methods. We are not really expanding the scope of shapes; rather we are simply doing a piecemeal (bit-by-bit) solution for every shape that is added.

Modification of the above design to comply with the Open/Closed Principle

Let us now see a more elegant design which solves the flaws in the above design by adhering to the Open/Closed Principle. We will first of all make the design extensible. For this we need to first define a base type Shape and have Circle and Rectangle implement the Shape interface –

```java
public interface Shape {
    public double calculateArea();
}

public class Rectangle implements Shape {
    double length;
    double width;

    public double calculateArea() {
        return length * width;
    }
}

public class Circle implements Shape {
    public double radius;

    public double calculateArea() {
        return (22.0 / 7) * radius * radius;
    }
}
```

- There is a base interface, Shape. All shapes now implement the base interface Shape.
- The Shape interface has an abstract method, calculateArea(). Both Circle and Rectangle provide their own overridden implementation of calculateArea() using their own attributes.
- We have brought in a degree of extensibility, as shapes are now instances of the Shape interface. This allows us to use Shape instead of the individual classes wherever these classes are used by any consumer.

The last point above mentioned a consumer of these shapes.
In our case, the consumer will be the AreaCalculator class, which would now look like this –

```java
public class AreaCalculator {
    public double calculateShapeArea(Shape shape) {
        return shape.calculateArea();
    }
}
```

This AreaCalculator class now fully removes the design flaws noted above and gives a clean solution which adheres to the Open/Closed Principle. The design is now correct as per the Open/Closed Principle for the following reasons –

- The design is open for extension, as more shapes can be added without modifying the existing code. We just need to create a new class for the new shape and implement the calculateArea() method with a formula specific to that new shape.
- The design is also closed for modification. The AreaCalculator class is complete w.r.t. area calculations. It now caters to all the shapes which exist now, as well as to those that may be created later.

Summary

In the above tutorial we learnt what the Open/Closed Principle is by definition, then elaborated on that definition. We then saw an example of Java code which was flawed in its design. Lastly, we fixed the design by making it adhere to the Open/Closed Principle.
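The same refactoring can be sketched outside Java. This illustrative Python translation of the design above adds a brand-new shape (a hypothetical Triangle, not in the tutorial) without touching the calculator, which is the "open for extension, closed for modification" property in action:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Base type: every shape knows how to compute its own area."""
    @abstractmethod
    def calculate_area(self) -> float: ...

class Rectangle(Shape):
    def __init__(self, length: float, width: float):
        self.length, self.width = length, width
    def calculate_area(self) -> float:
        return self.length * self.width

# Extension: a new shape, added without modifying any existing class.
class Triangle(Shape):
    def __init__(self, base: float, height: float):
        self.base, self.height = base, height
    def calculate_area(self) -> float:
        return 0.5 * self.base * self.height

class AreaCalculator:
    def calculate_shape_area(self, shape: Shape) -> float:
        # Closed for modification: this method never changes as shapes grow.
        return shape.calculate_area()
```

Each new shape is a new class; the consumer stays baselined.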
https://www.javabrahman.com/programming-principles/open-closed-principle-with-examples-in-java/
This article was written by Lee Howes and Lewis Baker from Facebook. This is the second in a series of posts covering how we have used C++ coroutines at Facebook to regain stack traces for dependent chains of asynchronous waiting tasks. In the previous blog post we talked about the work we have done to implement stack traces for asynchronous coroutine code. Here we’ll go into more detail on the technical differences and challenges involved in implementing async stack traces on top of C++ coroutines, compared with traditional stack traces.

With normal stacks, when you call a function, the compiler generates code that automatically maintains a linked list of stack frames, and this list represents the call stack. At the start of each frame is a structure that (at least on Intel architectures) looks like this:

```cpp
struct stack_frame {
  stack_frame* nextFrame;
  void* returnAddress;
};
```

This structure is usually filled out by specialised assembly instructions. For example, on x86_64 a caller executes a call instruction, which pushes the return address onto the stack and jumps to the function entry point. Then the first instructions of the callee push the rbp register (which usually holds the pointer to the current stack_frame structure) onto the stack and copy the rsp register (which now contains the pointer to the stack_frame structure we just populated) into the rbp register. For example:

```asm
caller:
    ...
    call callee        # Pushes address of next instruction onto stack,
                       # populating 'returnAddress' member of 'stack_frame'.
                       # Then jumps to 'callee' address.
    mov rsp[-16], rax  # Save the result somewhere
    ...

callee:
    push rbp           # Push rbp (stack_frame ptr) onto stack (populates 'nextFrame' member)
    mov rbp, rsp       # Update rbp to point to new stack_frame
    sub rsp, 16        # Reserve an additional 16 bytes of stack-space
    ...
    mov rax, 42        # Set return-value to 42
    leave              # Copy rbp -> rsp, pop rbp from stack
    ret                # Pop return address from top of stack and jump to it
```

When a debugger or profiler captures a stack trace for a given thread, it obtains a pointer to the first stack frame from the thread’s rbp register and then walks this linked list until it reaches the stack root, recording the return addresses it sees along the way in a buffer. Subsequent profiling tools may then translate the addresses to function names and/or file+line numbers using a symbolizer that makes use of debug info for the binary, and this information may be logged or displayed as useful for the tool in question. Usually these stack frames live in a single contiguous memory region, with each frame linking to the one below it.

If we want to walk the async-stack trace instead of the normal stack trace, we still want to start by walking normal stack frames, just as for a normal stack trace. A coroutine may call normal functions, and we want to include the frames for these normal function calls in the stack trace. However, when we get to the frame corresponding to the top-most coroutine (in this case coro_function_1), we do not want to follow the 'nextFrame' link into the coroutine_handle::resume method, as normal stack walking would. Instead we need a link to the waiting coroutine. At this point in the trace, the histories of the normal stack trace and the async-stack trace diverge.

Walking an async-stack trace involves answering a few questions: where is the next async frame, and what return address should be recorded for the current one? Before we can implement any of this, we need to understand a bit more about how coroutines are structured.

When you call a coroutine in C++, this allocates storage for a coroutine frame. The allocation is usually obtained from the heap, although the compiler is free to optimise out this allocation in some circumstances, for example by inlining the allocation into the frame of the caller.
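The frame-walking loop a profiler performs can be modeled abstractly. This illustrative Python sketch (not Facebook's code) represents frames as linked records and collects return addresses until it reaches the stack root:

```python
class Frame:
    """Models the stack_frame struct: a link to the next (outer) frame
    plus the return address recorded for this frame."""
    def __init__(self, return_address, next_frame=None):
        self.return_address = return_address
        self.next_frame = next_frame

def walk_stack(top_frame):
    """Follow the linked list from the innermost frame to the root,
    recording return addresses along the way (what a profiler samples)."""
    addresses = []
    frame = top_frame
    while frame is not None:
        addresses.append(frame.return_address)
        frame = frame.next_frame
    return addresses

# main -> f -> g: walking from g's frame yields the trace innermost-first.
root = Frame(0x0)                   # stack root
f = Frame(0x1000, next_frame=root)
g = Frame(0x2000, next_frame=f)
print(walk_stack(g))  # innermost frame's return address first
```

The async-stack problem described next is precisely that, at the top-most coroutine, this `next_frame` link points somewhere the trace should not go.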
The compiler uses the coroutine frame storage to store all of the state that needs to be preserved when the coroutine is suspended, so that it is available when the coroutine is later resumed. This usually includes storage for function parameters, local variables, temporaries and any other state the compiler deems necessary, such as which suspend-point the coroutine is suspended at.

The coroutine frame also includes storage for a special object, the coroutine promise, which controls the behaviour of the coroutine. The compiler lowers your coroutine function into a sequence of calls to methods on the coroutine promise object at certain key points within the coroutine body, in addition to the user-written code of the coroutine. The coroutine promise controls the behaviour of the coroutine by implementing the desired behaviour in these methods. For more details about the promise type, see the blog post Understanding the promise type.

The promise type is determined from the signature of the coroutine function, and for most coroutine types it is based solely on the return type of the coroutine. This allows coroutines that return a given type (e.g. folly::coro::Task<T>) to store additional per-coroutine-frame data within the coroutine frame by adding data members to the promise type. For the clang implementation, the layout of a coroutine frame for a given coroutine function looks a bit like this:

```cpp
struct __foo_frame {
  using promise_type =
      typename std::coroutine_traits<Ret, Arg1, Arg2>::promise_type;

  void (*resumeFn)(void*);   // coroutine_handle::resume() function-pointer
  void (*destroyFn)(void*);  // coroutine_handle::destroy() function-pointer
  promise_type promise;      // coroutine promise object
  int suspendPoint;          // keeps track of which suspend-point coroutine is suspended at
  char extra[458];           // extra storage space for local variables, parameters,
                             // temporaries, spilled registers, etc.
};
```

When a coroutine is suspended, all of the state for that coroutine invocation is stored in the coroutine frame, and there is no corresponding stack frame. However, when a coroutine is resumed on a given thread, this activates a stack frame for that coroutine on that thread’s stack (like a normal function), and this stack frame is used for any temporary storage whose lifetime does not span a suspend point.

When a coroutine body is currently executing, the pointer to the current coroutine frame is usually held in a register, which allows it to quickly reference state within the coroutine frame. However, this address may also be spilled into the stack frame in some cases, in particular when the coroutine is calling another function. Thus, when a coroutine is active and has called another function, note that walking the asynchronous version of the stack means diverging into heap memory by following the framePointer in coro_function_1.

Unlike the pointer to the current stack frame, which can generally be assumed to be stored in the rbp register, there is no standard location for the pointer to the coroutine frame. This has implications for how we are able to navigate from a stack frame to its corresponding async-frame data.

To produce a stack trace that represents the async call chain instead of the normal synchronous call chain, we need to be able to walk the chain of coroutine frames, recording the return/continuation address of each coroutine as we walk the chain. The first piece of the puzzle is storing the state needed to walk the async stack frames during an async stack trace. For each async stack frame we need to be able to determine the address of the next async stack frame, and we need to be able to determine the return address of the current stack frame.
One of the constraints to be aware of here is that the code that is going to walk the stack trace does not necessarily have access to debug information for the program. Profiling tooling, for example, may want to sample function offsets only and symbolize later, as it does for synchronous stack traces. We must be able to walk the async stack without needing additional complex data structures which could make the stack walk overly expensive. For example, profiling tools built on the Linux eBPF facility must be able to execute in a deterministic, finite amount of time.

There is technically enough information in the folly::coro::Task’s promise type, folly::coro::TaskPromise, to be able to walk to the next frame, as it is already storing the coroutine_handle for the continuation, and the coroutine frame of that continuation already encodes information about which suspend point of which coroutine function is awaiting it in its resumeFn and suspendPoint members. However, there are some challenges with trying to use this information directly when walking an async-stack trace.

If we have a pointer to a coroutine frame stored in a coroutine_handle, then in theory, if we know the layout of the promise object, we can calculate the address of the 'continuation' member of the promise, which contains the address of the next coroutine frame, simply by adding a constant offset to the coroutine-frame pointer. One approach is to require that all promise types store a coroutine_handle to the continuation as their first data member:

```cpp
template <typename T>
struct TaskPromise {
  std::coroutine_handle<void> continuation;
  Try<T> result;
  ...
};

struct __some_coroutine_frame {
  void (*resumeFn)(void*);
  void (*destroyFn)(void*);
  TaskPromise<int> promise;
  int suspendPoint;
};
```

Then, even if we do not know the concrete promise type, we know that its first member is a coroutine_handle and that the promise is placed immediately after the two function pointers.
From the perspective of a debugger walking the async-stack trace, it could assume that coroutine frames look like:

```cpp
struct coroutine_frame {
  void (*resumeFn)(void*);
  void (*destroyFn)(void*);
  coroutine_frame* nextFrame;
};
```

Unfortunately, this approach breaks down when the promise type is overaligned (that is, it has an alignment larger than two pointers: 32 bytes or larger on 64-bit platforms). This can happen if the folly::coro::Task<T> type is instantiated for an overaligned type T, for example a matrix type optimised for use with SIMD instructions. In such cases the compiler inserts padding between the function pointers and the promise object in the structure to ensure that the promise is correctly aligned.

This variation in layout makes it much more difficult to determine what offset to look at for the next coroutine-frame address, because the offset is type-dependent; the debugger needs to know something about the layout of the promise type to be able to calculate it. In theory we could look at the value of the resumeFn/destroyFn pointers to look up the promise type that corresponds to the coroutine body in a translation table, but this would require either debug information or modifying the compiler to encode this information in the binary. We cannot assume the availability of debug info, and modifying the compiler is a much larger project. Other approaches are possible, such as changing the ABI of coroutine frames to eliminate the padding, but these also require compiler changes and would make the implementation compiler-dependent.

The approach we took instead is to insert a new folly::AsyncStackFrame data structure as a member of the coroutine promise and use these to form an intrusive linked list of async frames. That is, a structure that looks something like this:

```cpp
namespace folly {
  struct AsyncStackFrame {
    AsyncStackFrame* parentFrame;
    // other members...
  };
}
```

It can then be added as a member to the coroutine promise objects:

```cpp
namespace folly::coro {
  class TaskPromiseBase {
    ...
   private:
    std::coroutine_handle<> continuation_;
    AsyncStackFrame asyncFrame_;
    ...
  };
}
```

Whenever we launch a child coroutine by co_awaiting it, we can hook up that child coroutine’s AsyncStackFrame so that its parentFrame member points to the parent coroutine’s AsyncStackFrame object.

Using a separate data structure gives us a lot of flexibility in how we represent async-stack traces. It insulates the data structures from any dependence on compiler internals, and will allow us to reuse AsyncStackFrame objects for non-coroutine async operations in the future. It comes at a small memory and runtime cost, as we now effectively have two pointers to the parent coroutine to store and maintain. This decision can be revisited in the future if we later want to squeeze out some more performance by making some of the previously mentioned compiler changes.

Now we have a way to represent a chain of async frames that can be walked by a debugger without needing to know anything about the concrete promise type. In the next post in the series, we will look at how to determine the return address of a coroutine and how to use these data structures to hook coroutine frames into a chain at runtime.

To learn more about Facebook Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter and Facebook. Interested in working with open source technologies at Facebook? Check out our open source-related job postings on our career page.
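The intrusive parentFrame chain can be modeled abstractly. This illustrative Python sketch mirrors the article's AsyncStackFrame naming but is not folly's code; it shows how hooking up the link at co_await time makes the async trace walkable without any knowledge of promise layout:

```python
class AsyncStackFrame:
    """Models folly::AsyncStackFrame: an intrusive link to the awaiting
    (parent) coroutine's frame, plus an address to report in the trace."""
    def __init__(self, return_address):
        self.parent_frame = None
        self.return_address = return_address

def hook_on_await(parent, child):
    """At the co_await suspend point, link the child's frame to its awaiter."""
    child.parent_frame = parent

def walk_async_stack(frame):
    """Walk parentFrame links, recording addresses innermost-first."""
    addresses = []
    while frame is not None:
        addresses.append(frame.return_address)
        frame = frame.parent_frame
    return addresses

main = AsyncStackFrame(0x100)
outer = AsyncStackFrame(0x200)
inner = AsyncStackFrame(0x300)
hook_on_await(main, outer)   # main co_awaits outer
hook_on_await(outer, inner)  # outer co_awaits inner
print(walk_async_stack(inner))  # innermost coroutine first
```

A walker only needs to know where the AsyncStackFrame sits, never the concrete promise type, which is exactly the flexibility the article claims for the separate data structure.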
https://developers.facebook.com/blog/post/2021/09/23/async-stack-traces-folly-synchronous-asynchronous-stack-traces/
This need not be your game project partner, though it may be if you are in the same lab and wish to work together on this task too.

Spammers sometimes search the web for email addresses. People who want people to contact them, but not spammers, sometimes post their addresses in an obfuscated form. Spammers write programs to detect this obfuscation. People come up with new ones. Etc.

In this lab, you’ll practice regular expressions by harvesting email addresses from a website. As a reminder, you can read it line by line using

```python
import urllib.request

stream = urllib.request.urlopen('')
for line in stream:
    decoded = line.decode('UTF-8').strip()
    # add code here
```

Add code that finds email addresses and prints them out. Print only email addresses from the website given, one per line, with no duplicates. You should not hard-code: if we change the content of the webpage, your output should change. Example output might be

basic@virginia.edu
link-only@virginia.edu
multi-domain@cs.virginia.edu
Mr.N0body@cand3lwick-burnERS.rentals
a@b.ca
no-at-sign@virginia.edu
no-at-or-dot@virginia.edu
first.last.name@cs.virginia.edu
with-parenthesis@Virginia.EDU
added-words1@virginia.edu
added-words2@virginia.edu
may.end@with-a-period.com
underscore@virginia.edu
reverse@virginia.edu
JohnDoe@virginia.edu
markdown@virginia.edu

See how many you can get (without having non-addresses). We don’t expect many of you to get them all… Most of these will be easier with regular expressions than without, but don’t forget about string methods like replace as well.

At least one partner should submit one .py file named emailhunt.py to Archimedes (the submission system). Please put all partners’ ids in comments at the top of the file.
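A starting point for the simplest, unobfuscated addresses might look like the sketch below. The pattern is illustrative and deliberately basic; it will not catch the obfuscated cases the lab asks for, and the de-duplication preserves first-seen order as the example output suggests:

```python
import re

# A deliberately simple pattern: word-ish local part, '@', dotted domain.
EMAIL_RE = re.compile(r'[A-Za-z0-9._-]+@[A-Za-z0-9-]+(?:\.[A-Za-z0-9-]+)+')

def find_emails(lines):
    """Return each address found in the lines, once, in order of first sight."""
    seen, result = set(), []
    for line in lines:
        for match in EMAIL_RE.findall(line):
            if match not in seen:
                seen.add(match)
                result.append(match)
    return result

# Stand-in for the decoded lines of the lab's webpage.
page = ['Contact basic@virginia.edu or', 'a@b.ca (twice: a@b.ca)']
print(find_emails(page))
```

From here the real work is extending the pattern (or pre-processing with `replace`) to handle the obfuscations in the example list, such as spelled-out at-signs and dots.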
http://cs1110.cs.virginia.edu/lab13-email.html
Answered by: Treeview dragdrop event ignores exceptions! HELP

Hi, I have a Windows Forms application in C# using a TreeView with drag-drop capability. However, I have found that when an exception occurs in the DragDrop event and is unhandled, .NET ignores it (instead of showing the exception dialog). Attached I have some test code where I explicitly throw an exception in the DragDrop event; nothing happens! Any ideas?

thanks
Brian

```csharp
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    private void treeView1_DragDrop(object sender, DragEventArgs e)
    {
        throw new ArgumentException();
    }

    private void treeView1_ItemDrag(object sender, ItemDragEventArgs e)
    {
        DoDragDrop("!", DragDropEffects.Copy);
    }

    private void treeView1_DragOver(object sender, DragEventArgs e)
    {
        e.Effect = DragDropEffects.Copy;
    }
}
```

All replies

FYI (for whoever is interested): I have discovered why this behaviour occurs. It is because you can drag-drop from other applications, and if those other applications should throw an exception, you don't want their exceptions bubbling up through your application. I have not found a workaround for this yet, as I WANT unhandled exceptions to eventually be thrown in my app (so as to find bugs during testing). My application doesn't really support drag and drop from other apps, so perhaps I might just use the mouse-up event or something... Hope this helps others...

Hey, I was having a similar problem (not drag-drop related), where certain events were suppressing any exceptions being thrown that I didn't catch. I solved the problem by going into the exception debugging options at Debug > Exceptions, then clicking the checkbox for "Common Language Runtime Exceptions" under the "User-unhandled" column. By default this is supposed to be checked, so I'm not sure why mine wasn't, but you may want to check that...
Thanks for your response. I am aware of this option, but I was more worried about exceptions not being handled at run-time, say on a client's machine or a tester's machine, rather than on my development machine. The only workaround I could find was to put a try..catch for all exceptions in the DragDrop event method around my code and display a messagebox detailing the exception information. Ideally no exception should occur here, but if it does I want to know about it!

Agreed, this is the reason I believe why unhandled exceptions don't get caught at all. However, I am doing drag and drop within the same/one application! i.e. from one treeview to another, and unhandled exceptions are still ignored in the DragDrop event. (Even if you drag-drop within the same treeview this happens.)

While first suspecting the try/catch handler in Control.DoDragDrop, it is actually the COM DoDragDrop() function that swallows the exceptions. Even a null reference exception (see sample below) is dismissed without causing the COM DoDragDrop() loop to exit. This is a questionable tactic and not documented anywhere I see. Nothing that the framework can do; it needs to rely on the COM implementation. Really serious runtime errors, like StackOverflowException, do however get caught by the CLR. Here's the test program I used:

using System;
using System.Security;
using System.Windows.Forms;

namespace WindowsApplication1
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
            this.AllowDrop = true;
        }

        private void treeView1_ItemDrag(object sender, ItemDragEventArgs e)
        {
            DoDragDrop(e.Item, DragDropEffects.Copy);
        }

        private void Form1_DragEnter(object sender, DragEventArgs e)
        {
            unsafe { int* p = null; *p = 0; }
        }
    }
}

I was quite surprised too to find that this was not documented anywhere. I wonder, is there any way of initializing .NET applications to catch unhandled COM exceptions, perhaps? Similar to how you can set up applications to catch unhandled thread exceptions.
If I get some time today or later this week I'll see if I can find some more info.... In any case I have found an in-between solution: I have simply put a try..catch for ALL exceptions in my DragDrop event and display a message box with the exception.StackTrace, so at least I know something has gone wrong....

Well, it's not "undocumented" per se. It's an overlap in the term "exception". With .NET certain native "exceptions" are wrapped in a framework Exception-based class. For example, an access violation is caught by the framework and raised as an AccessViolationException. Since the cause of the AccessViolationException is native, that can be passed back over the managed-unmanaged-managed threshold. Other exceptions that are unique to .NET (i.e. they don't wrap a native error-reporting artifact) like "throw new ArgumentException();" have no direct correlation to a native error-reporting artifact and ergo can't make their way back.

Hmm, it certainly can, by handling the underlying SEH exception. This blog post is relevant. Chris specifically talks about what a COM method should do when a managed exception is thrown (near the end of the section "COM error handling"). Choice #2 is exactly what the DoDragDrop() API function appears to do. Makes sense, considering that COM methods aren't allowed to propagate exceptions.

I got out of Chris' blog that SEH isn't used for exception propagation from managed code to unmanaged code via COM. Although, he's not all that clear: "Unfortunately, the most broken of the three is the last one… and that's the one we currently follow." where the "last one" is: "Convert the exception object into an HRESULT value", but this slightly contradicts "But this time the NullReferenceException will be mapped to an SEH exception code of 0xE0434F4D"; he details that the HRESULT of a COM method is used when exceptions occur in managed code and the client can use the IErrorInfo interface to get more detail.
While a SEH may be used for certain exceptions (like NullReferenceException) originating in managed code as a result of a call from unmanaged code, Chris details that one SEH code (0xE0434F4D) is used for all managed exceptions. Which doesn't give much to the unmanaged client to give back to its managed client (in a managed-to-unmanaged-to-managed scenario). But I don't think it really matters. This is directed more at Brian: (ignoring the whole managed-to-unmanaged-to-managed interop details and the case of managed-to-unmanaged) how can exceptions coming from an arbitrary managed drag-drop target be propagated back through the call to DoDragDrop in the general case? What if the drag-drop target threw an Exception-derived type local only to it? The managed drag-drop source would have to explicitly reference that assembly and explicitly handle that exception. And what if the drag-drop source didn't handle that (or some other system exception): the drag-drop target now causes the drag-drop source to terminate from an unhandled exception, which is not good. This would completely couple the two applications, coupling that OLE Drag and Drop is explicitly designed to avoid. Best case--and OLE Drag and Drop can handle this--is the drag-drop source is informed of the success of the drop (the Win32 DoDragDrop is documented as returning E_UNSPEC if the target wasn't successful); but I don't see what the drag-drop source can really do with that information.

Hi Peter, I agree with what you say, it makes perfect sense, but what I actually was expecting was that the drag/drop TARGET would handle the exception occurring on dropping. I was expecting the target to at least notify this (with the usual unhandled-exception-has-occurred dialog box), but this may be that I misunderstood the drag/drop mechanism or confused myself, as I am doing drag and drop just within my own application and hadn't considered drag/drop BETWEEN applications.
thanks nobugz for the link to that blog, I've printed it out for some bedtime reading! Brian
http://social.msdn.microsoft.com/Forums/windows/en-US/8beb1aba-1699-46c7-84dc-38768c7a21f6/treeview-dragdrop-event-ignores-exceptions-help?forum=winforms
Efficient MLOps in a Kubernetes Environment

Source: containerjournal.com

MLOps addresses the specific needs of data science and ML engineering teams without impacting Kubernetes.

If your organization has already started getting into machine learning, you will certainly relate to the following. If your organization is taking its first steps into data science, the following will illustrate what is about to be dropped on you. If none of the above strikes a chord, this might interest you nonetheless, because AI is the new frontier and it won't be long until you'll need to address these challenges, too.

Data scientists are … well … scientists. If you ask them, their focus should be on developing the science, the neural networks, and the models behind the AI predictors they are tasked to build. They all have their preferred ways of working and they may need special environments for their required tasks. Of course, your colleagues the data scientists would love to be able to develop their code on their laptops. Unfortunately for them, this is not possible. Their machines either lack memory or storage, or they need a GPU, or several GPUs, for added horsepower. After all, AI workloads can be humongous compute hogs. Thus, they have to work on remote machines. Their code will be running from several hours to several days, and since it's quite long, they obviously want to try different configurations, parameters, etc.

Here comes DevOps to save the day! Allocate a Kubernetes cluster for them and let them run their containers. Problem solved! If only it were that easy …

The crux lies in the fact that AI development and deployment are not like standard software. Before delving into the differences, let's remember that K8s clusters were built for production. As such, there are several big no-nos that we are all familiar with. Specifically, on a K8s cluster we should never:

- Place all pods in the same namespace.
- Manage too many different types of pod/repo workloads manually.
- Assume we can control the order of the K8s scheduler.
- Leave lots of stale (used-once) containers hanging around.
- Allow node auto-scaling to kick in automatically based on pending jobs.

Basically, what I'm trying to say is that the first rule of DevOps is: do not allow R&D teams to access your K8s cluster. This is your cluster (read: production cluster), not theirs. However, with deep learning (DL) and sometimes with machine learning (ML), the need to run heavy workloads arises relatively early in the development process, and it continues from there. To top that, unlike traditional software, wherein a tested and usually stable version is deployed and replicated on the K8s cluster, in ML/DL the need is to run multiple different experiments, sometimes in the hundreds and thousands, concurrently. Those are, by definition, not production-grade, tested, and stable pieces of software. In other words, unless we want to constantly set up permissions and resources for the continuously changing needs of the data science teams, we have to provide an interface for them to use the resources we allocated for them.

Finally, since we are the experts on K8s and orchestration in general, the data science team will immediately come to us to support them in Dockerizing their code and environment. When things are simple, everything works, but things get out of hand very quickly. Because ML/DL code is unstable and packaging sometimes takes time, we will need to have it as part of the CI. This means that the data science team will have to maintain requirements/YAML files. As these experiments are mostly ephemeral, we will end up building lots and lots of Docker images that will be used only once. It is not uncommon to see clusters with tens of thousands or more images on them. The long and short of this is that we need someone or something that easily Dockerizes the data science team's endless environment setups. Continuously.
Let's decompose the requirements list into its different ingredients:

Resource Access

For DevOps, resource access means permissions/security, reliability, location, etc. For a data scientist, resource access means which resource type to use and its availability. That's it. Even a resource type that could be defined at low granularity is usually overkill from the development teams' perspective. What they care about is whether it is a CPU or a GPU machine and the number of cores. Three- or four-level settings for these resources (e.g. CPU, 1xGPU, 2xGPU, 8xGPU, etc.) would probably be enough for most AI teams.

Environment Packaging

Environment packaging is usually thought of as containerizing your codebase. This is a reasonable assumption coming from a DevOps perspective. For AI data science teams, maintaining a Dockerfile, a requirements.txt and updating Conda YAMLs is possible. However, it's a distraction from their core work, it takes time, and it is easy to leave behind old setups because they are not used in the coding environment itself. The development environment is constantly changing, and so the key is to extract the information without the need to manually keep updating it. Also, easily replicating environments from local machines to remote execution pods is needed.

Monitoring

For DevOps, monitoring usually means hardware monitoring, CPU usage, RAM usage, etc. For data science teams, monitoring is about model performance, speed, accuracy, etc. AI teams need to be able to monitor their applications (processes/experiments, whatever we call them) with their own metrics and with an easy-to-use interface. Unfortunately, no standard exists for this kind of monitoring, and oftentimes adding more use-case-specific metrics gives the data science team a huge advantage in terms of understanding the black box. This, too, is a constantly changing environment that needs to allow for customization by the data scientists.
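The coarse resource tiers a data science team actually cares about (CPU, 1xGPU, and so on) map onto ordinary Kubernetes resource requests. A hypothetical sketch of a "1xGPU" tier follows; the pod name, image, and quantities are illustrative, not taken from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job                 # illustrative name
spec:
  containers:
  - name: trainer
    image: registry.example.com/team/trainer:latest   # assumed image
    resources:
      requests:
        cpu: "4"
        memory: 16Gi
        nvidia.com/gpu: 1         # the whole "resource type" from the AI team's view
      limits:
        nvidia.com/gpu: 1
```

An MLOps layer can hide even this much YAML behind a simple tier picker.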
Job Scheduling

For DevOps, using K8s job scheduling is akin to resource allocation, e.g. a job needs resources that it will use for an unlimited amount of time. If we do not have enough resources, we need to scale. For data science teams, job scheduling is actually an HPC challenge. Almost by definition, there will never be enough resources to execute all the jobs (read: experiments) at the same time. On the other hand, jobs are short-lived (at least relative to the lifespan of servers), and the question is which job to execute first and on which set of resources.

MLOps to the Rescue

Kubernetes is a great tool for DevOps to manage the organization's hardware clusters, but it is the wrong tool to manage AI workloads for data science and ML engineering teams. So, is there a better way? To address the specific needs of data science and ML engineering teams, a new breed of tools has been designed: MLOps or AIOps. MLOps is an ideal tool for the data science team; in a K8s-centric organization it will interface with or run on top of K8s. From a K8s perspective, it should be just another service to spin up, with multiple exact copies and a replicated setup on different resources, just like we would do with any other application running on our K8s. These tools offer the data science team and ML engineers a very different interface to the K8s nodes. Ideally, the AI team should be able to see their allocated hardware resources as nodes on their own "cluster," into which they are given the ability to launch their jobs (experiments) directly, via a dedicated set of queues, without interfering with any of the K8s schedulers. The MLOps solution provides the glue between the queues managed by the data science team and K8s: resource allocation provisioned by Kubernetes, and job allocation and execution performed by the dedicated MLOps solution.
This solution should be able to pick a job from the dedicated ML/DL team queue (with priorities, job order and a few more things), then set up the environment for the job, either inside the Docker container or as a sibling container, and monitor the job (including terminating it if the need arises). The DevOps team is only needed when new hardware resources need to be allocated or resource allocation needs to be changed. For everything else, the users (read: AI team) self-service through the MLOps solution, purposely built for their needs, while letting the DevOps team manage the entire operation. Doesn't that sound better than being on call 24/7 for provisioning and then having to clean everything up afterward?
https://www.bestdevops.com/efficient-mlops-in-a-kubernetes-environment/
Hi,

On 7/16/05, Michael Niedermayer <michaelni at gmx.at> wrote:
> Hi
>
> On Saturday 16 July 2005 20:10, Guillaume POIRIER wrote:
> > Hi,
> >
> > On 7/16/05, Måns Rullgård <mru at inprovide.com> wrote:
> > > Julio César Carrascal said:
> > > > Hi. I'm trying to create a compliant video for a DVD with mencoder.
> > > > Right now most DVD creation programs (Nero Vision, Sony DVD Architect
> > > > and DVD-Lab PRO) complain because the GOPs aren't closed in the file
> > > > and try to re-encode it. I asked in the mencoder-users list and they
> > > > told me that the feature wasn't implemented and I should ask here.
> > > >
> > > > My question would be: Is "cgop" going to be implemented any time soon?
> > > > If so, any idea when will a release be available?
> > >
> > > It's been there for a year or so, at least. IIRC, CODEC_FLAG_CLOSED_GOP
> > > or something similar is the internal name. I'm sure someone else can
> > > tell you how to have mencoder use it.
> >
> > It should be already implemented in MEncoder. ve_lavc.c:612 has this:
> >
> > #ifdef CODEC_FLAG_CLOSED_GOP
> > lavc_venc_context->flags |= lavc_param_closed_gop;
> > #endif
> >
> > What I'm not too sure though is if CODEC_FLAG_CLOSED_GOP is always
> > defined so this part of the code gets compiled or not. MPlayer's doc
> > says that closed GOP is not implemented yet.
>
> the docs are wrong, cgop does work as long as scene change detection is
> disabled (-sc_threshold 1000000000 for example will do that in ffmpeg)

Ok, I guess it's not something you want to do (because the scene change detection is supposed to code the scene change with an I frame, which is seekable, and is wiser instead of coding it with a P frame, and of putting I frames only every keyint) unless you really need closed GOPs. Am I right?

Guillaume
--
A lie gets halfway around the world before the truth has a chance to get its pants on. -- Winston Churchill
http://ffmpeg.org/pipermail/ffmpeg-devel/2005-July/001892.html
* Joachim Breitner <nomeata@debian.org> [100703 11:32]:
> b) libnss-extrausers builds only one binary package, which contains all
> variants for a given architecture.
[...]
> The approach b has the advantage that all variants are available and the
> user does not have to remember that he might want to run 32bit variants
> and needs to install another package. The cost of this is the additional
> dependency on the "other" libc6-* package, as noted by Petter in this
> bug.

Take a look at:

It only depends on libc6 (or libc0.1 or libc0.3 on kfreebsd or hurd), but only suggests the libc*-<otherarch>. There is no need to have a Depends on libc6-i386 at all. (And even the Suggests in libnss-extrausers is mostly documentary): the modules are used by the corresponding libc and they need the libc they are used from. So whenever something uses some variant of the nss modules, the correct variant of libc is already there. And as the libc6-<arch> packages depend on the exact version of libc6, it is even the correct version.

-------------------------
ALSODO =
# life would be too simple if there was something to work everywhere....
ABIFLAG_32=-m32
ABIFLAG_64=-m64
# do not forget to update Build-depends when changing something here:
ifeq ($(DEB_HOST_ARCH),sparc)
ALSODO = 64
else ifeq ($(DEB_HOST_ARCH),i386)
ALSODO = 64
else ifeq ($(DEB_HOST_ARCH),kfreebsd-amd64)
ALSODO = 32
else ifeq ($(DEB_HOST_ARCH),amd64)
ALSODO = 32
else ifeq ($(DEB_HOST_ARCH),mips)
ABIFLAG_32=-mabi=n32
ABIFLAG_64=-mabi=64
ALSODO = 64 32
else ifeq ($(DEB_HOST_ARCH),mipsel)
ABIFLAG_32=-mabi=n32
ABIFLAG_64=-mabi=64
ALSODO = 64 32
else ifeq ($(DEB_HOST_ARCH),powerpc)
ALSODO = 64
else ifeq ($(DEB_HOST_ARCH),s390)
ALSODO = 64
endif
-------------------------

Bernhard R. Link
https://lists.debian.org/debian-devel/2010/07/msg00067.html
How to replace old C code (printf) with C++/Qt for GUI application?

- kahlenberg

I have code that was written some years ago for the command line. There are lots of printf calls that write on screen what is happening. Now I want to replace all the printf calls and I want to show everything in a GUI.

int FLASH_Image(char *fname, unsigned StartAddr, unsigned imgSize)
{
    ....
    for (i = StartSector; i < StartSector + NrOfSectors; i++) {
        printf("\rErasing sector %02d ...", i);
        FLASH_SectorErase((unsigned)i * SECTORSIZE);
    }
    ....
}

If I use a parameter char *msgpointer for text messages, it will only be "valid" after the function ends. I want to "update" it while the function runs. How is that possible?

- Wieland Moderators

Hi! Are you familiar with Qt's concept of signals and slots? Your function could emit a signal that contains a string with the current text to be displayed. Your displaying UI component can be connected to that signal.

- kahlenberg

@Wieland Yes, I am familiar with signals/slots. But the problem is that there is no class in that old C code. Without an object, how can I emit a signal? It doesn't work even if I declare the signal functions as static. I have a UDP class in UDP.h, I have this old C code in flash.h and its implementation in flash.c. Mainwindow.h includes flash.h, and flash.h includes udp.h; the include hierarchy is like so:

udp.h (with object, signals and slots)
  |
  V
flash.h (without objects, only function declarations)
  |
  V
mainwindow.h

- jsulm Moderators

Replace printf() with your own function which then passes the strings to the main window. You could even name it printf() and remove the include <stdio.h> (and include your header file) from your C code; then you do not have to change anything else :-)

- Wieland Moderators

I wouldn't mix UI components and business logic, e.g. access the MainWindow from flash.cpp.
IMAO it's better to create an adapter for message passing:

msgadapter.h

#ifndef MSGADAPTER_H
#define MSGADAPTER_H

#include <QtCore>

class MsgAdapter : public QObject
{
    Q_OBJECT
public:
    MsgAdapter(QObject *parent = 0);

public:
    void printf(const char *format, ...);

signals:
    void msg(QString);
};

#endif // MSGADAPTER_H

msgadapter.cpp

#include "msgadapter.h"
#include <cstdio>   // vasprintf (a GNU extension)
#include <cstdlib>  // free

MsgAdapter::MsgAdapter(QObject *parent) : QObject(parent)
{
}

void MsgAdapter::printf(const char *format, ...)
{
    va_list args;
    va_start(args, format);
    char *result;
    const int n = vasprintf(&result, format, args);
    va_end(args);
    if (n >= 0) {
        emit msg(result);
        free(result);
    }
}

flash.h

#ifndef FLASH_H
#define FLASH_H

class MsgAdapter;

void setMsgAdapter(MsgAdapter *msgAdapter);
void FLASH_Image();

#endif // FLASH_H

flash.cpp

#include "flash.h"
#include "msgadapter.h"

static MsgAdapter *g_msgAdapter = 0;

void setMsgAdapter(MsgAdapter *msgAdapter)
{
    g_msgAdapter = msgAdapter;
}

void FLASH_Image()
{
    g_msgAdapter->printf("\rErasing sector %02d ...", 23);
}

In mainwindow.cpp

#include "msgadapter.h"
#include "flash.h"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    MsgAdapter *msgAdapter = new MsgAdapter(this);
    setMsgAdapter(msgAdapter);
    connect(msgAdapter, SIGNAL(msg(QString)), ui->label, SLOT(setText(QString)));
}

MainWindow::~MainWindow()
{
    setMsgAdapter(0);
    delete ui;
}
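The same decoupling can be sketched without Qt, using a plain callback in place of the signal. All names here are hypothetical, and vsnprintf stands in for the GNU-specific vasprintf:

```cpp
#include <cstdarg>
#include <cstdio>
#include <string>

// Hypothetical sketch: route printf-style progress messages to a
// pluggable sink instead of stdout -- the non-Qt core of the adapter idea.
static void (*g_sink)(const std::string&) = nullptr;

void set_sink(void (*sink)(const std::string&)) { g_sink = sink; }

void msg_printf(const char* fmt, ...) {
    char buf[256];
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, sizeof buf, fmt, args);  // format into a bounded buffer
    va_end(args);
    if (g_sink) g_sink(buf);  // deliver to the GUI (or any other) sink
}
```

A GUI layer registers a sink once at startup, and the legacy C code keeps its printf-style call sites unchanged.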
https://forum.qt.io/topic/63416/how-to-replace-old-c-code-printf-with-c-qt-for-gui-application
Memoization and id2ref

This article was originally included in the February issue of the Engine Yard Newsletter. To read more posts like this one, subscribe to the Engine Yard Newsletter.

In this series, Evan Phoenix, Rubinius creator and Ruby expert, presents tips and tricks to help you improve your knowledge of Ruby.

The performance of a library or application is one of the key factors in getting it accepted, so it should come as no surprise that Ruby programmers have many different tricks they use to squeeze more performance out of their code. One of the most common is memoization. This is the technique of calculating a value once, then saving the result and transparently substituting it for the code that calculated the original value. Here's a short example:

def size_of_universe
  @size ||= Universe.find.size
end

Here, we've calculated the size of the universe and then saved the result into the @size ivar. This way, the next time size_of_universe is called, the previously calculated value is returned.

We've already gone over one of the simplest and most basic techniques, above. This technique uses the ||= operator to run the right-hand side if, and only if, the left-hand side is not true. It's short and sweet, rarely confusing the user.

Another technique that has been seen in production code uses ObjectSpace._id2ref. While this is becoming a common technique, it has a number of problems that we'll look at today. Here is an example of using this technique:

obj = Universe.find.size

eval <<CODE
def size_of_universe
  ObjectSpace._id2ref(#{obj.object_id})
end
CODE

This technique is used frequently with metaprogramming, when you want to embed a specific object directly into a generated method. People use this technique because, at first glance, it removes any kind of data dependency between the generated code and obj. There is no ivar to make sure is in scope, no constant, etc. But, in fact, this technique masks some rather terrible bugs.
This technique basically uses the whole Ruby process as a big table, leveraging the ability to easily get the table index for an object and convert that table index back into the object. The primary issue stems from the fact that Ruby is a garbage-collected language. Even though the code has requested the object_id for an object, that is not enough to keep the object alive. So if the only reference to the return value from #size was obj, when this method returns, obj becomes garbage.

So what happens when you run #size_of_universe and obj has been garbage collected? Well, a few things can happen:

- _id2ref will raise a RangeError, saying that the id no longer points to an object.
- A random object will be returned.

The second scenario is probably the strangest, but this can be observed. This bizarre _id2ref behavior occurs because the return value from #object_id is actually the address in memory of the object itself. This means that when the GC runs and collects the object, and then the allocator puts another object in the same place (which is exactly what a GC does), whatever object happens to be there is returned. This is essentially the same as a hanging pointer bug in C.

Lastly, the implementation of #_id2ref varies wildly between different Ruby implementations, each having different performance and different potential bugs. Due to these factors, using #_id2ref in production is even more nebulous.

So what's a simple alternative?

UNIVERSE_SIZES = [ ]

idx = UNIVERSE_SIZES.size
UNIVERSE_SIZES << Universe.find.size

eval <<-CODE
def size_of_universe
  UNIVERSE_SIZES[#{idx}]
end
CODE

This seems silly if there is just a single value in UNIVERSE_SIZES, but the expectation here is that you might be generating many methods with values that need to be memoized. In the example above, we're storing the values in an Array that is in a constant, which will keep the values alive from a GC standpoint. This avoids the bugs that #_id2ref has.
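For completeness, a quick check that the simple ||= idiom from the top of the article both keeps a live reference and computes only once. The class and its values are stand-ins, not the real Universe lookup:

```ruby
# Sketch: ||= memoization -- the expensive branch runs exactly once.
class Universe
  attr_reader :calls

  def initialize
    @calls = 0
  end

  def size
    @size ||= begin
      @calls += 1  # count how often the "expensive" work actually happens
      42           # stand-in for the real calculation
    end
  end
end

u = Universe.new
u.size
u.size
# u.calls is still 1: the second call reused the memoized value
```

Because @size holds a normal reference, the GC can never pull the value out from under the method, which is exactly the property _id2ref fails to provide.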
So hopefully if you need to memoize, you won't use _id2ref. There are a number of alternatives, most of which are better than worrying about the bugs that #_id2ref can easily introduce.

Share your thoughts with @engineyard on Twitter
https://blog.engineyard.com/2010/memoization-and-id2ref
Random r1 = new Random();
Random r2 = new Random(150);

The Random class constructor is overloaded: one that takes no parameter and the other that takes an integer value (known as the seed). The first object, r1, generates different random numbers at different times of execution. The second object, r2, with a seed of 150, generates the same set of random numbers at different times of execution.

Sir, can we put any restriction on the random number generation, i.e. generate only 2-digit integers?

You must write extra code. The following code prints from 10 to 99.

public class Demo {
    public static void main(String[] args) {
        int x = 10 + (int) (Math.random() * 90);
        System.out.println(x);
    }
}

Sir, here the class name we write is different and the object is created with a different class name. What does this mean? And what is this nextInt() method?

Your class name is the one whose file you open to write the code. In your code you can create an object of any class. This is called "composition". See way2java for composition.
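A sketch combining both points above: the seed fixing the sequence, and nextInt(bound) restricting the range. The class name and values are chosen for illustration:

```java
import java.util.Random;

public class SeedDemo {
    public static void main(String[] args) {
        // Same seed => both objects produce the identical sequence.
        Random r1 = new Random(150);
        Random r2 = new Random(150);
        System.out.println(r1.nextInt() == r2.nextInt());  // true

        // Two-digit numbers only: 10 + [0, 90) gives 10..99.
        Random r = new Random();
        int x = 10 + r.nextInt(90);
        System.out.println(x >= 10 && x <= 99);            // true
    }
}
```

nextInt(90) is the library way to bound the range, equivalent in effect to the Math.random() * 90 trick shown earlier.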
http://way2java.com/java-util/class-random/
Sometimes, a user wishes to allocate a local array whose size is not known at compile-time, but at runtime only. Nonetheless, the array's size will remain unchanged during the lifetime of the array. Examples are

This paper proposes to add local runtime-sized arrays with automatic storage duration to C++, for example:

void f(std::size_t n) {
  int a[n];
  for (std::size_t i = 0; i < n; ++i)
    a[i] = 2*i;
  std::sort(a, a+n);
}

Traditionally, the array bound "n" had to be a constant expression (see 8.3.4 dcl.array). For local arrays with automatic storage duration, this paper proposes to lift that restriction. The syntax is intended to be the same as that used for C99 variable length arrays (VLAs). As a design guideline, the same rules should apply to "new T[n]" and a local array "T a[n]", except that arrays of zero size are only supported for new.

There is well-established existing practice with gcc, Clang, and Intel C++ all implementing a similar, if not identical, feature. In fact, Douglas Gregor reported in c++std-ext-12553 on 2012-01-30:

Users really seem to want this feature. It's a fairly common extension, and when we tried to ban it out of principle (in Clang), our users reacted *very* strongly.

void f(std::size_t n) {
  int a[n];
  unsigned int x = sizeof(a);             // ill-formed
  const std::type_info& ti = typeid(a);   // ill-formed
  typedef int t[n];                       // ill-formed
}

Data structures that allocate from the heap access, by design, a global resource that is often highly contended in a multi-threaded program. Therefore, avoiding heap allocations is usually advantageous for performance. Allocating such data from the stack is much more efficient, because the stack is local to each thread and bytes on the stack are often cached locally. (Since each thread has a separate stack, it is unlikely that another thread on another CPU accesses the same data, thereby causing more expensive cache invalidations.)

The syntax does not require additional keywords.
Instead, a restriction on the existing array declaration syntax is lifted in certain circumstances. There is no reason to limit the feature to PODs as array element types, thus such a limitation is not proposed.

Stack overflow becomes more likely, in particular if the size depends on external input and is not properly checked. Some environments might therefore prohibit the use of the feature. Such a prohibition can be easily enforced with a static analysis tool. There is no longer an upper bound on the size of a function's stack frame. This makes static analysis of stack usage harder.

A type is a literal type if it is:
- ...
- an array of literal type; or
- ...

Change in 3.9.2 basic.compound paragraph 2 (the same wording is added by the proposed resolution of core issue 1464):
These methods of constructing types can be applied recursively; restrictions are mentioned in 8.3.1 dcl.ptr, 8.3.4 dcl.array, 8.3.5 dcl.fct, and 8.3.2 dcl.ref.

Change in 4.2 conv.array paragraph 1:
An lvalue or rvalue of type "array of N T" or "array of unknown bound of T" can be converted to a prvalue of type "pointer to T". The result is a pointer to the first element of the array.

Insert a new paragraph before 5.2.8 expr.typeid paragraph 2:
When typeid is applied to a glvalue expression ...

Change in 5.3.1 expr.unary.op paragraph 3:
The result of the unary & operator is a pointer to its operand. The operand shall be an lvalue or a qualified-id. ...

Change in 5.3.3 expr.sizeof paragraph 1:
...
Drafting note: 5.3.7 expr.unary.noexcept does not need to be changed, because the declaration of an array of runtime bound cannot be lexically part of the operand of a noexcept; see also 5.1.2p2 expr.prim.lambda.

Change in 6.5.4 stmt.ranged paragraph 1:
- if _RangeT is an array type, begin-expr and end-expr are __range and __range + __bound, respectively, where __bound is the array bound. If _RangeT is an array of unknown size or an array of incomplete type, the program is ill-formed;
- ...

Insert a new paragraph before 7.1.3 dcl.typedef paragraph 3:
In a given non-class scope, a typedef specifier can be used to redefine the name of any type declared in that scope to refer to the type to which it already refers. [ Example: ... ]

Change in 7.1.6.2 dcl.type.simple paragraph 3:
The type denoted by decltype(e) is defined as follows:
- if e is an unparenthesized ...

Change in 8 dcl.decl paragraph 4:
noptr-declarator:
    declarator-id attribute-specifier-seqopt
    noptr-declarator parameters-and-qualifiers
    noptr-declarator [ constant-expressionopt ] attribute-specifier-seqopt
    ( ptr-declarator )

Drafting note: Section 8.1 [dcl.name] defining the grammar term type-id is intentionally unchanged. Thus, constructing an array of runtime bound in a type-id is ill-formed, because the grammar continues to require all constant-expressions in array bounds.

Change in 8.3.1 dcl.ptr paragraph 1:
... Similarly, the optional attribute-specifier-seq (7.6.1) appertains to the pointer and not to the object pointed to.

Change in 8.3.2 dcl.ref paragraph 5:
There shall be no references to references, no arrays of references, and no pointers to references.
...Change in 8.3.4 dcl.array paragraph 1 (partly taken from Mike Miller's drafting for core issue 1464): In a declaration T D where D has the formChange in 8.3.4 dcl.array paragraph 3:D1 [and the type of the identifier in the declaration T D1 is "derived-declarator-type-list T", then the type of the identifier of D is an array type; if the type of the identifier of D contains the constant-expressionopt] attribute-specifier-seqopt autotype-specifier, the program is ill-formed. T N elements numbered 0 to N-1, and the type of the identifier of D is "derived-declarator-type-list array of N T".An object of array type contains a contiguously allocated non-empty set of N subobjects of type T. Except as noted below, if the constant expression is omitted, the type of the identifier of D is "derived-declarator-type-list array of unknown bound of T", an incomplete object type.The type "derived-declarator-type-list array of N T" is a different type from the type "derived-declarator-type-list array of unknown bound of T", see 3.9 basic.types. Any type of the form "cv-qualifier-seq array of N T" is adjusted to "array of N cv: ... ] When several "array of" specifications are adjacent, a multidimensional array is createdAdd a new paragraph before 8.3.4 dcl.array paragraph 4: ; only the first of the constant expressions that specify the bounds of the arrays may be omitted. In addition to ... Change in 8.3.5 dcl.fct paragraph 8:Change in 8.3.5 dcl.fct paragraph 8:.Change in 8.5.1 dcl.init.aggr paragraph 6: Change in 8.5.2 dcl.init.string paragraph 2:Change in 8.5.2 dcl.init.string paragraph 2: Aninitializer-list is ill-formed if the number of initializer-clauses exceeds the number of members or elements to initialize. [ Example: ... ] There cannot be more initializers than there are array elements. [ Example:Change in 9.2 class.mem paragraph 10:char cv[4] = "asdf"; // erroris ill-formed since there is no space for the implied trailing '\0'. 
-- end example ] Change in 14.1 temp.param paragraph 7:Change in 14.1 temp.param paragraph 7: Non-static(9.4 class.static) data membersshall not have incomplete types.In particular, a class C shall not contain a non-static member of class C, but it can contain a pointer or reference to an object of class C. A non-type template-parameter shall not be declared to have floating point, class, , or void type. [ Example: ... ] Drafting note: It is not necessary to explicitly prevent template argument deduction for an array of runtime bound, because 14.8.2p8 [temp.deduct] says "If a substitution results in an invalid type or expression, type deduction fails. An invalid type or expression is one that would be ill-formed if written using the substituted arguments." 8.3.5p8 (and other places) establishing restrictions on forming types are thus sufficient.Change in 15.1 except.throw paragraph 1: ... [ ] ...Add a new section just before 18.6.2.2 new.badlength: Class bad_array_lengthnamespace.
RE: Implementing custom interfaces - From: FoxtrotEcho <FoxtrotEcho@xxxxxxxxxxxxxxxxxxxxxxxxx> - Date: Mon, 6 Mar 2006 16:30:29 -0800

When you added a web reference to your project, the framework created a class that derives from SoapHttpClientProtocol (which implements IDisposable). You can see this yourself if you view all files and navigate down the web reference tree until you come to a file named Reference.cs. The class implemented in this file will not have derived from your interface even if the methods are clearly there. If you need that functionality, i.e. to have your class assume the behaviour of an interface, it is best to create another class in your project and derive it from both the class implemented in Reference.cs and your interface. No additional code is required because the class in Reference.cs already takes care of all implementations. Now, from your console app, instantiate this newly created class instead of the one created by the framework and extract your interface as required. For example:

public class MyProxy : Service, IMyInterface
{
    // other customizations here if required
}

where Service is the SoapHttpClientProtocol-derived class in Reference.cs and IMyInterface is your custom interface. In the console app:

MyProxy p = new MyProxy();
IMyInterface i = p as IMyInterface;
if (i != null) // and it should not be null
{
    // use i
}

You could achieve the same result by directly tampering with the framework-generated class, but every time you update the web reference, you would lose your customisations. Hope this helped.

"Hari" wrote:

Hi, I have created a web service which implements a custom interface. I have added this web service as a web reference to a console application that acts as a web service consumer. After creating an instance of the web service class, when I try to cast that instance to my custom interface, I get an InvalidCastException. When I do the same for system interfaces like IDisposable, it works fine.
Can somebody help me with this? Thanks in advance.

Regards,
Hari
User talk:Zombiebaron/archive41 From Uncyclopedia, the content-free encyclopedia AAAA:AAAAAAAAA!/AAAAAAA Please give a reason why you huffed this page; Will you create an Uncyclopedia in AAAAAAA? --218.186.15.241 08:56, November 26, 2011 (UTC) - I deleted it because it was nominated on QVFD. Please stop creating AAAAA pages. -- Brigadier General Sir Zombiebaron 07:52, November 26, 2011 (UTC) - Frosty then replaced {{talkheader}} with {{:AAAAAAAAA!/talkheader}} in Talk:AAAAAAAAA!. I reverted that edit saying that {{:AAAAAAAAA!/talkheader}} was meant to be used on Talk:AAAAAAAAA!/AAAAAAA only if it is re-created. Please tell Frosty what he just did and correct him. —The preceding unsigned comment was added by 218.186.15.241 (talk • contribs) 218.186.15.241 - You can ask him not to do that again yourself. However, that talk page is unlikely to be recreated, so I don't see any harm in him using it. Pup 11:15 26 Nov '11 - PuppyOnTheRadio is correct, you can contact Frosty yourself on his talkpage. -- Brigadier General Sir Zombiebaron 15:21, November 26, 2011 (UTC) Please unsmite sauron if you find a capacity within your human heart to do so. so much mdma rides on this. —The preceding unsigned comment was added by 65.25.91.229 (talk • contribs) - What? -- Brigadier General Sir Zombiebaron 07:51, November 27, 2011 (UTC) Sauron, that yellow motherfucker; which component of this piece derilicted and thusly whereupon did thus far thing whence, haha no but really, several of us found rather good laughs within the sauron piece, did mimey's spiel on that whole school of rock shit kill it? (when it went into jack blacks terrible, terrible input on the film) Deleted article myself and three other the people under the influence of POWERFUL psychostimulants had fun with that plushy, yellow fucking dinosaur, why'd you delete it? 
(apart from the obvious reasons) because a bunch of people at jims place shit im rambling —The preceding unsigned comment was added by 65.25.91.229 (talk • contribs) - Sorry man, most of our readers aren't high on MDMA. -- Brigadier General Sir Zombiebaron 07:51, November 27, 2011 (UTC) Greek Uncyclopedia Vector Hello from Sunny Bankrupt land*. I'm a bureaucrat of the Frikipaideia, the Greek Uncyclopedia. We are considering using the Vector skin in our wiki and, as we have no tech wizes amongst our admins, we wondered if you (not necessarily personally, I mean uncyclopedia in general) could tell us what needs to be done and maybe assist us in the process. After all, don't forget that you owe us! :P --JimmyX 12:07, November 27, 2011 (UTC) **I live in the UK. - Step one: Get a real host (ie, not Wikia). - Step two: set $wgDefaultSkin = "vector"; in the LocalSettings.php (or have someone at whoever is running it do it, or use a fancy ui thing to do it for you). - Step three: There is no step three. You're already done at this point. - I say this because what the English Uncyclopedia uses is not stable - it doesn't work at all on some browsers, and doesn't fully work on any. And every time Wikia changes something, it needs to be updated, something which noone here is prepared, at this point, to continue doing indefinitely. The only other viable option would be if Wikia were to suddenly enable the skin after all, but their PR guys have repeatedly said they won't and this seems like one of those rare things they might keep their word on. ~ 17:57, 27 November 2011 - Lyrithya is the one who designed our vector skin. If you'd like more help from her beyond what she's just said, I suggest you talk to her on her talkpage. -- Brigadier General Sir Zombiebaron 19:07, November 27, 2011 (UTC) A load of crap Hello, I was wondering if I could get the stuff I did for the article A load of crap moved to my user space. 
I took some time off for Thanksgiving and noticed that it got jack-smacked out of existence, but I was about 90% finished with it, so I was wondering whether it could be placed there until I can finish it. If you thought it was NRV worthy, then I would also like some ideas on what to do with it, but I thought several of the sections were pretty funny (at least the section titled Dung was pretty funny). Let me know. Thx. Jonny appleseed 17:28, November 28, 2011 (UTC) - Your page was deleted because it had a {{construction}} tag on it, and it hadn't been edited for 7 days. I assumed that meant that the author had abandend it without finishing it. In the future, please remember to remove the {{construction}} tag after you finish your article. -- Brigadier General Sir Zombiebaron 21:46, November 28, 2011 (UTC) Thanks for un-huffing this one. Jonny appleseed 16:10, November 29, 2011 (UTC) Page Deletion I had noticed that you had deleted my page on Toast Kicking, I am sorry that I couldn't make this clear, but that was a complete article, I would have added more. I was on to other things at that point. If you could please reply, detailing the parameters of a complete page, that would be great. --rincewindthecoward - I have recreated your page so you can continue working on it. Please add some more links, and some pictures to it. When you are done, remove the {{construction}} tag. -- Brigadier General Sir Zombiebaron 22:54, November 28, 2011 (UTC) - thanks, I will--rincewindthecoward Please unhuff my vanity page you deleted one my group of friends' page called SandyGeorgia. we know it's not a person, but it's a person behind an account on wikipedia. it's a username of someone! so we'd like to petition to get it back... it'd mean a lot. THANKS <3 if you choose not to, fuck you. —The preceding unsigned comment was added by 65.184.159.47 (talk • contribs) - Please read Uncyclopedia's policy on vanity. 
-- Brigadier General Sir Zombiebaron 00:51, November 29, 2011 (UTC) HOW DI I START A PAGE ON HERE? I AM NEW TO THIS AND DONT KNOW HOW TO START A PAGE I THOUGHT THAT PEOPLE GOT ON HERE AND EDITED OTHER PAGES, FOR LAFING. I DIDNT KNOW SOMEONE ELSE MADE THE PAGES. SO NOW KNOWING THAT OTHER PEOPLE MAKE THESE PAGES I WOULD LIKE TO START MY OWN SO I CAN PUT WHAT I WANT. PLEASE HELP ME. THANKS —The preceding unsigned comment was added by Calvincrabtree2 (talk • contribs) - I'll answer for ZB. It's very easy to create a new page, you can go here and simply fill in the boxes. If that does not tickle your pickle then you can just type the name you would like to use for your article into the Search bar, on the next screen you should see some text saying that the page does not exist (if it does choose another title). Just click the title of your article on that page (it should be a red link) then when you save that page you will have created a new article. Also, STOP TYPING IN CAPS, IT LOOKS LIKE YOU ARE SHOUTING, you should also sign your posts with four tildes like this: ~~~~. Hope that helps. --ChiefjusticePS2 12:33, November 29, 2011 (UTC) - Yeah. -- Brigadier General Sir Zombiebaron 13:23, November 29, 2011 (UTC) Can you allso tell me what people will find funny right off the bat, like the same day with in a few minnetts of me postig a page up? sorry im not one of the people who grew up with all the stuff that is on here and i make diffrent humor. i read the guid lines, and im not a person that has nothing better to do tan read the other pages and then sit around and quote them at some gatering, it all new stuff man —The preceding unsigned comment was added by Calvincrabtree2 (talk • contribs) - Try reading HTBFANJS. -- Brigadier General Sir Zombiebaron 15:26, November 30, 2011 (UTC) Mr. Baron Good day, kind Sir. A question and a potential disaster (why oh why do changes harm rather than help???) 
I look at changes to "my pages" by hitting on "Related Changes" on my user page. But that is now gone???!!!! Would you put it back, or else I won't know what changes have been made. Thanks, and all holiday wishes to you and a few of yours. Aleister 18:04 1-12-'11 - I'm not sure what you are talking about. The related changes link is where it has always been: in the toolbox right under what links here. -- Brigadier General Sir Zombiebaron 18:08, December 1, 2011 (UTC) - Ah, I see, thanks. This is the first time my toolbox was collapsed, it is usually open and I didn't know it collapsed. I now have "Participate" and "Toolbox" as collapsed elements and not fully listed. My deepest and unsincere apolgies for taking up your time, but the holiday wishes still hold even though I'd like to take them all back now. Enjoy! Aleister 18:11 1-12-'11 - Yes, the toolbox and the participate boxes have been collapsing by default since this morning. I'm not entirely sure why. -- Brigadier General Sir Zombiebaron 18:15, December 1, 2011 (UTC) - Ghosts in the machine. Thanks for swift answers which have relived my mind. Aleister 18:21 1-12-'11 - You're welcome. -- Brigadier General Sir Zombiebaron 18:24, December 1, 2011 (UTC) You've ruined my sense of Nostalgia. Dear Zombiebaron, I need to know something. You deleted a page of mine called UnNews:Crippled Boy Found Guilty of Treason. I just want to know this: why? I'm not mad (at least not very much), all I'd like really is to get it back. And I don't mean back on Uncyclopedia. I just want a copy of the source to keep o my computer. But I'm getting sidetracked, just, please tell me, why'd you do it? —The preceding unsigned comment was added by 24.209.182.98 (talk • contribs) - I deleted your page after it was nominated on QVFD. While it was long and probably took you some time to write, I didn't find it very funny. 
Maybe it's because I'm Canadian, but "a kid with a broken leg couldn't stand up to salute the flag, so his family shot him to death" isn't the sort of joke that makes me laugh. If you'd like to work on the article some more, let me know. -- Brigadier General Sir Zombiebaron 02:33, December 5, 2011 (UTC) I guess I do understand where you're coming from with that. I was just trying to parody some real life suspensions based on not stand during the pledge of alliegance. And you can never expect everyone to find something funny. And to be honest, my sense of humor probably won't change very much. It's just, I know that there are some people out there that liked it, and even if it is never published again, I'd still just likke to archive the source formatting for myself, if that isn't too much too ask for. -Thanks for your time, 24.209.182.98 12:40, December 5, 2011 (UTC) - I have undeleted your page and placed a {{construction}} tag on it. This will provide you with a chance to improve it before it can be deleted again. -- Brigadier General Sir Zombiebaron 15:35, December 5, 2011 (UTC) - You should probably consult with with UPjcm too, considering you're using his work as the source.-- Phlegm Leoispotter * (garble! jank!) 19:23, December 5, 2011 (UTC) Thank you. 24.209.182.98 21:57, December 5, 2011 (UTC) Actually, I did just realize something. I, well, I don't really know how you expect me to improve it (other than the parts where the parents shoot him, I got rid of that already). I don't want to bother you any more. Looking at your talk page, it does look like you've got a lot to do, (or maybe not. I'm no expert) but some pointers might be useful. -24.209.182.98 22:08, December 5, 2011 (UTC) - Just do your best, then remove the {{construction}}. -- Brigadier General Sir Zombiebaron 02:46, December 6, 2011 (UTC) OH NOES! 
D: I think I've qvfd'd an article that was far longer and vandalised: "Worst 100 Times To Fart" since it shows in red on the enormous list of lists.... Or maybe it was already deleted? Mattsnow 19:05, December 5, 2011 (UTC) - It looks like the page was indeed vandalized, but it also had had an {{ICU}} tag on it that the author had illegally removed. So, it's six of one half dozen of the other. -- Brigadier General Sir Zombiebaron 02:49, December 6, 2011 (UTC) HowTo:oTwoH I think you may have deleted the article of that title. You don't have to restore it, but please at least give me the original text. My talk page would be fine. Thanks. --S0.S0S.0S.0S0 00:00, December 6, 2011 (UTC) - The title does not appear to be something the mediawiki likes. It's possible it just got eaten when the new namespaces were created, in which case nothing can be done... although if that's the case, that's decidedly not good, since it'd mean anything else with a lowercase after the : would have been eaten as well... o__o Okay, hopefully I'm just being paranoid. Don't mind me... ~ 02:51, 6 December 2011 - Lyrithya is correct. The page you linked seems to have never existed, and I don't know why. Welcome back to Uncyclopedia, So So! -- Brigadier General Sir Zombiebaron 02:55, December 6, 2011 (UTC) - Innnnteressting.... I'll get it looked at -- sannse (talk) 03:08, December 6, 2011 (UTC) - This is a repeat of what happened with PotR's disappearing feature. Articles in the new namespaces that originally had lowercase titles after the colon seem to have all vanished somehow. -- 03:17, December 6, 2011 (UTC) - sannse has been alerted on IRC, and she can restore all the missing pages, but first we need to provide her with a list of what those pages are. -- Brigadier General Sir Zombiebaron 03:57, December 6, 2011 (UTC) - Can you use regex in sql queries? ~:50, 6 December 2011 - I have no idea. 
-- Brigadier General Sir Zombiebaron 01:44, December 7, 2011 (UTC) - Depending on the DBMS, the answer is probably "yes." MySQL, for instance, has extensive support for RegExp in SELECT queries. ~ 7 '11 3:07 (UTC) Why did you delete my page I believe you deleted my page, Pat Bukkkanan. Could you please explain? I thought the point of Uncyclopedia was to make politically incorrect jokes about people and their characteristics, such as Pat and his hatred for Jews, Minorities, and Catholics. 130.15.131.24 11:07, December 6, 2011 (UTC) - I deleted your page because the {{ICU}} tag expired. -- Brigadier General Sir Zombiebaron 11:14, December 6, 2011 (UTC) - AKA: It wasn't funny. --:39, December 7, 2011 (UTC) Why did you delete my page Pt. 2 My old Hayao Miyazaki page, which survived on uncyc for nearly 4 years, was blamed just a few months ago. Last I checked up on it, the article had been raped with unfunny content and images by uncyc vandals. Can we restore the older version? --AmericanBastard 15:43, December 9, 2011 (UTC) - I have moved your page into your userspace so that you can fix it up however you like. -- Brigadier General Sir Zombiebaron 02:12, December 10, 2011 (UTC) Thank you, sir. ~ 11 '11 0:01 (UTC) - You're welcome. -- Brigadier General Sir Zombiebaron 00:19, December 11, 2011 (UTC) John McCain Hello, What happened to the John McCain page? —The preceding unsigned comment was added by ULIT. (talk • contribs) - It was deleted after it remained unedited for 30 days with a {{Fix}} tag on it. -- Brigadier General Sir Zombiebaron 05:35, December 11, 2011 (UTC) Zombie from SirDerr Please let me continue my Jalos page. It's a real place in Mexico that is unsung. It won't cause a fuss on this website. I can change whatever you want me to to keep it on here. Please let me know what you want me to do regarding keeping this page up. —The preceding unsigned comment was added by SirDerr (talk • contribs) - I have restored your page and added a {{construction}} tag. 
Please remove the tag when you are finished writing. -- Brigadier General Sir Zombiebaron 03:09, December 12, 2011 (UTC) You're a star! Congratulations, you have no:13, December 12, 2011 (UTC) - Thanks. -- Brigadier General Sir Zombiebaron 03:09, December 12, 2011 (UTC) ??? Hello Mr. Baron. Did Lytheryia or whatever the name is huff the Hall of Shame? Can't seem to find it anywhere. I brought coffee for everyone and now have my arms full of cooling brown liquid. Aleister 15:36 12-12-'11 - Nope, it hasn't been deleted. -- Brigadier General Sir Zombiebaron 15:51, December 12, 2011 (UTC) - Yep, there it is. I used to reach it by just typing in Hall of shame, so the redirect must have been lost. I'll go remedy this awful situation. Thanks for the finger-point to the page. Aleister 16:05 12-12-'11 - You're welcome. -- Brigadier General Sir Zombiebaron 16:10, December 12, 2011 (UTC) Question about a huffed file Most gracious Zombiebaron, I noticed that you huffed "File:Mrsa cyst exploded.jpg", with the comment "eeew". Other than the obvious "eeew" of the file, I would humbly request (insert low bow here) that you un-huff it. Here's my reasoning: The notice on the upload page states "Pornographic/shock/gore images which are clear copyright violations and/or which serve little satirical purpose WILL be deleted without warning." Please notice the clause "which serve little satirical purpose". Now, I realize that the image is somewhat gory, but it was serving its intended satirical purpose on the page Body piercing. It was intended to satirically show the medical risks of body piercing. And seriously, that whole head exploding thing I've seen on a bunch of other pages is just as gory. (By the way, I got it from Wikimedia, and if it's not too gross for them...) Anyway, the deletion of the file has left a big, gaping *sniff* hole in the article on Body piercing, and I was hoping that in your greatness and benevolence you would kindly restore the image. 
Your servant Sir, Jonny appleseed 20:32, December 13, 2011 (UTC) - I just think there's probably a better way to make that joke. If you need me to find/make you a replacement image, I can do that. -- Brigadier General Sir Zombiebaron 06:22, December 14, 2011 (UTC) - You were right. I think I found something funnier. Jonny appleseed 20:43, December 14, 2011 (UTC) Thank you Thank you for the highly personalised and heartfelt welcome. Does this mean you have adopted me? Pup 04:20 15 Dec '11 - Yes. -- Brigadier General Sir Zombiebaron 05:46, December 15, 2011 (UTC) - Yippee! No longer an orphan! Pup 08:35 15 Dec '11 - Unrelated - Could I get you to remove me from the rollback thingy? I've ended up reverting a number of things recently by accident, and I think that it's the combination of browsing on iPhone and rollback privilege that's doing it. I rarely use it for it's intended purpose anyway. Pup 10:05 15 Dec '11 - Ok, I've removed your rollbacks. -- Brigadier General Sir Zombiebaron 15:09, December 15, 2011 (UTC)
Python args and kwargs: Demystified (Summary)

Congratulations, you made it to the end of the course! What's your #1 takeaway or favorite thing you learned? How are you going to put your newfound skills to use? Leave a comment in the discussion section and let us know.

Many thanks for this video tutorial, it's a great refresher for *args and **kwargs. Really liked the list merging and dictionary merging examples! Very handy. Thanks!

very good! Suggestion - maybe add how to pass arguments from the command line, including variable numbers of args from the command line.

Alan, thanks for the suggestion. We actually have a full article coming out on Wednesday on the topic of command line arguments! You should be able to see it already in your membership preview: click your profile picture, then notifications. Scroll down to find Python Command Line Arguments.

No hard-coded values ever in my scripts again! Nice tricks with just one operator. It would be good to explain runtime complexity as well when using such iterables. Any more practical use-cases for these examples?

Just re-learning these things again.

Rich, thanks for the video course. I understand what's going on with *args and **kwargs. What if you have a simple function like the one below? How would I implement *args for this function?

def get_auth_token():
    login = LoginCreds(ip, api, username, password)
    try:
        auth = login.LoginToken(ip, api, username, password)
        return auth
    except ConnectionError as e:
        logging.info("Could not connect to host")
        sys.exit(0)

Regarding my last post: I have written scripts using the requests module extensively. The responses are large JSON; the JSONs are a dictionary of dictionaries. I parse those to get certain information, such as:

permittedInterconnectTypeUri = ['/rest/interconnect-types/ce3381c9-c948-4c71-946a-8893163ae4a6']
networkUris = ['/rest/fc-networks/0b20f7fc-370d-4163-b8a6-dc06442f6657']

These URIs need to be placed in a JSON payload to send to the appliance. How would I use *args and **kwargs in this scenario?

Amazing! Thanks, now I understand these C-like symbols I have seen in argument lists of other code.

I needed an update on this and I have not seen any better! Thanks!

Thanks, very useful..!

tsusadivyago on Jan. 8, 2020
We learn python basics properly in this site :)
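Several comments above ask how *args and **kwargs apply in practice (the REST/URI question, and the list and dictionary merging mentioned earlier). Here is a small illustrative sketch; LoginCreds and every other name in it are hypothetical stand-ins invented for the example, not code from the course:

```python
# Hypothetical stand-in for the commenter's LoginCreds class, for illustration only
class LoginCreds:
    def __init__(self, ip, api, username, password):
        self.creds = (ip, api, username, password)

def get_auth_token(*args, **kwargs):
    # *args and **kwargs forward positional and keyword arguments unchanged,
    # so the wrapper works no matter how the credentials are supplied
    return LoginCreds(*args, **kwargs).creds

get_auth_token("10.0.0.1", "v2", "admin", "secret")
get_auth_token(ip="10.0.0.1", api="v2", username="admin", password="secret")

# Packing a variable number of URIs into a JSON-style payload dict
def build_payload(*uris, **fields):
    payload = dict(fields)               # copy the keyword arguments
    payload["networkUris"] = list(uris)  # collect the positional URIs
    return payload

payload = build_payload("/rest/fc-networks/abc", name="lig1")
# payload == {"name": "lig1", "networkUris": ["/rest/fc-networks/abc"]}

# The list and dict "merging" trick from the comments: * and ** also unpack
merged_list = [*[1, 2], *[3, 4]]        # [1, 2, 3, 4]
merged_dict = {**{"x": 1}, **{"x": 2}}  # {"x": 2} (later keys win)
```

The same unpacking works in the other direction when calling an API helper: a dict of payload fields can be spread into keyword arguments with `some_function(**payload)`.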
thank you (just saw your reply now, must have missed the email notification)

Hey, a common convention for mobile web code is to link to Google Maps when needing to link to a map provider. Works really nicely for Android and iOS, where it opens the map client, but not great...

Would you have the time to prototype just very bare bones so I can follow you guys? Are you suggesting to start a new python instance altogether that is used to open the browser? The memory situation...

Thanks so much, that is a start. I can't get back to the python app either, but it serves my purpose so far. Great help! Anybody else out there knowing how to launch the browser without stopping...

Hi, that is the latest piece for my application; everything else is safe and sound under python. How do I open a browser on an N95 with a URL and make the page show right away? I refuse to accept that this...

Thank you, I need to write the app for the N95. Do you know what the issue is? Would like to understand it better. Best, Stan

hey, i've run into the problem that at certain times, despite perfect reception and a valid sim card, i cannot access the cellid information with python. i use bergers elocation which provides...

Hey, I still haven't figured it out. I tried:

def open(url):
    browser = 'BrowserNG.exe'
    e32.start_exe(browser, ' "4 %s"' % url, 1)

def open(url):
    browser = 'BrowserNG.exe'

Hey, thanks for the quick feedback, so if I want to open a URL in a browser I go for:

>4. Start/Continue the browser specifying a URL
>=> Parameter = "4" + " <Space>" + "<Url>"

In my case...

Hey there, is there a guideline on how to launch a browser from a python script? I saw that in another post:

>url = '4' # 4 means Start/Continue the browser specifying a URL...

Thanks very much, just travelling right now and will get back to that soon; thanks so much so far, your comments make total sense to me!

here are my statements to sign. are they correct?

python ~/bin/ensymble.py signsis --execaps=LocalServices+NetworkServices+ReadUserData+UserEnvironment+WriteUserData --cert=stan.cer...

Hey! So i deployed the signed elocation and the shell on my n95 and when i run

>>> import elocation
>>> elocation.extended_gsm_location()

i get {}

Thank you, will try that asap. Since i am still so new to python and symbian, how do I include that in my app? Can I just sign the elocation and deploy it as a library and simply include it in my...

Hey, i have a question: can you explain how you sign the elocation and the shell module? i have a certificate and signed my shell and still can't get the gsm_location(). I thought you can help...

Is there a way that I can see what the permissions associated with the certificate are? I really would like to get that to work; it shouldn't be so difficult, i literally just print the gsm_location. ...

thanks for the help. I did exactly as described. Got my own certificate for development, signed an unsigned shell and uploaded it. When installing, it warned me that the application is for development...

do i understand right that i can sign the python shell i am using? How can I do that? I develop on OS X. I found that here: Once I installed everything, can I just...

I am not releasing anything to the greater public, just doing some work for myself, and I have to sign my prototypes all the time? That doesn't make any sense to me at all. I understand the need for signing...

Hey, I've seen the issue of gsm_location() returning None before in the forum. I used it frequently before on my n90 a year ago but now on n95 it is not working anymore. Is there a certain python &...
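Pulling the working recipe from these posts together, a sketch of a small browser-launch helper; the BrowserNG.exe executable name and the "4 <url>" start-parameter convention come straight from the thread above, while e32 is the Symbian-only PyS60 module, so the actual launch can only run on a real S60 device:

```python
def browser_args(url):
    # Per the thread: parameter "4" + <space> + <url> means
    # "Start/Continue the browser specifying a URL"
    return ' "4 %s"' % url

def open_url(url):
    import e32  # PyS60 module, only available on the phone itself
    # The trailing 1 matches the snippet posted in the thread
    e32.start_exe('BrowserNG.exe', browser_args(url), 1)
```

Only browser_args() can be exercised off-device; open_url() has to be tried on the phone.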
02 May 2012 17:33 [Source: ICIS news]

LONDON (ICIS)--The May European acetone methyl methacrylate (MMA) contract price has been agreed €12/tonne lower than April's settlement because of a drop in the value of feedstock propylene, market sources confirmed on Wednesday.

The May settlement was agreed at €1,026/tonne ($1,350/tonne) on an FD (free delivered) NWE (northwest Europe) basis.

In relation to the settlement, a major producer said: "We have adjusted our price according to developments in the propylene price."

The producer said a major customer had been pushing for a bigger decrease because it expected the market to be better supplied in May. However, the producer did not expect supply to improve until June. INEOS Phenol is due to re-start its phenol and acetone production in ...

Most producers do not expect the European acetone market to move to a balanced position until June. One producer said it had to decline an order this week from a "major customer" because it did not have enough acetone available.

On the buying side, a major consumer of acetone recognised €1,026/tonne FD NWE as the May contract price. Despite sellers reporting that acetone was tight, the buyer said it could procure all the volume it needed and expected to see a balanced market in May.

Acetone has been tight for much of 2012 because of various planned and unplanned production problems, as well as relatively healthy demand from downstream MMA, bisphenol A and solvents, and spot prices have only just started to come off.

The market appears divided, and there is some scepticism among market players, largely on the part of buyers, about the current balance in Europe. Buyers say they can get all the volume they need, while producers say they are looking for additional volume externally in order to fulfil contractual obligations.

"The acetone market is tighter than everybody is giving it credit for. We are very tight on acetone in Europe
http://www.icis.com/Articles/2012/05/02/9555957/europe-may-acetone-mma-contract-down-12tonne-feedstock-driven.html
CC-MAIN-2014-41
refinedweb
332
56.69
Hi all. I've got a strange issue: I've got a png with transparency that I am using for a cursor during part of my game. The "Texture Type" is set to "Cursor" and it works great if you're 1) playing in the editor or 2) playing in a window. As soon as you start the game full screen (doesn't matter the resolution), the cursor never changes from the default Windows cursor.

Here's the code I'm using to set the cursor. This script is attached to the main camera in every scene. If I want the cursor to be custom, I set a texture, and the script does the rest. Otherwise, I leave the texture field blank, and it uses the Windows default. I've also tried setting this in the Awake function, with no change.

using UnityEngine;
using System.Collections;

public class SetCursor : MonoBehaviour {

    public Texture2D cursor;
    private Vector2 hotspot;

    void Start () {
        if (cursor != null) {
            hotspot = new Vector2(cursor.width / 2, cursor.height / 2);
        } else {
            hotspot = new Vector2(0, 0);
        }
        Cursor.SetCursor(cursor, hotspot, CursorMode.Auto);
    }
}

Alternatively, if there's a better way I could be doing this, I'd be happy to learn! Thank you in advance!

I can confirm this, not working for me as well, only after switching to windowed and back...

Answer by achynes · May 18, 2016 at 09:28 AM

Confirmed here as well. Anyone cracked this?

I think I somehow fixed it. It was in my code where the SetCursor() method didn't run, that's why it didn't work. After it got called, the cursor changed even when in full screen.
https://answers.unity.com/questions/1119138/custom-cursor-not-showing-up-in-full-screen.html
CC-MAIN-2020-29
refinedweb
336
75.5
1: <core:Kind
2:   <attributes itemref="ns:attr"/>
3: </core:Kind>

That XML snippet defines an item named "foo" of kind //Schema/Core/Kind, and assigns a value to its "attributes" attribute. In the current implementation, when an XML element represents an attribute assignment as in line 2 above (as opposed to an item definition element as in line 1), the namespace of that element is ignored; the element's name is passed directly to the appropriate kind's getAttribute( ) method to determine which attribute to use to makeValue( ). This makes some sense given what we're trying to do, but it's a bit inconsistent. The proposal? I don't have an alternative to this that I like, and might suggest leaving it the way it is. We could try to make this more XML-correct by using an <assignment> element for each attribute assignment, and specify the attribute via name=" ", but it adds wordiness:

<core:Kind
  <core:assignment
</core:Kind>
http://chandlerproject.org/Projects/ParcelFramework
crawl-002
refinedweb
162
50.3
Deno, introduced by Ryan Dahl, the creator of Node, during JSConf 2018 has been growing into a major alternative to Node.js. Deno is similar to Node.js – you write your scripts in JavaScript and run them – but Deno gets more powerful once you use it. It has first class TypeScript support, simplifies modules, is more secure, bridges the gap between browsers and Node, and much more.

Node

Released in 2009, Node took over really quickly. Even though there was initially some skepticism about Node, support from the community was unrivalled. Today, Node is one of the most popular tools used for backend development.

Enter Deno

Fun fact: Deno is just node reversed. no + de = node, de + no = deno.

Even though Node was great, there are many design mistakes in it. You can check out the talk by Ryan Dahl to learn more, but here's a few:

- Node didn't stick with promises. Node had added them way back in 2009, but removed them almost a year later in 2010.
- Node wasn't secure enough. Any node program has access to system calls, http requests, filesystem calls. Your linter shouldn't have complete access to your computer and network.
- more...

Essentially, Node was focused on IO. Modules were an afterthought. To fix all this, Ryan introduced Deno.

Deno is secure by design

Suppose you want to run a lint script. If you were using node, you would just do this:

~$ node linter.js

But in Deno, you do this:

~$ deno run --allow-read linter.js

There's a couple of things to note here. First is the run subcommand. Deno has a bunch of other tools, which we'll get to later. Next thing to note is the flag --allow-read. It, along with a bunch of other flags, is part of deno's security system. By default, when a script is run using deno run, it can't use anything more than the console.

Now, more security is great, but nobody wants to be putting in a bunch of --allow flags every time you need to run stuff. Fortunately, deno provides an install command which can "install" stuff.
Installing just creates a thin wrapper in a platform-specific directory ( ~/.deno/bin on MacOS and Linux, not sure about Windows).

~$ deno install --allow-read linter.js
✅ Successfully installed linter
/Users/APPLE/.deno/bin/linter
~$ linter
linter running!

The file at .deno/bin/linter is very simple:

#!/bin/sh
# generated by deno install
exec deno run --allow-read '' "$@"

No package managers here

Deno uses ES Modules import syntax, which means that imports must be full or relative paths to files. And unlike Node.js, there's no deno_modules (thank goodness!), and deno doesn't look anywhere special for modules.

// These work
+ import {lint} from './linter.js';
+ import {lint} from 'absolute/path/to/linter.js';
+ import {WebSocket} from "";

// But these won't:
- import {lint} from './linter'; // Note the extension is missing
- import {WebSocket} from "ws"; // ws who?

You don't have to relearn (most of) JavaScript

Deno tries to use web platform APIs (like fetch) instead of inventing a new API. These APIs generally follow the specifications and should match the implementation in Chrome and Firefox. Deno even uses web standards in its own APIs; for example, Deno's http API uses the standard Request and Response objects. Deno's even got window.

Node.js goes the other way, replacing stuff with its own APIs, usually using callbacks, making us reach for modules. Deno gets to take advantage of all the evolution of JavaScript instead of having to build it all again. Also, it's easier to port stuff to the web if you use Deno (and vice versa).

TypeScript is a first class citizen here

Deno has built-in support for TypeScript! This isn't just an external module or anything: no extra flags, not even a tsconfig.json. There is even interoperability – import JS in TS, import TS in JS.

Simpler distribution

Unlike Node, Deno is just a single binary. This makes installation and deployment a breeze. Deno can even compile programs to binaries, which is absolutely awesome!
It can even cross compile!

A simple demo

Here's a simple cat implementation in deno:

// mycat.ts
import { expandGlob } from "";
// no need to remove the path to deno, etc.
const files = Deno.args;

files.forEach(async file => {
    for await (const fileExpansion of expandGlob(file)) {
        const contents = await Deno.readTextFile(fileExpansion.path);
        console.log(contents);
    }
});

This script takes filenames as arguments and prints them to the console.

~$ deno run --allow-read mycat.ts cat.ts
// cat.ts
import { expandGlob } from "";
// no need to remove the path to deno, etc.
const files = Deno.args;
...

Note that you don't need to install or configure anything - Deno handles that for you. Now, we can install the script:

~$ deno install --allow-read mycat.ts
✅ Successfully installed mycat
/Users/APPLE/.deno/bin/mycat
~$

Summary

Deno is still new. It has a thriving community and a bunch of libraries (many node libraries have been ported to deno), but it's not as popular or as supported as node. Still, deno's ease of use and simplicity make it useful for writing everyday scripts, and its url-based system of sharing modules makes distributing programs as easy as putting them on a GitHub repo or personal site.

Discussion (26)

Deno looks quite interesting, but I really don't like the way you need to import packages from URLs. I still think there should be a package management system as it makes handling and updating packages easier.

URLs are more flexible, and it may be a good idea in the long run. But package management systems make stuff easier to use.

Deno has a section in its docs on Managing modules. One of the conventions is to place all imports in a single deps.ts file. Functionality is then exported out of deps.ts for use by local modules. This makes it easier to manage dependency versions and stuff.

You don't have to worry about network problems though – once you run it, Deno caches all the modules required.
Agree on this a little, because sometimes I just want to download all packages and go code somewhere without the internet. Or a situation may happen when a dev does not have access to the internet, but while having all the packages he can still do some work. With urls it becomes impossible until the caching mechanism is created (which is almost node modules). Plus, you'll need to always specify the needed version directly, because the package may change a lot (like react router does) and your app won't work after that. And also, as well as I understand, we'll need to specify the version in all urls in all files where it is imported, not in one config.

Caching is not like node_modules, as far as I understand. Deno caches in a directory somewhere at ~/. So, once a package is cached, it may not have to be cached again, even for a separate program.

You misunderstand how url imports work, you need to read more about it... nothing will break because versions are kept in the url. And also: no internet, no problem – if it is cached/downloaded you can work just like with npm. If there is no internet it behaves the same as npm: if your internet is dead you can't do npm install. What do you think npm has under the hood? Correct, there are a bunch of web urls that get requested. I have no idea why people always assume you can't work without internet when using url imports. All you can't do is cache new packages, same with npm: you can't npm install without internet, but you can work with what's in the node_modules, or in case of Deno, what's in the cache.

Funny that you mention this. A few months ago I did a silly experiment: I managed to use npm packages in deno using vite. It is totally possible to use npm to manage some packages of your deno app.

You might be interested in Deno's import maps, which could be generated by a package manager like Trex. The only caveat is that the import map file has to be specified explicitly as opposed to node's package.json which gets picked up automatically.
That is unless you use something like Denon.

This is how the web works. Try to learn more about it. Maybe you haven't worked a lot with the web. Python, Ruby, Rust and Node all work with the same philosophy. What I mean is, this is an ECMA standard.

ECMA standards? Just because someone dictates a worse way to work doesn't make it law.

I mean they dictate how JS works; Deno just implements it correctly.

I think that if people are dissatisfied with URLs, people will come up with new, better ways to import modules. This has happened before, for example, in Vim. There was no standard way to load plugins, other than by cloning them via git, but third party plugin managers were created to make it simpler. This even led to vim introducing their own plugin system.

Siddharth makes a fantastic point. It is not like Deno does not bring along a way to manage your packages/dependencies. With that in mind, just because a method of doing something in one place works well, does not make it the superior method in another environment. It makes total sense for JS to have packages be imported via urls. Functionality can always be added on top to make it work like you are used to after the fact. People tend to think that a method of doing something is superior or the best just because they are used to it and not necessarily because it IS the better method.

My only concern is dependency management with URL imported packages. How would one go about changing the version of the package if it has been imported in 10 different places? Also, solving the resolution conflicts that package managers like yarn and npm perform when one package depends on several is quite handy. Having URLs is fine as long as there's a central way of managing the dependencies.

vs

I personally find the former easier to read and write. Also for autocomplete and intellisense, the dependencies need to be downloaded anyway.

As I said here, Deno has a convention of putting all imports in a deps.ts file.
This makes it easier to change versions, as there is only a single import.

Deno has no "magical" module resolution. Instead, imported modules are specified as files (including extensions) or fully qualified URL imports. This makes it harder to mess up. If you use the deps.ts convention, you could have this:

deps.ts

main.ts or wherever you use it

Deno caches modules once they are required once, so intellisense can work at that time.

Yup, deps.ts is a much better way :)

Great to know! Will try this out with deps.ts. Thanks!

BTW there is a deno registry.

Used Deno in production and recently ported everything back to Node. Tbh Deno is great and I love it, would like to use it a lot more. The only downside at the moment is the lack of libraries available. The deal breaker for me is there aren't any mature database ORM libraries for Deno right now. Aside from that, Deno is great and pleasant to work with. Putting my Deno project in archive and looking forward to getting back to it in the future.

Deno was hyped as the Node killer when it released 1.0; quickly after, no one talked about it any more when they figured out that it would be too much work to convert Node apps to Deno. Without that hype people don't create packages for it, which really is key to Node's success. It will take years to catch on again, and likely the lead dev, who has a history of exiting projects, will leave before it will be....

Deno should make its own package manager with restricted use of control to the package, or they have to implement a system like: if I import a package from "react" or "any package name", it should under the hood convert it into the url, or import it from the local cache. I think if it is implemented we can save a lot of space on the hard disk. As it is a first party implementation, unlike pnpm.

Nice, thank you. The code example has an error though.
I think it needs an "async" before file, and ".path" after fileExpansion, as:

// mycat.ts
import { expandGlob } from "deno.land/std@0.102.0/fs/expand_gl...";
// no need to remove the path to deno, etc.
const files = Deno.args;

files.forEach(async file => {
    for await (const fileExpansion of expandGlob(file)) {
        const contents = await Deno.readTextFile(fileExpansion.path);
        console.log(contents);
    }
});

Nice catch on the .path. The async is not needed, as deno supports top level await. It could be added though.

Would be cool if web browsers could make use of deno in a way so we can directly reference .ts script files in HTML and actually run them.

I am glad that TS support is encouraged but not mandatory. In a few months I might get into Deno as well.

Security to the max: clone or fork and host the lib import into coding, any bad changes won't affect your coding.
https://dev.to/siddharthshyniben/deno-the-next-step-in-node-js-ij1
CC-MAIN-2021-39
refinedweb
2,252
66.54
Str have 2 java pages and 2 jsp pages in struts registration.jsp for client to register user registeraction.java to forward... with source code to solve the problem. For read more information on Struts visit java - Struts java what is the default Action class display multiple images on struts using Arraylist java - Struts java How To Connect oracle Database With Struts Project error - Struts java struts error my jsp page is post the problem...*; import javax.servlet.http.*; public class loginaction extends Action{ public...*; import javax.servlet.http.*; public class loginform extends ActionForm{ private Java - Struts Java What is the difference between Struts and Struts2. Pls explain with a simple example. HI, Please check http... between Struts 1 and Struts 2. Thanks - Struts in DispatchAction in Struts. How can i pass the method name in "action" and how can i map at in struts-config.xml; when i follow some guidelines... more the one action button in the form using Java script. please give me struts interview Question - Struts struts interview question and answer java struts interview question and answer java doubt is when we are using struts tiles, is there no posibulity to use action class... in struts-config file i wrote the following action tag... action class and tiles freame work simultaniously --->it is not working java - Struts *) Action class public class MyAction extends...java how can i get dynavalidation in my applications using struts... : *)The form beans of DynaValidatorForm are created by Struts and you configure java - Struts of the Application. Hi friend, Struts is an open source framework... pattern. It uses and extends the Java Servlet API to encourage developers...:// java - Struts friend. what can i do. 
In Action Mapping In login jsp Hi friend, You change the same "path" and "action" in code : Java + struts - Struts java.sql.ResultSet; public class ImportingAction extends Action { public ActionForward...Java + struts my problem is : import multiple .xls workbooks... org.apache.struts.upload.FormFile; public class ImportFileBean extends ActionForm zylog.web.struts.actionform.LoginForm; public class LoginAction extends Action... friend, Check your code having error : struts-config.xml In Action...: Submit struts be done in Struts program? Hi Friend, 1)Java Beans are reusable software Java - Struts :// Thanks. my doubt is some java - Struts is in action class,because it is not exucted ,but formbean will executed...this is my problem...see the code i have created formbean class,formaction class... java.io.IOException; public class LoginAction extends Action Struts Architecture - Struts Struts Architecture Hi Friends, Can u give clear struts architecture with flow. Hi friend, Struts is an open source... (MVC) design pattern. It uses and extends the Java Servlet API to encourage java - Struts Java JRE latest Which is Java JRE latest Version java - Struts Java string tokenizer What is Java string tokenizer java - Struts Java API Documentation Java API Documentation example in eclipse java - Struts class LoginAction extends Action { public ActionForward execute... config /WEB-INF/struts-config.xml 1 action *.do Thanks...! struts-config.xml using captcha with Struts - Struts application using Struts framework, and i would like to use captcha Hi friend,Java Captcha in Struts 2 Application : java - Struts architecture. * The RequestProcessor selects and invokes an Action class...java What is Java as a programming language? and why should i learn java over any other oop's? 
Hello,ActionServlet provides the " java - Struts Inheriting a class in java constructor Need example of inheriting a class in java constructor java - Struts Java long to string conversion What is the steps to convert a long to String in Java Beginners java struts Hi sir i need complete digram of struts... how i can configer the struts in my eclipse... please send me the complete picture diagram java - Struts :// Thanks what is struts? - Struts of the Struts framework is a flexible control layer based on standard technologies like Java...what is struts? What is struts?????how it is used n what... Commons packages. Struts encourages application architectures based on the Model -differecne between RequestProcessor and RequestDispatcher What is differecne betweenw RequestProcessor and RequestDispatcher? What is differecne b/w RequestProcessor and RequestDispatcher?http - Java Interview Questions Struts Interview Questions I need Java Struts Interview Questions and examples java - Struts you want in your project. Do you want to work in java swing? Thanks struts - Java Beginners java struts i want to do the project in the struts.... how i can configure the project in my eclipse... can u help me in this issues Struts 2 problem - Struts Struts 2 problem Hello I have a very strange problem. I have an application that we developed in JAVA and the application works ok in Windows... seemed to worked fine, until the user reported to us a problem. After doing Query - Struts Writing quires in Struts How to write quires in Java Struts Java - Struts java - Struts java - Struts action code java - Struts java When i am using Combobox. when i am selecting the particular value. how can i pass the value to the action. please give me the suggestion as early as possible. 
Hi friend, To solve the problem struts - Java Beginners struts how to calculate monthly billing in struts, give me example - Jboss - I-Report - Struts Struts - Jboss - I-Report Hi i am a beginner in Java programming and in my application i wanted to generate a report (based on database) using Struts, Jboss , I Report what are Struts ? what are Struts ? What are struts ?? explain with simple example. The core of the Struts framework is a flexible control layer based on standard technologies like Java Servlets, JavaBeans, ResourceBundles, and XML About Struts processPreprocess method - Struts About Struts processPreprocess method Hi java folks, Help me... that the request need not travel to Action class to find out that the user... will abort request processing. For more information on struts visit java keyboard shortcuts - Struts java keyboard shortcuts hi, i hav an application already developed using struts framework.now i would like to add some keboard shortcuts to my application, so that the user need not to go for the usage of the mouse every time DynaActionform in JAVA - Struts DynaActionform in JAVA how to use dynaActionFrom?why we use it?give me a demo if possible? Hi Friend, Please visit the following link: Hope
http://roseindia.net/tutorialhelp/comment/64832
CC-MAIN-2014-15
refinedweb
1,046
58.18
I was wondering if anyone tried training on popular datasets (imagenet, cifar-10/100) with half precision, and with popular models (e.g., resnet variants)?

It works, but you want to make sure that the BatchNormalization layers use float32 for accumulation or you will have convergence issues. You can do that by something like:

model.half()  # convert to half precision
for layer in model.modules():
    if isinstance(layer, nn.BatchNorm2d):
        layer.float()

Then make sure your input is in half precision. Christian Sarofeen from NVIDIA ported the ImageNet training example to use FP16 here:

We'd like to clean up the FP16 support to make it more accessible, but the above should be enough to get you started.

This is great! Is there documentation on when/where half precision can be used? For example, it doesn't seem like half precision computation is supported on CPU, but I only discovered this by giving it a shot.

@colesbury, could you suggest the right way for conversion of fp16 inputs to BatchNorm with fp32 parameters? I think this modification is now mentioned in NVIDIA documentation as a special batch normalization layer, but I couldn't find any implementation example.

Hi @colesbury, I converted my batch-norm layers back to floats with this code, which works:

def batchnorm_to_fp32(module):
    if isinstance(module, nn.modules.batchnorm._BatchNorm):
        module.float()
    for child in module.children():
        batchnorm_to_fp32(child)
    return module

But autograd doesn't like me using a mixture of float & half tensors; running loss.backward() leads to an error like this:

Do you think you can help me? Thank you if yes!

Is it the same situation with weight norm? Should it use float32?

What speed gain do you achieve using FP16 instead of FP32 on pytorch (on resnet or similar)?

It really depends. On a Titan X or P100, you get about 15% speedup for all the architectures I've tried.
On a Titan V or V100, I get about a 50% speedup for resnet50 and 2x speedup on Xception, probably because of the way tensor cores work. The 2x speedup actually makes the Titan V worth it if you are going to be training a lot of networks that use grouped convolution. You also get to use double the batch size because the smaller floats all fit in vram.

I should say that the speed-up isn't painless. I've had issues with fp16 overflow at times. Usually these are fixable, but you only find out about them after investing a significant amount of time training. I also worry about the added complexity leading me to make wrong conclusions about my experimental outcomes (i.e. could this issue be an fp16 issue? Or thinking one model works better but really the other model was faulty). You also need to be sure to maintain a full 32 bit copy of your parameters. This helps stability substantially.

Our examples page demonstrates the use of FP16_Optimizer and Apex DistributedDataParallel. Amp examples are coming soon, and Amp's use is thoroughly discussed in its README. Give Apex a try and let us know what you think!

Sorry for the double post; the forum page told me "new users may only post 2 links at a time" or something along those lines.

The link to csarofeen/examples does not work any more. You can find an example here: Fp16 on pytorch 0.4

Hi, thanks for your explanation. May I ask why the BN must use float32? Does that mean BN is different from other layers, like conv, linear, etc?

I'd say the easiest way to use it and not make a mistake is to use PyTorch Lightning with Trainer(use_amp=True). This will train your model using 16-bit.

Any suggestions on using float16 with transformers? Should I keep some layers in float32, just like batch-normalization is recommended to be kept in float32?

I would generally recommend to use the automatic mixed precision package (via torch.cuda.amp), which casts the input to the appropriate dtype for each method.

okay thanks.
Should we keep val_step under the autocast scope as well, for a fair comparison between tr_loss & val_loss?

Yes, you can also use autocasting during the validation. Especially if you plan on using it for the test dataset (or deployment) I would use it.

I used torch.cuda.amp tools to train a u-net-like network but my loss function gave NaN. I guess this is an overflow problem when using fp16. Can you give me some advice to overcome this? Thank you so much!
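The overflow problems mentioned in this thread are easy to reproduce without any deep-learning code at all: Python's standard struct module can encode IEEE-754 half precision directly (format code 'e'), which makes fp16's limits visible. This is only an illustration of the number format itself, not of PyTorch's amp machinery:

```python
import struct

def to_fp16(x):
    """Round-trip a Python float through IEEE-754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# The largest finite fp16 value is 65504 -- it survives the round trip:
print(to_fp16(65504.0))   # 65504.0

# Anything that rounds past 65504 overflows to infinity; struct refuses:
try:
    struct.pack('<e', 70000.0)
except OverflowError:
    print("70000.0 overflows fp16")

# fp16 has only an 11-bit significand, so integers above 2048 silently round.
# This kind of quiet precision loss is what destabilizes fp16 training:
print(to_fp16(2049.0))    # 2048.0
```

This is why the advice above matters — keep BatchNorm accumulation and a master copy of the weights in float32, or let torch.cuda.amp handle the casting and loss scaling: fp16 both overflows early and rounds aggressively.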
https://discuss.pytorch.org/t/training-with-half-precision/11815
CC-MAIN-2022-27
refinedweb
758
67.04
Java EE Application Client deployment plan

Java EE application client modules run in the client container and also have access to the server environment. Usually, Java EE client applications are created to administer the running enterprise applications in the server. Client modules run in a separate JVM and connect to enterprise application resources, but have access to all the application resources in the standard Java EE way.

The Java EE default namespace of the above XML document is. The XML elements that do not have a namespace prefix belong to the default namespace. Hence, in the above XML document, all the XML elements belong to the default namespace. The application client declares the ejb name ejb/Converter through <ejb-ref> .. </ejb-ref> elements.

Following is the corresponding deployment plan of the Java EE client module. The default namespace of the above XML document is. The XML elements that do not have a namespace prefix belong to the default namespace. Hence, in the above XML document, <application-client>, <ejb-ref> and <ref-name> elements belong to the default namespace.

Do not forget to insert a new line after the Main-Class: entry in the MANIFEST.MF file.
https://cwiki.apache.org/confluence/display/GMOxDOC22/Creating+deployment+plans+for+Java+EE+application+clients
CC-MAIN-2017-39
refinedweb
198
56.66
My Jira version is 5.2.6 and the Behaviours plugin is version 0.5.3.

I have a custom field called Production Cut-in that has a select list of Yes or No. If the user selects Yes I need to have another custom field called ECP marked as required; if the user selects No then the ECP field should not be required. I am new to groovy scripting and am not sure if I wrote the script correctly. Here is what I came up with:

FormField pro = getFieldById("customfield_11505")
FormField ecp = getFieldById("customfield_11506")
if (pro.getValue().contains('Yes')) {
    ecp.setRequired(true)
} else {
    ecp.setRequired(false)
}

Any help would be greatly appreciated. Thanks.

You can try this for newly added custom fields on an issue screen (version 7.1):

import com.onresolve.jira.groovy.user.FormField

FormField test1 = getFieldByName("test")
FormField test2 = getFieldById("duedate")
String str = test1.getValue()
String str2 = test2.getValue()
test2.setRequired(true)
test1.setRequired(true)
if (test1.getValue() == "1") {
    log.error("************ WELCOME in first loop ************")
    test2.setRequired(true)
} else {
    log.error("************ WELCOME in second loop ************")
    test2.setRequired(false)
}

Hey, I would use the option ID for the "Yes" value, e.g.:

if (pro.getFormValue() == "12135") { ... }

Either way, you should use getFormValue() and not getValue() AFAIK.

Cheers
Christian
Add log.warn ("value returned was: " + pro.getValue()) Then tail your atlassian-jira.log, and refresh the page. Jamie, I added the log.warn ("value returned was: " + pro.getValue()) and it said the value returned was null. Any ideas on this? Does getFormValue() return anything? you should also log.warn (pro) to make sure the field is actually found... -1 sounds suspicious, that's not a valid option ID. What does log.warn(pro) say. You may want to consider also this solution for making a custom field required based on another custom field:, using the JJupin plugin and its Live Fields feature. You can try also ... FormField pro = getFieldByName("Name_of_f_no1_not_ID1") FormField ecp = getFieldById("Name_of_f_no2_not_ID2") String vpro = (String) pro.getFormValue() String vecp = (String) ecp.getFormValue() //to see values near fields uncomment sentences // pro.setHelpText("vpro: "+ vpro) //ecp.setHelpText("vecp: "+ vecp ) if (vpro == "Yes") { ecp.setRequired(true) }else{ ecp.setRequired(false) } Line 2 should be getFieldByName too I think... Hi Vidic, how we can run above script for a specific project in jira? do we need to create custom event for this or we need to use post.
https://community.atlassian.com/t5/Jira-questions/Make-a-custom-field-required-based-on-another-custom-field/qaq-p/203610
CC-MAIN-2018-09
refinedweb
505
61.83
So I've googled this error but I can't find an explanation of why this is happening in my code.

Error:

g++ -o code code.cpp
code.cpp:8:17: error: too many decimal points in number
code.cpp:8:43: error: too many decimal points in number
code.cpp:1:1: error: expected unqualified-id before '<' token
make: *** [code] Error 1

My code:

//Includes
#include <stdlib.h>
using namespace std;

//Other Functions

//Main function
void signalEnd();

int main(int argc, char** argv) {
    //main code here
    signalEnd();
}

void signalEnd() {
    system("./done.sh");
}

I know usually you declare a function above the main to use it, but with the way I currently construct this file this is not possible. So the line with the error is my forward declaration. I have no idea what is wrong. I would appreciate any tips or help on this. Thank you
https://www.daniweb.com/programming/software-development/threads/434964/too-many-decimal-points-in-number
The method java.io.InputStream.close() is used to close this input stream and release any system resources associated with the stream. This method requires no parameters and returns no value. An IOException is thrown if an I/O error occurs.

A program that demonstrates this is given as follows −

import java.io.FileInputStream;
import java.io.InputStream;

public class Demo {
    public static void main(String[] args) throws Exception {
        InputStream i = null;
        int num = 0;
        try {
            i = new FileInputStream("C://JavaProgram//data.txt");
            num = i.available();
            System.out.println("The number of bytes are: " + num);
            i.close();
            num = i.available();
            System.out.println("The number of bytes are: " + num);
        } catch (Exception e) {
            System.out.print("Error!!! The input stream is closed");
        }
    }
}

The output of the above program is as follows −

The number of bytes are: 4
Error!!! The input stream is closed
https://www.tutorialspoint.com/java-program-to-close-this-input-stream-and-release-any-system-resources-associated-with-the-stream
SSL_GET_ERROR(3)                    OpenSSL                    SSL_GET_ERROR(3)

SSL_get_error - obtain result code for TLS/SSL I/O operation

 #include <openssl/ssl.h>

 int SSL_get_error(const SSL *ssl, int ret);

If ret == -1, the underlying BIO reported an I/O error (for socket I/O on Unix systems, consult errno for details).

SSL_ERROR_SSL
     A failure in the SSL library occurred, usually a protocol error. The OpenSSL error queue contains more information on the error.

ssl(3), err(3)

SSL_get_error() was added in SSLeay 0.8.

MirOS BSD #10-current               2005-04-29                               1
http://mirbsd.mirsolutions.de/htman/sparc/man3/SSL_get_error.htm
Implementing a “Quick SaveAs” command in AutoCAD using .NET

I had a fun request, earlier in the week, that I thought worth turning into a couple of blog posts. The problem itself is quite specific, but some of the techniques shown – especially in the second of the two posts – seemed worth sharing.

The basic request was this: to implement a “quick SaveAs” command which will – the first time it’s run – ask for a file location and name and will then – each time it’s called subsequently – simply save the current drawing into the same folder with a filename based on the original with an incrementing suffix. An example: the first time you run QSAVEAS you get presented with a SaveAs dialog and choose “c:\temp\Test.dwg”. Each time you call QSAVEAS afterwards a new drawing gets created in the temp folder: “Test 1.dwg”, “Test 2.dwg”, “Test 3.dwg”, etc. This is likely to be a handy technique for people who want to take regular snapshots of their designs as they’re working.

There’s another dimension to this particular problem, however, which is why we’re looking at it over the course of two posts: imagine we’re in an environment where some kind of script or external data-file is being generated automatically for later recreation of the model. You can think of the mechanism as being a little like AutoCAD’s action recorder, although this is not about capturing user operations as much as it is about storing the information needed to recreate the model being worked upon at a later point in time. We still want to save our DWG file – as this is a “snapshot” of the model that we can use in other systems taking AutoCAD drawings as inputs – but we also want to save our script/data-file, which is really the file that will be used to recreate the model. And at the same time we want to create an item on a special tool palette which will call a command to execute our script (or interpret our data-file). And this command tool needs to have the icon of the drawing we’ve just saved. Fun stuff!
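The incrementing-filename scheme described above is language-neutral; here's a small sketch of it (Python purely for illustration — the post itself implements this in C# below):

```python
import os

def next_snapshot_name(base_path: str, count: int) -> str:
    """Build the Nth snapshot filename the way the post describes:
    'Test.dwg' first, then 'Test 1.dwg', 'Test 2.dwg', ...
    (illustrative only -- not part of the post's actual code)."""
    folder, filename = os.path.split(base_path)
    stem, ext = os.path.splitext(filename)
    name = stem if count == 0 else f"{stem} {count}"
    return os.path.join(folder, name + ext)

print(next_snapshot_name("c:/temp/Test.dwg", 0))  # c:/temp/Test.dwg
print(next_snapshot_name("c:/temp/Test.dwg", 2))  # c:/temp/Test 2.dwg
```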
:-) Anyway, before we get too carried away on the tool palette manipulation code – which will come in the next post – we’re going to look at our simple QSAVEAS command. I did want to set the stage appropriately, though, as the need to save the DWG and have its thumbnail preview available has driven some implementation decisions in today’s code. One thing I should add, quickly: I’m not going to look at how we recreate our model from the script/data-file. That is really a much bigger problem and beyond the scope of these posts, which focus more on defining the QSAVEAS command and the steps needed to populate our tool palette. In the next post we will create a dummy script, but only to show how this can be picked up when our tool palette is used.

Here’s the C# code implementing our QSAVEAS command:

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Runtime;
using System.IO;

namespace QuickSaveAs
{
  public class Commands
  {
    // Set up static variable for the path to our folder
    // of drawings, as well as the base filename and a
    // counter to make the unique filename

    static string _path = "", _base = "";
    static int _count = 0;

    // Various filename and path-related constants

    const string
      sfxSep = " ", extSep = ".", pthSep = "\\",
      lspSep = "/", dwgExt = ".dwg";

    // Our QuickSaveAs command

    [CommandMethod("QSAVEAS")]
    public void QuickSaveAs()
    {
      Document doc =
        Application.DocumentManager.MdiActiveDocument;
      Editor ed = doc.Editor;
      Database db = doc.Database;

      // If this is the first time run...

      if (_path == "" || _base == "")
      {
        // Ask the user for a base file location

        PromptSaveFileOptions opts =
          new PromptSaveFileOptions(
            "Select location to save first drawing file"
          );
        opts.Filter = "Drawing (*.dwg)|*.dwg";
        PromptFileNameResult pr = ed.GetFileNameForSave(opts);

        // Delete the file, if it exists
        // (may be a problem if the file is in use)

        if (File.Exists(pr.StringResult))
        {
          try { File.Delete(pr.StringResult); }
          catch { }
        }

        if (pr.Status == PromptStatus.OK)
        {
          // If a file was selected, and it contains a path...

          if (pr.StringResult.Contains(pthSep))
          {
            // Separate the path from the file name

            int idx = pr.StringResult.LastIndexOf(pthSep);
            _path = pr.StringResult.Substring(0, idx);
            string fullname = pr.StringResult.Substring(idx + 1);

            // If the path has an extension (this should always
            // be the case), extract the base file name

            if (fullname.Contains(extSep))
            {
              _base =
                fullname.Substring(
                  0, fullname.LastIndexOf(extSep)
                );
            }
          }
        }
      }

      // Assuming the path and name were set appropriately...

      if (_path != "" && _base != "")
      {
        string name = _base;

        // Add our suffix if not the first time run

        if (_count > 0)
          name += sfxSep + _count.ToString();

        // Our drawing is located in the base path

        string dwgPath = _path + pthSep + name + dwgExt;

        // Now we want to save our drawing and use the image
        // for our tool icon

        // Using either COM or .NET doesn't generate a
        // thumbnail in the resultant file (or its Database)

        // .NET:
        // db.SaveAs(dwgPath, false, DwgVersion.Current, null);

        // COM:
        // AcadDocument adoc = (AcadDocument)doc.AcadDocument;
        // adoc.SaveAs(dwgPath, AcSaveAsType.acNative, null);

        // So we'll send commands to the command-line
        // We'll use LISP, to avoid having to set FILEDIA to 0

        object ocmd = Application.GetSystemVariable("CMDECHO");
        string dwgPath2 = dwgPath.Replace(pthSep, lspSep);
        doc.SendStringToExecute(
          "(setvar \"CMDECHO\" 0)" +
          "(command \"_.SAVEAS\" \"\" \"" + dwgPath2 + "\")" +
          "(setvar \"CMDECHO\" " + ocmd.ToString() + ")" +
          "(princ) ",
          false, false, false
        );

        // Print a confirmation message for the DWG save
        // (which actually gets displayed before the queued
        // string gets executed, but anyway)

        ed.WriteMessage("\nSaved to: \"" + dwgPath + "\"");
        _count++;
      }
    }
  }
}

A few further comments on the implementation: there are various ways to save a DWG file from .NET, but the requirement to have a thumbnail preview image created with the file (which is then accessible from its editor-resident Database) has limited our choice somewhat: neither Database.SaveAs() nor AcadDocument.SaveAs() (the .NET and COM methods) generate and save a thumbnail. So we’re calling the SAVEAS command, which does generate the thumbnail. We could call this in a number of ways - COM’s AcadDocument.SendCommand() would call SAVEAS synchronously, for instance – but I’ve decided to use Document.SendStringToExecute() to fire off our SAVEAS and (eventually) launch a continuation function to update the tool palette.
We’re setting CMDECHO to keep the noise down on the command-line, but we’ve managed to avoid having to set FILEDIA to zero by wrapping the call in a LISP (command) call (which AutoCAD knows prefers command-line input).

Here’s what happens when we run the QSAVEAS command at regular intervals during an editing session:

Command: NETLOAD
Command: QSAVEAS
[Selected initial location via file selection dialog...]
Saved to: "C:\QSaveAs Test\Solid model.dwg"
[Various editing operations...]
Command: QSAVEAS
Saved to: "C:\QSaveAs Test\Solid model 1.dwg"
[Various editing operations...]
Command: QSAVEAS
Saved to: "C:\QSaveAs Test\Solid model 2.dwg"
[Various editing operations...]
Command: QSAVEAS
Saved to: "C:\QSaveAs Test\Solid model 3.dwg"
[Various editing operations...]
Command: QSAVEAS
Saved to: "C:\QSaveAs Test\Solid model 4.dwg"
[Various editing operations...]
Command: QSAVEAS
Saved to: "C:\QSaveAs Test\Solid model 5.dwg"

And here are the files, shown in Explorer:

Update: This post shows a streamlined approach for this application for AutoCAD 2010 and above.
http://through-the-interface.typepad.com/through_the_interface/2009/10/implementing-a-quick-saveas-command-in-autocad-using-net.html
Copyright © 2001

Part 1 (this document) describes the SOAP envelope and SOAP transport binding framework; Part 2 [1] describes the SOAP encoding rules, the SOAP RPC convention and a concrete HTTP binding specification.

This section describes the status of this document at the time of its publication. Other documents may supersede this document. The latest status of this document series is maintained at the W3C. This is the second Working Draft of the specification, split into Part 1 (this document), which describes the SOAP envelope and transport binding framework, and SOAP Version 1.2 Part 2: Adjuncts, which describes the SOAP encoding rules, the SOAP RPC convention and a concrete HTTP binding specification. For a detailed list of changes since the last publication of this document, refer to appendix C Part 1 Change Log. A list of open issues against this document can be found at.

Comments on this document should be sent to xmlp-comments@w3.org (public archive [11]). It is inappropriate to send discussion emails to this address. Discussion of this document takes place on the public xml-dist-app@w3.org mailing list [12] per the email communication rules in the XML Protocol Working Group Charter.

4.4.2 MustUnderstand Faults
5 SOAP Transport Binding Framework
5.1 Binding to Application-Specific Protocols
5.2 Security Considerations
6 References
6.1 Normative References
6.2 Informative References
A Version Transition From SOAP/1.1 to SOAP Version 1.2
B Acknowledgements (Non-Normative)
C Part 1 Change Log

The SOAP envelope (4 SOAP Envelope) construct defines an overall framework for expressing what is in a message, who should deal with it, and whether it is optional or mandatory. The SOAP binding framework (5 SOAP Transport Binding Framework) defines an abstract framework for exchanging SOAP envelopes between peers using an underlying protocol for transport. The SOAP HTTP binding [1] (SOAP in HTTP) defines a concrete instance of a binding to the HTTP protocol [2]. The SOAP encoding rules [1] (SOAP Encoding) define a serialization mechanism that can be used to exchange instances of application-defined datatypes.
The SOAP RPC representation [1]( SOAP for RPC) defines a convention that can be used to represent remote procedure calls and responses. These four parts are functionally orthogonal. In recognition of this, the envelope and the encoding rules are defined in different namespaces.).]. The following example shows a simple notification message expressed in SOAP. The message contains the header block alertcontrol and the body block alert which are both application defined and not defined by SOAP. The header block contains the parameters priority and expires which may be of use to intermediaries as well as the ultimate destination of the message. The body block contains the actual notification message to be delivered. > message, or on top of TCP. as seen by a SOAP node. The type of a SOAP. A collection of zero or more SOAP blocks which may be targeted at any SOAP receiver within the SOAP message path. A collection of zero or more SOAP blocks targeted at the ultimate SOAP receiver within the SOAP message path. A special SOAP, generate SOAP faults, SOAP responses, and if appropriate "" (see also 4.2.2 SOAP actor Attribute)..) SOAP header blocks carry optional attribute information items with a local name of actor and a namespace name of (see 4.2.2 SOAP actor Attribute) that are used to target them to the appropriate SOAP node(s). SOAP header blocks with no such attribute information item and the SOAP body are implicitly targeted at the anonymous SOAP actor, implying that they are to be processed by the ultimate SOAP receiver. The specification refers to the (implicit or explicit) value of the SOAP actor attribute as the SOAP actor for the corresponding SOAP block (either a SOAP header block or a SOAP body block). 
A SOAP block is said to be targeted to a SOAP node if the SOAP actor (if present) on the block matches (see [7]) a role played by the SOAP node, or in the case of a SOAP block with no actor attribute information item (including SOAP body blocks), if the SOAP node has assumed the role of the anonymous SOAP actor.. SOAP header blocks carry optional attribute information items with a local name of mustUnderstand fail (see 4.4 SOAP Fault).. Generate a single SOAP MustUnderstand fault (see 4.4.2 MustUnderstand Faults) if one or more SOAP blocks targeted at the SOAP node are mandatory and are not understood by that node. If such a fault is generated, any further processing MUST NOT be done. Process SOAP blocks targeted at the SOAP node, generating SOAP faults (see 4.4 SOAP Fault) if necessary. A SOAP node MUST process SOAP blocks identified as mandatory. A SOAP node MAY process or ignore SOAP blocks not so identified. In all cases where a SOAP block is processed, the SOAP node must understand the SOAP block and must do such processing in a manner fully conformant with the specification for that SOAP block. Faults, if any, must also conform to the specification for the processed SOAP block. It is possible that the processing of particular SOAP block would control or determine the order of processing for other SOAP blocks. For example, one could create a SOAP header block to force processing of other SOAP header blocks in lexical order. In the absence of such a SOAP block, the order of processing 4.4 SOAP Fault) a whitespace delimited list where each item in the list is of type anyURI in the namespace. Each item in the list identifies a set of serialization rules that can be used to deserialize the SOAP message. The sets of rules should be listed in the order most specific to least specific.. SOAP defines an actor attribute information item that. At a SOAP receiver, the special URI "" indicates that the SOAP header block is targetted at the current SOAP node. 
This is similar to the hop-by-hop scope model represented by the Connection header field in HTTP. Blocks marked with this special actor URI are subject to the same processing rules, outlined in 2 SOAP Message Exchange Model, as user defined URIs. At a SOAP receiver, the special URI "" indicates that the SOAP header block is not targetted at any SOAP node. This allows data which is common to several blocks to be referenced from them, without being processed. Omitting the SOAP actor attribute information item implicitly targets the SOAP header block at the ultimate SOAP receiver. As described in 2.4 Understanding SOAP Headers, the SOAP mustUnderstand attribute information item is used to indicate whether the processing of a SOAP header block is mandatory or optional at the target SOAP node.". The SOAP mustUnderstand attribute information item allows for robust evolution of SOAP itself, of related services such as security mechanisms, and of applications using SOAP. SOAP blocks tagged with a SOAP mustUnderstand attribute information item with a value of "true" MUST be presumed to somehow modify the semantics of their parent or peer element information items. Tagging SOAP blocks in this manner assures that this change in semantics will not be silently (and, presumably, erroneously) ignored by those who may not fully understand it. Specific rules for processing header blocks with mustUnderstand attribute information items are provided in 2.4 Understanding SOAP Headers and 2.5 Processing SOAP Messages. The SOAP mustUnderstand attribute information item. Note: SOAP extensions can be defined for indicating the order in which processing is to occur, and for generating faults when a header entry is not processed in the appropriate order. 
Specifically, it is possible to create SOAP header blocks which are themselves targeted to the endpoint (or intermediaries), have a mustUnderstand attribute information item with a value of "true", and which have as their semantic a requirement to generate some particular fault if other headers have inadvertently survived past the intended point in the message path (presumably due to a failure to reach the intended processing node earlier in the path). Such extensions MAY depend on the presence or value of the mustUnderstand attribute information item in the surviving headers when determining whether an error has occurred.

The child element information items of the SOAP Body element information item are called SOAP body blocks. Each SOAP body block element information item:

MAY be namespace qualified.
MAY have an encodingStyle attribute information item.

SOAP defines one particular SOAP body block, the SOAP fault, which is used for reporting errors (see 4.4 SOAP Fault).

The SOAP Fault element information item is used to carry error and/or status information within a SOAP message. If present, the SOAP Fault MUST appear as a SOAP body block and MUST NOT appear more than once within a SOAP Body. The Fault element information item has:

A local name of Fault;
A namespace name of;
Two or more child element information items in order as follows:
A mandatory faultcode element information item as described below;
A mandatory faultstring element information item as described below;
An optional faultactor element information item as described below;
An optional detail element information item as described below.
SOAP faultcode values are defined in an extensible manner that allows for new SOAP faultcode values to be defined while maintaining backwards compatibility with existing SOAP faultcode values. The mechanism used is very similar to the 1xx, 2xx, 3xx etc. basic status classes defined in HTTP (see [2] section 10). However, instead of integers, they are defined as XML qualified names [7]. The character "." (dot) is used as a separator of SOAP faultcode values, indicating that what is to the left of the dot is a more generic fault code value than the value to the right. This is illustrated in the following example.

Client.Authentication

The faultcode values defined by SOAP.

Some underlying protocols may be designed for a particular purpose or application profile. SOAP bindings to such protocols MAY use the same endpoint identification (e.g., TCP port number) as the underlying protocol, in order to reuse the existing infrastructure associated with that protocol. However, the use of well-known ports by SOAP may incur additional, unintended handling by intermediaries and underlying implementations. For example, HTTP is commonly thought of as a 'Web browsing' protocol, and network administrators may place certain restrictions upon its use, or may interpose services such as filtering, content modification, routing, etc. Often, these services are interposed using port number as a heuristic. As a result, binding definitions.

The SOAP/1.1 specification [14] says the following on versioning in section 4.1.2: "SOAP does not define a traditional versioning model based on major and minor version numbers. A SOAP message MUST have an Envelope element associated with the "" namespace. If a message is received by a SOAP application in which the SOAP Envelope element is associated with a different namespace, the application MUST treat this as a version error and discard the message.
If the message is received contains an ordered list of namespace identifiers of SOAP envelopes that the SOAP node supports in the order most to least preferred. Following is an example of a VersionMismatch fault generated by a SOAP Version 1.2 node including the SOAP upgrade extension: that existing SOAP/1.1 nodes are not likely to indicate which envelope versions they support. If nothing is indicated then this means that SOAP/1.1 is the only supported.
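The mustUnderstand mechanics described above can be made concrete with a short sketch that builds an envelope carrying a mandatory header block (Python's xml.etree; the namespace URIs below are placeholders, since the draft's actual namespace strings were lost in extraction):

```python
import xml.etree.ElementTree as ET

# Placeholder namespace URIs -- the real SOAP envelope and application
# namespaces from the draft are not recoverable from the text above.
ENV = "urn:example:soap-envelope"
APP = "urn:example:alertcontrol"

ET.register_namespace("env", ENV)
ET.register_namespace("ac", APP)

envelope = ET.Element(f"{{{ENV}}}Envelope")
header = ET.SubElement(envelope, f"{{{ENV}}}Header")

# A header block marked mandatory: a node it is targeted at must either
# understand it or generate a MustUnderstand fault.
alertcontrol = ET.SubElement(header, f"{{{APP}}}alertcontrol")
alertcontrol.set(f"{{{ENV}}}mustUnderstand", "true")
ET.SubElement(alertcontrol, f"{{{APP}}}priority").text = "1"

body = ET.SubElement(envelope, f"{{{ENV}}}Body")
ET.SubElement(body, f"{{{APP}}}alert").text = "Pick up Mary at school at 2pm"

xml = ET.tostring(envelope, encoding="unicode")
print(xml)
```

Swapping in the draft's real namespace URIs would make this a valid SOAP 1.2 envelope of the kind shown in the spec's notification example.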
http://www.w3.org/TR/2001/WD-soap12-part1-20011002/
import csv

def hit_count(file):
    try:
        with open(file, newline='') as fin:
            reader = csv.reader(fin)
            custid = set()
            count = 0
            hitcount = 0
            # the following loop pulls out unique customer id's and puts
            # them into a set called "custid"
            for row in reader:
                if row[0] != "0":
                    continue
                #print(row[13])
                custid.add(row[13])
                count += 1
        print("There were", count, "events in the collection.\n")
        print("There were", len(custid), "unique customer's collected\n")
        stats = list(custid)
        print(stats)
        # the following loop counts the number of times each unique custid
        # was collected and appends the count to "stats"
        with open(file, newline='') as fin:
            reader = csv.reader(fin)
            for item in range(len(stats)):
                item = stats.pop(0)
                #print("Checking", item)
                for row in reader:
                    #print("Checking", item, "against", row[13])
                    if item == row[13]:
                        hitcount += 1
                stats.append((item, hitcount))
        for item in stats:
            print(item)
    except IOError as err:
        print("File Error", str(err))

hit_count("Austin_12032013.csv")

I created "stats" because I'm planning on further functionality that will add to the list of analysis items. So eventually I will have a list of lists that include customer id, number of times each customer came to the store, which store they visited, etc. At first the second loop wouldn't work at all. When I added the redundant "with open..." and csv reader lines, the loop would work with the first custid in the set, but stops after that. I can't figure out why the program won't iterate through the entire csv file for each of the customer id's in the custid set.
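For what it's worth, the behaviour described can be reproduced without the Jira-style CSV data: a csv.reader wraps the underlying file iterator, and once the inner "for row in reader" loop reaches end-of-file, every later pass sees an already-exhausted iterator. A minimal sketch (hypothetical in-memory data, not the poster's file):

```python
import csv
import io

# A small in-memory "file" standing in for the CSV.
data = io.StringIO("a\nb\nc\n")
reader = csv.reader(data)

first_pass = [row for row in reader]   # consumes the whole stream
second_pass = [row for row in reader]  # iterator is already exhausted

print(first_pass)   # [['a'], ['b'], ['c']]
print(second_pass)  # []
```

This is why only the first custid gets counted: the second and later iterations of the outer loop find nothing left to read. A single pass that tallies row[13] occurrences (for example with collections.Counter) avoids re-reading the file entirely.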
http://www.dreamincode.net/forums/topic/318370-iteration-problem/page__pid__1835318__st__0
A Python package to scrape the NBA api and return a play by play file

Project description

nba_scraper

This is a package written in Python to scrape the NBA's api and produce the play by play of games, either in a csv file or a pandas dataframe. This package has two main functions: scrape_game, which scrapes an individual game or a list of specific games, and scrape_season, which scrapes an entire season of regular season games. The scraper goes back to the 1999-2000 season and will pull the play by play along with who was on the court at the time of each play. Some other various statistics may be calculated as well.

Installation

To install this package just type this at the command line:

pip install nba_scraper

Usage

scrape_date_range

This allows you to scrape all regular season games in the date range passed to the function. As of right now it will not scrape playoff games. Dates must be passed in the format YYYY-MM-DD.

import nba_scraper.nba_scraper as ns

# scrape a date range
nba_df = ns.scrape_date_range('2019-01-01', '2019-01-03')

# if you want a csv: if you don't pass a file path, the default is the home
# directory
ns.scrape_date_range('2019-01-01', '2019-01-03',
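The YYYY-MM-DD contract above is easy to get wrong; a tiny helper (hypothetical — not part of nba_scraper, which does its own parsing) can validate the strings before handing them to scrape_date_range:

```python
from datetime import date, datetime

def check_date_range(start: str, end: str) -> tuple[date, date]:
    """Validate YYYY-MM-DD strings and return them as date objects.

    Illustrative helper only -- it just mirrors the format the
    package's docs say scrape_date_range expects.
    """
    fmt = "%Y-%m-%d"
    d0 = datetime.strptime(start, fmt).date()
    d1 = datetime.strptime(end, fmt).date()
    if d0 > d1:
        raise ValueError(f"start {start} is after end {end}")
    return d0, d1

print(check_date_range("2019-01-01", "2019-01-03"))
```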
https://pypi.org/project/nba-scraper/1.0.4/
See also: IRC log

minutes from last meeting approved

srt: ws-chor has been chasing the vertical standards groups from the start. ISO looking to consolidate a number of financial vertical protocol standards, SWIFT etc. I presented CDL to the ISO WG, and they showed interest

discussion of ISO classifications

Philippe: Steve and I will continue to work on this coordination, but will track and report back within this WG

srt: ws-chor has 10-12 CR issues raised, WG has been kicked back into life, Primer being worked up, Robin Milner contributing to formal semantics work, which is exciting. need Yves to work on publication from latex. need an extension until the end of the year, on target for that

bob: addressing published a CR following the Edinburgh F2F. two participants in the interop, organised an interop workshop for Boston next month, may be virtual if we don't see more participation. WSDL 2.0 testing in our current time frame is at risk

Philippe: no pressure (yet) to release the room for the interop event

Jonathan: not a lot to report since our last CG, we have about 50 issues. CR22 is about import/include interactions, may have some impact with addressing. working towards a successful event hosted by IBM in Toronto in July, may not be the end of interop, but will move us on

Tony: we need more coordination with addressing

Philippe: no XMLP chair as yet, we're looking for a new chair and Yves isn't here

jacek: we're starting to work, first F2F next week and soon after we'll publish our first working draft

pauld: had a F2F in Edinburgh, working towards a F2F in Apsley in July after which we'll publish our last Working Draft. been publicising our work at WWW2006 and plan to present at the WS-I next week and raise awareness

paulc: will alternate with chris, participation is still light, expecting more people to join soon (including IBM :). working towards our first F2F next month in Austin, TX

Philippe: AC meeting mentioned eventing, and discussed Web services.
we've a new Semantic Web lead, and there is a new iteration of XML core documents, but no widespread review anticipated

Jonathan: relative namespaces goes from errata to being collapsed into Rec documents

Philippe: other clarifications made, e.g. regarding default XML namespaces. some fairly recently

Philippe: new F2F item from DB, 31 July-1st August

Jonathan: WSDL 2.0 Features and Properties (v) WS-Policy is an issue we agreed to revisit once WS-Policy gets going. unsure what the impact on the WS-Policy WG would be should F&P continue to exist. on our agenda, but WG is possibly reluctant to enter that discussion again. we may have more information on F&P as a result of our CR testing, technical and support in implementations. some discussion in the Policy WG could be useful on how these two things could co-exist

paulc: announcement for Policy WG went to the AC list (which means that not everyone here will have seen it, see call for participation). announcement implies coordination with WSDL 2.0, but unclear who initiates that coordination. unhappy to air this dirty laundry in my nice new WG :-)

Philippe: can WSDL discuss this before WS-Policy meet F2F

Jonathan: explains the issue CR022, import and include. components built from two WSDL files may inherit properties and extensions, or take on default values. Performance of this processing is exponential. We'd like to simplify this processing based upon CR testing with implementations. may impact how addressing is using WSDL 2.0, however given this is a function of the component model and not markup, this may not impact Addressing

Philippe: will this help the performance issue?

Jonathan: should help, as it's a function of how you build the component model, and may be optimised so you don't have to reprocess each file each time it's imported

Jonathan: may result in editorial clarifications. we're interested in building testing around this issue. WS-Addressing can help by contributing test cases.
The WSDL 1.1 to 2.0 converter may be helpful here. we should share test cases between working groups

bob: happy to contribute test cases

Jonathan: we need examples quickly

Pauld wonders if WS-Policy extensions would be helpful here

Jonathan: we have overlap with policy thanks to Asir, he'll certainly be interested in this area. we're obviously looking for other examples of WSDL extensions

Philippe: semantic annotations?

jacek: our extensions are localised, and aren't typically inherited, so we're unlikely to encounter this issue (afaict)

Philippe: next call - we have a choice of falling back into our usual alternate week pattern, or skipping a week and continuing on this pattern

paulc: be aware of July the 4th

jonathan: WS-I next week and then have conflicts during August

bob: at WS-I, otherwise reasonably OK

Philippe: next meeting in two weeks, 27th?

jonathan: on holiday

further discussion of dates...

Philippe: July 18th, then August 1st? are WGs planning to meet in August?

Jonathan: WSDL traditionally doesn't meet

bob: Addressing may meet, or at least the testing

Philippe: unlikely policy will take August off

Tony: our WG page is out of date regarding participation

Philippe: blushes :-)

Resolution: Upcoming meetings: July 18, August 1, August 15, August 29

No new action items

[End of minutes]
http://www.w3.org/2006/06/13-ws-cg-minutes.html
Distributed Ray Overview¶

One of Ray’s strengths is the ability to leverage multiple machines in the same program. Ray can, of course, be run on a single machine (and is done so often) but the real power is using Ray on a cluster of machines.

Key Concepts¶

Ray Nodes: A Ray cluster consists of a head node and a set of worker nodes. The head node needs to be started first, and the worker nodes are given the address of the head node to form the cluster. The Ray cluster itself can also “auto-scale,” meaning that it can interact with a Cloud Provider to request or release instances according to application workload.

Ports: Ray processes communicate via TCP ports. When starting a Ray cluster, either on prem or on the cloud, it is important to open the right ports so that Ray functions correctly. See the Ray Ports documentation for more details.

Ray Cluster Launcher: The Ray Cluster Launcher is a simple tool that automatically provisions machines and launches a multi-node Ray cluster. You can use the cluster launcher on GCP, Amazon EC2, Azure, or even Kubernetes.

Summary¶

Clusters are started with the Ray Cluster Launcher or manually. You can also create a Ray cluster using a standard cluster manager such as Kubernetes, YARN, or SLURM. After a cluster is started, you need to connect your program to the Ray cluster. You can connect to this Ray runtime by starting a Python process that calls the following on the same node as where you ran ray start:

# This must be run on the same node as where you ran `ray start`
import ray
ray.init(address='auto')

If you want to run Java code, you need to specify the classpath via the --code-search-path option. See Code Search Path for more details.

$ ray start ... --code-search-path=/path/to/jars

and then the rest of your script should be able to leverage Ray as a distributed framework!

Using the cluster launcher¶

The ray up command uses the Ray Cluster Launcher to start a cluster on the cloud, creating a designated “head node” and worker nodes.
Any Python process that runs ray.init(address=...) on any of the cluster nodes will connect to the ray cluster.

Important: Calling ray.init on your laptop will not work if using ray up, since your laptop will not be the head node.

Here is an example of using the Cluster Launcher on AWS:

# First, run `pip install boto3` and `aws configure`
#
# Create or update the cluster. When the command finishes, it will print
# out the command that can be used to SSH into the cluster head node.
$ ray up ray/python/ray/autoscaler/aws/example-full.yaml

You can monitor the Ray cluster status with ray monitor cluster.yaml and ssh into the head node with ray attach cluster.yaml.

Manual Ray Cluster Setup¶

The most preferable way to run a Ray cluster is via the Ray Cluster Launcher. However, it is also possible to start a Ray cluster by hand.

This section assumes that you have a list of machines and that the nodes in the cluster can communicate with each other. It also assumes that Ray is installed on each machine. To install Ray, follow the installation instructions. To configure the Ray cluster to run Java code, you need to add the --code-search-path option. See Code Search Path for more details.

Starting Ray on each machine¶

On the head node (just choose some node to be the head node), run the following. If the --port argument is omitted, Ray will choose port 6379, falling back to a random port.

$ ray start --head --port=6379
...
Next steps
  To connect to this Ray runtime from another node, run
    ray start --address='<ip address>:6379' --redis-password='<password>'
If connection fails, check your firewall settings and network configuration.

The command will print out the address of the Redis server that was started (the local node IP address plus the port number you specified).

Then on each of the other nodes, run the following. Make sure to replace <address> with the value printed by the command on the head node (it should look something like 123.45.67.89:6379).
Note that if your compute nodes are on their own subnetwork with Network Address Translation, the command printed by the head node will not work when connecting from a machine outside that subnetwork. You need to find an address that will reach the head node from the second machine. If the head node has a domain address like compute04.berkeley.edu, you can simply use that in place of an IP address and rely on DNS.

    $ ray start --address=<address> --redis-password='<password>'
    --------------------
    Ray runtime started.
    --------------------
    To terminate the Ray runtime, run
      ray stop

If you wish to specify that a machine has 10 CPUs and 1 GPU, you can do this with the flags --num-cpus=10 and --num-gpus=1. See the Configuration page for more information.

If you see Unable to connect to Redis. If the Redis instance is on a different machine, check that your firewall is configured properly., this means the --port is inaccessible at the given IP address (because, for example, the head node is not actually running Ray, or you have the wrong IP address).

If you see Ray runtime started., then the node successfully connected to the IP address at the --port. You should now be able to connect to the cluster with ray.init(address='auto').

If ray.init(address='auto') keeps repeating redis_context.cc:303: Failed to connect to Redis, retrying., then the node is failing to connect to some other port(s) besides the main port. Check your firewall settings and network configuration.

To check whether each port can be reached from a node, you can use a tool such as nmap or nc.

    $ nmap -sV --reason -p $PORT $HEAD_ADDRESS
    Nmap scan report for compute04.berkeley.edu (123.456.78.910)
    Host is up, received echo-reply ttl 60 (0.00087s latency).
    rDNS record for 123.456.78.910: compute04.berkeley.edu
    PORT     STATE SERVICE REASON         VERSION
    6379/tcp open  redis   syn-ack ttl 60 Redis key-value store
    Service detection performed.
Please report any incorrect results at .

    $ nc -vv -z $HEAD_ADDRESS $PORT
    Connection to compute04.berkeley.edu 6379 port [tcp/*] succeeded!

If the node cannot access that port at that IP address, you might see

    $ nmap -sV --reason -p $PORT $HEAD_ADDRESS
    Nmap scan report for compute04.berkeley.edu (123.456.78.910)
    Host is up (0.0011s latency).
    rDNS record for 123.456.78.910: compute04.berkeley.edu
    PORT     STATE  SERVICE REASON       VERSION
    6379/tcp closed redis   reset ttl 60
    Service detection performed. Please report any incorrect results at .

    $ nc -vv -z $HEAD_ADDRESS $PORT
    nc: connect to compute04.berkeley.edu port 6379 (tcp) failed: Connection refused

Running a Ray program on the Ray cluster¶

To run a distributed Ray program, you'll need to execute your program on the same machine as one of the nodes.

Python: within your program/script, call ray.init with the address parameter (like ray.init(address=...)). This causes Ray to connect to the existing cluster. For example:

    ray.init(address="auto")

Java: add the ray.address parameter to your command line (like -Dray.address=...). To connect your program to the Ray cluster, run it like this:

    java -classpath /path/to/jars/ \
      -Dray.address=<address> \
      <classname> <args>

Note: Specifying auto as the address has not been implemented in Java yet. You need to provide the actual address. You can find the address of the server in the output of the ray up command.

Note: A common mistake is setting the address to a cluster node while running the script on your laptop. This will not work because the script needs to be executed on one of the Ray nodes.

To verify that the correct number of nodes have joined the cluster, you can run the following:

    import time

    import ray

    ray.init(address='auto')

    @ray.remote
    def f():
        time.sleep(0.01)
        return ray.services.get_node_ip_address()

    # Get a set of the IP addresses of the nodes that have joined the cluster.
    print(set(ray.get([f.remote() for _ in range(1000)])))
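If nmap and nc are not installed, the same reachability check can be done with Python's standard library. This is a generic TCP check, not a Ray API; the commented-out host below is the docs' placeholder address:

```python
import socket


def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        # create_connection resolves the host and attempts the TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unresolvable hosts.
        return False


# Placeholder head-node address and the default Ray/Redis port:
# port_reachable("compute04.berkeley.edu", 6379)
```

A True result only tells you the port is open; it does not verify that the process listening on it is actually Ray or Redis.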
https://docs.ray.io/en/master/cluster/index.html
07 December 2011 13:00 [Source: ICIS news]

LONDON (ICIS)--Germany's chemical production is forecast to increase 1.0% year on year in 2012 as growth weakens compared with 2011, the country's chemical producers' trade group, Verband der Chemischen Industrie (VCI), said on Wednesday.

"It is difficult to make an accurate forecast for the coming 12 months," said Klaus Engel, the president of VCI and CEO of specialty chemicals major Evonik. Engel pointed to the unresolved government debt crises in the eurozone, and the VCI hopes a planned summit of EU leaders will help resolve them.

Meanwhile, Germany-based chemicals producers are facing additional uncertainties from rising electricity costs because of the country's renewable energy law, the Erneuerbare-Energien-Gesetz (EEG), and emissions trading. In 2011 alone, the chemical industry's costs from the EEG and related legislation added up to €1.3bn ($1.7bn), Engel said.

Nevertheless, Engel said he would not suggest there was a "crisis mood" in the chemical industry. Companies' assessment of their overall business situation was on a level with the strong years of 2006 and 2007, he added. "There are no recognisable signs in the real economy, that would, from our perspective, justify a crisis scenario," said Engel.

As for sales and prices in 2012, VCI expects prices to rise 1.0% in 2012 and sales to increase 2.0%, he said.

VCI said naphtha prices should "remain largely stable" in 2012, given the moderate growth forecasts for the global economy in coming months. The group expects oil prices to range between $100/bbl and $120/bbl in 2012.
http://www.icis.com/Articles/2011/12/07/9514583/germany-vci-forecasts-2012-chemical-production-growth-at.html
Custom Datepicker 😱

Recently at work, I have been getting a lot of heat because of a custom datepicker they wanted. I was like: a custom datepicker? Why do you want it to be custom? Please use one of the default ones available, pick any one you like, and use it everywhere. I put a lot of effort into styling the datepicker, and on each page they tweaked it just a little, so it got to my head.

Today what I'm going to share is a component that has solved all my problems: ngbDatepicker of the ng-bootstrap library. It helped me a lot and saved me tonnes of hours that I would've spent styling and tweaking my datepicker. It has many options, so let us talk about a few of my favorite ones.

Available both as a calendar and a dropdown

My first requirement: sometimes they used to embed the date picker, sometimes they wanted it in the DOM. Each time different styles, and the deprecation of ::ng-deep made matters worse.

Different Selections

You can select the date as a range, and you can select a single date too.

Custom months

It lets you create the entire view of the months using an Angular template; you just have to put a directive on the ng-template tag. Let me give an example. This is a default datepicker:

    <ngb-datepicker #dp></ngb-datepicker>

To customize it, just add whatever HTML you want inside it, with the ngbDatepickerContent directive on that template:

    <ngb-datepicker #dp>
      <ng-template ngbDatepickerContent>
        <div *ngFor="let month of dp.state.months">
          <div>This is custom datepicker</div>
          <ngb-datepicker-month [month]="month"></ngb-datepicker-month>
          <div>Here is a footer</div>
        </div>
      </ng-template>
    </ngb-datepicker>

which will make something like this as an output. Now you can put anything at the bottom or top of the date picker, and style it as you want in those div tags. For the footer, you can use the built-in footer template input too.

Change week name labels

This one was a bit complex, but you actually just have to extend the NgbDatepickerI18n provider and provide it instead of the default one.
An example of providing custom labels is as follows:

    import {Component, Injectable} from '@angular/core';
    import {NgbDatepickerI18n, NgbDateStruct} from '@ng-bootstrap/ng-bootstrap';

    const I18N_VALUES = {
      'en': {
        // Provide labels in multiple languages.
        weekdays: ['S', 'M', 'T', 'W', 'T', 'F', 'S'], // Use whatever values you want in any language.
        months: ['January', 'February', 'March', 'April', 'May', 'June', 'July',
                 'August', 'September', 'October', 'November', 'December'] // Use whatever values you want in any language.
      }
    };

    @Injectable()
    export class I18n {
      language = 'en';
    }

    @Injectable()
    export class CustomDatepickerI18n extends NgbDatepickerI18n {
      constructor(private _i18n: I18n) {
        super();
      }

      getWeekdayShortName(weekday: number): string {
        return I18N_VALUES[this._i18n.language].weekdays[weekday - 1];
      }

      getMonthShortName(month: number): string {
        return I18N_VALUES[this._i18n.language].months[month - 1];
      }

      getMonthFullName(month: number): string {
        return this.getMonthShortName(month);
      }

      getDayAriaLabel(date: NgbDateStruct): string {
        return `${date.day}-${date.month}-${date.year}`;
      }
    }

    @Component({
      selector: 'app-calendar',
      templateUrl: './calendar.component.html',
      styleUrls: ['./calendar.component.scss'],
      providers: [I18n, {provide: NgbDatepickerI18n, useClass: CustomDatepickerI18n}]
    })
    export class DetailsCalendarComponent {
      constructor(public i18n: NgbDatepickerI18n) {
      }
    }

Give custom days template

Now we have styled the months and the labels; the only thing left is the day template. This can also be styled and modified according to your needs. You just have to provide a custom template for your days using the dayTemplate input on the ngb-datepicker:

    <ngb-datepicker #dp [dayTemplate]="customDay"></ngb-datepicker>

    <ng-template #customDay let-date let-selected="selected">
      <div [class.selected-date]="selected">
        {{ date.day }}
      </div>
    </ng-template>

Here you can define different states using disabled, selected, and focused. I put the selected-date class on the day whenever we select a day of the month.
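One detail in getWeekdayShortName above is easy to miss: the weekday argument arrives 1-based, so the array lookup subtracts one. Here is that lookup and the aria-label format as a standalone TypeScript sketch; the DateStruct interface is a local stand-in for NgbDateStruct, declared here only so the snippet runs outside Angular:

```typescript
// Local stand-in for ng-bootstrap's NgbDateStruct (same field names).
interface DateStruct {
  year: number;
  month: number;
  day: number;
}

// Labels indexed 0..6; the datepicker hands us a 1-based weekday number.
const WEEKDAYS = ['S', 'M', 'T', 'W', 'T', 'F', 'S'];

function weekdayShortName(weekday: number): string {
  return WEEKDAYS[weekday - 1]; // shift the 1-based weekday to a 0-based index
}

// Mirrors getDayAriaLabel: "day-month-year" with no zero padding.
function dayAriaLabel(date: DateStruct): string {
  return `${date.day}-${date.month}-${date.year}`;
}
```

Forgetting the - 1 shift is a classic off-by-one here: every label would be shifted by a day, and weekday 7 would read past the end of the array.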
You can add whatever styles you want in that class. Now we were able to make custom days too. So, we can see that we can style the entire datepicker using our own templates and styles. This has been very helpful to me at work. There are a lot of other options available in this library; please check out ngbDatepicker. If you have any questions, please mail me here. It will save you tonnes of time. Happy coding.
https://ajitsinghkaler.medium.com/custom-datepicker-13d94f31d9b4
H2 Database and Java

When I began using Java years ago, I heard of a database that was being coded entirely in Java. This was HypersonicSQL, and you can still download it at SourceForge. The author working on it decided it needed a rewrite, and he began the H2 database.

First, download the H2 installer at H2 download. Get the stable version 1.4.195 as the h2 jar file, because the latest version, .196, gave a ClassNotFoundException on startup looking for the driver. The tutorial can be found here: H2 Tutorial.

Next, look at your install and find the H2/bin folder. Here you find the jar file needed, h2-some-version.jar, i.e. h2-1.4.195.jar.

Most if not all databases now create the database when you connect to it. All we need to do is make the JDBC connection, create a table, add some records with SQL, and then query for records with SQL, the same as when using SQLite or almost any other database. H2 will give us some data types that SQLite does not support because, well, it's a lightweight database.

Data Types supported by H2

One thing that is nice about H2 is the huge array of data types it supports. It supports all Java primitives, and Strings as VARCHAR. I'll list a few here.

- BigInt (long)
- Int (int)
- TinyInt (byte)
- SmallInt (short)
- Boolean
- Identity (ID)
- Decimal (several sub types)
- Double
- Float
- Real
- Time
- Date
- TimeStamp
- TimeStamp with TimeZone
- Binary
- Serialized Java Objects
- VarChar
- Char
- Blob
- Clob
- UUID
- Array (Java arrays)
- Enum
- Geometry

Our H2 Example

I'm going to use INT, VARCHAR, IDENTITY, BOOLEAN, and DATE for our example. This will be the same example we used in the SQLite article, except recoded to use H2. I may go ahead and make the output look a little better this time with a helper method, getColumnString(aString, columnWidth).

Note that for the fields that returned a primitive, not a String, I had to make new Integer, Float, and Boolean objects to get a String representation. I also had to get a String representation of the Date object.
The Source

    import java.sql.*;

    public class ReadWriteH2 {

        String url = "jdbc:h2:file:c://dev/java/db/h2test.db";

        String sqlTableCreate =
            "CREATE TABLE IF NOT EXISTS datarecords (\n" +
            "  id identity PRIMARY KEY NOT NULL,\n" +
            "  name varchar,\n" +
            "  weight float,\n" +
            "  date date,\n" +
            "  active boolean\n" +
            ")";

        public ReadWriteH2() {
            try {
                Class.forName("org.h2.Driver");
                Connection conn = DriverManager.getConnection(url);
                Statement stmt = conn.createStatement();
                stmt.executeUpdate(sqlTableCreate);

                String sqlInsertUpdate =
                    "INSERT INTO datarecords (id,name,weight,date,active)" +
                    "VALUES (1,'John Brown',175.2342, '2008-12-13',true)";
                stmt.executeUpdate(sqlInsertUpdate);

                sqlInsertUpdate =
                    "INSERT INTO datarecords (id,name,weight,date,active)" +
                    "VALUES (2,'George Washington',248.9263, '1776-03-14',false)";
                stmt.executeUpdate(sqlInsertUpdate);

                String sqlSelectQuery = "SELECT * FROM datarecords";
                ResultSet rs = stmt.executeQuery(sqlSelectQuery);
                while (rs.next()) {
                    System.out.println(
                        getColumnString(new Integer(rs.getInt("id")).toString(), 10) + "\t" +
                        getColumnString(rs.getString("name"), 20) + "\t" +
                        getColumnString(new Float(rs.getFloat("weight")).toString(), 8) + "\t" +
                        getColumnString(rs.getDate("date").toString(), 10) + "\t" +
                        getColumnString(new Boolean(rs.getBoolean("active")).toString(), 6));
                }
            } catch (SQLException sqle) {
                sqle.printStackTrace();
            } catch (ClassNotFoundException cnfe) {
                cnfe.printStackTrace();
            }
        }

        // Pads the data with trailing spaces, then truncates to the column width.
        public String getColumnString(String data, int colLen) {
            data += "                              "; // enough padding for the widest column
            return data.substring(0, colLen);
        }

        public static void main(String[] args) {
            new ReadWriteH2();
        }
    }

The Output

    c:\dev\java\arksoft\post\files>java -cp "h2.jar;." ReadWriteH2
    1          John Brown           175.2342 2008-12-13 true
    2          George Washington    248.9263 1776-03-14 false
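The getColumnString helper pads with a literal run of spaces and then truncates. The same fixed-width formatting can be sketched with String.format instead; this snippet is independent of H2, and the class and method names are my own:

```java
public class ColumnPad {

    // Left-justify data in a column of the given width:
    // pad short strings with spaces, truncate long ones.
    public static String column(String data, int width) {
        // "%-Ns" left-justifies and pads with spaces to at least N characters.
        String padded = String.format("%-" + width + "s", data);
        return padded.substring(0, width);
    }

    public static void main(String[] args) {
        System.out.println(column("John Brown", 20) + "|");
        System.out.println(column("George Washington of Mount Vernon", 20) + "|");
    }
}
```

Unlike a literal run of spaces, String.format always pads to at least the requested width, so the substring call can never run past the end of the string.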
https://www.softwaredeveloperzone.com/using-jdbc-h2-formerly-hypersonicsql-store-retrieve-data/
In this article, I'll describe a way to unit test data caching with ASP.NET when using SQL Server, by utilizing the TraceServer class.

During the summer, I spent quite a lot of time building application layer APIs whose central parts were factories that created objects based on data retrieved from a SQL Server database. As the APIs had to be built to handle high performance loads, all database requests were cached.

I tried to build the APIs in a test driven fashion, and therefore wrote xUnit tests. When I was about to test the factory methods that implemented caching, I found myself in a dilemma. How should I test that objects were actually cached and not read from the database every time, and how should I test that cached objects were released from the cache when needed?

A couple of obvious solutions popped into my head. I could let the factories expose the cache keys they were using and query the cache system in my tests. This would allow me to check that objects were actually stored in the cache and that they were released from the cache when needed. The tests would not, however, be able to test that the factories actually checked the cache and returned cached objects instead of building new ones from the database. It also required that I expose the cache key building methods in the factories.

Another possible solution was to modify my factories to keep track of when objects were fetched from the database or from the cache. While I could, in theory, perform all the necessary testing with this approach, it would force me to build quite a lot of extra logic into my factories that would only be used while testing. I also suspected that this could be quite error prone.

When testing caching without automated unit tests, I would run a trace against the database with SQL Server Profiler and count the number of times a certain stored procedure was run. What I really needed to do was find a way to automate that kind of test.
After some Googling, I found the Microsoft.SqlServer.ConnectionInfo assembly, which contains the TraceServer class. The TraceServer class creates a new trace on a given SQL Server based on a trace definition file which must be supplied to it. It implements the IDataReader interface, and its rows consist of the columns that are specified in the trace definition file.

In order to use the TraceServer class for unit testing my factory methods, I set out to build a class of my own which would have three public methods:

- StartTrace(): Starts a trace against a SQL Server instance.
- StopTrace(): Stops the trace.
- CountOccurencesInTextData(string textToMatch): Returns the number of rows in the trace which contained a specified string.

I'll continue with a simplified description of how it can be implemented. You may, however, download a Visual Studio 2008 project with a complete class, a trace definition file, and the configuration functionality, here.

The first thing we need to do is add a reference to Microsoft.SqlServer.ConnectionInfo in our project and then import the namespaces below:

    using Microsoft.SqlServer.Management.Common;
    using Microsoft.SqlServer.Management.Trace;

In order to start a trace, we will need a TraceServer, an instance of the SqlConnectionInfo class which will hold the necessary information to connect to the server, and a trace definition file. The SqlConnectionInfo object will require the name of the SQL Server (such as localhost or 123.456.123.456) and the username and password of a user who has the necessary permissions to start a trace on the server.

As for the trace definition file, it can be created by exporting a trace template from SQL Server Profiler (by clicking on File -> Templates -> Export Template). In the downloadable sample project and in this example, I use a definition file that only contains the RPC:Completed and SQL:BatchCompleted events, with only three columns: Event name, text data, and the required column SPID.
    SqlConnectionInfo connectionInfo =
        new SqlConnectionInfo("serverName", "username", "password");

    TraceServer traceServer = new TraceServer();
    string traceDefinitionFilePath = "BatchOrSPCompleted.tdf";
    traceServer.InitializeAsReader(connectionInfo, traceDefinitionFilePath);

Once we're done with the trace, we stop it and then count the number of rows in which the TextData column contained a specified string.

    traceServer.Stop();

    string textToMatch = "Some string to match, " +
        "such as the name of a stored procedure";
    int count = 0;

    while (traceServer.Read())
    {
        if (traceServer.IsDBNull(1))
            continue;

        string textData = traceServer.GetString(1);
        if (textData.Contains(textToMatch))
            count++;
    }

    traceServer.Close();

The functionality described above, and especially the complete class in the downloadable project, can be used in a number of ways. A simple example scenario is that we want to check that a factory class named BlogFactory fetches a certain blog from the database the first time, and from the cache the second time.

    [Test]
    public void TestBlogCaching()
    {
        string blogSelectQuery = "SELECT ID, name FROM blog";

        GetBlogWhileTracing();
        int numberOfDatabaseHits =
            TraceCounter.CountOccurencesInTextData(blogSelectQuery);

        // As this is the first time we fetch the Blog, it should be fetched
        // from the database.
        Assert.AreEqual(1, numberOfDatabaseHits);

        GetBlogWhileTracing();
        numberOfDatabaseHits =
            TraceCounter.CountOccurencesInTextData(blogSelectQuery);

        // The Blog should now be fetched from the cache and the method call
        // should not result in any database hits.
        Assert.AreEqual(0, numberOfDatabaseHits);
    }

    private void GetBlogWhileTracing()
    {
        TraceCounter.StartTrace();
        BlogFactory.GetBlog(1);
        TraceCounter.StopTrace();
    }

While I'll try to post any updates here, you will be sure to find them, plus other articles by me, at bloodsweatand.net where I have a blog. This specific article can be found here.
http://www.codeproject.com/KB/web-cache/DataCachingTester.aspx
20 Best Unity Tips and Tricks for Game Developers

By Damian Wolf, Livecoding.

Unity is a popular game development platform. It is impressive in terms of functionality, and it caters to a wide range of game development requirements. Game developers can use Unity to create any type of game imaginable, from world-class RPGs to the most popular augmented reality game, Pokémon Go.

With widespread use around the world, many developers showcase their Unity skills and build an audience online even before their game is released! Furthermore, many beginners use Unity to learn game development, or game programming in general. The real impact of Unity is broader still, because it is a perfect tool for both indie game developers and big teams working on a project. The ecosystem also helps Unity sustain itself and grow in the right direction.

Due to its complexity (it handles design, scripting, debugging, and other aspects of game development), Unity can be tough to manage. And that's why we will go through the 20 Best Unity Tips and Tricks for Game Developers.

20 Best Unity Tips and Tricks for Game Developers

Before we start, understand that Unity is updated frequently, so the tips listed here can differ from version to version. It is always a good idea to review the tips and adapt them to your project and the version of Unity you are using. Let's get started with the tips.

Five Workflow Improvement Tips

Workflow improvement tips are aimed at helping you improve your game development process. They will ensure that your project moves faster and in the right direction. Let's list the five best workflow improvement tips for Unity game developers:

- Source control your work for maximum effectiveness: Make proper use of source control to improve your workflow. This will ensure that you don't lose any of your work and also enable you to go back and forth to check what's changed.
You can serialize assets, use a branching strategy to maximize control over production, and use submodules to maximize effective control of your source code.

- Ensure that you decide on the scale of assets you are going to use in your project. The decision depends on the type of project you are working on and the resolution the game is aimed to run at.
- Always automate your build process to save time. Automating the build process will also ensure you can work on different game versions simultaneously, and help you make small changes now and then without going through the whole build process after every change.
- Properly document your work. There is no bigger disaster than finding yourself stuck on a piece of code that you wrote earlier but forgot to document. Documentation also helps other teammates better understand your work and collaborate on the project. You can use Livecoding for video code documentation. Read this to learn more.
- Test scenes can become a bulky part of the project, and they are useless after the project is done. To make sure that your project files don't become bulky, keep test scenes separate from your code and delete them when the project is complete.

Five Coding Improvement Tips

Now, let's move to the most important part of game development: coding! Let's get started.

- Use namespaces to your advantage. Namespaces enable you to handle your code better, because they let you avoid clashes with third-party libraries and other classes within your code.
- Coroutines are a great tool for solving many game problems, but they are equally hard to understand and debug. If you are using coroutines, make sure you know what you are doing. Understand how they work in sequence and in parallel, and so forth. Read more about coroutines here.
- Extension methods are great for improving your syntax readability and management. - Localization should be done in separate files. Keep only one language in each file. Five Debugging Improvement Tips Debugging can be a tough nut to crack. With proper debugging, you can make your game release-ready and ensure that final game quality is maintained. Let's get started with some debugging tips for Unity. - Master the debugging tools available in Unity. The debugging tools in Unity provide a lot of functionality, including functions that can effectively help you debug your game. Utilize functions such as Debug.Break, Debug.Log, Debug.DrawRay, and Debug.DrawLine to your advantage. The first two functions are used to understand game state, whereas the last two functions help you to visually debug the game. You also can use the debug visual inspector to locate runtime private fields. - Because Unity doesn't provide any special IDE to work with, you can opt to use any IDE for your development work. It is also a good idea to master the IDE debugging features. Check out Visual Studio's debugging article to learn more. - Unity has released many test tools. You can check them out and enhance your debugging methods. You also can check the tutorial for Unity test tools here. In addition, you can use the available tools to run scratchpad tests. Scratchpad tests are more conventional, and don't require you to run a scene. - Console logging can be very useful if used in conjunction with an extension. For example, you can use Console Pro Enhanced to make your console amazing! - You need to debug differently to debug visual animation. The Visual debugger can help you do that by generating graphs over time. For example, you can use Monitor Components to do so. Five Performance Improvement Tips Tightening up your game optimization is necessary to make your game successful. The game could be great, but is still plagued with performance issues. 
Games with performance issues are not received well by end users. To make sure your Unity game is well-optimized, try out the following tips.

- Before you start optimizing your game, you need to find out where the performance issues are coming from. For starters, determine whether they come from the GPU or the CPU. Finding the culprit will help you approach optimization better, because GPUs and CPUs call for different optimization strategies.
- Performance optimization is important, but don't write code that is complex to read and hard to maintain in its name. The decision should depend on what performance gains you are getting for the change. If they are not substantial, skip the change. If the gains are high, keep it and document the code properly for others to follow.
- Try to share materials between objects in a scene to improve per-scene performance.
- Check whether the game works better at a lower resolution. If that's the case, use better materials and algorithms to make it work at a higher resolution.
- Use a profiler to understand and track performance problems. You can get started here.

Conclusion

Game development is a complex trade and requires mastery of different skills. The preceding tips will help you make your game development more refined. They are not exhaustive at all; it is all about mastering your craft and learning on the go.

If you are a Unity game developer, you can showcase your work and build your audience at the same time by broadcasting your work on Livecoding.tv. The platform also offers valuable feedback, because other game developers chip in by sharing their thoughts to help improve the community.

Do you think the article lacks some important points? If so, don't forget to comment below and let us know.
About the Author

Damian Wolf is an author and tech enthusiast with articles published on top technology and coding Web sites, such as InfoWorld, DZone, HongKiat, and more. In addition to working with the Unity Game Engine, he loves trying out new things: apps, software, and trends, and will gladly share his views.
http://www.codeguru.com/csharp/csharp/cs_graphics/20-unity-tips.html
UISplitViewController Tutorial: Getting Started On an app running on the iPad, it rarely makes sense to have a full-screen table view like you do so often on iPhone – there’s just too much space. To better use that space, UISplitViewController comes to the rescue. The split view lets you carve up the screen into two sections and display a view controller on each side. It’s typically used to display navigation on the left hand side, and a detail view on the right hand side. Since iOS 8, the split view controller works on both iPad and iPhone. In this UISplitViewController tutorial, you’ll make a universal app from scratch that makes use of a split view controller to display a list of monsters from Math Ninja, one of the games developed by the team here at Razeware. You’ll use a split view controller to handle the navigation and display, which will adapt to work on both iPhone and iPad. This UISplitViewController tutorial focuses on split view controllers; you should already be familiar with the basics of Auto Layout and storyboards before continuing. Getting Started Create a new Project in Xcode, and choose the iOS\Application\Single View App template. Name the project MathMonsters. Leave language as Swift. Uncheck all the checkboxes. Then click on Next to finish creating the project. Although you could use the Master-Detail App template as a starting point, you are going to start from scratch with the Single View App template. This is so you can get a better understanding of exactly how the UISplitViewController works. This knowledge will be helpful as you continue to use UISplitViewController in future projects. Open Main.storyboard. Delete the initial View Controller that is placed there by default in the storyboard. Delete ViewController.swift. Drag a Split View Controller into the empty storyboard: This will add several elements to your storyboard: - A Split View Controller. 
This is the root view of your application – the split view that will contain the entire rest of the app. - A Navigation Controller. This represents the UINavigationControllerthat will be the root view of your master view controller (ie, the left pane of the split view when on iPad or Landscape iPhone 8 Plus). If you look in the split view controller, you’ll see the navigation controller has a relationship segue of master view controller. This allows you to create an entire navigation hierarchy in the master view controller without needing to affect the detail view controller at all. - A View Controller. This will eventually display all the details of the monsters. If you look in the split view controller, you will see the view controller has a relationship segue of detail view controller: - A Table View Controller. This is the root view controller of the master UINavigationController. This will eventually display the list of monsters. Since you deleted the default initial view controller from the storyboard, you need to tell the storyboard that you want your split view controller to be the initial view controller. Select the Split View Controller and open the Attributes inspector. Check the Is Initial View Controller option. You will see an arrow to the left of the split view controller, which tells you it is the initial view controller of this storyboard. Build and run the app on an iPad simulator, and rotate your simulator to landscape. You should see an empty split view controller: Now run it on an iPhone simulator (any of them except a plus-sized phone, which is large enough that it will act just like the iPad version) and you will see that it starts off showing the detail view in full screen. 
It will also allow you to tap the back button on the navigation bar to pop back to the master view controller: On iPhones other than an iPhone Plus in landscape, a split view controller will act just like a traditional master-detail app with a navigation controller pushing and popping back and forth. This functionality is built-in and requires very little extra configuration from you, the developer. Hooray! You're going to want to have your own view controllers shown instead of these default ones, so let's get started creating those. Creating Custom View Controllers The storyboard has the view controller hierarchy set up – split view controller with its master and detail view controllers. Now you'll need to implement the code side of things to get some data to show up. Go to File\New\File… and choose the iOS\Source\Cocoa Touch Class template. Name the class MasterViewController, make it a subclass of UITableViewController, make sure the Also create XIB file checkbox is unchecked, and Language is set to Swift. Click Next and then Create. Open MasterViewController.swift. Scroll down to numberOfSections(in:). Delete this method. This method isn't needed when the table view only ever has one section. Next, find tableView(_:numberOfRowsInSection:) and replace the implementation with the following: override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return 10 } Finally, uncomment tableView(_:cellForRowAt:) and replace its implementation with the following: override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath) return cell } This way, you'll just have 10 empty rows to look at when you test this thing out later. Open Main.storyboard. Select the Root View Controller. Click on the Identity inspector. Change the class to MasterViewController.
In addition, you need to make sure the prototype cell in the table view is given a reuse identifier, or it will cause a crash when the storyboard tries to load. Within the Master View Controller, select the Prototype Cell. Change the Identifier to Cell. Change the cell Style to Basic. Now, you’ll create the view controller for the detail side. Go to File\New\File… and choose the iOS\Source\Cocoa Touch Class template. Name the class DetailViewController, make it a subclass of UIViewController, and make sure the Also create XIB file checkbox is unchecked and the Language is set to Swift. Click Next and then Create. Open Main.storyboard, and select the view controller in the View Controller Scene. Click on the Identity inspector. Change the Class to DetailViewController. Then drag a label into the middle of the detail view controller. Pin the label to the horizontal and vertical centers of the container with Auto Layout. Double-click the label to change its text to say Hello, World! so you will know it’s working when you test it out later. Build and run. At this point you should see your custom view controllers. On iPad: On iPhone: Making Your Model The next thing you need to do is define a model for the data you want to display. You don’t want to complicate things while learning the basics of split view controllers, so you’re going with a simple model with no data persistence. First, make a class representing the monsters you want to display. Go to File\New\File…, select the iOS\Source\Swift File template, and click Next. Name the file Monster and click Create. You’re just going to create a simple class with some attribute properties about each monster you want to display, and a couple of methods for creating new monsters and accessing the image for the weapon each monster has. 
Replace the contents of Monster.swift with the following: import UIKit enum Weapon { case blowgun, ninjaStar, fire, sword, smoke } class Monster { let name: String let description: String let iconName: String let weapon: Weapon init(name: String, description: String, iconName: String, weapon: Weapon) { self.name = name self.description = description self.iconName = iconName self.weapon = weapon } var weaponImage: UIImage { switch weapon { case .blowgun: return UIImage(named: "blowgun.png")! case .fire: return UIImage(named: "fire.png")! case .ninjaStar: return UIImage(named: "ninjastar.png")! case .smoke: return UIImage(named: "smoke.png")! case .sword: return UIImage(named: "sword.png")! } } var icon: UIImage? { return UIImage(named: iconName) } } This file defines an enumeration to track the different kinds of weapons, and then a class to hold the monster information. There’s a simple initializer to create Monster instances, and a convenience method to get an image corresponding to the monster’s weapon. That’s it for defining the model – so next let’s hook it up to your master view! Displaying the Monster List Open up MasterViewController.swift and add a new property to the class: let monsters = [ Monster(name: "Cat-Bot", description: "MEE-OW", iconName: "meetcatbot", weapon: .sword), Monster(name: "Dog-Bot", description: "BOW-WOW", iconName: "meetdogbot", weapon: .blowgun), Monster(name: "Explode-Bot", description: "BOOM!", iconName: "meetexplodebot", weapon: .smoke), Monster(name: "Fire-Bot", description: "Will Make You Steamed", iconName: "meetfirebot", weapon: .ninjaStar), Monster(name: "Ice-Bot", description: "Has A Chilling Effect", iconName: "meeticebot", weapon: .fire), Monster(name: "Mini-Tomato-Bot", description: "Extremely Handsome", iconName: "meetminitomatobot", weapon: .ninjaStar) ] This holds the array of monsters to populate the table view. 
Find tableView(_:numberOfRowsInSection:) and replace the return statement with the following: return monsters.count This will return the number of monsters based on the size of the array. Next, find tableView(_:cellForRowAt:) and add the following code before the final return statement: let monster = monsters[indexPath.row] cell.textLabel?.text = monster.name This will configure the cell based on the correct monster. That's it for the table view, which will simply show each monster's name. Download and unzip this art pack. Drag the folder containing those images into Assets.xcassets in Xcode. Build and run the app. You should see the list of monster bots on the left hand side on landscape iPad: On iPhone: Remember that on a compact-width iPhone, you start one level deep already in the navigation stack on the detail screen. You can tap the back button to see the table view. Displaying Bot Details Now that the table view is showing the list of monsters, it's time to get the detail view in order. Open Main.storyboard, select Detail View Controller and delete the label you put down earlier. Using the screenshot below as a guide, drag the following controls into the DetailViewController's view: - A 95×95 image view for displaying the monster's image in the upper left hand corner. - A label aligned with the top of the image view with font System Bold, size 30, and with the text "Monster Name" - Two labels underneath, with font System, size 24. One label should be bottom aligned with the image view; the other label should be below the first label. They should have their left edges aligned, and titles "Description" and "Preferred Way To Kill" - A 70×70 image view for displaying the weapon image, horizontally center aligned with the "Preferred Way To Kill" label. Need some more hints? Open the spoilers below for the set of constraints I used to make the layout.
Getting Auto Layout to use the proper constraints is especially important since this app is universal, and Auto Layout is what ensures the layout adapts well to both iPad and iPhone. That's it for Auto Layout for now. Next, you will need to hook these views up to some outlets. Open DetailViewController.swift and add the following properties to the top of the class: @IBOutlet weak var nameLabel: UILabel! @IBOutlet weak var descriptionLabel: UILabel! @IBOutlet weak var iconImageView: UIImageView! @IBOutlet weak var weaponImageView: UIImageView! var monster: Monster? { didSet { refreshUI() } } Here you added properties for the various UI elements you just added which need to dynamically change. You also added a property for the Monster object this view controller should display. Next, add the following helper method to the class: func refreshUI() { loadViewIfNeeded() nameLabel.text = monster?.name descriptionLabel.text = monster?.description iconImageView.image = monster?.icon weaponImageView.image = monster?.weaponImage } Whenever you switch the monster, you'll want the UI to refresh itself and update the details displayed in the outlets. It's possible that you'll change monster and trigger the method even before the view has loaded, so you call loadViewIfNeeded() to guarantee that the view is loaded and its outlets are connected. Now, go open up Main.storyboard. Right-click the Detail View Controller object from the Document Outline to display the list of outlets. Drag from the circle at the right of each item to the view to hook up the outlets. Remember, the icon image view is the big image view in the top left, while the weapon image view is the smaller one underneath the "Preferred Way To Kill" label. Go to AppDelegate.swift and replace the implementation of application(_:didFinishLaunchingWithOptions:) with the following: guard let splitViewController = window?.rootViewController as?
UISplitViewController, let leftNavController = splitViewController.viewControllers.first as? UINavigationController, let masterViewController = leftNavController.topViewController as? MasterViewController, let detailViewController = splitViewController.viewControllers.last as? DetailViewController else { fatalError() } let firstMonster = masterViewController.monsters.first detailViewController.monster = firstMonster return true A split view controller has an array property viewControllers that has the master and detail view controllers inside. The master view controller in your case is actually a navigation controller, so you get the top view controller from that to get your MasterViewController instance. From there, you can set the current monster to the first one in the list. Build and run the app, and if all goes well you should see some monster details on the right. On iPad Landscape: and iPhone: Note that selecting a monster on the MasterViewController does nothing yet and you’re stuck with Cat-Bot forever. That’s what you’ll work on next! Hooking Up The Master With the Detail There are many different strategies for how to best communicate between these two view controllers. In the Master-Detail App template, the master view controller has a reference to the detail view controller. That means the master view controller can set a property on the detail view controller when a row gets selected. That works fine for simple applications where you only ever have one view controller in the detail pane, but you’re going to follow the approach suggested in the UISplitViewController class reference for more complex apps and use a delegate. Open MasterViewController.swift and add the following protocol definition above the MasterViewController class definition: protocol MonsterSelectionDelegate: class { func monsterSelected(_ newMonster: Monster) } This defines a protocol with a single method, monsterSelected(_:). 
The detail view controller will implement this method, and the master view controller will message it when a monster is selected. Next, update MasterViewController to add a property for an object conforming to the delegate protocol: weak var delegate: MonsterSelectionDelegate? Basically, this means that the delegate property is required to be an object that has monsterSelected(_:) implemented. That object will be responsible for handling what needs to happen within its view after the monster was selected. Note: The delegate property is declared weak to avoid a retain cycle. To learn more about retain cycles in Swift, check out the Memory Management video in our Intermediate Swift video tutorial series. Since you want DetailViewController to update when the monster is selected, you need to implement the delegate. Open up DetailViewController.swift and add a class extension to the very end of the file: extension DetailViewController: MonsterSelectionDelegate { func monsterSelected(_ newMonster: Monster) { monster = newMonster } } Class extensions are great for separating out delegate protocols and grouping the methods together. In this extension, you are saying DetailViewController conforms to MonsterSelectionDelegate and then you implement the one required method. Now that the delegate method is ready, you need to call it from the master side. Open MasterViewController.swift and add the following method: override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) { let selectedMonster = monsters[indexPath.row] delegate?.monsterSelected(selectedMonster) } Implementing tableView(_:didSelectRowAt:) means you'll be notified whenever the user selects a row in the table view. All you need to do is notify the monster selection delegate of the new monster. Finally, open AppDelegate.swift.
In application(_:didFinishLaunchingWithOptions:), add the following code just before the final return statement: masterViewController.delegate = detailViewController That's the final connection between the two view controllers. Build and run the app on iPad, and you should now be able to select between the monsters like the following: So far so good with split views! Except there's one problem left – if you run it on iPhone, selecting monsters from the master table view does not show the detail view controller. You now need to make a small modification to make sure that the split view works on iPhone, in addition to iPad. Open up MasterViewController.swift. Find tableView(_:didSelectRowAt:) and add the following to the end of the method: if let detailViewController = delegate as? DetailViewController { splitViewController?.showDetailViewController(detailViewController, sender: nil) } First, you need to make sure the delegate is set and that it is a DetailViewController instance as you expect. You then call showDetailViewController(_:sender:) on the split view controller and pass in the detail view controller. Every subclass of UIViewController has an inherited property splitViewController, which will refer to its containing split view controller, if one exists. This new code only changes the behavior of the app on iPhone, causing the navigation controller to push the detail controller onto the stack when you select a new monster. It does not alter the behavior of the iPad implementation, since on iPad the detail view controller is always visible. After making this change, run it on iPhone and it should now behave properly. Adding just a few lines of code got you a fully functioning split view controller on both iPad and iPhone. Not bad! Split View Controller in iPad Portrait Run the app in iPad in portrait mode. At first it appears there is no way to get to the left menu, but try swiping from the left side of the screen. Pretty cool huh?
Tap anywhere outside of the menu to hide it. That built in swipe functionality is pretty cool, but what if you want to have a navigation bar up top with a button that will display the menu, similar to how it behaves on the iPhone? To do that, you will need to make a few more small modifications to the app. First, open Main.storyboard and embed the Detail View Controller into a Navigation Controller. You can do this by selecting the Detail View Controller and then selecting Editor/Embed In/Navigation Controller. Your storyboard will now look like this: Now open MasterViewController.swift and find tableView(_:didSelectRowAt:). Change the if block with the call to showDetailViewController(_:sender:) to the following: if let detailViewController = delegate as? DetailViewController, let detailNavigationController = detailViewController.navigationController { splitViewController?.showDetailViewController(detailNavigationController, sender: nil) } Instead of showing the detail view controller, you’re now showing the detail view controller’s navigation controller. The navigation controller’s root is the detail view controller anyway, so you’ll still see the same content as before, just wrapped in a navigation controller. There are two final changes to make before you run the app. First, in AppDelegate.swift update application(_:didFinishLaunchingWithOptions:) by replacing the single line initializing detailViewController with the following two lines: let rightNavController = splitViewController.viewControllers.last as? UINavigationController, let detailViewController = rightNavController.topViewController as? DetailViewController Since the detail view controller is wrapped in a navigation controller, there are now two steps to access it. 
Finally, add the following lines just before the final return statement: detailViewController.navigationItem.leftItemsSupplementBackButton = true detailViewController.navigationItem.leftBarButtonItem = splitViewController.displayModeButtonItem This tells the detail view controller to replace its left navigation item with a button that will toggle the display mode of the split view controller. It won't change anything when running on iPhone, but on iPad you will get a button in the top left to toggle the table view display. Run the app on iPad portrait and check it out: Where To Go From Here? Here's an archive of the final project with all of the code you've developed so far. For new apps, you're likely just to use the Master-Detail template to save time, which gives you a split view controller to start. But now you've seen how to use UISplitViewController from the ground up and have a much better idea of how it works. Since you've seen how easy it is to get the master-detail pattern into your universal apps, go forth and apply what you've learned! Check out our short video tutorial series on split view controllers if you're interested in more details on how they behave across devices. If you have any questions or comments, please join the forum discussion below! Team Each tutorial at raywenderlich.com is created by a team of dedicated developers so that it meets our high quality standards. The team members who worked on this tutorial are: - Author Michael Katz - Tech Editor Jayven Nhan - Final Pass Editor James Frost - Team Lead Richard Critz
https://www.raywenderlich.com/173753/uisplitviewcontroller-tutorial-getting-started-2
I'm trying to create a method which creates a URL based on the controller name and the action name. I don't want to use magic strings, so I was thinking about a method taking a lambda expression as a parameter. The tricky part is, I don't want to specify any parameters on the action method. So for instance if I have this controller: public class HomeController : IController { public ActionResult Index(int Id) { .. } } I would like to call the method like this: CreateUrl<HomeController>(x=>x.Index); The signature of the method I've come up with is: public string CreateUrl<TController>(Expression<Action<TController>> action) where TController : IController But this does not solve the problem of skipping the parameters. My method can only be called with the parameter specified like this: CreateUrl<HomeController>(x=>x.Index(1)); Is it possible to specify an action or method on a controller without having to set the parameters? It is not possible to omit the parameters with an expression tree unless you have optional or default parameters within your action methods. Because expression trees can be compiled into runnable code, the expression is still validated by the compiler so it needs to be valid code - method parameters and all. As in Dan's example below, supplying a default parameter is as simple as: public ActionResult Index(int Id = 0) Additionally, since action methods have to return some sort of result, your Expression should be of type Expression<Func<TController, object>>, which will allow for any type of object to be returned from the method defined in the expression. Definitely check out MVCContrib.
https://expressiontree-tutorial.net/knowledge-base/6581418/get-action-from-controller-using-a-lambda-expression
There are many problems that involve processing and analyzing text in much the same way as analyzing sentences in a natural language such as English to find nouns, verbs, adjectives, adverbs, etc. and determining if they fit together according to the grammar rules of that language. Such problems can be solved using the string data type, the string type provided in the C++ string library. Some of the problems in the programming projects prepared for this lab exercise require using similar string operations in some simple data encryption methods. The textbook (C++ for Engineering and Science) describes, in addition to C++'s string class, its istream and ostream classes and complex class for processing complex numbers. It also describes a RandomInt class for generating random numbers that are useful in programming simulations. Making a noun plural usually consists of adding an 's', but sometimes there are special cases. Consider these examples: As for Pig Latin, there are two basic rules: (NOTE: Vowels are 'a', 'e', 'i', 'o', 'u', and 'y' (unless it begins the word). Words in which 'u' is the first vowel and it is preceded by 'q' also require special treatment as described later.) stringops.cpp is a program for you to use to try out some string operations described in the first part of this lab exercise. Complete the portions marked ==> in the opening documentation and ==> at the beginning of main(). The file translate.cpp is a driver for translating keyboard input from English to Pig Latin. Later in this lab exercise you will repeat the preceding 3 steps for it and then add a function englishToPigLatin() to it, inserting its prototype and definition at the designated places of the program. For both problems — pluralizing nouns and english-to-pig-latin conversion — we begin by studying C++'s string type that is provided by the <string> library.
Note the #include <string> compiler directive in the translate.cpp program. The string data type in C++ is implemented with a class. The main difference between a class such as string and simple types such as int and double is that a class encapsulates both data and actions together in one object (as opposed to the simple types, which must be "shipped off" to other functions that perform the required actions). In a future lab, we'll look at how to build our own classes, but for now we'll restrict our attention to those provided in C++. A string object is used to hold a sequence (i.e., string) of characters. A single character (i.e., a char) isn't, in general, as useful as strings; so you'll probably find yourself using strings almost every time you need to process text. string Objects We can easily declare and initialize string objects with string literals; for example, string englishWord = "farm"; After the declaration, englishWord is a string object that contains the string of characters that make up the word farm. A string is an indexed type, which means that we can access individual characters in the string by specifying their positions: each index of englishWord can be used to access the individual character at that position. One important thing about indexing the characters of a string is where the indexing starts: So the first character of the string has index 0, the second character has index 1, the third character has index 2, and so on up to the last character whose index is one less than the size of the string. Keep this in mind as you use indices to access the characters and substrings of a string. string Operations As mentioned above, a class encapsulates both data and actions into one object. As a class, the string class provides many operations that can be performed on string objects.
Operations in a class are usually implemented as member functions (also known as methods or instance methods), which are simply functions provided within that class. In a later lab we will see how to make these definitions ourselves, but for now all we need to know is how to call them. This requires a slightly different syntax for a function call: object.memberFunctionName(arguments);. Member functions can be thought of as messages sent to an object and the dot (.) as a "push-button" operation that sends the message (somewhat like buttons on a stop watch or a calculator or a phone and so on). For example, int size = stringObject.size(); We can think of stringObject receiving a message size() that asks it to figure out and report back its size. In other words, "Hey, stringObject, what's your size?" This metaphor becomes more important when we implement classes, but it's also helpful in mastering this new syntax for calling methods. Let's look at a few of the operations provided by the string class. In the following table, str is of type string: The first four operations are quite straightforward. The following statements illustrate how they are used: string str = "milesperhour"; cout << str << endl; cout << str[0] << str[1] << str[2] << str[3] << str[4] << endl; cout << "Size = " << str.size() << endl; cout << str + " (mph)" << endl; if (str == "MilesPerHour") cout << "Yes\n"; else cout << "No\n"; Question #7.1: What output will be produced by these statements? Predict what you think will be produced and then enter the statements in stringops.cpp and execute it to check your answers. Now let's look at some examples of the substr() method: string sub1 = str.substr(0,5); cout << sub1 << endl; sub1 = str.substr(5,3); cout << sub1 << endl; sub1 = str.substr(8,0); cout << sub1 << endl; sub1 = str.substr( 0, str.size() ); cout << sub1 << endl; Question #7.2: What output will be produced by these statements?
Predict what you think will be produced and then enter the statements in stringops.cpp and execute it to check your answers. In these examples, it is important to note that the second argument is a size, not an index (a common mistake). The find_first_of() method is a little more complicated, but is very useful. Consider this code: string factorial = "n! = 1 * 2 * ... * n"; int firstIndex = factorial.find_first_of(".*!?", 0); Question #7.3: What output will be produced by these statements? Predict what you think will be produced and then enter the statements in stringops.cpp and execute it to check your answers. The find_first_of() method searches factorial to find the first occurrence of a character from the pattern ".*!?". It does not need to find the entire pattern, just one of the characters from the pattern. So, in the code above, firstIndex will be set to the index of the first exclamation mark in the string. Starting a search at the beginning of a string — i.e., at position 0 — is common but isn't required. We can start the search anywhere within the string. To illustrate, suppose we wish to continue the search in the preceding example beyond the first occurrence of the first ., *, or !. We need only conduct a search starting with the character after the one we found in the first search: int secondIndex = factorial.find_first_of(".*!?", firstIndex + 1); This starts the search right after the previous search and finds the second occurrence of one of the characters. Question #7.4: What will be the value of secondIndex? Question #7.5: What would be the value of secondIndex if we started the search at firstIndex instead of firstIndex + 1? Memorizing a complete description of a library such as string is usually not necessary; in fact, in most cases it isn't really feasible.
What is important is to know generally what's available in the library, where you can find the library, and most importantly, where you can find documentation for the library — for example, in your textbook (perhaps in an appendix) or by searching the Internet. Here is a handy string Library Quick Reference that describes some of the most useful string operations. Don't try to memorize them, but be able to come back to this section when working with these string operations. It might be wise to print this quick reference and keep a hard copy (or a PDF file) handy. After examining the table of examples of singular and plural nouns at the beginning of this exercise, we can formulate some simple rules: Here's how our function should behave: Our function should receive a singular noun. If the noun ends in "s" or "x", return the noun with "es" attached at the end. Otherwise, if the noun ends in "y", return the noun with the "y" replaced with "ies". Otherwise, return the noun with "s" tacked on the end. Here are the objects we need: And we can write a specification for our function: Specification: receive: a singular noun, a string precondition: the noun should be singular return: the plural version of the noun, a string Up to now, we've used assert() or an if statement to check preconditions. Trying that here would require a lot of work because there is no easy way to tell whether a word is a noun and, in addition, whether it is singular. In such situations when it isn't practical to enforce preconditions in our code, we settle for indicating clearly to those using the function that this is a precondition. If someone uses our function with a plural word (like "nouns") and gets strange results (like "nounses"), that's not our problem because we warned them! Using the string operations listed above and other operations we've seen before, here are the operations we need for this function: And now, on to the Pig Latin translator.
Like the pluralizer, the Pig Latin translator will have certain rules for transforming words that are based on the contents of the word. In the case of the pluralizer, the rule we used was determined by the last letter of the word. For the Pig Latin translator, it will be the location of the first vowel. Let's look at some examples: The main point in both rules is the first vowel. In addition to 'a', 'e', 'i', 'o', and 'u', we will treat 'y' as a vowel except when it is the first letter of the word. For example, "my" becomes "ymay" in Pig Latin, but "your" becomes "ouryay." Here's a first attempt at describing how our Pig Latin translator should behave: Receive an English word. Find the position of the first vowel in that word, checking that a vowel was actually present. If the word begins with a vowel other than 'y', we need: This gives the following specification for our function: Specification: Receive: englishWord, a string. Precondition: englishWord should be an English word. Return: piglatinWord, a string. Use the specification to create a prototype for a function named englishToPigLatin(). The rest of the design is up to you — identifying the operations needed and developing an algorithm. As with the noun-pluralizing function, checking the precondition isn't really feasible. The driver program (translate.cpp) where you will put your function prompts the user to enter English sentences, so you need not be concerned about what happens if they don't. Task: Begin work on your Pig Latin translator function by identifying the operations needed and developing an algorithm for it. Even if you aren't required to hand it in, it will help a lot if you write some of this down and have it handy as you begin to work on the function definition. We can code our algorithm for pluralizing nouns as follows: string pluralize(string singularNoun) { int lastCharIndex = singularNoun.size() - 1; char lastChar = singularNoun[lastCharIndex]; if ((lastChar == 's') || (lastChar == 'x')) return singularNoun + "es"; else if (lastChar == 'y') { string base = singularNoun.substr(0, lastCharIndex); return base + "ies"; } else return singularNoun + "s"; } Now let's break this down and spend some time comparing it with the algorithm. Step 1: Receive singularNoun.
This is done automatically through the parameter passing mechanism.

Step 2: Let lastCharIndex be the size of singularNoun minus 1.

The first statement in the pluralize function,

   int lastCharIndex = singularNoun.size() - 1;

uses the size() method to get the index of the last character.

Step 3: Let lastChar be the character of singularNoun at index lastCharIndex.

The second statement in the pluralize function,

   char lastChar = singularNoun[lastCharIndex];

uses this index to get the last character of singularNoun.

Step 4: If lastChar is 's' or 'x'
           Return singularNoun + "es"
        Otherwise if lastChar is 'y'
           (a) Let base be singularNoun without the trailing "y"
           (b) Return base + "ies"
        Otherwise,
           Return singularNoun + "s"

is implemented by the multi-branch if statement at the end of the function:

   if ((lastChar == 's') || (lastChar == 'x'))
      return singularNoun + "es";
   else if (lastChar == 'y')
   {
      string base = singularNoun.substr(0, lastCharIndex);
      return base + "ies";
   }
   else
      return singularNoun + "s";

Each of the conditions in the if statements compares two chars and determines which rule to apply. Each rule results in a string concatenation, the first and last of which are quite simple; but the second one requires a closer examination. Recall that the substr() method needs an index where it is to start extracting a substring and the length of the substring to be extracted. The beginning index is easy: 0, since we want to keep everything from the start of the word. The length is the subtle part: we need every character up to, but not including, the final 'y', the character we need to avoid. Because the indices of the individual characters begin with 0, the index of any one character is equal to the number of characters that precede it. For example, for the string "play", the indices of the characters are 0, 1, 2, and 3, so the last character (y) has index 3, and there are 3 characters that precede it. This explains why lastCharIndex can do double duty as an index into singularNoun and as a size for the substring.
Now, go back and compare Step 4 of the algorithm with the code, noting how straightforwardly each step of the algorithm translates into code. You may struggle some with the syntax of the code, since this is your first look at the string operations being used. The double use of lastCharIndex is a bit tricky, but otherwise the algorithm and the code match up very well.

The trickiest operation in the Pig Latin translator is finding the first vowel in a word. Unlike the noun pluralizer, you cannot simply test a fixed position with the index operator. Rather, you have to search for it. However, take another careful look at the table of string operations given earlier and the examples that follow it. One of those methods is exactly what you need! Then, after locating the first vowel, you need only extract the appropriate substrings from the English word and use the concatenation operator (+) to build up the Pig Latin word to be returned by the function.

Test your Pig Latin translator on all of the words from the table of examples given earlier and others that you want to try (and perhaps some others that your instructor assigns). As you test your function, you'll discover that it doesn't work on all words; in particular, words that begin with 'y' or that contain a 'q' in the initial consonants. Words beginning with 'y' pose a problem because we've been considering 'y' to be a vowel. If we didn't, words like "style" or "spry" wouldn't translate properly. However, for most words that begin with 'y', such as "yellow" and "yard", the initial 'y' should be treated as a consonant, so that the Pig Latin versions are "ellowyay" and "ardyay."

Note: Your program won't be perfect, however, because for some words, such as "Ypsilanti" and "yperite", the initial 'y' acts as a vowel. But we won't worry about these!
Question #7.6: Add code to your function to handle this special case of words beginning with 'y' and test it with the words "yellow" and "yard".

Words with a 'q' in the initial consonants followed by a 'u' also pose a problem, because the 'u' after the 'q' should also be moved to the end of the word; for example, the country "Qatar." These two cases should be considered for a full-fledged Pig Latin translator, and your instructor may wish to assign them. (See Project 6.1.) There are also some other interesting string-processing projects, including two that deal with encryption and decryption.
The Storage Team: a blog about file services and storage features in Windows and Windows Server.

Hi, this is Sanjoy Chatterjee, program manager for DFS Namespaces, here to talk about client failback. Several branch-office failure scenarios can result in a client being bound indefinitely to a remote server even after availability of its local, preferred server is restored. Customers commonly deploy DFS for availability or for data protection. They will, for example, deploy DFS for software or document distribution with replicas in both the hubs and the branches. Clients should access the local replica within their branch. When their branch server fails, they will fail over to the hub. Normally, DFS employs a referral system known as "sticky referral": DFS will stick to a referral once it has been established. This is optimal in most cases. However, in cases where the referral failed over to a remote server because the local server was down, this behavior causes the referral not to fail back to the local site, resulting in a poor experience for the end user as well as costly WAN bandwidth consumption. The lack of failback behavior is a common customer complaint.

In Windows Server 2003 R2 (with Windows XP SP2 clients), DFS introduced a feature called client failback. The client failback setting can be applied to a stand-alone root, a domain root, or a link. Links may either inherit or override the behavior of the root, similar to the interaction of the Insite flag on roots and links. As of XP SP2, clients refresh their link and root referrals when the referral TTL expires. When refreshing the referral for a link or root, the client will determine if it should attempt to fail back to a preferred server. If the failback policy is enabled per the inheritance rules listed above, the DFS server will set the failback bit in the referral response.
If the failback bit is set in the referral, the client will attempt to fail back to its local site, starting at the top of the referral list. If the client determines that it had earlier failed over to a target in the same cost bucket (as computed by the server, considering site cost and priority), it will stick to the existing target. On an XP or later client, any handles open at the time the failback occurs will not be broken: handles created prior to the failback continue to access the previous link or root target. In Vista/Longhorn, if CSC (client-side caching) is enabled, the handle will be transferred over to the new target, and CSC will ensure that the two targets are synchronized. This handle transfer is transparent to the user. If CSC is not enabled, the behavior is the same as on XP.

To enable client failback in a namespace, refer to the instructions in the DFS Management Help on TechNet. You will also need to install a client failback hotfix as described in KB article 898900.

--Sanjoy
Maui Time Weekly | October 3, 2002 | Volume 6, Issue 13 | FREE

On the cover: Survivor, p. 4; Cakewalk, p. 12; Blood Bomb, p. 16; Day & Night, p. 15; The Mighty Mullet!

[Cover advertisements: Dick's Place, "World Class Billiards, Bar & Restaurant," Kihei; Pleasant Island Holidays, "Your Best Value For Inter-Island Travel"; Maxi's Costumes, Wailuku and Maui Mall.]

News: Domestic Violence Awareness Month, Role-Model Survivor Shares Message of Hope, by Lydee Ritchee, p. 4.
Cover Story: The Mighty Mullet, why the "business up front and party out back" look rules the scene, by Christy Miles, p. 6. (Maui Time Weekly's Mullet Contest runner-up, Ms. Mullet Diann Mulkey of Haiku, wins second place; see the first-place winner on p. 7.)

Surf & Sports: King of the Air, Ho'okipa is home to this international championship, by Eli Kealoha, p. 12.

Dining: Cakewalk Paia Bakery, go ahead and make your day a cakewalk! by Mat Seavey, p. 12.

A&E: Maria Muldaur, a legend in blues at the Royal Lahaina, by Lucy Buur, p. 15.

Film Critique: Silence of the Lambs prequel stumbles; Hannibal gets dinner and a show, the audience gets a bomb (two stars), by Cole Smithey, p. 16.

Day & Night: Movie Capsules, p. 17; Movie Times, p. 17; The Grid (Maui's nightlife at a glance, 30 venues), pp. 19, 21, 23; Da Kine Calendar (dates, times and venues of upcoming events), pp. 18-24.

Staff: Publisher and Editorial Director Tommy Russo (tommy@mauitime.com); Associate Publisher Jennifer Spector (jen@mauitime.com); Director of Advertising Jeff Onderko (jeff@mauitime.com); Art Director Rudi King (rudi@mauitime.com); Calendar Editor Samantha Campos (sam@mauitime.com); Production Assistant Audrey McShane; Cartoons Ted Rall, Max Cannon; Contributing Writers Travis Henderson, Sara Artman, Cole Smithey, Mat Seavey, Chuck Shepherd, Eli Kealoha, Amy Alkon, Charles Cooper, Lucy Buur, Don Gronning; Surf & Sport Editor Dave Sweedler; Photography Sean M. Hower, Lydee Ritchee, Kirsten Guenther; Distribution Pacific Isle Circulation.

Departments: Letters, p. 4; Force Fed,
p. 8, opinions by Travis Henderson; EH BRAH, p. 8; News of the Weird, by Chuck Shepherd, p. 9; Dining Listings, the Valley Isle's most up-to-date dining resource, p. 13; Maui Time Personals, p. 25; The Advice Goddess, by Amy Alkon, p. 26; Your Horoscope, by Charles Cooper, p. 26; Classified Listings, pp. 26-27; Back Side, p. 28. Web Design: Liko Resources (webmaster@likoresources.com). Cover design: Rudi King.

658 Front St., Ste. 126A-7278, Lahaina, HI 96761; (808) 661-3786; fax (808) 661-0446.

Maui Time Weekly may be distributed only by Maui Time Weekly's authorized independent contractor. Maui Time Weekly is valued at $.50 per copy and permits one complimentary copy per person. No person may, without written permission of Maui Time Weekly, take more than one copy of each weekly issue. All opinions expressed throughout Maui Time Weekly are those of the authors and not necessarily the opinions of Maui Time Productions, Inc. and Maui Time Weekly.

[Advertisement: Air Maui Helicopter Tours; reservations and information 808-877-7005.]

Letters to the Editor

Big Business Politics

As I was driving past Dowling Company's huge shopping center construction site in Kula, I saw it lined with Council Candidate Beverly Paoli-Moore's campaign signs.
In an interview, I found out she is married to the superintendent for Goodfellow Construction Company and feels like part of the Goodfellow family. She also has profit sharing with Goodfellow worth over $1,000,000. Given the extensive business Goodfellow does in construction, she would be unable to vote on many, many issues if she became a Council member. I cannot fault big business for wanting to preserve their majority representation on the council. I do not believe Beverly Paoli-Moore can look beyond such a major fiduciary responsibility to Goodfellow to serve the people's best interests. It is time for genuine representation for the common voter. If you are tired of traffic, uncontrolled growth and government being run by the big-money candidates, then I recommend looking beyond the brochures before you vote. Sean Lester, Kula

No Incinerator

How can a county council that has just banned cigarette smoking be considering incineration as a garbage-processing option? The last thing we need in Hawaii is another source of air pollution. While it's easy to believe that we don't have pollution issues out here in the middle of the Pacific Ocean, the reality is that what we affectionately call "vog" is partly composed of car exhaust, smoke from cane fires and sugar processing. I'd hate to see Maui County add an incinerator to this list. Call it waste-to-energy, plasma arc or H-power; they all amount to the same thing. The proposition is that by simply buying the right technology, Maui County can turn its garbage into electricity while helping extend the landfills. Who wouldn't want one? The truth is that incinerators are notoriously expensive and yield only small amounts of electricity. Worse, they destroy non-renewable resources like aluminum while pouring dioxin and other toxins into the air and soil. A better alternative for processing our garbage would be to divert the large percentage of organic materials and turn it into compost.
That compost would be well used on Maui's crops and gardens to reduce our dependency on imported compost and chemical fertilizers. The current Mayor and County Council are disturbingly open to garbage incineration despite a previous ruling not to consider it. I'm hoping our future Mayor and County Council are able to resist the siren song of the incinerator manufacturers and have the guts to say "No!" Camille Armantrout, Makawao

Paia Stung

One evening last month, I was in my van painting a picture, and 3 policemen drove up and accused me of living in my van, which I don't. I checked out legal. The next morning I happened to park in the same parking lot, and 2 different policemen showed up with the orange tow sticker. The van was not there overnight, so this was inappropriate. At this point, it was discovered that my license was expired (which was missed the previous evening). The cop said, "If you drive with this license, you will be arrested." In a panic, I grabbed a tourist to move the van out of the lot so it wouldn't be towed. Upon reaching the DMV, I was shocked to be told of a 90-day grace period for an expired license! For the next three days, the police continued their "Paia sting" for hardened criminals. Well, I'll have you know that on September 10, 2002, between 7:30 am and 8:00 am, I spotted 60+ vehicles go through the Baldwin/Hana Hwy light with expired safety stickers. There is not necessarily a shortage of police here. They just need to stop creating more problems than they solve! Gary Baechle, Paia

Many Mahalos

Na Kupuna O Maui wishes to extend mahalo nui loa to all who participated and assisted with the Ku'e Me Ke Aloha at the State Building in Wailuku on August 12 and on September 2 at the Kahului, Kona, Hilo, Kauai, Molokai and Lanai airports and the Queen Liliuokalani Canoe Regatta in Kona.
A special mahalo to Na Kupuna O Hilo, Na Kupuna O Kauai, Na Kupuna O Molokai, Na Kupuna O Lanai, Na Kupuna O Waimea, Na Kupuna O Kohala, Na Kupuna O Kona and Na Kupuna O Maui. Also, mahalo nui to the airport staff and Federal and State Security at the Kahului, Kona, Hilo, Kauai, Molokai and Lanai Airports and the Police Departments of Maui, Kona, Hilo, Kauai, Molokai, and Lanai. Mahalo nui to the Office of Hawaiian Affairs; Old Lahaina Luau; AKAKU; Mayor Apana and staff; Pastor Laki, Queen Kaahumanu Church; KPOA radio station and staff; the candidates in the upcoming election who participated; and all the media. The support and kokua of these people made it possible for us to give out 2,000 handbills at the State Building regarding the Arakaki lawsuit, which threatens OHA, Hawaiian Homes and Native Hawaiian Programs, and 5,000 handbills at the September 2 rally to bring awareness and education regarding Honolulu City Council Bill 53, which is a threat to the Queen Liliuokalani Trust that serves 9,000 children. Na Ke Akua e malama ia oe I na la apau (God will care for you daily). Aunty Patty Nishiyama

Domestic Violence Awareness Month
Role-Model Survivor Shares Message of Hope

Hope was all she had left. Her self-esteem, self-worth, and self-confidence were damaged by years of emotional and physical abuse. In the beginning, she thought his control over her was a demonstration of love, that he wanted to protect her, was afraid of losing her, and therefore kept her at close watch. She recalls giving up being a cheerleader in high school because he didn't want her to "get hooked up with the jocks." Then she was voted prom queen and took photographs with the king for the school yearbook. But he "didn't like the king," so she didn't show up for the ball. "I remember years later finding my tiara, putting it on and feeling the loss of my teenage years, the years that should have been the best time of my life," she says sadly.
The abuse worsened to the point where she had to beg for her life. Ironically, she grew more emotionally and psychologically dependent on him and began to think she could not ever live without him. Why? "My confidence and self-esteem were so shattered that at times I felt like a beaten dog just waiting patiently for acknowledgement from her master: a pat on my head, a scratch under my chin. The emotional abuse and control kept me psychologically tied to the relationship. It was crazy-making for me because everywhere else in my life I had complete control." Under distress, she managed to hide the emotional and physical bruises and went to work. One day her boss noticed she wasn't laughing with the rest of the employees and called her into his office to find out what was going on. She says breaking her silence to him "changed her life." How? Her boss became one of the many caring people who would soon become a part of her new support system, and somewhere along the way, someone inspired her with hope. One step led to another, and she felt like she was developing more support and strength. She finally mustered the courage to take her children away from that abusive environment. After a month, their father called and begged her forgiveness. He also wanted another chance. She allowed him to come back. Despite his promises, the vicious cycle of violence reared its ugly head again. Little did she know that only six months later her five-year-old son would be the one with the strength to summon help from nearby neighbors. He pounded on their door yelling, "Please, you gotta help my mommy, call the police! My daddy's gonna kill my mommy!" He, too, was affected by the violence. Suddenly something else hit her hard: "The realization of the extent of the violence that my children and I endured came when the officer arrived." He asked her if she had any protection, and she was puzzled by what that meant.
He advised her to immediately see a domestic violence agency to get a restraining order. After eight years of abuse, she finally realized she had choices in life, and learned to accept love and support from family, friends, and agencies. [Photo: Lydee Ritchie]

Today, Catherine Long is a staunch advocate of domestic violence prevention. She is a success story and has even remarried. She and her husband, Kalani, own Keiki Clubhouse Inc. Preschool in Kihei, where the curriculum focuses on supporting the entire family. Grateful to all who have helped her in the past, she now gives back to the community. She works with several agencies including Child and Family Service and Malama Family Recovery Center, which is part of Aloha House. She has been asked to share her message of inspiration at an event called "Day of Hope." The public is invited to attend the presentation on Wednesday, Oct. 16, 5 p.m. at Queen Ka'ahumanu Shopping Center. In a resolution recognizing the month of October as "Domestic Violence Awareness Month" in Maui County, it is stated that domestic violence abuse cases in Maui County increased from 1,181 cases in 1990 to 4,116 cases in 2000. And it's gotten incredibly worse: from Oct. 1 of last year to Aug. 31 of this year, 4,786 abuse cases and 523 restraining order violations were reported in Maui County. On Oct. 2, 1995, President William J. Clinton issued a proclamation designating the month of October as "Domestic Violence Awareness Month." Since then, every year in October, anti-violence agencies throughout the nation have joined forces to create an awareness campaign, to commemorate victims and survivors, and to pay tribute to those who have died from domestic abuse.
Brenda Plant, Program Manager with Child and Family Service, is coordinator of DVTF's "Day of Hope" planning committee. She says the committee asked Long to share her message because "Cathie is inspirational. She exemplifies hope, strength, and courage. It is the committee's hope that when others hear her message, they will know that they no longer have to live in fear, that there is help available, and that our community will come forward and start talking about domestic violence, and others will want to join our efforts." Plant hopes that the "Day of Hope" event will continue to be held every year. She states, "Our mission for this event is to create more public awareness so that we can eradicate all domestic violence in our community. We hope to accomplish this by instilling hope within our victims, as well as enlightening all to be a part of the solution and to not hide in silence. We will mourn those who have died because of domestic violence. We will celebrate those who have survived, and connect with those who work to end violence. We appreciate and thank all of them for their efforts."

[Advertisement: Lance Holter for Maui County Council, Makawao-Paia-Haiku, November 5th. Paid for by Lance Holter for County Council.]

The County of Maui Domestic Violence Task Force (DVTF) is sponsoring the "Day of Hope," which includes a Men's March that begins at 4 p.m. in front of Sears at Queen Ka'ahumanu Center, sign waving on Ka'ahumanu Avenue at 4:30 p.m., and a presentation at Center Stage in the mall at 5 p.m. A candlelight vigil will be held in Hana Bay at 5:30 on the same evening, and members of the DVTF will be participating in Hana's Aloha Festivals parade on Oct. 19, starting at 11 a.m.
Domestic Violence Awareness Month activities on Maui will conclude with a candlelight vigil to remember those who died because of domestic violence, and to honor survivors. It will be held on Oct. 30 at Ka'ahumanu Church in Wailuku from 6:30 to 7:30 p.m. For more information on "Day of Hope," call Brenda Plant at 877-6888.

Billy Ray Cyrus had one. So did Patrick Swayze. No, I am not talking about a great butt. What these two celebrities have in common is a mullet. Long hair in the back with short layers on top can best describe this classic hair craze. Mullets of all variations, and there are many, can be observed all over the world to this day, even though they hit their peak of popularity in the '80s. However, men and women refuse to let go of the mullet craze that hit hard when leg warmers were acceptable wear on the street, and continue to sport this peculiarly fashionable "do". On any given day, mullets can be spotted; just look around you. To better understand mullets, the history of the mullet needs to be set straight. The term "mullet" is said to trace back to the 1967 prison film "Cool Hand Luke," where George Kennedy's character refers to Southern men with long hair as "Mullet Heads." In the 1700s, the "Mull-et" was the customary hairstyle of nobility and studied men, which consisted of the "short in front, long in back (SFLB)" design. The "Les Mullet" website states that there is a common myth that the mullet style got its name from a breed of fish with the same name; this is untrue. Some legends say that fishermen in cold regions used the long hair on their neck to keep them warm. Despite this popular folklore, the mullet actually originated in the old palaces and universities of Poland. "Mull" means to ponder, while "et" is a Polish suffix that means "eternally".
Therefore, the mullet received its name from people who were always pressing themselves to become smarter. In today's age, mullets are worn by everyday people who wander the streets, shop next to you at the market, drive cars, raise children with mullets, drink lots of beer at concerts, etc. You can spot mullets on Front Street in Lahaina, driving around Upcountry, at any of the harbors, and of course, at Wal-Mart. According to mulletjunky.com, in a legal mullet, the hair on the back has to be 3 times longer than any portion of hair on the skull. There are several different types of mullets. One mullet in particular is the Stingray mullet, which is very common within the Hawaiian Islands. To spot one, look for a tail, where the mullet is extended, straight, and narrowed, just like the stingray. If you haven't seen one, then head down to any of the harbors and look for a captain. They are notorious for having this type of mullet. Other common names for the mullet include the 10/90, the mud flap, the ape drape, the long island ice teased, and the achy-breaky-big mistakey, just to name a few. Here are some other types of mullets to look for.

Skullet: An elder mullet that is bald on the top with either a curly or straight mullet in the back.
Perllet: A permed mullet; this style also includes crimped mullets.
Mulltdown: The slow process of losing a mullet. This takes a period of time, but eventually, the mullet is gone.
Mulleto: Slang term for a child who only has one mullet parent.
Camaro Mullet: This mullet used to be popular in the '70s and '80s. A person who fits this style can be seen wearing acid wash jeans, a fuzzy mustache, and a key ring hanging from the belt loop.
Femullet (fem-mullet): This is a woman who is on the butch side. Her mullet is buzzed on the top with long, straight hair on the back.
Euromullet: A sophisticated, stylish, soft mullet. Yet it's still a mullet.
Trashmullet: This is your typical trailer park trash mullet.
If you do spot someone with this mullet, they usually will be driving a huge pick-up truck, have gun racks on their truck, and of course, the truck will be parked next to a trailer. No products are used, and the mullet is generally in a rattail.
Feathermullet: The late '80s and early '90s had several of these mullets roaming around high schools. The mullet is short on top with several layers that feather back just below the shoulders. Thumbing through old yearbooks, you are more than likely to find a feather mullet. A half smile, a bright-colored sweater and a matching turtleneck will always accompany the photo.

[Headline: The Mighty Mullet: why the "business up front and party out back" look rules the scene, by Christy Miles. Figures 1-3 rate hairstyles from "no mullet" (0) through "potential mullet, but still no mullet" (1.5) to "a legal mullet" (3); a legal mullet's back hair length must be 3 times longer than any part of the rest of the head's hair.]

Maui Time Weekly's Mullet Contest Grand Prize Winner David Baker was once a mullet man himself. He sported the feathermullet in high school from 1985-1989. "The cut was short on top and it came down on the sides," Baker said. "It was long and straight in the back. I miss that mullet." Baker had to cut his infamous feathermullet off because he joined the military. Looking back now, Baker thinks photos of his mullet are funny as hell. His brothers still tease him about his hairstyle. They also love making fun of people who have mullets. "My wife doesn't like mullets," Baker said. "But if she would let me, I would grow my mullet back." Why is the mullet craze still popular among women and men? When mullet people look in the mirror every morning, don't they realize that the '80s went by as fast as the wind? Owner of For Shear Hair Design, Sina Ahia, said the mullet will always be in style in Hawaii because Hawaii is behind in the world of style. "It is a well-known fact that everything is at least 20 years behind here in Maui," Ahia said.
"Especially hairstyles." When customers who have mullets come into For Shear, they assume that one of the stylists will just trim up the mullet. Ahia sometimes suggests to customers that it is time for a change. "I try to gain their trust so that they will allow me to cut their hair in a flattering way so that they will lose their mullet." Ahia said that hairstyles are an emotional attachment for many people. They usually keep the style in which they feel the most attractive. That is why people cling to their mullet-dos for so long. Ahia does have customers who still want a mullet. She asks them to keep quiet about who does their hair when they are out in public. Mullets will stay here forever. Don't ridicule men, women, and children who have mullets. They are human beings who happen to be behind in style, or attempting to resurrect the age-old hair trend. Befriend them. Take them in and encourage them to change their hairstyle. Of course, you can always let people keep their mullets and secretly laugh behind their back. Regardless of what anyone does, mullets are here to stay. Become an expert yourself; check out mullets on the Internet. Here is some mullet Q&A from mulletjoe.com:

Q: Does a Mullet's personality have anything to do with the type of Mullet they exhibit?
A: Great question! Yes. Mullet traditionalists are drawn to the timeless, feathered bi-level. The more rebellious display the beaver paddle look. These forward-thinking beavers are the avant-gardes of the up-and-coming Mullet generation.

Q: Is the Mullet a backlash against the hairstylist or the greed-driven hairstyles of the corporate '90s?
A: Neither. These folks just don't know any better.

Q: The Mullet has extended beyond the lowest rungs of the corporate ladder. Is there a niche amongst the blue-collar, working class?
A: The Mullet's niche is amongst the lowest classes of human society…it is there to stay.
The Mullet is growing in popularity amongst middle managers, pro-wrestlers, and others with less than average intelligence.

Mighty Mullet Man, a super hero. By Rebecca Marie.
Mullet Man, yes Mullet Man
Hero to us all
Fighting crime in the trailer parks
Short in the front, in the back it’s tall
Mullet Man, fighting crime wherever it may be
So brave, so strong, sporting a 10-90
Mullet Man, defending truth, love, and Billy Ray
With his mighty magical Ape drape
Up, up, and away Mullet Man!
Watch his mighty attack!
Fighting for us all, and the party in the back
Mullet MAAAAAAAAAAAN!

MULLET CONTEST WINNER! The Maui Time Weekly Mullet contest was an incredible success with thousands of entries. It was a difficult decision but our highly qualified panel of staff judges went for style, authenticity, length, and overall appearance. One entry stood out above them all, David Baker, from Kihei. The clincher was the coat and tie. While he no longer wears a mullet due to career choices and the fact that his wife is repulsed by mullets, he still had the good sense to keep a few nostalgic photos of that golden time in his life. Among the bounty of public exposure, David also won choice tickets to Journey and dinner for two at Manana Garage. Maui Time Weekly would like to thank Tom Moffat Productions, the Maui Arts and Cultural Center, and Manana Garage for sponsoring the Mullet Contest.

News of the Weird: Lead Stories. Among the personal items that former Tyco International chief executive L. 
Dennis Kozlowski bought and charged to the company (without authorization, said the company in September) were two New York City apartments ($24 million), a Boca Raton, Fla., house ($29 million), furnishings and renovations ($14 million), a travel toiletries box ($17,000), an umbrella stand ($15,000), a shower curtain ($6,000) and a pincushion ($445), along with half the $2.1 million tab for a 40th birthday party for his wife (a former waitress at a restaurant near Tyco headquarters in Exeter, N.H.). (The party, at a Sardinian resort, featured Stoli vodka loaded into a statue of a man so that it could be poured out to guests through his penis.)

Democracy in Action

Compelling Explanations: In August, a jury in Sarasota, Fla., awarded a 59-year-old woman $2.1 million from surgeon Holly Barbour for a faulty face-lift and neck-lift. According to testimony, Barbour had offered the patient a discount operation (at $7,500) because Barbour had previously worked only on eyes and wanted to expand her practice to faces. Barbour’s surgery took 10 hours (twice the norm) and left the patient with a lump on her face that made a popping sound when she blinked. Vince Dominach, the county economic development director in Easton, Pa., who was in trouble in June for $1,388 worth of personal calls on his government phone, told reporters that the problem stemmed from a hectic period in which his wife and he had become sexually involved with another couple. And Jeremiah Frank Dubois, 24, pleaded guilty to rape in August in Raleigh, N.C.; police said he told them the reason he did it was that his wedding day was approaching and he wanted one last fling before then. Former University of Hong Kong graduate architecture student Francis Frick, 34, said in May he would resist being sent back to the United States, despite the school’s having kicked him out for lack of progress. As his Ph.D. 
dissertation last year, Frick submitted a blank piece of paper (his only UHK thesis product), calling it an example of his “quantum arcology,” which focuses on nonverbal creativity; he said he plans a legal challenge to the school because his adviser failed to understand Frick’s approach. In September in Carlisle, Pa., Gordon Neal Diem was convicted of several charges in connection with an alleged attempt to lure two teenage girls (one being merely a police officer posing as one online) to a motel room for sex, but according to him, everything he did was part of his life’s dedication to finding and stopping adults who sexually abuse children. The 60 items of bondage and sex toys he had on him (and the Viagra tablets) were merely props, he said, to make him look like an authentic pervert, and a child-sex photo he had “helps motivate” him in his work, he said.

Our Civilization in Decline

Also, in the Last Month ... Lutheran minister David Benke, the main voice on the church’s national radio show, was demoted in June solely because he spoke at an all-denomination prayer service in New York City just after Sept. 11; Lutherans are strictly against praying with “pagans” because that would imply that there is more than one God. 
And ex-con and illegal Iranian immigrant Peyman Bahadori, who works (illegally, of course) as a private investigator in Colorado Springs and who was pursuing another Iranian man (who turned out to be a legal resident), was charged with impersonating an immigration agent after he harassed the man in August; Bahadori somehow persuaded four Aurora, Colo., police officers to help him in his pursuit of the man. Reuters reported that a 40-year-old Yemeni man named Yahya, who had left his wife of 15 years because of her screaming, married a deaf-mute woman (Dhamar province, Yemen). Beckman Research Institute investigators working with genetically engineered flies converted them temporarily from heterosexual to homosexual by merely turning up the temperature past 86 degrees (Duarte, Calif.). The latest person to be killed by a flying cow was a 54-year-old truck driver, who crashed after another driver knocked the cow into his truck on U.S. 160 (near Kayenta, Ariz.). A 43-year-old man was charged with kidnapping his wife and roughing her up during an argument about whether to attend church (Salt Lake City). Robert Bouslaugh dropped out of the race for sheriff in Durango, Colo., in September after he, wearing a dress, allegedly shot a man to death after the man stole his purse as he was leaving an adult bookstore; Bouslaugh said he was “working undercover” but did not elaborate. And the district attorney in Oshkosh, Wis., Joe Paulus, was beaten in the September primary after an audio tape surfaced of him bragging that he had had sex in his office with five women (but which he later denied as just “boy talk” during a night out). 
And the German Green party, which provided the margin of victory for Chancellor Gerhard Schroeder in September, drew 8 percent of the vote with such campaign billboards as the one for gay rights featuring a male couple and a female couple holding their respective partners’ nipples. By Chuck Shepherd.

Surf & Sports. By Eli Kealoha. The King of the Air: Ho’okipa is Home to Competition. The art of kiteboarding is fairly new to the sports world, making its way to the forefront of athletic competition in the past five years. Red Bull has wasted no time jumping into the game, creating the top kiteboarding event in the world on Maui’s pristine North Shore. From the initial event four years ago, Red Bull King of the Air was the first international kiteboarding event to take place in the world, setting the standard for kiteboarding events around the world. With Maui’s guaranteed wind conditions and frequent visits from the world’s best kiters with backgrounds in kiteboarding, windsurfing, surfing, wakeboarding, waterskiing, and snowboarding, Ho’okipa Beach is an ideal spot to hold this event. Last Monday, September 29, the respected North Shore was pumping with constant wind and big surf, as a collection of the world’s best kiteboarders came together to launch themselves off the waves for the fourth annual Red Bull King of the Air competition. There are few sports that rival the extreme height and flight time of kiteboarding. Kiteboarders often jump 40-60 feet high, flying more than 100 feet in distance while pulling triple and quad rotations. Top riders from around the globe will challenge each other to be crowned King of the Air. The judges are looking for overall ability in two disciplines: big air freestyle and hang time. 
For the freestyle portion, riders are scored in three categories: jumps, maneuvers (wave rides, tricks, transitions, etc.) and overall impression. The object for hang time is to achieve the longest period of time in the air for any one jump. On Tuesday, October 1st, four women and twenty men qualified in the initial portion of the competition. These qualifiers will go on to compete against the pre-seeded athletes in the King of the Air main event at Ho’okipa beach this Thursday, October 3 to Saturday, October 5. In the women’s standings after round two, Rebecca Wocthers, Dana Pinto, Sheldon Plentovich and Fabienne d’Ortoli will meet pre-seeded athletes Julie Porchaska and Marigold Zoll. The men’s standings after round two include Mark Doyle, Jose Luengo, Ben Meyer, Chuck Patterson, Jeff Tobias, Paul Franco, Greg Drexler, Jaime Herraz, Ryan Rawson, Will James, Martin Vari, Boyington, Sky Solback, Simone Vannucci, David Tyburski, Marc Ramsier, Bertrand Fleury, Starros Niarchos, Mark Shinn, and Katsushi Shinjo, who will meet Robby Naish, Marcus “Flash” Austin, Chris Gilbert, and Mauricio Toscano at the main event as well. This unique combination of local, national, and international pro and amateur kiters adds a new twist to the world-renowned kiteboarding event, while opening doors for the newer athletes to compete for the Red Bull King of the Air title and $20,000 prize purse.

Tides & Times. Tide times set for Honolulu; adjust as follows: Kahului -1hr 41min, Hana -1hr 23min, Makena -0hr 32min, Kihei/Ma`alaea -0hr 22min, Lahaina -0hr 40min.
Thur 3: Sun rises 6:23a, sets 6:17p. H 1:51a +1.2, L 7:16a +0.3, H 2:00p +2.2, L 8:44p +0.1
Fri 4: Sun rises 6:24a, sets 6:16p. H 2:33a +1.5, L 8:13a +0.2, H 2:40p +2.1, L 9:12p +0.1
Sat 5: Sun rises 6:24a, sets 6:15p. H 3:15a +1.7, L 9:09a +0.2, H 3:20p +2.0, L 9:41p +0.0
Sun 6: Sun rises 6:24a, sets 6:15p. H 3:59a +1.9, H 3:59p +1.8, L 10:05a +0.2, L 10:10p +0.0
Mon 7: Sun rises 6:24a, sets 6:14p
Tue 8: Sun rises 6:25a, sets 6:13p
Wed 9: Sun rises 6:25a, sets 6:12p 
Tide entries for Oct. 7-9: H 5:33a +2.2, H 5:20p +1.2, L 12:10p +0.4, L 11:14p +0.0; H 4:45a +2.1, H 4:39p +1.5, L 11:05a +0.3, L 10:41p +0.0; H 6:05p +1.0, L 11:48p +0.1; H 6:26a +2.3, L 1:27p +0.5.

Surf reports: Hana Highway Surf 871-6258 (NALU); Hi-Tech Surf Sports 877-3611; National Weather Service 877-3477. Websites: hawaiiweathertoday.com, buoyweather.com.

Band ads: $40 • easy • affordable • effective. For info call 661-3786.

If the Shoe Fits - Affordable fashion: platforms, boots, prom shoes, Candies. Have the hottest Halloween costume ever! 12 N. Market St., Wailuku, HI 96753. Ph. 249-9710.

Custom Tattoos • Body Piercing • Body Jewelry • Maui’s Largest Selection of Body Piercing! 193 Lahainaluna, Lahaina • 667-2156 • 10am-10pm six days.

Hawaii Amateur Surfing Association (HASA) Upcoming 2002-2003 Schedule, presented by Maui Surfing Ohana: Hi Tech/Lopez Surf Bash – 11/30-12/1/02, Hookipa Beach Park; entries are due 10/18/02. Honolua Surf Co. – 1/25/03, holding period to 2/8/03; westside location to be determined. Maui Built/Maui Tropix – 3/8-3/9/03, Hookipa Beach Park. Quicksilver Boardriders – 5/17-5/18/03, Lahaina Harbor. Entries are available at your local surf shop. Three of the four events will be counted to qualify for the State Championships in Oahu. We will be having our 1A Division (non-sponsored) again. We are encouraging new members! This is a great opportunity to give surfing competition a try. For more information, call Glynis King, Secretary/Treasurer, at 579-9003.

Tee Days at Sandalwood Golf Course, Tuesday & Thursday: Visitors $50 including cart, Residents $25 including cart (Hawaii driver’s license required). Twilight rates everyday after 12:00 noon: $25 residents / $50 visitors. 242–GOLF (4653). 2500 Honoapi`ilani Highway (Hwy. 
30) • Waikapu, Maui.

Celebrate Happy Hour at The Blue Lagoon Tropical Bar & Grill, 3pm-9pm daily. Mai Tais & Margaritas - $2.50. Draft Bud / Coors Light - $1.50. Complimentary snacks at the bar. Bring this ad and receive your second dinner entree at 1/2 price of equal or lesser value (15% gratuity will be added to total amount of check before discount, 5-7pm only). Full menu and appetizers from 9am-10pm. Casual dining & affordable prices.

Maui Coffee Roasters, 444 Hana Hwy., Kahului, HI 96732.

Dining Feature. By Mat Seavey. Cakewalk Paia Bakery. Go ahead and make your day a Cakewalk! What’s better on a fine, sunny north shore morning than the smell of lovingly crafted confections baking in the oven? Not much, I say. Cakewalk Bakery in the heart of Paia is the perfect place to get a fresh start on another sweet Maui day. Heidi Cramer begins working her magic from scratch every day at the crack of dawn. While you’re still adrift in a blissful sea of dreams, Heidi is busy baking her dreams into reality. Blueberry Oatmeal Scones, Chocolate Croissants, and Pumpkin Chocolate Chip Muffins dance before me like a vision as I rub the sleep from my weary eyes. The selections change from day to day but they’re all too good to be real. The Coconut Macadamia Nut Macaroons at the end are the object of my envy: perfectly crisped coconut and caramel colored Mac Nuts make this a must-have morning treat. I can imagine the Girl Scouts of America in secret enclaves plotting to steal this recipe. While Heidi’s fabulous display of sweets is certainly tempting, her creative cakes are her true specialty. Some of her more popular creations are the Double Chocolate Devil’s Food Cake layered with chocolate mousse and finished with vanilla buttercream and the Lemon Buttermilk Cake with lemon zest and fresh strawberry buttercream. Other flavors include Strawberry Mango, Banana Caramel Coconut, Tropical Lilikoi Delight, and Chocolate Espresso Mascarpone. 
Her signature concoction is the award winning Rose Petal Cake, a feast for the eyes and the palate. I order up a rich and frothy double Mocha as I consider my options. The Chocolate Macadamia Nut Brownie looks divine but definitely too decadent considering my choice of beverage. The sugar dusted Cinnamon Rolls catch my eye and Heidi grins and nods her approval. I sink my teeth into one and smile myself as this amazingly crafted creation seeps into my senses. The texture is what does it for me. It has a light crispness that artfully rolls toward a sweetly chewy, cinnamoned center. This is one of those things that are best enjoyed slowly with your eyes closed. Jars of cookies adorn the top of her antique display case: Chocolate Chip, Ginger, and Chocolate Decadence. Cakewalk also offers sandwiches made to order on hearty, old-world style bread that’s par-baked fresh in the morning and finished in the oven as you wait. Try her House Roasted Turkey, Italian Tuna Salad, Salami and Cheese or Fresh Mozzarella, Tomato and Basil Sandwich for a filling meal on the run. Add chips, beverage, and a cookie for a few dollars more and have a nice picnic on the beach. Cakewalk Paia Bakery is celebrating its one year anniversary and will be featuring dessert specials throughout the week. Don’t forget to register in the drawing for free cakes to suit any occasion. You can reach Heidi or her friendly staff at 579-8770 or drop by Monday through Saturday from 7:00 A.M. to 5:00 P.M. or Sunday from 8:30 A.M. to 2:30 P.M. Go ahead and make your day a Cakewalk! Photo: Kirsten Guenther.

Corner of Dairy Rd. & Hana Hwy., across from the Banyan Tree in the Wharf Cinema Center, 877–CUPS. 658 Front Street, Lahaina • 661–8141. Catering • Banquets • Buffets • Group Discounts. It’s all about Food: Fresh. Affordable. Delicious. 10:30-8pm Mon-Sat, 11-8pm Sun. 395 Dairy Road • Kahului, Maui • 877–8707. Opening This Weekend! 41 E. Lipoa St. 
Kihei (next to Hapa’s), 891-MEXI.

Dining Listings

Central Maui
Ale House - Wide selection of food with sports and games all around. 355 E. Kamehameha Ave. 877–9001
Aloha Grill - 22 different burgers including veggie styles, plus all the extras. Kids’ meals. Dairy Road Marketplace. 893–0263
Bangkok Cuisine - Casual setting featuring exceptional Thai food with plenty of crisp vegetables and fresh seafood. Lunch, dinner, or take-out. 395 Dairy Road, Unit F. 893-0026
Dunes Restaurant - Adventuresome revisions of local and American breakfast, lunch, and dinner favorites. Maui Lani Golf Course. 877–7461
Hale Imua Internet Cafe - Espresso bar, deli sandwiches, salads, real fruit smoothies, and iced blended coffees. 1980 Main St., Wailuku. 242-1896
Ichiban Restaurant and Sushi Bar - Breakfast, lunch and dinner featuring modestly priced Japanese and local cuisine. Kahului Shopping Center. 871–6977
Manaña Garage - Latin American cuisine. Chicken Tortilla Epozote, vegetarian enchiladas and paella. 33 Lono St., Ste 150. 873–0220
Maui Coffee Roasters - Ono grinds and freshly roasted coffee in a fun and casual atmosphere makes this the place to ‘take five’. 444 Hana Hwy. 877–CUPS
Piñata’s - Fresh and wholesome Mexican food from the Kitchen Sink burritos to quesadillas ala carte. Casual dining, piñatas available too. 395 Dairy Rd. 877–8707
Ramon’s - Contemporary Mexican, full sushi bar, awesome desserts. Banquet area available. 2102 Vineyard St. 244–7243
Ruby’s - Walk down memory lane at this fabulous fifties cafe. Quintessential American dining morning to night. Queen Ka`ahumanu Center. 248-7829
Wow-Wee Cafe - Unique candy bars, ice cream shakes, bagels, coffees, great sandwiches, soups, and an oxygen bar. 333 Dairy Rd. 871-1414

South Maui
BadaBing! - Homey Italian haven, award-winning thin crust pizzas, veal, calamari or chicken picatta. 1945 S. Kihei Rd. 875–0811
Bocalino Bistro & Bar - Affordably priced Mediterranean cuisine. 
Open for dinner, pupus served until 1am. Live entertainment and dancing 10pm ’til 1am. 1279 S. Kihei Rd., #314. 874-9299
Capiche? - Contemporary Italian with a twist; extensive wine list. Commanding ocean views from every table. Diamond Resort. 879–2224
Cyberbean Internet Cafe - Gourmet coffee, espressos, cappuccinos, lattes, sandwiches, smoothies & salads. 1881 S. Kihei, #112. 879-4799
DeanO’s Maui Pizza Cafe - Top quality pizza with traditional toppings, full menu with salads, pasta and sandwiches. 2439 S. Kihei Rd. 891–2200
Dick’s Place - Incredible all-you-can-eat food specials, free pool playing with purchase. 8 pool tables. 2463 S. Kihei Rd. 874–8869
El Restaurante Pasatiempo - Authentic homestyle Mexican food, with a wide range of dishes and meats to choose from. Azeka’s Plaza II. 879–1089
Five Palms Beach Grill - Local produce and fish featured in Pacific Rim cuisine. 2960 S. Kihei Rd. 879–2607
Greek Bistro - Moderately priced Greek and continental cuisine. Open for dinner 5-10pm. Kai Nani Village, 2511 S. Kihei Rd. 879-9330
Harlow’s Restaurant - Enjoy fine dining among the cozy and chic furniture and great sunset views. 2511 S. Kihei Rd. 879–1954
Jabba’s Place - Family restaurant featuring homestyle cooking at a great price. Specials nightly. Azeka’s Plaza I. 891–0989
Kai Ku Ono - A tapas-style menu, where everything is ala carte; special late night menu. 2511 S. Kihei Rd. 875–1007
La Creperie - French cuisine in a stylish and comfortable atmosphere. Serving escargot, ratatouille, and many other dishes. 1913 S. Kihei Rd. 891–0822
Life’s a Beach - Food & drinks in a fun atmosphere. Best Mex, nachos, burritos, prime rib, and grilled mahimahi are just some of the specialties. 1913 S. Kihei Rd. 891–8010
Lobster Cove - Varied menu of seafood including fresh island fish. 100 Ike Dr. 879–7677
Longhi’s Wailea - Seafood, meat and pasta entrees with many not listed on the menu. Ask the server for details. 3750 Wailea Alanui Dr. 
891–8883
Ma`alaea Grill - Reasonably priced fine dining overlooking the harbor from the Maui Ocean Center. Ma`alaea Harbor Village Shops. 243–2206
Marco’s South Side Grill - A lavish and beautiful setting compliments the hearty Italian food and excellent wines. 1445 S. Kihei Rd. 874–4041
Mulligan’s On the Blue - Maui’s authentic Irish pub, plenty o’Irish food, whiskey and beer. Breakfast is served till 3pm. 100 Kaukahi St., Wailea. 874–1131
Nick’s Fishmarket - Fine dining in open air and elegance with amazing seafood dishes and fresh fish preparations. Kea Lani Hotel. 879–7224
Pita Paradise - Good food, fast. Serving up a mean Mediterranean-style “gyro”, salads or wraps. Kihei Kalama Village Center. 875–7679
Sansei Restaurant - Japanese based Pacific Rim dining, sushi bar and late night menu. Award winning cuisine, early bird and late night specials. 1881 S. Kihei Rd. 879–0004
Sarento’s on the Beach - Contemporary dining near the water’s edge. Private VIP table available. 2980 S. Kihei Road. 875–7555
Sausage Shack - Homemade sausage in meats or veggie style on fresh baked buns, with every condiment you could desire. 1913 S. Kihei Rd. 874–6444
South Shore Grinds - Delicious and healthy plate lunches, burgers, dinners, desserts and more. 362 Huku Li`i Place #101. 875–8472
Stella Blues Cafe - Healthy, quality food in a casual, homestyle setting. Breakfast, lunch and dinner with daily specials. 1215 S. Kihei Rd. 874-3779
Taj Mahalo’s - The only Indian restaurant on Maui, homemade curries and naan, chicken tandoori, wraps, lots of vegetarian delights. Lipoa Center. 874–1911

Upcountry
Cakewalk Paia Bakery - High quality baked goods, sandwiches & specialty cakes. 2 Baldwin Ave., Paia. 579-8770
Casanova’s - First class service, first class food. Fine Italian dining at night and Makawao’s favorite deli by day. 1188 Makawao Ave., Makawao. 572–0220
Charley’s Restaurant & Saloon - Hankering for some grub? Charley’s serves it hearty and healthy from breakfast to dinner. 142 Hana Hwy., Pa`ia. 579–9453
Hali`imaile General Store - Gourmet dining in a charming atmosphere. Chef Beverly Gannon’s award-winning menu. 900 Hali`imaile Rd, Hali`imaile. 572–2666
Jacque’s Northshore Bistro - Tropical yet festive atmosphere, with a sushi bar, indoor and lanai dining. 120 Hana Hwy, Pa`ia. 579–8844
Kitada’s - Saimin for breakfast is a standard. Teri beef, hamburger steak, tofu and hekka all available. 3617 Baldwin Ave., Makawao. 572–7241
Mama’s Fish House - Fresh island fish with fresh local ingredients at “Maui’s favorite restaurant.” 799 Poho Pl., Kuau. 579–8448
Milagros Food Co. - Mexican food with an island influence. Best people watching spot in Pa`ia. 3 Baldwin St. 579–8755
Moana Bakery & Cafe - Pacific Rim dining for vegetarians and meat eaters. Bakery provides wonderful goodies for the sweet tooth. 71 Baldwin Ave., Pa`ia. 579–9999
Pa`ia Fish Market - By serving fresh local Hawaiian fish daily, they are the hot spot for seafood lovers without the upscale pocket. 100 Hana Hwy., Pa`ia. 579–8030

Maui’s Cafe, 333 Dairy Rd • By the Airport. Any breakfast, lunch or… buy one, get one free (2 for 1!). Maui’s only oxygen bar.
Sansei: now open in Kihei Town Center near Foodland. Serving dinner nightly from 5:30pm. Late night specials every Thurs.–Sat., 10pm to 1am. Phone 879-0004. Also at The Shops at Kapalua, phone 669-6286.
Da Kahuna has a brand new funk! Come check us out! Fish, chicken, steak, shrimp, & tofu kabobs! Fresh fish! Pastas! Sandwiches! Uncle Harry’s fresh soup bar! Now delivering in Lahaina town (free on Front Street). Lahaina Market Place, corner of Lahainaluna & Front Street. 661–9999.

West Maui
A&J Kitchen, Deli & Bakery - Choose from American, Hawaiian, Korean and Chinese cuisines. Bakery with cakes & cookies. Lahaina Center. 
667–0623
Banyan Tree - “Eclectic Pacific Cuisine with a Hawaiian Twist.” Lodge atmosphere, ocean views. Ritz Carlton Kapalua. 669–6200
Blue Lagoon - Casual dining with local grinds, surrounded by waterfalls and palm trees. Wharf Cinema Center. 661–8141
Bubba Gump Shrimp Co. - Fine Southern foods, with Forrest Gump movie memorabilia and logo wear. 889 Front St. 661–3111
Cafe O’Lei - Oceanfront dining featuring light and healthy yet hearty gourmet lunch and dinner. Delicious salads and focaccia sandwiches. 839 Front St. 661–9491
Cafe Sauvage - Gourmet, hearty, satisfying fare in an unpretentious setting. Extensive beer and wine menu, after-dinner cordials, and desserts! 844 Front St. 661–7600
Canoes - Casual yet elegant dining serving a combination of island-inspired contemporary and traditional cuisine. 1450 Front St. 661–0937
David Paul’s Lahaina Grill - Fine dining in the intimate dining room on the ground floor of the Lahaina Inn building. 127 Lahainaluna. 667–5517
Maui Marriott. 667–1200 ext. 51
Honokowai Okazuya & Deli - Gourmet plate lunches, sandwiches and pastas prepared as you order. Take out available. 3600-D Lower Honoapi`ilani Hwy. 665–0512
Pancho & Lefty’s - Delicious and spicy appetizers, traditional and specialty Mexican food. Wharf Cinema Center. 661–4666
House of Saimin - Ono homemade saimin, chicken sticks, and haupia pie are just some of the local favorites here. Old Lahaina Center. 667–7572
Hula Grill - Barefoot Bar and beachside dining in a 1940’s style. Menu is a seafood lover’s delight. Whaler’s Village. 667–6636
Roy’s Nicolina Restaurant - A quiet ambiance suffuses this dining experience, enhanced by the Pacific Rim cuisine. 4405 Honoapi`ilani Hwy, upstairs. 669–5000
Gerard’s - Fine French dining in Lahaina. Rich, flavorful yet light foods await your taste buds. 174 Lahainaluna. 661–8939
Kahuna Kabob - Healthy food, low price! Soups, brown rice, veggies & kabobs, will deliver. Lahaina Marketplace. 661–9999
Rusty Harpoon Restaurant and Tavern - Quench thirst, satiate hunger, and watch sports. Large parties welcome. Whalers Village. 661–3123
Fleming’s On the Green - Fine dining on the golf course. Delicate raviolis to the filet mignon, wonderful sauces. 2000 Village Rd., Kapalua. 665–1000
Karma Kafe - Coffee drinks, specialty smoothies, tea drinks and fabulous vegetarian food. Zen garden and internet access. Anchor Square. 662–1258
Hard Rock Cafe - Good American food at decent prices amongst rock ‘n roll memorabilia. Love All-Serve All. 900 Front St. 667–7400
Kimo’s - Fresh fish, prime rib, and their famous Hula Pie, oceanside dining. 845 Front St. 661–4811
Sansei Seafood Restaurant and Sushi Bar - D.K. Kodama has combined the highest quality sushi bar infused with Hawai`i’s cultural flavors. 115 Bay Drive #115, Kapalua. 669–6286
Reilley’s: “Best Mahi” - Kama’aina Hot Spots; “Best Steak of Maui” - Maui News Readers; “Award of Excellence” - Wine Spectator; “Best Steak” - Taste of Lahaina
Lahaina Coolers - Off the beaten path “surf bistro”. Good food, good quality, late night menu. 80 Dickenson St. 661–7082
Lahaina Fish Co. - Chef’s signature Pacific Rim specialties prepared with fresh island fish and seafood; dine on the oceanside lanai. 831 Front St. 661–3472
Lemongrass - Serving ala carte to the seven course traditional Vietnamese dinner. Reasonably priced and full of flavor. 930 Waine`e St. 667–6888
Longhi’s - Elegant fine dining, freshest ingredients, pasta, seafood and steaks. 888 Front St. 667–2288
Mama’s Ribs & Rotisserie - Serving ribs and roasted chicken, BBQ baked beans, cole slaw, and macaroni salad. Napili Plaza. 665–6262
Maui Brews - Daily specials, great appetizers, salads and entrees in large portions. Lahaina Center. 667–7794
Maui Mama’s - A quaint shop serving coffees grown throughout the islands. Light food, souvenirs, internet access. 578 Front St. 667–7700
Moose McGillicuddy’s - Great value, large portions, all you can eat specials and merry atmosphere. 844 Front St. 
667–7758
We invite you to discover what makes Reilley’s the best restaurant on Maui. Steaks & Seafood. Conveniently located at the entrance of Ka`anapali. 667-7477
Nachos Grande - Fresh Mexican food fast. Vegetarian too. Honokowai Marketplace. 662–0890
Reilley’s - Known for their choice award winning beef. Gourmet steaks and seafood overlooking the Ka`anapali Golf Course’s 18th hole. 2290 Ka`anapali Pkwy. 667–7477
i`o - Pacific Rim cuisine among awesome sunset views, and indoor or outdoor dining. 505 Front St. 661–8422
Penne Pasta - Mark Ellman’s inexpensive Italian bistro with homestyle pasta, pizza and salad. 180 Dickenson St., Suite 113. 661–6633
Sea House Restaurant - Looking out over incredible Napili Bay, dining is an amazing experience here under the direction of Chef Michael Gallager. 5900 Lwr. Honoapi`ilani Hwy. 669–1500
Sir Wilfred’s - Lahaina Cannery Mall’s gourmet coffee house and cafe. Soups, salads and sandwiches grace this simple menu. Lahaina Cannery. 667–1941
Spats Trattoria - Step into old Northern Italy. Tables are private, the antipasti serves two. The Hyatt Regency. 667–4727
Sports Club Kahana Grill - Upscale, healthy restaurant inside Sports Club Kahana. Breakfast, lunch & take-out. 4327 Lwr. Honoapi`ilani Rd. 669-3539
Swan Court - One of the top ten romantic restaurants in the world, extensive list of contemporary fine wines. Hyatt Regency Maui. 667–4727
Thai Chef - Thai food like you’ve never had it, curry, pad thai, summer rolls and more. Old Lahaina Center. 667–2814
Tropica - Enjoy the fire and ice-themed restaurant where the cold food and drink bar is tucked between two “volcanoes.” The Westin Maui. 667–2525
Whale’s Tale - All open-air lanai dining. Casual dining, specials, large portions. 672 Front St. 
667–4044
Nalu Sunset Bar & Sushi - Sushi rolls, sashimi, various Japanese appetizers, sandwiches and more.

Early Bird Special: 2 for 1 entrees.* Daily fresh fish specials • Pan seared NY steak • Meatloaf • Chicken Bayou La Fourche • Crab cakes • Guava glazed ham. Open air restaurant serving breakfast, lunch, and dinner. Open 7 days a week. Azeka Plaza I, 1280 S. Kihei Rd. 891-0989.

Day & Night: Hot Off The Grill! A&E pg. 15; Film Critique pg. 16; Movie Times pg. 17; The Grid pgs. 19, 21, 23; Da Kine Calendar pgs. 18-24.

Arts & Entertainment. By Lucy Buur. Maria Muldaur: A Legend in Blues at the Royal Lahaina. Maria Muldaur is a living blues legend traveling through time and music. She is one of America’s most prolific women in music, recording 25 albums to date. Maria’s inspirations came from bluegrass, folk, jazz, blues and gospel, and she is producing her own time capsule of American roots music. Her most recent album, Richland Woman Blues (2001), is a tribute to the blues women and men of the 1920’s and 1930’s. Maria will be appearing on Maui this Friday, October 4, at the Royal Lahaina Resort in the Alii Room with the Red Hot Bluesiana Band. Maria’s musical journey began in Greenwich Village in New York, growing up surrounded by the sounds of country and western singers like Hank Williams, Kitty Wells, Hank Snow, and Ernest Tubb. “I was a little girl trapped in the urban jungle, and the magic of radio opened up the world of country music to me,” recalls Maria. As a teenager, Maria tuned into early rhythm and blues and formed her own girl “doo wop” group in high school called The Cashmeres. Maria quickly became swept up in the growing new wave in American roots music that involved after hours jams with blues legends like Reverend Gary Davis. Motivated by this intense New York scene she headed south, soaking up Appalachian music and culture. After returning to New York she was invited to record with the Even Dozen Jug Band. 
"This was my first exploration of early blues and it was during this time that I first heard early recordings of Memphis Minnie," recalls Maria. "I was deeply moved and influenced by her raw, soulful sound." Her foray into blues had begun. Later, moving to Boston, she joined the Jim Kweskin Jug Band and recorded 3 albums with them. When the group disbanded in 1968 she remained with Reprise Records, recording two acclaimed albums. Her first solo album with Reprise was recorded in 1973 and went platinum two years later with the song "Midnight at the Oasis," which remains on playlists to this day. In 1974, Maria recorded the celebrated "Waitress In A Donut Shop" album with Warner Brothers, containing the hit "I'm a Woman," and made 3 more records with them before the end of the seventies. "Transbluency", Maria's 1986 release, was named Pop/Jazz Album of the Year by the New York Times. Touring extensively with her band, she also discovered her love for the New Orleans sound. Incorporating this flavor into her own musical repertoire, she coined a name for this mix of blues, R&B, and Louisiana music: Bluesiana. By 1992 Maria Muldaur signed on with Black Top Records and recorded "Louisiana Love Call" in New Orleans, contributing to the upsurge in American roots music's popularity. It was immediately hailed as the best album of her career, with guest appearances by Dr. John, Aaron and Charles Neville, accordionist Zachary Richard, and guitar guru Amos Garrett. The album was awarded "Best Adult Alternative Album of the Year" by the National Association of Independent Record Distributors. Rolling Stone magazine, adult alternative radio and blues radio raved. Maria holds the distinction of being Black Top's best-selling artist to date. Her latest project, "Richland Woman Blues", brings Maria back to her roots, paying homage to blues pioneers like Bessie Smith, Memphis Minnie, Leadbelly, Mississippi John Hurt, and Blind Willie Nelson.
Collaborating with Bonnie Raitt, Taj Mahal, Roy Rogers, and Alvin Youngblood Hart, Maria creates another blues classic. "I felt it was important to try and encapsulate what has been for me some of the deepest, most poignant and poetic artistic expression of the 20th century - or any century for that matter!" Maria says. Maria continues to tour over 200 nights a year, stating, "My goal is to continue growing and improving as a singer of soulful songs all my life." Every Maria Muldaur performance is infectious: part down-home revival, part sophisticated and joyful sensuality, and all celebration. Don't miss this opportunity to celebrate a night of killer blues; tickets are available at Royal Lahaina Resort, Groove Two Music in Lahaina, Tropical Disc in Kihei, Maui Coffee Roasters in Kahului, and Anthony's Coffee in Paia. For more information call 891-0172.

THIS WEEK'S PICKS

County Fair & Parade - Thursday thru Sunday. Food, fun and families flock to this fabulous festival of felicitous frivolity. Don't forget to enter your recipe for Turkey SPAM! Catch Kahului's biggest parade as it starts at Maui Community College, 4:45pm, and ends at the Maui County Fairgrounds, 6pm.
La Jazzerie - Every night! Maui's newest Kihei venue with a Parisian jazz-club atmosphere makes for a smooth late-night groove. Check out Monday's Jazz Jam, where jazz improv is in full effect, as you never know who will show up! Finally, a happenin' lounge for jazzin' sophisticates!
Tiffany Lee & Josh - 8:30-11:30pm, Mon., Tue., Thu. & Sat. in the Lobby Lounge at The Four Seasons Resort Wailea; 5pm at the Lagoon Bar, Sheraton Maui in Kaanapali. If you haven't heard this harmonious contemporary duo yet, then you haven't been pleasantly musically mystified… Tiffany could sing you the phone book and you would swoon!
Kiteboard Championships - Thru Sunday, at Ho'okipa Beach.
Don't miss the finals that divide the world's best kiteboarders from the ones just making a lot of splashing around in the water! Who will face the challenge and the elements to be crowned the Red Bull King of the Air Champion?
Hana High Students Poetry Reading - 12-1pm, at Borders Books. Come support the arts, Hawaiian culture & our students as they celebrate the release of their book, which offers poetry and chants, mythological stories, collages, paintings and writing in Hawaiian and Hawaii Creole!

Film Critique
By Cole Smithey

Silence of the Lambs Prequel Stumbles
Hannibal Gets Dinner And A Show - The Audience Gets A Bomb
Red Dragon (★★)

A quickly escalating game of cat and mouse between Hannibal Lecter (Sir Anthony Hopkins) and FBI investigator Will Graham (Ed Norton) sets a surprisingly fierce tone early in "Red Dragon" that the remainder of the film scarcely rises to again. This prequel to "The Silence of the Lambs," based on Thomas Harris' 1981 novel, begins with our pre-captured Chianti lover hosting a plush dinner party in his home featuring an untalented symphony violinist as the unnamed beef of the day. When agent Graham makes a late visit to pick Hannibal's psychoanalytic brain about a revelation he's had regarding a killer's habit of cutting up his victims to feast on, the blood of both men is properly spilled and Hannibal is carted off to his famous "Silence of the Lambs" cell. Ralph Fiennes plays serial killer Francis Dolarhyde (a.k.a. the Red Dragon) a little too sympathetically to evince substantial terror from his deadly character, nicknamed "the Tooth Fairy." During "Red Dragon's" false-bottom climax, a large house is engulfed in flames before it explodes like a tinderbox filled to the rafters with dynamite. No explanation is necessary to support the overkill of the scene.
This is a big-budget Hollywood thriller with big-name actors like Harvey Keitel, Emily Watson and Philip Seymour Hoffman, so it's a given that no opportunity will be missed to crank up a big explosion when one comes along. Where the last film in the series, "Hannibal," went over the top with ironic Grand Guignol set pieces at the expense of character development, "Red Dragon" goes too far into action-thriller territory to match the dark mood of horror that Hitchcock's instigating "Psycho" struck the blueprint for 32 years ago. Ed Norton is nearly passable as an ex-FBI agent brought out of retirement to assist on a high-profile case, but his dyed blond hair and slightly underage status drag on the portrayal. The twist that gave "Silence of the Lambs" its intensity of fear grew from the tender nature of Jodie Foster's Clarice Starling. In "Silence," Foster toed a line similar to the dramatic arc that drove Mia Farrow's character in "Rosemary's Baby." Starling was completely alone with her fears and doubts, and therein rested the palpable threat of Hannibal, who so lovingly salivated at the prospect of cooking up a little Clarice stew. Hannibal's hunger for Clarice Starling's flesh came across distinctly to the audience because of a miraculous chemistry between Hopkins and Foster. In contrast, Ed Norton and Anthony Hopkins have a kind of anti-chemistry in which each practically cancels the other out as the film goes on. For the meat of the film's suspense we're left with an awkward budding romance between the abused-as-a-child Dolarhyde and blind photo-lab co-worker Reba McClane (Emily Watson).
Dolarhyde worships at his self-made altar of William Blake's painting "The Red Dragon and the Woman Clothed in the Sun" when he isn't clipping out tabloid articles about his homicide idol Hannibal Lecter for his mondo scrapbook of pain and obsession. Reba is a guileless young woman much too sweet to ever be killed off in a Hollywood script. It's the film's biggest flaw that the audience is allowed such safety in a horror movie. We're never completely sold on Dolarhyde's ruthless insanity because most of his violent activity is exposed only in flashbacks and crime scene photos.

Movie Capsules

Maui Film Festival's Candlelight Café & Cinema
Tadpole - Wednesday, October 9, 5:00 & 7:30 p.m., Castle Theater. A witty and hilarious treatment of an offbeat subject: a pubescent boy's infatuation with an older woman, starring Sigourney Weaver, Bebe Neuwirth and newcomer Aaron Stanford. Think The Graduate for a new millennium. "Short and sweet, small and smart, Tadpole is the oasis in the desert of dopey summer blockbusters - an uproarious, sophisticated coming-of-age comedy so flawlessly written, acted and directed it seems practically miraculous." (NY Post) Rated PG-13. 88 min. Presented by Maui Film Festival and MACC. Tickets: $7 w/MFF passport, $10 single.

New This Week
Red Dragon - (R) - Suspense/Thriller - In this prequel to "The Silence of the Lambs," FBI Agent Will Graham (Edward Norton) consults the cannibalistic killer Hannibal Lecter (Anthony Hopkins) in his pursuit of a serial killer known as the 'Tooth Fairy'. Based on the novel by Thomas Harris. See Review.
Jonah: A Veggie Tales Movie - (G) - Animation - The first feature based on the popular children's Christian animated series, "Jonah: A Veggie Tales Movie" tells the classic story of Jonah and the Whale.

Now Showing
Ballistic: Ecks Vs.
Sever - (R) - Action/Adventure - Reclusive former FBI manhunter Jeremiah Ecks (Antonio Banderas) is blackmailed back into service to track down an unstoppable ex-DIA operative, code-named Sever (Lucy Liu), who has kidnapped the young son of the head of a secret committee of int'l security agencies. See Review.
Moonlight Mile - (PG13) - Romance - When Joe Nast's (Jake Gyllenhaal) plans for marriage change due to an unexpected loss, he wants to be the man he believes everyone wants him to be.
The Banger Sisters - (R) - Drama, Comedy - Two best friends and former rock groupies reunite after twenty years to find that one of them is still rocking out while the other has "grown up" and become more proper. Stars Susan Sarandon and Goldie Hawn.
Barbershop - (PG-13) - Comedy - An ensemble comedy about a day at a barbershop on the south side of Chicago. It's Calvin's (Ice Cube) shop, and he inherited the struggling business from his father, but with bills to pay and a baby on the way he sees the shop as a burden and waste of time. After selling the shop to a local loan shark, however, Calvin finally…
Blue Crush - (PG13) - Action/Adventure - Living in a beach shack with three other roommates, including her rebellious younger sister, Anne Marie is up before dawn every morning to conquer the waves and count the days until the Rip Masters surf competition. Anne Marie finds all she needs in the adrenaline-charged surf scene in Hawaii... until pro quarterback Matt Tollman comes along. Crafted by filmmakers dedicated to the sport, Blue Crush brings together world-class surfers in front of the camera and behind the scenes, and features some of the best sequences of women surfing ever captured on film.
The Four Feathers - (PG13) - Drama - The story, set in 1898, follows a British officer (Heath Ledger) who resigns his post when he learns of his regiment's plans to ship out to the Sudan for the conflict with the Mahdi. His friends and fiancee (Kate Hudson) send him four white feathers to symbolize cowardice. To redeem his honor, he disguises himself as an Arab and secretly saves the lives of those who branded him a coward.
Lilo & Stitch - (PG) - Animation - A captivating tale of a young girl's close encounter with the galaxy's most wanted extraterrestrial, combining whimsical, unforgettable characters, an imaginative and offbeat story, and colorful artistry.
My Big Fat Greek Wedding - (PG) - Romantic Comedy - Yes, it's still here! An unmarried Greek woman who is pressured to find a nice Greek man to meet and breed with falls in love with the charming and irresistible non-Greek John Corbett, and hysterical familial antics abound! And if you haven't seen it by now...
One Hour Photo - (R) - Suspense/Thriller - Photo development is a responsibility Sy Parrish takes very seriously. A person's life, after all, in its simplest terms is nothing more than moments strung together from the second of birth to that final instant when the last breath is drawn. Sy Parrish treasures these moments more than most people do....
Spy Kids 2: The Island of Lost Dreams - (PG) - Adventure - Now, Carmen and Juni are Level 2 OSS agents, about to set off on their own solo mission to save the world from a mysterious volcanic island populated by a mad scientist and his imaginative menagerie of creatures.
Sweet Home Alabama - (PG13) - Romantic Comedy - Reese Witherspoon stars as New York fashion designer Melanie Carmichael. Melanie suddenly finds herself engaged to the city's most eligible bachelor (Patrick Dempsey), but her past holds many secrets, including Jake (Josh…).
Trapped - (R) - Suspense/Thriller - Joe and Cheryl Hickey (Kevin Bacon and Courtney Love), along with Joe's cousin Marvin (Pruitt Taylor Vince), have orchestrated and refined a foolproof plan to extort money from wealthy families. They've preyed upon helpless families with confidence, skill, and success. But this time, they picked the wrong family - a family that chooses to fight back and take control of a terrifying ordeal that is spiraling towards an unthinkable outcome. Also stars Stuart Townsend and Charlize Theron.
The Tuxedo - (PG13) - Action - …
XXX - (PG13) - Action/Adventure - Vin Diesel stars as former extreme sports athlete Xander "XXX" Cage, notorious for his death-defying public stunts. Enlisted for a dangerous covert mission, he must use all his extreme skills to combat a clever, organized and ruthless enemy far beyond the scope of his experience. A new kind of hero is born.

Maui Film Festival
Castle Theatre, 572-3456
Tadpole - PG13 - Wednesday 5pm, 7:30pm

Maui Mall Megaplex
Maui Mall, 249–2222 (D = Daily)
Ballistic: Ecks vs.
Sever - R - D (1:50, 4:10), 7:10, 9:15; Sa-Su (11:40, 1:50), 4:10, 7:10, 9:15
Barbershop - PG13 - D (1:40, 4:40), 7:30, 9:50; Sa-Su (11:20, 1:40), 4:40, 7:30, 9:50
Blue Crush - PG13 - D 7:05, 9:35; Sa-Su 7:05, 9:35
Lilo & Stitch - PG - D (1:35, 4:20); Sa-Su (11:30, 1:35), 4:20
One Hour Photo - R - D (1:25, 4:25), 7:20, 9:40; Sa-Su (11:05, 1:25), 4:25, 7:20, 9:40
Red Dragon - R - D (1, 3:30, 3:45, 4), 6:30, 6:45, 7, 9, 9:30, 10; Sa-Su (11:45, 12, 1), 3:30, 3:45, 4, 6:30, 6:45, 7, 9, 9:30, 10
Spy Kids 2 - PG - D (1:20, 4:05); Sa-Su (11:10, 1:20), 4:05
Sweet Home Alabama - PG13 - D (1:15, 2, 4:15, 4:50), 6:50, 7:25, 9:20, 9:55; Sa-Su (11:25, 1:15, 2), 4:15, 4:50, 6:50, 7:25, 9:20, 9:55
Trapped - R - D 6:55, 9:25; Sa-Su 6:55, 9:25
Tuxedo - PG13 - D (1:30, 1:45, 4:30, 4:45), 7:15, 7:45, 9:45, 10:05; Sa-Su (11, 11:15, 1:30, 1:45), 4:30, 4:45, 7:15, 7:45, 9:45, 10:05

Ka`ahumanu 6
Queen Ka`ahumanu Shopping Center, 878–3456
The Banger Sisters - R - Fr-Th (12:30, 2:45), 5, 7:30, 9:50
The Four Feathers - PG13 - Fr-Th (1:15), 4:30, 7:20, 9:55
Jonah: A Veggie Tales Movie - G - Fr-Th (12:30, 2:30), 4:30, 7, 9
Moonlight Mile - PG13 - Fr-Th (12:40, 3), 5:20, 7:40, 10
My Big Fat Greek Wedding - PG - Fr-Th (12:35, 2:50), 5:05, 7:20, 9:35
Spy Kids 2: The Island of Lost Dreams - PG - Fr-Th (12:35, 2:45), 5
XXX - PG13 - F-Th 7, 9:45

Kukui Mall
1819 South Kihei Road, 878–3456
The Banger Sisters - R - Fr-Su (12:45, 3:15), 7, 9:30; M-Th (1:30), 3:45, 5:50, 8:15
Red Dragon - R - Fr-Su (1:15, 4:15), 7, 9:45; M-Th (1:15), 4:15, 7:30
Sweet Home Alabama - PG13 - Fr-Su (1, 3:50), 7:30, 10; M-Th (1), 3:15, 5:30, 8:15
The Tuxedo - PG13 - Fr-Su (12:30, 2:45), 5, 7:15, 9:55; M-Th (1:45), 4:15, 6:30, 8:45

Front Street Theaters
900 Front Street, 249–2222 (D = Daily)
The Banger Sisters - R - Sa-Su (1:15, 4:15), 7:15, 9:55; D (4:15), 7:15, 9:55
Barbershop - PG13 - Sa-Su (12:45, 3:45), 6:45, 9:45; D (3:45), 6:45, 9:45
Red Dragon - R - Sa-Su (1:30, 4:30), 7:30, 10; D (4:30), 7:30, 10
Sweet Home Alabama -
PG13 - Sa-Su (1, 4), 7, 10; D (4), 7, 10

Wharf Cinema Center
658 Front Street, 249–2222
Ballistic: Ecks vs. Sever - R - Sa-Su (11:30, 2), 4:30, 7:30, 9:45; D (11:30, 2, 4:30), 7:30, 9:45
Spy Kids 2 - PG - Sa-Su (11, 1:30), 4, 7, 9:15; D (11, 1:30, 4), 7, 9:15
Tuxedo - PG13 - Sa-Su (11:15, 1:45), 4:15, 7:15, 9:30; D (11:15, 1:45, 4:15), 7:15, 9:30

Da Kine Calendar

BIG SHOWS - Tickets On Sale Now

Hachioji Kuruma Ningyo and Shinnai: Japanese Traditional Performing Arts Puppets and Narrative Song - Thursday, 10/3. Tokyo's premier puppet theater dating from Japan's Edo Period (early 19th century), Kuruma Ningyo performs stories from classic Japanese literature. The troupe is accompanied by shinnai music - narrative songs (the Edo Period's "top pops"). Maui is honored to welcome shinnai master Tsuruga Wakasanojo XI, designated by the Japanese government as a Living National Treasure, the highest honor for an artist. 7:30pm, Castle Theater, MACC, 242-SHOW.
Cecilio & Kapono - Friday, 10/4 & Saturday, 10/5. Since their founding nearly 30 years ago, C&K's career has become local legend and their influence on contemporary Hawaiian music is unquestioned. 7:30pm, Castle Theater, MACC, 242-SHOW.
Maria Muldaur and the Red Hot Bluesiana Band - Friday, 10/4. Nominated for a Grammy and two W.C. Handy Awards and voted Independent Women Album of the Year, Maria is at her peak! Presented by The Maui Blues Association and Mehana Brewing Company. 8:00pm, Royal Lahaina Resort, Alii Room, 891-0172.
Bad Religion with The Quintessentials - Thursday, 10/10. All ages welcome. 7pm, World Cafe, Oahu, (808) 526-4400.
San Jose Taiko & Hanayui - Thursday, 10/10. A thundering evening of Japanese drumming.
Putting a contemporary spin on traditional drumming, three women of Hanayui from Japan's Kodo Village join three women from San Jose Taiko in a new work called "Himawari." The dynamic performance will explore the common roots of the artists' Japanese ancestry. 7:30pm, Castle Theater, MACC, 242-7469.
Journey - Saturday, 10/12. The mega-hit band has sold over 60 million records since it began in 1973 and has gone on to become one of the most popular bands worldwide. 7:30pm, A&B Amphitheater, MACC, 242-SHOW.
Maui Symphony Orchestra Opening Concert - Monday, 10/15. Oktoberfest! Featuring Vahn Armstrong, violin, and Jolyon Pegis, cello, performing Brahms' Double Concerto for Violin and Cello. Also this evening, Weber's overture to "Der Freischutz" and Schumann's Symphony #2. 7:30pm, Castle Theater, MACC, 242-7469.
John Prine - Friday, 10/18. Grammy Award-winning singer/songwriter. Presented by I Spy Entertainment. 7:30pm, Castle Theater, MACC, 242-SHOW.
2nd Annual Montessori Music Fest with Willie Nelson and Friends - Saturday, 10/19. Willie Nelson & his back-up band Planetary Bandits, also founding Doobie Brother Pat Simmons, Gail Swanson, Los Lonely Boys and Kalice. 6pm, A&B Amphitheater, MACC, 242-7469.
Unwritten Law - Friday, 10/25. Tickets go on sale Sat., Sept. 28. 7pm, World Cafe, Oahu, (808) 526-4400.
Goldfinger - Wednesday, 10/30. Opening act: HoinDaWall. Tickets go on sale Sat., Sept. 28. 7pm, World Cafe, Oahu, (808) 526-4400.
B.B. King - Saturday, 11/9. King returns to Maui to thrill music fans with one of the world's most identifiable blues guitar styles. 7pm, A&B Amphitheater, MACC, 242-7469.

DINNER MUSIC

West Maui
BJ's Chicago Pizzeria – John Kane, Wed., Thurs. and Fri.; Harry Troupe, Sat.; Kaleo Phillips, Sun.; Clay Mortensen, Mon. and Tues. All sets from 7:30-10pm. 700 Front St., Lahaina, 661-0700.
Canoes Restaurant – Marve Blue with tropical jazz, 5:30-8:30pm Wed. thru Sat.
1450 Front St., Lahaina, 661-0937.
Cheeseburger in Paradise – Brooks Maguire, Thurs., Sat., Sun. and Wed.; Harry Troupe, Fri.; Gail Swanson, Mon. and Tues. All sets from 4:30-7:30 and 8-11pm. 811 Front St., Lahaina, 661-4855.
David Paul's Lahaina Grill - Pianist David Swanson, 7:30-11:30pm. 127 Lahainaluna Road, Lahaina, 667-5117.
Fish & Game Brewing Co. & Rotisserie - Jazz, 7:30-10:30pm Sunday. 4405 Honoapiilani Highway, 669-3474.
Hula Grill - Ernest Pua'a and Brian Kaui Haia, Thurs.; Ernest Pua'a and Kawika Lum Ho, Fri.; Maurice Bega, Peter DeAquino and Garret Probst, Sat.; Kawika Lum Ho, Ryan Tanaka, Desmond Yap and Franki Ah-Puck, Sun.; Kawika Lum Ho, Albert Kaina and Don Kaulia, Mon.; Jarret Roback, Don Kaulia and Albert Kaina, Tues.; Ernest Pua'a, Brian Kaui Haia and Roy Kato, Wed. Live music is from 3-5 and 6:30-9pm. 2435 Kaanapali Parkway, Building P, Kaanapali, 667-6636.
Kahana Terrace Restaurant – Harry Troupe, Thurs. and Tues.; Randy Reno, Sat. Sets from 6-9pm. Sands of Kahana Resort, 669-5399.
Kimo's – Sam Ahia, 7-8:30pm Wed. thru Sun. 845 Front St., Lahaina, 661-4811.
Leilani's On The Beach – Classic rock with JD & Mario, 2:30-5:30pm Fri.; Hawaiian music with Kilohana, 3:30-6pm Sat. and Sun. 2435 Kaanapali Parkway, Building J, Kaanapali, 661-4495.
Maui Brews - Jonah Livin Band, 6-10pm Fri. 900 Front St., Lahaina, 667-7794.
Moose McGillycuddy's - Keala & Company, 7:30-10:30pm Fri. 844 Front St., Lahaina, 667-7758.

THE GRID

Thursday, 10/3 · Friday, 10/4 · Saturday, 10/5
DJ Shawn 'Til Dawn & Bada Bing Restaurant/Lounge - Swing Dance, No cover, 7-9pm; Kanekoa, $7, 10pm; DJ Kid Fury, $7, 10pm. 1945 S.
Kihei Rd, Kihei, 875-0188
Clay Mortensen & Guest, N/C; Tula and Bobby, Latin Jazz Music; Kilohana, Island Reggae Music
Bocalino, 1279 S. Kihei Road, Kihei - 874-9299
No cover, 10pm; No cover, 10pm; Contemporary & Hawaiian, 10pm
Bubba Gump Shrimp Co - Guys' Night Out, DJ Modika & the Coors Girls, No cover, 10pm
1188 Makawao Ave., Makawao - 572-0220
142 Hana Hwy, Paia - 579-9453
Compadres Bar & Grill, Lahaina Cannery Mall - 661-7189
41 E. Lipoa St., Kihei - 879-9001
Dr. Nat & Rio Ritmo, Samba & Salsa, 9:45pm
Jabba's Place, 1280 S. Kihei Rd., Kihei - 891-0989 - Mon - Lawai'a, No cover, 10-12:30pm; Wed - Dr. Nat & Pacificaribe, No cover, 8:30-11:30pm
Karaoke, No cover, 9pm; Salsa Night, $5, 10pm; Wed - Karaoke, No cover, 9pm
DJ Jammin J; Flava Zone
Mon - Uncle Willie K, $7, 10pm; Tu - Ultra Fabulous w/Chilltown Productions, 9pm; Wed - Aloha Nite, 9pm-1:30am
Ladies Night, Hip Hop DJs, 10pm
Cheryl Rae, No cover, 10:30pm
900 Front St., Lahaina - 667-7400
41 E. Lipoa St., Kihei - 879-2849
Wed - Wild Wahine Wednesday, Casanova's Famous Ladies' Night disco, new DJs Ged & Skip, $5 cover after 9:30pm
Lawai'a, No cover, 10-12:30pm
Hard Rock Cafe; Henry's Bar & Grill
Tu - DJ Jammin J, $5, 10pm; Wed - Route 66, No charge, 6pm
Mon - Dr. Nat & Guest, Latin Music; Tu - Jaime Lawrence & Jay Molina, Contemporary & Hawaiian Music; Wed - Jay Molina & Gilbert Emata, Jam Night. ALL SHOWS START AT 10PM WITH NO COVER CHARGE!
Afrodisiacs, '70s rock, 9:45pm
Kanekoa, No cover, 9pm-12am
Charley's Restaurant; Hapa's Nightclub
Blackout Dance, $5, 10pm
Monday, 10/7 – Thursday, 10/10
D.U.H., No cover, 10pm
658 Front Street, Lahaina - 661-8141
Casanova
Sunday, 10/6
Gail Swanson, No cover, 5-7pm
Swani Bebops, No cover, 9pm-12am; The Pups, 9pm-12am
House of Babylon Drag Show, $5, 10pm; Dancing w/ DJ Fat Jo, No cover, 10pm

Pioneer Inn – Angie Carr, Thurs.; Greg diPiazza, Fri.; Ed Truthan, Sat.; Ricardo Dioso, Mon.; Rene Alonzo, Wed. All 6-9pm. 658 Wharf St., Lahaina, 661-3636.
Reilley's Steak House - Dinner jazz with Eve Moffatt, 6-9pm Mon. and Tues. 2290 Kaanapali Parkway, Kaanapali, 667-7477.
Sea House Restaurant – Hawaiian music with Albert Kaina and Kincaid Basques, Thurs.; Napili Kai Foundation Show, 6pm Fri.; Kincaid Basques, Sat. thru Tues.; Albert Kaina, Wed. All 7:30-9:30pm unless otherwise noted. Napili Kai Beach Resort, 5900 Honoapiilani Road, Napili, 669-1500.
Whale's Tale Bar & Grill - Eric Pietsch, Thurs.; Ed Truthan, Fri.; Joe Benedett, Sat.; JD & Mario, Sun.; Patrick Major, Mon. & Tues.; Armadillo, Wed. All sets from 6-9pm. 672 Front St., Lahaina, 667-4044.

Upcountry Maui
Jacque's - Greg DiPiazza & Tato Duo, 7-10:30pm Mon. 120 Hana Highway, Paia, 579-8844.
Moana Cafe - Jazz w/ Eve Moffatt, 7pm Fri.; gypsy guitar w/ Bo Shores, 6pm Sun.; Hawaiian music, 6:30pm Wed. 71 Baldwin Ave., Paia, 579-9999.

RESORT ENTERTAINMENT

West Maui
Embassy Vacation Resort – Kaanapali Beach, 104 Kaanapali Shores, Lahaina, 661-2000
Ohana Bar & Grill: Ed Truthan w/ contemporary classics, Thurs.; Patrick Major, Fri.; Wayne & Friends, Sat.; Ed & Ron, Sun.; Ernest Pua'a w/ Hawaiian music, Mon.; Scott Baird & Friends w/ contemporary music, Tues.; Howard Ahia w/ Hawaiian music, Wed.; all 5:30-9:30pm. Torch lighting ceremony nightly.
Hyatt Regency Maui, 200 Nohea Kai Drive, Kaanapali, 661-1234
Torchlighting ceremony at 6:15 nightly, followed by live Hawaiian entertainment 6:30-9:30 nightly in the Weeping Banyan: Sam Fukuhara, Thurs., Sun.-Tue.; Larry Gollis, Fri.-Sat.; Stephanie Anderson, Wed. "Drums of the Pacific" luau by Tihati, 5:30-8 nightly.
Ka'anapali Beach Hotel, 2525 Kaanapali Parkway, 661-0011
Black Rock Illusions dinner show at 5:30pm Sun., Tue. & Thu. in the Kanahele Room; Ka'anapali Serenaders, 6-9:30pm Sat.; free hula show 6:30-7:30 nightly; Auntie Aloha's Breakfast Luau, 8:15am Mon.-Fri.; Paniolo Barbecue w/ live music & dancing, 6pm Mon.; Sunday champagne brunch w/ Hawaiian music by Polinahe, 9am-1pm.
Kapalua Bay Hotel, A Luxury Collection Resort, 1 Bay Drive, Kapalua, 669-5656
The Bay Club: solo pianist 6-9:30 nightly. Gardenia Court: Hawaiian guitar, 11am-1:30pm Sun. Lehua Lounge: Hawaiian guitarist 5:30-9:30 nightly.

South Maui
Bada Bing – Kenny Roberts, Thurs.; Kawika Maikai, Fri.; Pups Unplugged, Sun.; Mondo, Wed. All play from 5:30-7:30pm. 1945 S. Kihei Rd., 875-0188.
Capische? – Live piano music every night with Sal Godinez or Patricia Watson. Call ahead for details. Diamond Resort, 555 Kaukahi, 879-2224.
Maalaea Grill – Benoit Jazz Works, 6:30-9pm Thurs., Fri. & Sun.; Miguel Maldonado Quartet, Sat. Maalaea Village Shops, 243-2206.
Marco's Southside Grill – Mark Johnston, solo piano, Wed. thru Sun.; Brian Cuomo, solo piano, Mon. & Tues. Sets from 7-10pm. 1445 S. Kihei Rd., 874-4041.
Tommy Bahama's Tropical Café – Latin guitar w/ Luis Diaz, Wed.-Fri.; guitar & vocals w/ Brado, Sat.; steel drums & sax w/ Brian Wittman, Sun., Mon. & Tue. All from 6-10pm. The Shops at Wailea, 875-9983.
Wailea Steak & Seafood - Live music 9-11pm Thurs. thru Sat. 100 Wailea Ike Drive, 879-2875.

Central Maui
Manana Garage – Neto & Friends, 6:30pm Thurs.-Sat., Tues.; Fortunato's Magic, 7pm Fri.; Bobby & Tula, 6:30pm Wed. 33 Lono Ave., Kahului, 873-0220.

Mad Tonic, No cover, 9pm-12am
Sinful Saturday: Dancing w/ DJ Fat Jo, No cover, 10pm
Shirtless Tea Dancing & Video DJ, No cover, 2pm
M - Movie & Martini Night, No cover, 9pm; Tu - Circuit Party w/ DJ Fat Jo, No cover, 9pm; W - Karaoke, No cover, 9pm

Napili Kai Beach Resort, 5900 Honoapiilani Highway, Napili, 669-1500
Polynesian Dinner Show performed by children of the Napili Kai Foundation, 6pm Fri.
Ritz-Carlton Kapalua, One Ritz-Carlton Drive, Kapalua, 669-6200
Lobby Lounge: Reiko, solo guitarist & vocalist, beginning at 5:30pm nightly. Banyan Tree Restaurant: world fusion duo Ranga Pae, 6:30-9:45pm Wed.-Sun.
Royal Lahaina Resort, 2780 Kekaa Drive, Kaanapali, 661-3611
"Eddie and Eddie" w/ Eddie Lilikoi & Eddie Sebala, 5-9:30 nightly in the Royal Ocean Terrace. Royal Lahaina Luau featuring authentic Hawaiian & Polynesian songs and dances at 5 nightly.
Sheraton Maui Hotel, 2605 Kaanapali Parkway, 661-0031
Lagoon Bar entertainment w/ hula dancers, 6-8 nightly: Bobby & Ralph, Thu., Mon. & Tue.; Ralph & Allan, Fri.; Fausto & Kawaika, Sat. & Sun.; Nathan & Ralph, Wed. Torchlighting and cliff diving ceremony at sunset.
The Westin Maui Hotel, 2365 Kaanapali Parkway, 667-2525
Tropica: Bobby Ingram Trio, Sun., Wed. & Sat.; JD Band, Tue.; Keoki Kahumoku, Mon.; all 7-9pm. Fortunato's magic, 6:30-8:30pm Tue., Thu. & Sat.

South Maui
Four Seasons Resort Wailea, 3900 Wailea Alanui, Wailea, 874-8000
Lobby Lounge: Hawaiian music 5:30-7:30pm & hula 5:30-6:30pm Tue., Thu. & Sat.; Tiffany Lee & Josh, 8:30-11:30pm Mon., Tue., Thu. & Sat.; Ricardo Dioso & Margie Heart, 8:30-11:30pm Wed. and Fri.
Grand Wailea Resort Hotel & Spa, 3850 Wailea Alanui, Wailea, 875-1234
Botero Bar entertainment, 5:30-9:30 nightly: Larry Golis, Thu.; Brian Mansano, Fri.; Ricardo, Sat.; Luis Diaz, Sun.-Tue.; Mitch Kepa, Wed. Strolling Hawaiian duo nightly in the Humuhumunukunukuapua'a.
The Fairmont Kea Lani Maui, 4100 Wailea Alanui, Wailea, 875-4100
Jazz entertainment from 6-9 nightly in the Lobby Bar.
Outrigger Wailea Resort, 3700 Wailea Alanui, Wailea, 879-1922
Hawaiian entertainment w/ hula 6-9 nightly in Kumu Bar & Grill.
Hawaiian entertainment 9-11 nightly in the Mele Mele Lounge featuring Mitch Kepa & Raymond "Mundo" Medeiros. Paradyse & Ka Poe O Hawaii perform at the Luau, Mon., Tue., Thu., Fri.
Renaissance Wailea Beach Resort, 3550 Wailea Alanui, Wailea, 879-4900
Sunset Terrace: Jamie Lawrence, Tue.-Sat.; solo guitarist Sun. & Mon., 6-9pm. Wailea Sunset Luau, 6-8:30pm Tue., Thu. and Sat.
Maui Prince Hotel, 5400 Makena Alanui, 874-1111
Molokini Lounge: Ron Kuala'au, Hawaiian & contemporary guitar & vocals, 6-10:30pm Sun., 6-8:30pm Tue., Thu. and Sat. Mele 'Ohana duo, 6-8pm Mon., Wed. and Fri., 8:30-10:30pm Mon.-Sat. and 9am-1pm Mon., Wed. and Fri.

East Maui
Hotel Hana-Maui, Hana, 248-8211
Hawaiian music in Paniolo Lounge, 6:30-9:30pm Thu.-Sun.; hula show, 7:30-8:15pm every Thu. and Sun. in the Main Dining Room.

CLASSES & WORKSHOPS

University of Hawaii Center Maui is pleased to be offering the Post-Baccalaureate Certificate in Secondary Education (PBSCE) and the Master of Education in Special Education (MEd SPED) programs. For more information, call 984-3525.
GED Foundations offered at Hui Malama Learning Center – Registration is open for students aged 14 and up and includes an assessment test and educational counseling. Call 244-5911 to register and to schedule an assessment test.

Thursday, October 3
Lahaina Arts Society's Outreach Arts Program - 3-5pm, West Maui Boys and Girls Club, Lahaina. Also, Upcountry Boys and Girls Club, Makawao. Free public classes. All materials provided. Ages 5-18. For more info, call 874-3104.
Beginning Swing Dance instruction - 7-9pm, at Bada Bing in Kihei. Free to the public. Call 875-0188.

Saturday, October 5
Inside/Outside: Creating Mass Volume through Double-Wall Construction - 9am-4pm, at Hui Noeau Visual Arts Center, Makawao. Hands-on workshop to create double-walled forms (which create a sense of mass and volume) and capturing of space. All skill levels welcome. Call Hui Noeau at 572-6560.

Monday, October 7
Maui Filipino Chamber of Commerce's Sponsored Workshop - 5:30pm, at the Cameron Center Auditorium in Wailuku. Focusing on several financing options for small businesses, understanding the basics of your credit score, and business plans for start-ups and new businesses. Call Marieta Carino at 871-8359 for more information.
Lahaina Arts Society's Outreach Arts Program - 3-5pm, Central Maui Boys and Girls Club, Wailuku. Free public classes. All materials provided. Ages 5-18. For more info, call 874-3104.

Tuesday, October 8
Workshop on "Quality Child Care Matters" - 6:30-8pm, at Kaiser Permanente Wailuku Clinic (2nd floor conference room). No charge. Childcare available. Call to register: 242-1608.
Landscaping with Maui's Native Plants: Helping Hawai'i look like Hawai'i - 7pm, at Hawaiian Islands Humpback Whale Nat'l Marine Sanctuary, Kihei. Free lecture. For more information, call Rhonda at 879-2818.

Wednesday, October 9
"Doing Your Business Better" - 8:15am-5pm, at Sandalwood Golf Course. A marketing conference for agricultural/small business. Call 871-7711 for info.
Lahaina Arts Society's Outreach Arts Program - 3-5pm, King Kamehameha III School, Lahaina. Free public classes. All materials provided. Ages 5-18. For more info, call 874-3104.

ANNOUNCEMENTS

Earthdance Hawaii - Global Festival for Peace - Oct. 12. Musicians, performers & volunteers needed! Calling out to art teachers for submissions for our keiki art exhibition (Rachel 268-8651). Inviting sponsors, donors, and advertisers in offers of community support. Call 876-1411.
"Hi! It's Me, Your Dog!" Photo Contest - Oct. 1-31, presented by Borders Books & Music. Enter your funniest, cutest or most adorable photo of your dog. For submission info, please call 1-800-497-4909.
80th Maui County Fair - The fair is October 3-6, at War Memorial Gym Complex. Gate admission: adults $3.50, keiki $1.50. For more info, call 242-2721.
Featured events: “In Grandpa’s Garden” - A horticulture exhibition contest with team-based demonstration on produce or vegetation, as well as soil production practices. Call 242-2721. Tuesday, October 8
Wanna Dance? In the Heart of Olde Makawao Town — Wild Wahine Wednesday: Casanova’s Famous Ladies Night Disco, “Best Late Night In Maui” • $5 cover after 9:30pm • New DJs Ged & Skip! Thu. Oct. 3rd & Fri. Oct. 4th: live music & dancing, no cover; Sat. Oct. 5th: ’70s rock disco. Make it a Memorable Evening • Dine & Dance at Casanova. For reservations and information, call 572-0220 • Log on! casanovamaui.com
the grid Friday, 10/4 Thursday, 10/3 Saturday, 10/5 Sunday, 10/6 Monday, 10/7 – Thursday, 10/10 Eclipse, No cover, $7, 10pm Kenny Roberts, No cover, 5-7pm El Nino, No cover, 7pm Gina Martinelli Band, No cover, 6pm Hawaiian by Nature, $3, 10pm 98.3 Power Jam w/AZD, $7, 10pm Diamonds Are A Girl’s Best Friend, $5, 10pm Karaoke, No cover, 10pm Kimo’s Kilohana, No cover, 10pm-Midnight VooDooSuns, No cover, 10pm-Midnight Crazy Fingers, No cover, 10pm-Midnight La jazzerie Kelly Covington Jazz Vocals w/ Danny Paquette, 7:30-10:30pm Eve Moffatt Jazz&Blues, 11pm-2am Brian Cuomo Duo, 7-11pm kahale’s beach club 36 Keala Place, Kihei - 875-7711 Kahului ale house 355 E. Kamehameha, Kahului - 877-9001 Life’s A Beach Longhi’s M - Karaoke, No cover, 10pm; W - Karaoke, No cover, 10pm Th - Kilohana, No cover, 10pm-12am Karin Holloway Jazz Vocals, 9:30pm-1:30am Spinin’ jazz featuring famous Jazz Artists from The Archives Planet Seed Karaoke - Oliver & Co. Sun Kings The Edge, 8pm 1913 S.
Kihei Rd., Kihei - 891–8010 Tu-W - Da Hawaiian & Chico, No cover M-Tu - Jazz Jam, 9pm-2am, W -Chandler & Stover Duo, Karin Holloway Jazz Vocals For all shows, No Cover for La Creperie diners, otherwise $5 in advance, $8 at the door M - Open Mic; Tu - Super G; W - Pups Unplugged, 8pm Crazy Fingers, $5, 9:30pm 888 Front St., Lahaina - 667-2288 Maui Brews 900 Front St., Lahaina - 667-7794 D.U.H., No cover, 10pm Kavika Regidor CD Release Party, 9pm DJ Mackie Mac, No cover, 9pm Louisiana Allstars, No cover, 7:30pm Moose McGillycuddy’s “On Grandma’s Table” - Enter your healthiest cooking recipe in this competition in the Horticulture Exhibit in front of the War Memorial Gym. Registration is on Sat., 10/5 from 1-2pm. Horticulture Contest - Entries will be accepted from 7am-9:30pm on Thursday, 10/3 at War Memorial Gym. For entry forms, call 244-3242. Protea Beauty Pageant - Enter your prettiest, spikiest flowers on Thursday, 10/3 from 8-10am at War Memorial Gym. Call Ann Prouty at 878-2917. Photo Salon Contest - For more info on this contest, please contact Hideo Takeuchi at 871-4239. Turkey SPAM Recipe Contest - 7pm, 10/4, in the War Memorial Gym. Cash prizes will be awarded and the winner also gets the chance to compete in the national competition. For more information, call Pasita Pladera at 871-4115 or Esther Yap at 243-9768. Ultimate Pizza Contest for Kids - 9pm, 10/5, in the War Memorial Gym. Entries will be separated into Junior and Intermediate divisions depending on age, and cash prizes will be presented to division winners. Call Elizabeth Yee at 878-8501 for more info. 
Saturday Night Beach Party w/ DJ Boomshot, No cover, 9pm; Sunday Night Dance Party w/ DJ Boomshot, No cover, 10pm; M - Reggae Monday w/Marty Dread, $5; Tu - 18 & Over Night w/DJ Boomshot, $5 before 11pm, $10 after 11pm; W - DJ Jammin J, $5. DJ Steve/Live Music, No cover, 7:30pm. M - Open Mic Night, No cover, 9:30pm; Tu - DJ Mackie Mac, $5, 9pm; W - DJ Mackie Mac, No cover, 9pm. DJ Curty, No cover, 9pm.
ANNOUNCEMENTS (ON-GOING) Kihei Youth Center Needs Adult Volunteers - For health & fitness programs as well as cultural activities. Call Amber at 879-8698 for more info. Haiku Community Association Seeks Volunteers - Contact Tim Wolfe at 575-7474. Hale Kau Kau Volunteers Needed - 3:30-6:30pm, located at St. Theresa’s Catholic Church, Kihei. Volunteers are needed for meal prep, serve, & cleanup. Donations of food and/or funds accepted. Contact Marie Osaki @ 875-8754. Maui Friends of the Library Used Bookstore - 8am-4pm, Mon.-Sat., Central Maui. Accepts donations of books & can always use energetic volunteer help. Call 871-6563. Maui Artists Program - Many of Maui’s finest resident artists display & discuss their original works at Four Seasons Resort Maui on Wed., Fri. & Sun., 8am-1:30pm. For more info, call 874-8000. East Maui Animal Refuge - 9am Thu. at refuge, 25 Malu Aina Place, Haiku, for volunteer orientation meetings & tours. Call Sylvan @ 572-8308. 9th Life Cat Sanctuary – 1pm Thu., Haiku. Volunteer orientation meeting. Call Lela: 573-7877.
EVENTS Tadashi Sato Retrospective - An Art Exhibit - Now-Oct. 13, in the Schaefer Int’l Gallery of the Maui Arts & Cultural Center. One of Hawaii’s most respected artists, Sato is known for his spare, restrained abstract compositions and imagery. Call MACC for more info. Moloka`i’s Aloha Festival - Now-Oct. 6, Molokai. The rustic beauty of Moloka’i island is enhanced with the addition of its Aloha Festivals events. Kaunakakai Town comes to life with a parade, ho’olaule’a and Mule Run and daily Hawaiian entertainment. Call 272-0026.
Pacific Whale Foundation’s Free Coral Reef Information Stn. – 8am-1pm, at Ulua Beach in Wailea on Mon., Tue. & Thu. and at Kahekili Park (Airport Beach) in Kaanapali on Fri., Sat. & Sun. Call 249-8811.
Thursday, October 3 County Fair Parade - 4:45pm. Running from the Maui Community College campus to the fairgrounds at the War Memorial Complex. Call Lei Kihm or Mike Victorino at 242-2721 or 281-9053. Weekly Gathering of Maui’s Top Artists - 8:30am, on the grounds of Keolahou Hawaiian Church, 177 South Kihei Road in North Kihei. Call Michael Stark at 879-9337. Maui Live Poets Society - 6:30-9pm, at Wailuku Public Library. Open poetry readings. For more info, call Melinda Gohn at 661-0517. Maui Executive Association - 7:15-8:30am. Business to business lead generating organization. Breakfast meetings. Call Joni Brotherton at 244-1464.
Friday, October 4 “Show-and-Tell” at Gallerie Ha - 7pm, 51 Market St. in Wailuku. Portrait Artists - bring your works (not ones that will be exhibited at the Portrait Challenge). Gallerie Ha hosts free events the First Friday of each month. For information, call Pat Masumoto at 244-3993. The Dances of Universal Peace - 7:15-9:15pm, Kihei Community Center meeting room. Simple circle dances from different religions to emphasize peace in ourselves and the world. Call Kachina at 874-7412.
Hard Rock Cafe — hrc MAUI, 900 Front Street, Lahaina. Info: 808-667-7400. ROCTOBER 5: Cheryl Rae, no cover; ROCTOBER 12: VooDoo Suns, no cover; ROCTOBER 19: Best Damn Harmonica Player Contest hosted by Whiteboy Johnny, no cover — applications available at the cafe. hardrock.com
Mulligan’s on the Blue — Maui’s only Irish pub. TUESDAY: Open Mic w/ White Boy Johnny, 50% off drinks for all musicians; WEDNESDAY: karaoke with Toby; THURSDAY: “Give New Bands A Chance Night” w/ Howard Ahia & Friends — Show your support! No cover. Murray Thorne. SATURDAY OCT.
5th: CRYIN OUT LOUD, No cover before 10pm or with dinner. SUNDAY: Celtic Tigers, 7pm-10pm, followed by DJ Sundance Kid. No cover any night except Saturday • open early for football • Happy hour every day 5-7pm • 2 for 1 all pupus • $1 off all drinks • get ready for halloween • 100 Kaukahi St., Wailea. 1st left after Kea Lani Hotel • 874-1131
THURSDAY 10/3 Live Music with D.U.H., no cover, $2 draft special. FRIDAY 10/4 Jonah Livin, live from 6 to 9pm.
Friday Night is Art Night in Lahaina - 7-10pm. Stroll through dozens of art galleries in Lahaina Town for special gallery shows, featured artists-in-action, and refreshments, all free & open to the public! Call Theo Morrison at 667-9194. Aloha Friday Craft Fair - 9am-2pm, Outrigger Wailea Resort. Maui artisans display and sell their handcrafted island products, part of award-winning Ho’olokahi cultural program. For info, call 879-1922. No Ka Oi Toastmasters Club - 12-1pm, at old MEO location on Kane Street. Communication training. Open to public. Call 877-3875.
Saturday, October 5 Seaside Stories - 11am, at Borders Books & Music. Our theme is “Hats Off to Hawaii’s Authors” and our program will feature books written by authors in Hawaii and a craft project. For ages 4-10. Each participant will receive a free “Junior Naturalist Bag” of fact sheets and other information relating to the program theme. For information, contact Pacific Whale Foundation at 249-8811. Hana High Students Poetry Reading - 12-1pm, Borders Books, Kahului. Join the students of Hana High as they read stories, sing songs, chant and perform poetry from their self-published book, “The Hala Grove of Wakiu”. For more info, call Heidi at 281-4936. Kealia Pond Restoration - Maui Marine C.O.R.E.
(Conserving Ocean Resources through Education) features monthly service projects and recreational outings designed to inspire youth about the natural environment. Membership is open (and free) for all youth in grades 8-12. All Marine C.O.R.E. members participating in service projects (on the first Saturday of each month) will be eligible to attend a free monthly recreational outing (on the third Saturday of each month). This month’s service project: helping to restore Kealia Pond. The recreational outing is on October 19: UFO Parasailing! For information: Call Pacific Whale Foundation at 249-8811. Public Meeting re: Waiehu Development Ideas - 10am, at the Paukukalo Community Center. North Shore at Waiehu LLC will present its concepts for use of 64 acres between the Iao & Waiehu streams. The development group will also accept comments from the public. Call Giovanni Rosati at 242-7710. Lahainaluna Booster Club’s Kalua Pig Sale - 8-11am. Pickup for the tickets will be at Lahainaluna
Maui Brews: SATURDAY 10/5 Beach Party w/ DJ BOOMSHOT, No cover, $3 shot specials; SUNDAY 10/6 Dance Party with DJ BOOMSHOT, No cover, $1 draft specials; MONDAY 10/7 Best place in town for Monday Night Football, live via satellite 3pm, REGGAE MONDAYS w/ Marty Dread, $5 cover, $2.50 domestic; TUESDAY 10/8 18 & Over with DJ BOOMSHOT; ROCK-n-ROLL 80’s, Sunday 9pm, w/ Curty “I Am the DJ” Morton; 25 TV’s gives everyone the best seat in the house! WEDNESDAY 10/9 DOLLAR NIGHT, DJ Jammin J, $5 cover. 900 Front Street, Lahaina, 667-7794.
Friday October 4th: Wendell’s Hawaiian Music & Dance, 2-5pm. Saturday October 5th: D.U.H., 10pm, $5 cover before 11pm. WEDNESDAY Live Music! $3 Margarita’s, $2 Bud and Bud Light Drafts. Matty says “Come watch Yankee Baseball!” Lahaina’s Best Happy Hour Everyday 3-6pm. PH# 667-7758.
Sunday, October 6 Maui Humane Society’s Dog Day Afternoon - For more information, call 877-3680. Swap Meet at the Kihei Open Market - 9am-4pm, on Piilani Highway past Tesoro, off Ohukai Street, Kihei. Call 283-0461 or 870-4011.
Gift & Craft Fair - 9am-4pm, at the Lahaina Civic Center. $1 Admission. Benefitting the Feline Foundation. Call 879-7594 for information.
Monday, October 7 Open Mic Night - 6:45pm, at Hale Imua Internet Cafe, Wailuku. Express yourself with music, poetry, dance, etc. Hosted by David Gilbertson. Call 242-1896. Maui Symphony Chorus Rehearsal - 7-9pm, in Kahului. Rehearsals will be for its December concerts. Preregistration is required so that music can be ordered. Call Canty at 874-3836. Country Western Line Dancing – 7pm for lessons, 8pm for open dancing, at the Lahaina Civic Center. Call Maui Paniolo Posse at 669-4946. High Hopes Square Dance – Monday Nights at 7pm, at the Pukalani Community Ctr. Call 878-1295. Maui Camera Club - 6pm, at the Hale Mahaolu Elima Community Meeting Room, Kahului. Call Carolyn Pavloff at 242-1033 for more info. Needlework-in-Progress - 6-8pm. Bring any piece of needlework (quilting, needlepointing, x-stitch) for help, encouragement, or technique instruction. Contact Dolphine or Ruth Ann at 662-8554.
High School or Wells Park in Wailuku. Cost per container is $5. Call 662-4007. Maui’s Swap Meet - 7am-1pm, Puunene Avenue in Kahului. Shop for unique, home-made, hand crafted, quality products at reasonable prices. Admission is 50 cents and free after 12:30pm. For info, call 877-3100. SAFEGUARD OUR OCEANS!
Wednesday, October 9 WOW! Wailea on Wednesdays - 6:30-9:30pm, The Shops at Wailea. Live entertainment, restaurant specials, art and fashion. Call 891-6770. Pili Aloha Club - 9-9:30am, Kihei Community Center. Breakfast. For seniors 55 & older. New members welcome! Call Louis Gerdts @ 875-7854. Animal Control Board Meeting - 9am, at Hale Mahaolu Elua Board Room, Kahului. Call 270-7855. Be witched... be awesome!
political events Thursday, October 3 Fundraiser for James “Kimo” Falconer - 5-8pm, Lahaina Civic Center. A Birthday Celebration for West Maui County Council Candidate James “Kimo”
Love Shack, Kihei Kalama Village, 875-0303 — Open Daily at 10am. Exquisite Lingerie, Passionate Play Things, Wild & Wonderful Videos & DVDs, Kama Sutra And Ooh... So Much More! Great Halloween costumes. ™ (808) 661-3111 • On the Water at 889 Front St., Lahaina, Maui.
the grid Mulligan’s On the Blue Friday, 10/4 Thursday, 10/3 Live Jazz, No cover, 9pm-Midnight Live Jazz, No cover, 9pm-Midnight Redline Entertainment presents Hip Hop DJs Hawaiian Fridays: Local Music DJ Jammin J Karaoke, 10pm-1am Karaoke, 10pm-1am Karaoke, 10pm-1am Karaoke, 10pm-1am Karaoke, 10pm-1am Harris Moku & Company, No cover, 9pm El Nino, No cover, 9pm Kenny Roberts and Friends, No cover, 9pm Tony Ray Band, No cover, 9pm Louise Lambert Band, No cover, 9pm DJ Dancing, $10, 9:30pm-2am DJ Dancing, $10, 9:30pm-2am Pacific’O Ramon’s 2102 Vineyard, Wailuku - 244-7243 sansei 115 Bay Dr., Kapalua - 669-6286 sansei Kihei Town Center - 879-0004 Sports Page Grill & Bar 2411 S. Kihei Rd., Kihei - 879-0602 Saturday, 10/5 Murray Thorne, No cover, 8pm Cryin Out Loud, No cover before 10pm, show starts at 10pm Howard Ahia Band, No cover, 9pm Stopwatch Sports Bar 1127 Makawao Ave., Makawao - 572-1380 Tsunami Nightclub 3850 Wailea Alanui Dr., Wailea - 875-1234
Falconer! Food, refreshments, entertainment. Call Teresa at 667-6602. Talk Story With Kika Bukoski - 6-8pm, Old Bullock’s Restaurant. Enjoy casual discussion with Upcountry’s State Representative. Call Mickey Vierra at 357-0628. Saturday, October 5 Meet Your Candidates on AKAKU - 12-4pm, on AKAKU Television. Call 871-5554. Monday, October 7 Registration Deadline for General Election. Monday Morning with the Mayor - 7:05am, on KAOI 1110AM. Mayor Apana talks about the issues and takes the public’s calls live on the air. Tuesday, October 8 Republican Rally in Wailuku - 5-7pm, at Wailuku Community Center. Call 244-6042 for info. An Evening With Representative Kika Bukoski - 6-8pm, Pukalani Elementary School Cafeteria.
Music By Gomega, Special Guest Frank Delima, Ono Pork Roast, Door Prizes. Call Mickey Vierra at 357-0628. Wednesday, October 9 Republican Rally in Lahaina - 5-7pm, at Lahaina Civic Center. Call Sherri Dodson at 879-3758. Meet the Candidates Forum to Present Contenders for Council & Mayoral Races - 5pm, at the Wailuku Community Center. The program will be filmed and televised by AKAKU until election day. Call Toni Woolley or Alexa Betts Basinger at 242-9231 or 244-9149 for more information.
SUPPORT GROUPS
Sunday, 10/6 Monday, 10/7 – Thursday, 10/10 Celtic Tigers, No cover, 7pm M - Industry Night, No cover, 7pm; T - Open Mic w/Whiteboy Johnnie, No cover, 9pm; DJ Sundance Kid, No cover, 10pm W - Karaoke with Toby, No cover, 9pm Karaoke, No cover, 9:30pm M - DJ CandyMan; T - Oldies Night; W - Karaoke, No cover, 9:30pm W - Joe Benedett, No cover, 9pm
Monday, October 7 BODY MIND SPIRIT Al-Anon - 12pm, Lahaina Baptist Church. 12-by-12 Study Group. Call Kate @ 661-3906. Hepsters Hepatitis-C Support Group – 6:30pm, at the Hannibal Tavares Community Center multipurpose room. Call Lora: 573-6366, or Mark: 283-7427. Oral HIV Testing - 10am-1pm at Keolahou Church in Kihei. Call Takako @ Dept of Health @ 984-2129. P.A.R.E.N.T.S. - 6pm, old Kihei library. Videos & discussion on parenting techniques. Free. Child care also provided for free. Call Trudy @ 879-3595. Women Helping Women - 6-8pm in Kihei. For women whose lives have been affected by domestic violence. For more info, call 579-9581. Free Mammograms and Pap Screens - at Community Clinic of Maui. For women age 40-49 years old who qualify. Call Kathi Jones at 872-4025. Free Oral HIV Testing – By the Dept. of Health, available by appointment only. Call 984-2129. Breast & Cervical Cancer Screening - Service limited to pap smears and breast exams only. Planned Parenthood of Hawaii. Call Ann Robles at 871-1176. Tuesday, October 8 H.E.A.L. (Help Ease a Loss) - 6-7:30pm @ Hospice Maui. Call 244-5555 for more info.
Al-Anon – 12pm, at St. Theresa’s Church St. Francis room. Call Ethel at 879-6597. Children’s Support Group - 5:30-7pm in Kahului. For those who have been exposed to domestic violence. Call 877-6888. P.A.R.E.N.T.S. - 6-8pm, Montessori School, Makawao. Techniques. Free. Call Trudy @ 879-3595. Women’s Support Group for Victims of Domestic Violence - 5:30-7:30pm, Kahului. Presented by Child & Family Service. Call 877-6888.
Thursday, October 3 Happy Day Al-Anon Family Group - 9am, at Iao Congregational Church, Wailuku. Call 242-0296. Overeaters Anonymous - 8:30-9:30am, Kamaole Beach Park III, at picnic tables. Call 244-7572. Women’s Anger Management Groups - 9-11am, Kahului. Call 877-6888.
Friday, October 4 Oral HIV Testing - at Pukalani Community Center. Nicotine Anonymous - 6-7pm, at Hoololi Room of the old MEO building on Kane St. Call Earl @ 879-5796. A Ho`omalu Ala Al-Anon – 12pm, at Lahaina Baptist Church. Group meeting. Call Kate @ 661-3906. Women’s Al-Anon - 12-1pm, at St. Theresa’s Church in Kihei. Call Fumi, 879-1432 or Pat, 875-1153.
“everybody’s place” - 1280 S. Kihei Road (Next to Kihei Ace Hardware): THURSDAY 10/3 drag show, queens of babylon; FRIDAY 10/4 4:30pm “tootie’s” pau hana, flashback fridays till 2am, KONI 104.7FM live broadcast; Saturday 10/5 Sinful Saturday Dancing, DJ Fat Jo; SUNDAY 10/6 shirtless tea dance, Club Video Dance Night sunset to close, Happy Hour All Day; Monday 10/7 $1 movie, $2 well Martinis; dom. drafts & $2 wells 9-midnite; Wednesday 10/9 karaoke 9-midnite, 10:30PM-1:30AM. Cheryl Rae CD RELEASE SHOW — HARD ROCK CAFE, serving breakfast, lunch & dinner daily. SAT. OCT. 5TH
& martini nite; TUESDAY 10/8 circuit party, DJ Fat Jo.
Saturday, October 5 Al-Anon, Adult Children of Alcoholics - 9:30-11am, at Good Shepherd Episcopal Church (room off lanai next to church hall) in Wailuku. Call 242-0296. Overeaters Anonymous - 8-9am, at Kamaole Beach Park III picnic tables in Kihei. Call 244-7572.
Monday, October 7 Meditation Group for Reiki Practitioners - 11am-12:30pm, 6-7:30pm, 2161 Vineyard Street, Wailuku. Contact Rev. Mary Sukup at 276-6261. Pilates Combo - 6:30-7:30pm, at Kaiser Permanente Wailuku Clinic. Strengthen abdominals, back, neck & shoulders. Cool down with yoga stretching moves. Bring an exercise mat. Register: Joanne Tanaka at 243-6480. Kaiserobics - 5:30-6:30pm, every Thu. thru Nov. 21, at Kaiser Permanente Wailuku Clinic. Low impact aerobics class. Register: Joanne Tanaka at 243-6480.
Thursday, October 3 United Self-Help Mental Health Support Group - 10am, at the Cameron Ctr. Call 879-7696. Hana Women’s Support Group - 5:30-7pm. Presented by Child & Family Service. Call 877-6888. Wailuku Noon Al-Anon Family Group - Noon at Hina Mauka, Wailuku. Call 242-0296. Cancer Talk Story - 6:30pm at Cameron Center, Hui No Ke Ola Pono. Call 243-2967. 9-11am; at Paia Hawaiian Protestant Church 1-3:30pm. Call Takako at 984-2129.
In Azeka Plaza I, 891-0989. Her new CD titled “Midnight Rain” available at Tropical Disc, Borders Books & Music. Partial proceeds will be donated to “Feed the Children.”
Night Club: Thursday Ladies Night, Hip-hop DJs; Friday, OCTOBER 4: DJ JAMMIN J; Saturday: flava zone hawaii, drink specials all night. Monday: From Self-Sabotage to Creativity -- A Healing Journey - 6-8pm. Explore your experiences of self-sabotage with the intent of discovering and transforming underlying limiting beliefs & self-sabotaging mechanisms. Contact Debra Greene, Ph.D. at 874-6441.
Tuesday, October 8 Vipassana Meditation Classes - 6-7pm, in Kihei. Beginners or experienced students welcome. Call 573-3450 for more information. Children Immunization Clinic - 9-11am, at Lahaina Comprehensive Health Ctr. For children without medical insurance up to age 18. Bring immunization records. Walk-in basis. Free. Call 984-8260. Oral HIV Testing - 8-11:30am and 1-3:30pm, at the Wailuku Health Center. Results returned in 2 weeks. For more info, call Takako at 984-2129. ‘Ohana Connection talk - 8:30am, at the MOA-True Health Center in Kahului. An ongoing speaker’s breakfast to promote awareness for a healthier life in the Maui Community. Call Chalie at 986-0209. HIV Counseling & Testing Clinic - 8:00-11:30 a.m. and 1:00-3:30 p.m., Wailuku Health Center. Sponsored by State Dept. of Health. Call 984-2129. A Group of Our Own--Women’s Group - 6-8pm, South Maui. For women who are committed to high level self-exploration & accelerated personal and spiritual growth. Contact Debra Greene, Ph.D. at 874-6441.
Wednesday, October 9 Children Immunization Clinic - 12-3pm, at Wailuku Health Center. For children without medical insurance up to age 18. Bring immunization records. Walk-in basis. Free. Call 984-8260. HIV Testing/Counseling Clinic – 9am-1:30pm, at Lahaina Comprehensive Health Center. Call 984-2129.
SPORTS Red Bull King of the Air Kiteboard Championships - 10am-4pm, thru 10/6, at Hookipa Beach Park. Contact Josh Kendrick at (310) 460-5254.
Friday, October 4 MIL Volleyball Game - Hana vs. Baldwin High - in Hana. Girls: 5:30pm; Boys: 7pm. Call David McHugh at 565-7910 ext. 269. MIL Volleyball Game - Lanai vs. Seabury - on Lanai. Girls: 6:30pm; Boys: 8pm. Call David McHugh at 565-7910 ext. 269.
Willie K Owns Mondays • Tuesday: ultra fab. tues. Alternative Night w/ Chilltown Productions • aloha wednesdays: All Drinks $2 until midnight • 41 E. Lipoa St., Kihei • 879-9001
Tadashi Sato - “Captain’s Chair”
MIL Volleyball Game - Molokai vs.
Maui High - on Molokai. Girls: 6:30pm; Boys: 8pm. Call David McHugh at 565-7910. Saturday, October 5 MIL Volleyball Game - Lahainaluna vs. Hana at Lahainaluna High School. Boys: 5:30pm; Girls: 7pm. Call David McHugh at 565-7910 ext. 269. MIL Girls Volleyball Game - Kaahumanu vs. King Kekaulike - 5:30pm, in Paia. Call David McHugh at 565-7910 ext. 269. Valley Isle Co-Ed Soccer League Games - at Kalama Park in Kihei. 5:30 p.m. - Team A v. C; 7:30 p.m. - Team E vs. D. Call David Jorgensen at 242-4555. Hawaiian Islands Natural Bodybuilding & Fitness Championships - in the McCoy Studio Theater of the Maui Arts & Cultural Center. Noon 2pm, Pre-judging; 6pm, Contest. Natural bodybuilding competition with Ms. Fitness, Ms. Figure and Model Quest classes. Various age groups and classes for men and women, novice and open. Call 242-SHOW. MIL Volleyball Game - Molokai vs. Seabury on Molokai. Boys: 6:30 p.m.; Girls: 8 p.m. Call David McHugh at 565-7910 ext. 269. MIL Volleyball Game - Lanai vs. Maui High - on Lanai. Boys: 6:30 p.m.; Girls: 8 p.m. Call David McHugh at 565-7910 ext. 269. Sunday, October 6 Molokai Hoe Canoe Race - Call Hannie Anderson at 259-7112 for more information. 29th Annual Lester Hamai Memorial Golf Tournament - 7am, at the Maui Country Club with a 7am shotgun start. Wednesday, October 9 MIL Volleyball Game - King Kekaulike vs. Lahainaluna - at King Kekaulike School. Girls: 5:30pm; Boys: 7pm. Call David McHugh at 565-7910 ext. 269. Send your listings for the Da Kine Calendar to 808-661-0446 calendar@mauitime.com MAUItimePersonals visit us online at 1-800-710-8735 WOMEN Seeking Men HAPPY TOGETHER Beach bunny, 38, 5’6”, 125lbs, blonde/brown, windsurfing wild woman, fit and full of life, seeks compatible adventurous man, 5’6”+, to share all 713776 that life has to offer! HUSBAND HUNTING SWF, 5’5”, brunette, seeks, hardworking, handsome, kind and faithful marriage partner, 30-50. Must be dedicated Jehovah’s Witness. 
713648 NEEDS A BUDDY Active, fun SWF, 22, enjoys scuba diving, adventure, swimming. Seeking SM, 21-29, to hang out with, hike with, drink with, develop a friendship with, and possibly more. 584061 MUST BE WILLING TO LIE... about how we met. High energy, fun, bright SWF, mid-20s, seeks man in mid-20s, well rounded, easygoing, with great sense of humor and who isn’t afraid to get a little crazy! 384879 CHEMISTRY 101... DWF, 5’7”, 120lbs, no kids. Seeks fit and trim, 40-ish-mid60s, monogamous man, open to LTR. you get a compassionate, spiritual, adventurous, passionate, ocean loving gal. 711167 CLASSY ROMANTIC Vivacious, kind, adventurous, spiritual, spontaneous, attractive, 55, 5’7”, slim, brunette, seeks everlasting friendship and love with quality gentleman, with integrity, who loves life. 708749 EXOTIC BEAUTY Honest SF, 43, giving, romantic, spiritual, caring, fun, likes lahaina, nature, fishing, camping, travel, fine dining, dancing, good conversation and culture. Seeking SM, 39-59, with similar qualities. 611337 CASUAL ONLY SWF, 39, seeks single man, 35-50, selfemployed, independent, interested in camping, hiking, dining out, casual relationship. 576418 ☎ ☎ ☎ ☎ ☎ ☎ ☎ ☎ 1-900-226-0169 Call costs $1.99/min. Must be 18+. 1-800-721-0152 SEEKS LOCAL BOY If you’re a energetic, nice local boy. You could have found the opportunity of a lifetime. Female, 33, seeks man, 30-42, N/S, for adventure and the best time ever! 569687 IT’S ALL BEEN SAID SWF, 54, enjoys snorkeling, movies, travel, music, theater, music. Seeking SM, with similar interests, for friendship, possible LTR. 517533 FRIENDSHIP IS FINE SWF, 43, 5’8”, 135lbs, brown/brown, N/S, Scorpio, seeks nice, friendly, outgoing man, 3751, N/S, in shape, for friendship first. Enjoys beaches in the afternoon, working out, reading, people. Possible relationship. 494218 EXPLORING LIFE TOGETHER Petite, physically fit female, 46, N/S, loves music, dancing, horseback riding. 
Opened 7 years ago. Closed 3 years ago.

#4429 closed feature request (fixed)

Ability to specify the namespace in mkName

Description:

Given

    data Foo
    data Bar = Foo

if we do reify (mkName "Foo") then we get the information about "Foo the type", not about "Foo the constructor". (This is problematic, say, for a quasiquoter [qq| ... Foo ... |], because the quasiquoter is forced to use mkName "Foo" as the Name for reify -- the forms 'Foo and ''Foo are unavailable to it.)

I would like a way around this problem. It seems like it would be enough to communicate the namespace to mkName, so that the ambiguity no longer exists.

comment:1 Changed 7 years ago by

comment:2 Changed 7 years ago by

Yes, that would solve my problem.

comment:3 Changed 7 years ago by

comment:4 Changed 7 years ago by

comment:5 Changed 7 years ago by

Here's an alternative signature, for which the above could be wrappers:

    lookupName :: TH.NameSpace -> String -> Q Name

And if the String looks like "M.x", it should be treated as a qualified name, just as in source code. Finally, the environment in which the name is looked up is the environment at the splice point: the lookup reads the environment captured in the monad. So for example:

    module M where

    muggle :: Int
    muggle = 3        -- This binding is ignored

    foo :: Q Exp
    foo = do { n <- lookupName VarName "muggle"
             ; return (AppE (VarE 'negate) (VarE n)) }

    bar :: Q Exp
    bar = [| \muggle -> muggle + $foo |]

    -----------

    module N where
    import M

    muggle :: Int
    muggle = 5

    test1 = $foo      -- Expands to (negate muggle)
    test2 = $bar      -- Expands to (\muggle' -> muggle' + muggle)

The splice $foo will run the code for foo, which consults N's environment (not M's!) to get the Name for N.muggle. The net result is very similar to using mkName "muggle", except that it still works if there is an intervening binding that accidentally has the same name, as in test2. Subtle stuff.
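Going back to the example in the Description, the ambiguity can be replayed in a single file. This is my own sketch (the module layout and the helper whatIsFoo are not from the ticket); the `$(return [])` separator closes the declaration group so that the later splice can reify the data declarations above it.

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Main where

import Language.Haskell.TH

data Foo            -- "Foo the type"
data Bar = Foo      -- "Foo the constructor"

$(return [])        -- end the declaration group, so reify below can see Foo

-- mkName "Foo" carries no namespace information, and reify resolves
-- it to the type constructor, not the data constructor.
whatIsFoo :: String
whatIsFoo = $(do info <- reify (mkName "Foo")
                 case info of
                   TyConI {}   -> [| "type" |]
                   DataConI {} -> [| "data constructor" |]
                   _           -> [| "something else" |])

main :: IO ()
main = putStrLn whatIsFoo   -- prints "type", per the behaviour the Description reports
```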
comment:6 Changed 7 years ago by

Would you perhaps consider adding some way to recover from failed lookups? For instance, by returning a Maybe:

    lookupName :: TH.NameSpace -> String -> Q (Maybe Name)

or alternatively somehow via qRecover?

The reason I ask is that it should then be possible for ordinary Template Haskell users to implement the "totally fresh" semantics I was looking for in #5375 using lookupName, as follows:

- generate a long string at random;
- look it up with lookupName. If the lookup fails, then we have a fresh name; otherwise loop.

comment:7 Changed 7 years ago by

Yes, of course. I was thinking that the lookup would fail, and you could catch the exception with qRecover. But perhaps a Maybe is better, because it signals more explicitly that the lookup might fail.

comment:8 Changed 7 years ago by

I'm about to commit a patch implementing this change. In the end I did not give a NameSpace argument, because NameSpace is currently an opaque type. Really what we want is just to say "type namespace" or "value namespace", so I ended up with two functions:

    lookupTypeName  :: String -> Q (Maybe Name)
    lookupValueName :: String -> Q (Maybe Name)

Both end up mapping to the same method of the Quasi class:

    class Quasi m where
      ...
      lookupName :: Bool -> String -> m (Maybe Name)
      ...

The Bool is True for the type namespace, and False for values. Not beautiful, but most users will use the lookupTypeName and lookupValueName interfaces.

Any objections, yell now!

Simon

comment:9 Changed 7 years ago by

Fine by me. Thanks for implementing this.
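Here is a minimal sketch of how the committed interface looks from user code (the module and the little boolE helper are my own, not from the patch): each function searches exactly one namespace of the splice-point environment and returns Nothing instead of failing.

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Main where

import Data.Maybe (isJust)
import Language.Haskell.TH

data Foo            -- a type named Foo; no value-level Foo anywhere

answer :: Int
answer = 42

$(return [])        -- close the declaration group before looking names up

-- lookupTypeName/lookupValueName each consult a single namespace,
-- signalling a failed lookup with Nothing rather than an exception.
hits :: (Bool, Bool, Bool)
hits = $(do mt <- lookupTypeName  "Foo"      -- Just ...: Foo names a type
            mv <- lookupValueName "Foo"      -- Nothing: no such value
            ma <- lookupValueName "answer"   -- Just ...: ordinary binding
            let boolE b = if b then [| True |] else [| False |]
            [| ( $(boolE (isJust mt))
               , $(boolE (isJust mv))
               , $(boolE (isJust ma)) ) |])

main :: IO ()
main = print hits   -- (True,False,True)
```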
Reiner

comment:10 Changed 7 years ago by

    commit 10c882760aea96a679a98bf76a603c1eeb99ecb8
    Author: Simon Peyton Jones <simonpj@microsoft.com>
    Date:   Tue Aug 23 13:46:43 2011 +0100

        Implement lookupTypeName/lookupValueName, and reification of
        type family instances

        This patch (and its TH counterpart) implements
            Trac #4429 (lookupTypeName, lookupValueName)
            Trac #5406 (reification of type/data family instances)
        See detailed discussion in those tickets.

        TH.ClassInstance is no more; instead reifyInstances returns a
        [Dec], which requires fewer data types and naturally
        accommodates family instances.

        'reify' on a type/data family now returns 'FamilyI', a new
        data constructor in 'Info'

     compiler/typecheck/TcHsType.lhs |   2 +-
     compiler/typecheck/TcSplice.lhs | 163 ++++++++++++++++++++++++++++----------
     2 files changed, 121 insertions(+), 44 deletions(-)

comment:11 Changed 7 years ago by

Done. Reiner: could you supply a regression test, please? Thanks,

Simon

comment:12 Changed 7 years ago by

The patches for the template-haskell library appear to be missing. Should I do more than just sync-all pull?

comment:13 Changed 7 years ago by

Sorry, my fault. Now pushed.

comment:14 follow-up: 16 Changed 7 years ago by

I've attached a patch with regression tests for this and #5406. I'd appreciate it if you had a look at the shadowing tests in TH_lookupName.hs; I'm not sure if the current behaviour is correct. My specific question is what this should do:

    {-# LANGUAGE TemplateHaskell #-}
    import Language.Haskell.TH

    f = "global"

    main = print
      [ $( [| let f = "local" in $(do { Just n <- lookupValueName "f"; varE n }) |] )
      , $( [| let f = "local" in $(varE 'f) |] )
      , let f = "local" in $(do { Just n <- lookupValueName "f"; varE n })
      , let f = "local" in $(varE 'f)
      ]

This currently prints ["global","local","local","local"]. Should the first two really give different results?
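Comment 14's four-way test can be reduced to its first and third elements in a runnable file. The split into viaQuote and viaSplice is mine, not from the ticket, but the reported outputs are the ones the test above prints:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Main where

import Language.Haskell.TH

f :: String
f = "global"

-- The inner lookup runs while the *outer* splice expands, in an
-- environment where only the global f exists: the quoted let binding
-- has not been installed yet.
viaQuote :: String
viaQuote = $( [| let f = "local"
                 in $(do { Just n <- lookupValueName "f"; varE n }) |] )

-- Here the splice sits under the let itself, so the lookup sees the
-- local, shadowing f.
viaSplice :: String
viaSplice = let f = "local"
            in $(do { Just n <- lookupValueName "f"; varE n })

main :: IO ()
main = print (viaQuote, viaSplice)   -- ("global","local"), matching the reported output
```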
Changed 7 years ago by

comment:15 Changed 7 years ago by

comment:16 Changed 7 years ago by

    This currently prints ["global","local","local","local"]. Should the first two really give different results?

Well, yes, that's the current deal. All reify operations consult the environment at the point of the enclosing top-level splice. For a more extreme example, consider:

    module M where
      funny :: Q Exp
      funny = do { Just n <- lookupValueName "f"; varE n }

      f :: Int
      f = 3

    module Top where
      import M
      me = $(funny)

      f :: Bool
      f = True

Here the lookupValueName consults the environment at the top-level splice, which in this case is in module Top, not in M. So the expanded code will bind to Top.f, not to M.f. Doing anything else would be hard, and this is consistent with what happens for all other reification.

None of this is documented. If I could ask one last favour, would you feel able to expand (or re-structure) the user manual section about Template Haskell? This could range from a no-op through to at least documenting the key operations (like reify). A complementary (and perhaps better) alternative would be to look at the ridiculously scanty Haddock documentation in Language.Haskell.TH: simply documenting the types and operations properly would be a huge step forward. Thanks for considering this. Needless to say, I'd be more than happy to answer queries that arise when doing so.

Simon

comment:17 Changed 7 years ago by

PS I pushed the testsuite patch, thank you!

comment:18 Changed 7 years ago by

Alright, I'm working on documentation. My first effort is on improving the Haddock documentation for Language.Haskell.TH. I'll keep you posted.

comment:19 Changed 7 years ago by

comment:20 Changed 7 years ago by

comment:21 Changed 7 years ago by

I'm wondering about the Maybe Dec field in the VarI constructor of Info. Is it ever a Just? My understanding is that reify should return a Just when the RHS is available, but I can't get this to happen.
For example, this code:

    f = 0

    $( do { inf <- reify (mkName "f"); runIO (print inf); [d| |] })

prints

    VarI ReifyVar.f (VarT a_1946157057) Nothing (Fixity 9 InfixL)

comment:22 Changed 7 years ago by

Here's the complete code for the above example:

    {-# LANGUAGE TemplateHaskell #-}
    module ReifyVar where
    import Language.Haskell.TH

    f = 0

    $( do { inf <- reify (mkName "f"); runIO (print inf); [d| |] })

comment:23 Changed 7 years ago by

I've got a similar question, this time about TyVarI. Consider this example:

    {-# LANGUAGE TemplateHaskell, ScopedTypeVariables, TypeFamilies #-}
    module ReifyTyVar where
    import Language.Haskell.TH

    f :: forall a. a -> a
    f x = $( do { inf <- reify (mkName "a"); runIO (print inf); [| x |] })

    g :: forall b. (b ~ Int) => b -> b
    g x = $( do { inf <- reify (mkName "b"); runIO (print inf); [| x |] })

The following is printed at compile time:

    TyVarI b_1627390992 (VarT b_1627391179)
    TyVarI a_1627390993 (VarT a_1627393676)

In both of these cases, the Type field of TyVarI is just a VarT of the Name field. Are there any examples where this is not the case? I thought that g might be such an example, because the type coercion b ~ Int is available, but apparently not.

comment:24 Changed 7 years ago by

It looks as if I never implemented the Just dec part of VarI! It's not straightforward:

- We don't have source-code definitions for imported Ids. They've all been converted to Core, and even the Core may not be available if the defn is big.
- In principle we do have source code for locally-defined Ids, but at the moment we don't carry around a mapping from Ids to their definitions.

So currently you always get Nothing. I don't want to change that until it becomes a pressing need for someone, but you are dead right that it should be documented. Just say "always Nothing" for now!

For TyVarI, the situation is this: there is a lexically-scoped, source-code type variable name that maps to an internal type variable, of the sort that appears in types.
In principle, you could imagine a system in which a lexically-scoped type variable maps to a type, not a type variable:

    f :: Int -> Int
    f (x::a) = 3::a

Here 'a' maps to 'Int'. Now in fact GHC's design insists that source-language type variables map to internal variables, but I didn't want to bake that in too much. And I'm not certain that I guarantee they map to distinct type variables. This is all a bit confusing. I think a better design would indeed identify these internal and external type variables -- the distinction is confusing. But it's another swamp I don't want to enter just yet.

Does that help? Thank you for doing the documentation!

comment:25 Changed 7 years ago by

Yep, that's what I wanted to know. Thanks for clarifying.

Changed 7 years ago by

New haddock docs

Changed 7 years ago by

comment:26 Changed 7 years ago by

I just attached the progress I've made so far (as a patch, and also as prebuilt html files), and I would appreciate some feedback. I reordered a lot of the exports in Language.Haskell.TH and Language.Haskell.TH.Syntax to create an order which made more sense to me, and to break things up into sections. My patch actually includes a few small changes to the API as well:

- I added functions reportError = report True and reportWarning = report False, which I think present a better API than report :: Bool -> String -> Q ()
- I added some type synonyms, ParentName, Arity, Unlifted, for the Info type
- I exported unboxedTupleTypeName and unboxedTupleDataName from Language.Haskell.TH, since it seems to have simply been an accident that they were omitted.

There were a few other changes I refrained from making, but have described at #5469.

Reiner

comment:27 Changed 7 years ago by

Great progress, thank you. I'm happy. Do you need any specific feedback?
Simon

comment:28 Changed 7 years ago by

I guess the main thing I'd like confirmation on is whether it's okay for me to make small API changes as I've been doing so far, or should I leave these out and just make documentation changes for now?

comment:29 Changed 7 years ago by

By all means propose API changes. The API benefits from the attention you are giving it.

comment:30 Changed 6 years ago by

comment:31 Changed 6 years ago by

comment:32 Changed 6 years ago by

comment:33 Changed 5 years ago by

comment:34 Changed 4 years ago by

Moving to 7.10.1.

comment:35 Changed 3 years ago by

It looks like work for this ticket was finished a long time ago. Actual changes are in comment:10; documentation and test are in the following 2 commits:

    Commit d27c4541937219b60551a75662df805f0e6e54a1:
    Author: Simon Peyton Jones <>
    Date: Mon Jul 16 17:42:49 2012 +0100

        Add documentation for Template Haskell functions
        Thanks to Reiner Pope for doing this

    Commit dee226ccbaa091c9c8214f8764a6ebeaa0634587:
    Author: Reiner Pope <>
    Date: Wed Aug 24 09:41:09 2011 +1000

        Test #4429, #5406

I think mkName is the wrong thing for you here. Fundamentally, you want to get the TH.Name of the data type called "Foo" that is currently in scope, yes? You could give that Name to reify, or you could use it in a type. Suppose we had functions that were like mkName except that (a) they are monadic, and (b) they expect the string to be in scope. They would be the precise monadic equivalents of 'Foo and ''Foo. Would that do the job?

Anyone else have comments?

Simon
https://ghc.haskell.org/trac/ghc/ticket/4429
I am trying to take a list of lists, and I would like to split apart the datetime string that is in the first element of each inner list, as shown below:

    list = [[u'2014-09-02T23:00:00', 1, 1, u'msdn.microsoft.com', u'Engineering & Technology', 1],
            [u'2014-09-02T23:00:00', 1, 1, u'qr.ae', u'Uncategorized', 0],
            [u'2014-09-02T23:00:00', 1, 1, u'accounts.google.com', u'General Communication & Scheduling', 0]]

I saw the previous Stack question about converting Unicode to dates (link) and have tried the following code:

    date_unicode = str(list[0].split('T'))
    date = datetime.strptime(date_unicode, '%Y-%m-%dT%H:%S')

only to receive an error saying the module object has no attribute 'strptime'. If someone would be able to help me split the unicode dates into actual date objects for each element of the list, I would really appreciate it.

Answer:

There are three problems with your code:

- The date string is the first element of each inner list, so you want list[0][0], not list[0].
- The strptime() format accepts the entire ISO 8601 date, so you don't need .split().
- strptime() is a method of datetime.datetime, not of the datetime module.

Putting that together:

    date_unicode = str(list[0][0])
    date = datetime.datetime.strptime(date_unicode, '%Y-%m-%dT%H:%M:%S')
    print date

Finally, you have several elements in the list, so you'll need a loop:

    newlist = [datetime.datetime.strptime(item[0], '%Y-%m-%dT%H:%M:%S') for item in list]
    print newlist

Another answer:

    from datetime import datetime
    print [datetime.strptime(x[0], "%Y-%m-%dT%H:%M:%S") for x in l]

which outputs:

    [datetime.datetime(2014, 9, 2, 23, 0), datetime.datetime(2014, 9, 2, 23, 0), datetime.datetime(2014, 9, 2, 23, 0)]

You need to use from datetime import datetime, or else refer to datetime.datetime.strptime.
http://m.dlxedu.com/m/askdetail/3/e881586fa10214d91d509feacfbf2cbf.html
pthread_attr_setname_np - Change the object name attribute in a thread attributes object

SYNOPSIS

    #include <pthread.h>

    int pthread_attr_setname_np(
        pthread_attr_t *attr,
        const char *name,
        void *mbz );

LIBRARY

DECthreads POSIX 1003.1c Library (libpthread.so)

PARAMETERS

attr
    Address of the thread attributes object whose object name attribute is to be changed.

name
    Object name value to copy into the thread attributes object's object name attribute.

mbz
    Reserved for future use. The value must be zero (0).

DESCRIPTION

This routine changes the object name attribute in the thread attributes object specified by attr to the value specified by name. This routine contrasts with pthread_setname_np, which changes the object name in the thread object for an existing thread.

RETURN VALUES

If an error condition occurs, this routine returns an integer value indicating the type of error. Possible return values are as follows:

0
    Successful completion.

[EINVAL]
    The value specified by attr is invalid, or the length in characters of name exceeds 31.

[ENOMEM]
    Insufficient memory exists to create a copy of the object name string.

ERRORS

None

SEE ALSO

Functions: pthread_attr_getname_np(3), pthread_getname_np(3), pthread_setname_np(3)

Manuals: Guide to DECthreads and Programmer's Guide

pthread_attr_setname_np(3)
https://nixdoc.net/man-pages/Tru64/man3/pthread_attr_setname_np.3.html
General API quickstart

    plt.style.use('seaborn-darkgrid')
    print('Running on PyMC3 v{}'.format(pm.__version__))

    Running on PyMC3 v3.6

In [3]:

    with pm.Model() as model:
        mu = pm.Normal('mu', mu=0, sigma=1)
        obs = pm.Normal('obs', mu=mu, sigma=1, observed=np.random.randn(100))

In [4]: model.basic_RVs
Out[4]: [mu, obs]

In [5]: model.free_RVs
Out[5]: [mu]

In [6]: model.observed_RVs
Out[6]: [obs]

In [7]: model.logp({'mu': 0})
Out[7]: array(-141.37324441)

Timing the logp evaluation:

    37.5 ms ± 356 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
    12.3 µs ± 173 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

In [10]: dir(pm.distributions.mixture)

Unobserved Random Variables

Every unobserved RV has the following calling signature: name (str), then parameter keyword arguments. Thus, a normal prior can be defined in a model context like this:

In [11]:

    with pm.Model():
        x = pm.Normal('x', mu=0, sigma=1)

In [16]:

    with pm.Model() as model:
        x = pm.Uniform('x', lower=0, upper=1)

When we look at the RVs of the model, we would expect to find x there; however:

In [17]: model.free_RVs

Sampling this model produces:

    (4 chains in 4 jobs)
    NUTS: [x]
    The acceptance probability does not match the target. It is 0.8884785458718986, but should be close to 0.8. Try to increase the number of tuning steps.
    There were 2 divergences after tuning.

In [23]:

    with pm.Model() as model:
        x = [pm.Normal('x_{}'.format(i), mu=0, sigma=1) for i in range(10)]  # bad

However, even though this works, it is quite slow and not recommended. Instead, use the shape kwarg:

In [24]:

    with pm.Model() as model:
        x = pm.Normal('x', mu=0, sigma=1, shape=10)  # good

x is now a random vector of length 10. We can index into it or do linear algebra operations on it.

In [26]:

    with pm.Model():
        x = pm.Normal('x', mu=0, sigma=1, shape=5)
        x.tag.test_value

Out[26]: array([0., 0., 0., 0., 0.])

In [27]:

    with pm.Model():
        x = pm.Normal('x', mu=0, sigma=1, shape=5, testval=np.random.randn(5))
        x.tag.test_value

Out[27]: array([-1.31813596, -0.44557099, 0.04482665, -1.8167009 , 0.94796326])
In [28]:

    with pm.Model() as model:
        mu = pm.Normal('mu', mu=0, sigma=1)
        obs = pm.Normal('obs', mu=mu, sigma=1, observed=np.random.randn(100))
        trace = pm.sample(1000, tune=500)

    Auto-assigning NUTS sampler...
    Initializing NUTS using jitter+adapt_diag...
    Multiprocess sampling (4 chains in 4 jobs)
    NUTS: [mu]
    Sampling 4 chains: 100%|██████████| 6000/6000 [00:00<00:00, 7221.92draws/s]

In [32]:

    with pm.Model() as model:
        mu = pm.Normal('mu', mu=0, sigma=1)
        obs = pm.Normal('obs', mu=mu, sigma=1, observed=np.random.randn(100))
        trace = pm.sample(cores=4)

    Auto-assigning NUTS sampler...
    Initializing NUTS using jitter+adapt_diag...
    Multiprocess sampling (4 chains in 4 jobs)
    NUTS: [mu]
    Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 7935.96draws/s]
    The acceptance probability does not match the target. It is 0.8978219346041421, but should be close to 0.8. Try to increase the number of tuning steps.

In [35]:

    100%|██████████| 6000/6000 [00:00<00:00, 14925.91draws/s]
    The number of effective samples is smaller than 25% for some parameters.

You can also assign variables to different step methods.

In [36]:

    with pm.Model() as model:
        mu = pm.Normal('mu', mu=0, sigma=1)
        sd = pm.HalfNormal('sd', sigma=1)
        obs = pm.Normal('obs', mu=mu, sigma=sd, observed=np.random.randn(100))
        step1 = pm.Metropolis(vars=[mu])
        step2 = pm.Slice(vars=[sd])
        trace = pm.sample(10000, step=[step1, step2], cores=4)

    Multiprocess sampling (4 chains in 4 jobs)
    CompoundStep
    >Metropolis: [mu]
    >Slice: [sd]
    Sampling 4 chains: 100%|██████████| 42000/42000 [00:04<00:00, 8735.76draws/s]
    The number of effective samples is smaller than 25% for some parameters.

3.2 Analyze sampling results

The most commonly used plot to analyze sampling results is the so-called trace plot:

In [37]: pm.traceplot(trace);

Another common metric to look at is R-hat, also known as the Gelman-Rubin statistic:

In [38]: pm.gelman_rubin(trace)
Out[38]: {'mu': 1.0006472993307545, 'sd': 0.9999963951762092}

These are also part of the forest plot:

In [39]: pm.forestplot(trace);

In [41]:

    with pm.Model() as model:
        x = pm.Normal('x', mu=0, sigma=1, shape=100)
        trace = pm.sample(cores=4)
        pm.energyplot(trace);

    Auto-assigning NUTS sampler...
    Initializing NUTS using jitter+adapt_diag...
    Multiprocess sampling (4 chains in 4 jobs)
    NUTS: [x]
    Sampling 4 chains: 100%|██████████| 4000/4000 [00:01<00:00, 2280.44draws/s]
In [42]:

    with pm.Model() as model:
        mu = pm.Normal('mu', mu=0, sigma=1)
        sd = pm.HalfNormal('sd', sigma=1)
        obs = pm.Normal('obs', mu=mu, sigma=sd, observed=np.random.randn(100))
        approx = pm.fit()

    Average Loss = 138.84: 100%|██████████| 10000/10000 [00:01<00:00, 6462.56it/s]
    Finished [100%]: Average Loss = 138.83

A later fit run emits a Theano deprecation warning before finishing:

    0%| | 0/10000 [00:00<?, ?it/s]
    /home/canyon/miniconda3/envs/pymc/lib/python3.7/site-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
      result[diagonal_slice] = x

    Average Loss = 0.0068883: 100%|██████████| 10000/10000 [00:03<00:00, 2795.26it/s]
    Finished [100%]: Average Loss = 0.011343

In [46]:

    plt.figure()
    trace = approx.sample(10000)
    sns.kdeplot(trace['x'][:, 0], trace['x'][:, 1])

Stein Variational Gradient Descent (SVGD) uses particles to estimate the posterior:

In [47]:

    w = pm.floatX([.2, .8])
    mu = pm.floatX([-.3, .5])
    sd = pm.floatX([.1, .1])
    with pm.Model() as model:
        pm.NormalMixture('x', w=w, mu=mu, sigma=sd)
        approx = pm.fit(method=pm.SVGD(n_particles=200, jitter=1.))

    100%|██████████| 10000/10000 [00:41<00:00, 242.58it/s]

In [65]:

    with pm.Model() as model:
        mu = pm.Normal('mu', mu=0, sigma=1)
        sd = pm.HalfNormal('sd', sigma=1)
        obs = pm.Normal('obs', mu=mu, sigma=sd, observed=data)
        trace = pm.sample()

    Auto-assigning NUTS sampler...
    Initializing NUTS using jitter+adapt_diag...
    Multiprocess sampling (4 chains in 4 jobs)
    NUTS: [sd, mu]
    Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6391.82draws/s]

In [66]:

    with model:
        post_pred = pm.sample_posterior_predictive(trace, samples=500)

    100%|██████████| 500/500 [00:00<00:00, 4278.10it/s]

sample_posterior_predictive() returns a dict with a key for every observed node:

In [67]: post_pred['obs'].shape
Out[67]: (500, 100)

In [69]:

    fig, ax = plt.subplots()
    sns.distplot(post_pred['obs'].mean(axis=1), label='Posterior predictive means', ax=ax)
    ax.axvline(data.mean(), ls='--', color='r', label='True mean')
    ax.legend();

In [70]:

    import theano

    x = np.random.randn(100)
    y = x > 0
    x_shared = theano.shared(x)
    y_shared = theano.shared(y)

    with pm.Model() as model:
        coeff = pm.Normal('x', mu=0, sigma=1)
        logistic = pm.math.sigmoid(coeff * x_shared)
        pm.Bernoulli('obs', p=logistic, observed=y_shared)
        trace = pm.sample()

    Auto-assigning NUTS sampler...
    Initializing NUTS using jitter+adapt_diag...
    Multiprocess sampling (4 chains in 4 jobs)
    NUTS: [x]
    Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6878.69draws/s]
    The acceptance probability does not match the target. It is 0.8860942205889539, but should be close to 0.8. Try to increase the number of tuning steps.

Now assume we want to predict on unseen data. For this we have to change the values of x_shared and y_shared. Theoretically we don't need to set y_shared, as we want to predict it, but it has to match the shape of x_shared.

In [71]:

    x_shared.set_value([-1, 0, 1.])
    y_shared.set_value([0, 0, 0])  # dummy values

    with model:
        post_pred = pm.sample_posterior_predictive(trace, samples=500)

    100%|██████████| 500/500 [00:03<00:00, 157.63it/s]

In [72]: post_pred['obs'].mean(axis=0)
Out[72]: array([0.014, 0.514, 0.988])
https://docs.pymc.io/notebooks/api_quickstart
import "gopkg.in/bblfsh/sdk.v3/driver/native"

Variables

    var (
        // ErrNotRunning is returned when calling Parse on a driver that is not running.
        ErrNotRunning = serrors.NewKind("native driver is not running")
        // ErrDriverCrashed is returned when the driver crashes after a parsing attempt.
        ErrDriverCrashed = serrors.NewKind("native driver crashed")
    )

    var (
        // Binary is the default location of the native driver binary. Do not
        // override this variable unless you know what you are doing.
        Binary = "/opt/driver/bin/native"
    )

Main is a main function for running a native Go driver as an Exec-based module that uses the internal JSON protocol.

Driver is a wrapper around the native command. Operations on the driver are synchronous by design; this is controlled by a mutex, which means that only one parse request can be served at a time.

- Close stops the execution of the native driver.
- Parse sends a request to the native driver and returns its response.
- Start executes the given native driver and prepares it to parse code.

Encoding is the encoding used for the content string. Currently only UTF-8 and Base64 encodings are supported. You should use UTF-8 if you can, and Base64 as a fallback.

- Decode converts the specified Encoding into UTF-8.
- Encode converts a UTF-8 string into the specified Encoding.

Package native imports 17 packages (graph). Updated 2019-11-07.
https://godoc.org/gopkg.in/bblfsh/sdk.v3/driver/native
"Coding ... the boring bit between builds"

If you would like to receive an email when updates are made to this post, please register here

RSS

What is the difference between "<Content Include=...." and "<EmbeddedResource Include=...." in a .csproj file? How is each interpreted during the build?

It worked great, but now I've encountered a new problem: items with my new custom build action are not copied to the output folder, although I set them to 'Copy Always' in the copy-to-output-directory property. Does anyone know how to fix that?

@deepak: if they correspond to the targets Content and Embedded Resource in VS, then it goes like this:

- When choosing the Content target, the file itself is part of the output of the process (it's copied to the compiler destination).
- When choosing the embed option, the content of the file is included in the assembly as a binary resource and can be accessed with code similar to GetType().GetAssembly().GetManifestResourceStream(name), where name usually is the default namespace of the project + "." + the name of the file including its extension. The filename part is case sensitive!

This only works with built-in project types, such as C#. With a project type created, say, with MPF it doesn't work. Open this post and read what I think about that.
http://blogs.msdn.com/msbuild/archive/2005/10/06/477064.aspx
NT device drivers respond to a simple set of file-oriented commands: open, read, write, and close. The NT device driver model supports another command, however: the IOCTL (I/O Control) command. A driver can make available most any custom functionality via an IOCTL command. Many standard Windows NT device drivers provide IOCTL (I/O Control Code) command functionality in addition to the basic device read/write support. These IOCTL commands can sometimes be very useful to applications as well as to other drivers. For example, the NT floppy drive device driver supports an IOCTL command that reports whether or not a floppy is currently inserted in the drive. For file-oriented commands (e.g., open, read, write, and close), an NT application can use familiar functions such as ReadFile() and WriteFile(). Sending an IOCTL command, though, requires calling the somewhat less familiar Win32 function DeviceIoControl(). Device drivers can send IOCTL commands to other device drivers, just as applications do, though it is not as easy as calling DeviceIoControl(). Some programmers actually create filter drivers to obtain information that they could more easily obtain via an IOCTL command. That's a risky practice, since a bug in a filter driver usually is capable of causing many more problems than a bug in code that just sends an IOCTL command. This article demonstrates IOCTL commands from the perspective of both applications and drivers. I will demonstrate three aspects of IOCTL commands: - How to support IOCTL commands in your own device driver. - How to send an IOCTL command to another driver from your device driver. - How to send an IOCTL command to a device driver from an application. To demonstrate these concepts, I wrote a simple application and a simple Windows NT device driver that communicate via a custom-defined IOCTL. 
When the application sends an IOCTL command to the driver, the driver in turn sends a different IOCTL command to one of the standard built-in Windows NT drivers. This demonstrates how applications send IOCTL commands to drivers and how drivers send IOCTL commands to other drivers. It also provides a template for adding IOCTL commands to your own drivers. In practice, the application could just send an IOCTL directly to the second driver, but this example is contrived to demonstrate all three concepts. You will need the Windows NT DDK to build the examples in this article. Defining the Custom IOCTL Value When you decide that your device driver will support an operation via an IOCTL command, you must define a command code that callers can use. wdj.h (Listing 1) contains the definition of my custom IOCTL command code. Any other application or driver that wants to issue this IOCTL to my driver will #include this file. The actual IOCTL command value is a bitmask that contains several pieces of information. I used the CTL_CODE macro (defined in winioctl.h for applications and ntddk.h for drivers) to define the custom command IOCTL_WDJ_REQUEST for my sample driver. The first argument to CTL_CODE is a value that describes the device type. NT defines several standard device types (such as FILE_DEVICE_DISK for persistent storage devices), but my sample driver doesn't really fit any of the predefined device categories, so I defined a new device type value of FILE_DEVICE_WDJ. Note that since this is an "OEM" device (rather than a standard device type defined by the operating system), I must use a value in the range 0x8000 to 0xFFFF. The second argument of the CTL_CODE macro identifies the specific IOCTL command. When a driver supports more than one custom IOCTL command, this value must be different for each command. 
Since IOCTL_WDJ_REQUEST is a custom IOCTL command (rather than a standard IOCTL command, such as IOCTL_FORMAT_DISK_TRACKS, that is supported by a whole category of device drivers), I'm required to use a value in the range 0x800 to 0xFFF to define my IOCTL command. The third argument describes the type of data transfer; METHOD_BUFFERED is a typical choice for drivers that transfer a small amount of data. The final argument describes what kind of access the application must specify when opening a handle to this device. A value of FILE_ANY_ACCESS means the caller can open a handle to this device with any access rights and still be able to send the IOCTL_WDJ_REQUEST command via that file handle. The Sample Driver The sample driver code is in wdj.c (Listing 2). This driver has only two requirements: it must support the IOCTL_WDJ_REQUEST command and it must demonstrate how to send an IOCTL command from one driver to another driver. To demonstrate how one driver sends an IOCTL to another, I needed to find a standard NT device driver that accepts some useful IOCTL command. For my target driver I chose the standard Windows NT floppy driver, because nearly all computers have at least one floppy device and the consequences of accidently sending an errant command to the floppy driver are not usually catastrophic. You should be able to experiment with this sample driver on your own system. The standard initialization entry point for all Windows NT device drivers (except certain layered drivers such as SCSI miniport drivers) is DriverEntry(). In DriverEntry(), a device driver typically creates one or more device objects to represent physical or logical devices. My sample driver doesn't actually talk to any real devices, so I create a single logical device object ("\\device\wdjdrv") by calling IoCreateDevice(). Device objects typically have names of the form "\\device\Xxx". 
This doesn't really give my sample application something to talk to yet, though, because device object names are not directly accessible to user-mode applications. I need to create a symbolic link between my device object and a name that is visible to user-mode applications by calling IoCreateSymbolicLink(). Symbolic link names have the form "\\DosDevices\Xxx". When an application passes the symbolic link name to CreateFile() and then sends read, write, or IOCTL commands to that file handle, the requests are routed to my driver, because my driver created the target device object. Since a driver can create multiple device objects, drivers usually use the private device extension area of the device object to store any information that may need to be retrieved in order to carry out the read, write, or IOCTL command request. My driver also needs to send IOCTL commands to a device driver (the floppy driver), but NT device drivers don't have access to the Win32 API, so my driver can't call CreateFile() to obtain a handle. Moreover, the symbolic links that applications use to open devices represent a namespace that drivers don't have access to. Instead, my driver must use a DDK function to obtain a handle to one of the device objects that represent a physical floppy drive. The floppy driver creates device objects of the form "\\device\floppyX", where "X" is "0" for the first floppy device, etc. My driver retrieves a handle to the device object for the first floppy device by passing "\\device\floppy0" to IoGetDeviceObjectPointer(). IoGetDeviceObjectPointer() returns a pointer to both a file object and a device object for "\\device\floppy0"; since I only need the device object, I save it in my device extension area and dereference the file object. Finally, my driver fills out the dispatch routine table in the device object before exiting DriverEntry(). 
The I/O system uses the dispatch routine table to route I/O requests targeted for my device object to the appropriate routine within my driver. When an application attempts to open, close, read, write, or send IOCTL commands to the symbolic link for my device object, the I/O system packages that request into an IRP (I/O Request Packet) and sends the IRP to the appropriate routine listed in the dispatch table. Since my sample driver doesn't really support anything other than IOCTL commands, I filled out only the IRP_MJ_CREATE, IRP_MJ_CLOSE, and IRP_MJ_DEVICE_CONTROL entries. I also provided an "Unload" routine for my driver. This routine is not called in response to an I/O request, so it is not passed an IRP or a device object. Rather, it is called just before my driver is unloaded. In WdjDrvUnload(), I simply delete the symbolic link and the device object that I created when the driver was initialized. WdjDrvDispatch() handles more than one command (create, close, and IOCTL commands), so it examines the MajorFunction field in the current IRP stack location to identify the command. Since I specified METHOD_BUFFERED as the transfer type when I defined the IOCTL command, any data passed by the caller can be found in the AssociatedIrp->SystemBuffer field, and any data I transfer back to the caller will also be copied back into this buffer. Status information is communicated back to the I/O system (and ultimately back to the caller) by filling out the IoStatus.Status and IoStatus.Information fields in the IRP structure. You set the Status field to an NTSTATUS value that indicates whether or not the call was successful ( STATUS_SUCCESS), and if not, what kind of error occured. In my case, the Information field is filled out with the size of any data that I copied to the SystemBuffer. I'm not actually doing anything special during the create and close commands, but I provided code stubs in case any readers wanted to add their own caller-specific initialization code. 
In response to the IOCTL_WDJ_REQUEST command, I need to send an IOCTL command to the floppy driver. Recall that I already have a handle to the target device object tucked away in my device extension. First, I build an IRP to represent the IOCTL request for this device object by calling IoBuildDeviceIoControlRequest(). In this case, I'm calling the floppy driver's IOCTL_DISK_CHECK_VERIFY command, which tells me whether or not a floppy is present in this floppy drive. After building the IRP, I call IoCallDriver() to pass it to the target driver. Note that since some IOCTL commands are handled asynchronously, callers pass a kernel-mode event handle to IoBuildDeviceIoControlRequest(); the event is signaled when the request completes. If the Status field is set to STATUS_NO_MEDIA_IN_DEVICE on return, then I know that the floppy drive is empty. I copy a Boolean value to the user-mode application's buffer to let them know whether or not a floppy is present in this floppy drive. Finally, to complete the original IOCTL_WDJ_REQUEST command, I call IoCompleteRequest() and then return from WdjDrvDispatch(). You can programmatically install the sample driver by calling CreateService(). For more information, see "Dynamically Loading Drivers in Windows NT" in the May 1995 issue of WDJ. The Sample Application Now that I've demonstrated how drivers send IOCTL commands to other drivers, I'll show how applications send IOCTL commands to drivers. My sample application is app.c (Listing 3). The user-mode application must first open a handle to the appropriate device object by passing the symbolic link name to CreateFile(). Recall that the symbolic link name created in the sample driver is "\\DosDevices\wdjdrv". When passing this symbolic link name to CreateFile(), you should use the form "\\.\wdjdrv". It's important to specify the OPEN_EXISTING flag so that the CreateFile() call will fail appropriately if the driver is not loaded. 
Once you have a handle open to the appropriate device object, you can send IOCTL commands to it by calling DeviceIoControl(). DeviceIoControl() is also used by applications running on Windows 95 to send commands to VxDs (note that the format for specifying the device name in the call to CreateFile() is different for VxDs). In addition to the file handle and the IOCTL command value, callers can pass an input buffer and an output buffer to DeviceIoControl(). For my custom IOCTL_WDJ_REQUEST command, no input buffer is required, but I do need to specify an output buffer that is at least large enough to hold the ULONG return value. DeviceIoControl() supports both synchronous and asynchronous operation via the lpOverlapped parameter. If you wish to call DeviceIoControl() asynchronously, then the file handle must have been opened with the FILE_FLAG_OVERLAPPED flag.

As I mentioned before, this example is contrived. My sample application could have bypassed the sample wdj.sys driver and called the floppy driver directly by opening a handle to "\\.\a:" and then passing the IOCTL_DISK_CHECK_VERIFY command directly to that file handle. I introduced the complication of wdj.sys just to demonstrate how drivers implement IOCTL commands, and how drivers can pass IOCTL commands to other drivers.

Summary

Drivers have access to a lot of useful information and can perform many useful tasks for applications. If a driver already supplies an IOCTL command that meets your needs, then it is quite a trivial matter for an application to call it. Likewise, drivers can sometimes avoid reinventing code by calling IOCTL commands in other drivers. If you only need to pass IOCTL commands to another driver, it is definitely overkill to layer yourself on top of that driver. Filter drivers are risky in that poorly written filter drivers can compromise the functionality of the driver on which they are layered.
It is much simpler and safer in this case to get a pointer to the device object, build an IRP, and send it to the driver when necessary.

Paula Tomlinson has been developing DOS, Windows, and Windows NT-based applications and device drivers for nine years. The opinions expressed here are hers alone. She can be contacted via the Internet at [email protected].
https://www.drdobbs.com/windows/sending-ioctls-to-windows-nt-drivers/184416453
Opened 4 years ago Closed 21 months ago

#5412 closed bug (fixed)

dataTypeConstrs gives unhelpful error message

Description (last modified by simonpj)

Take the following program:

import Data.Data

foo = dataTypeConstrs (dataTypeOf (0 :: Int))
-- raises: *** Exception: Data.Data.dataTypeConstrs

The implementation of dataTypeConstrs has the type available as a string at this point, so could easily say:

Exception: Data.Data.dataTypeConstrs is not supported for Int, as it is not an algebraic data type

Attachments (4)

Change History (23)

comment:1 Changed 4 years ago by simonpj

comment:2 Changed 4 years ago by igloo
- Milestone set to 7.6.1

comment:3 Changed 3 years ago by igloo
- Milestone changed from 7.6.1 to 7.6.2

comment:4 Changed 22 months ago by klangner
- difficulty set to Unknown
- Owner set to klangner

Changed 22 months ago by klangner
Patch

Changed 22 months ago by klangner

comment:5 Changed 22 months ago by klangner
- Status changed from new to patch

comment:6 Changed 22 months ago by nomeata
- Owner klangner deleted
- Status changed from patch to new

Dear Krzysztof, thanks for the patch. Note that SPJ made a more elaborate proposal in the ticket, namely printing the name of the type with the error message. Do you think you can update your patch to print a message similar to what Simon proposed?

comment:7 Changed 22 months ago by nomeata
- Owner set to klangner

comment:8 Changed 22 months ago by klangner

I can try :-) I can create the following message:

Data.Data.dataTypeConstrs is not supported for Prelude.Int, as it is not an algebraic data type

As you can see, it says Prelude.Int, not Int. This name is returned by dataTypeName. I don't know how to get the name without the Prelude prefix.
Changed 22 months ago by klangner

comment:9 Changed 22 months ago by klangner
- Status changed from new to patch

comment:10 Changed 22 months ago by nomeata
- Owner changed from klangner to nomeata

I just checked the API as well, and there does not seem to exist a clean way of getting the unqualified name. But as this is just an error message, I think it's fine. I'll validate and push.

comment:11 Changed 22 months ago by nomeata
- Owner changed from nomeata to klangner

Wait: the second patch only modifies dataTypeConstrs while the first patch changes the error message of several functions. Oversight? Also, I'd add a "." at the end of the message; after all, it is a full sentence.

comment:12 Changed 22 months ago by klangner

Yes, you are right. The problem is that this kind of message can't be used in other places. So I was not sure if I should change other functions as well. On the other hand, it would be nice to improve some other error messages, since this will probably return as a new ticket sooner or later. Which do you think would be better?

- Change only the function from this ticket's description
- Change only messages where an ADT is expected (4 places) and leave other messages intact
- Try to improve other error messages in this module as well

The third option will require more work and I'll probably need some help with the message texts.

comment:13 Changed 22 months ago by nomeata

Go for 3! The message will always be of the kind „Data.Data.... is not supported for ..., as it is not ...“

comment:14 Changed 22 months ago by klangner

Ok. I fixed 8 previous messages. But I'm not sure what to do with this code:

Case 1.

instance Data Char where
  toConstr x = mkCharConstr charType x
  gunfold _ z c = case constrRep c of
                    (CharConstr x) -> z x
                    _ -> error "Data.Data.gunfold(Char)"
  dataTypeOf _ = charType

This instance is defined for the Char type, so it is not possible to pass a wrong type here. But I guess there could still be a problem with the constructor representation.
There are similar instances for other data types as well. Lots of them.

Case 2.

In the function repConstr, an exception is thrown when the parameters have different data types. So there is a need for another message. I would say something like "Data.Data.repConstr requires the same data type for both parameters." or similar.

comment:15 Changed 21 months ago by nomeata

Case 1. A better error message here would be "Data.Data.gunfold: Constructor " ++ show c ++ " is not of type Char". But if you don't feel like working on that class of error message right now, you can concentrate on the others and skip these for now.

Case 2. Sounds good. Or maybe "Data.Data.repConstr: The given ConstrRep does not fit to the given DataType", possibly extended with actually showing both parameters, or using dataTypeName.

Changed 21 months ago by klangner

comment:16 Changed 21 months ago by klangner

Ok. Done.

comment:17 Changed 21 months ago by nomeata
- Owner changed from klangner to nomeata

Looks good; validating right now...

comment:18 Changed 21 months ago by Joachim Breitner <mail@…>

comment:19 Changed 21 months ago by nomeata
- Resolution set to fixed
- Status changed from patch to closed

Validated and pushed. Thanks for your first contribution to GHC, and hoping to receive more of them!

Good idea! Could you send a patch? Thanks! Simon
https://ghc.haskell.org/trac/ghc/ticket/5412
A computer algebra system written in pure Python.

You can replace with a pattern:

>>> a, b, c = Wild('a'), Wild('b'), Wild('c')
>>> expr.replace((a**b)**c, a**(b*c))
y

integrate(Heaviside(x, 0), (x, -1, 1)) should give 1, but instead sympy returns Integral(Heaviside(x, 0), (x, -1, 1)). It works fine if I omit the second argument of Heaviside, but I think it should work in any case, as the value at x=0 is irrelevant for the integration (a set of measure zero).

@laolux:privacytools.io Many integrators like meijerint work by looking up the results in a table. The table may contain Heaviside(x) but not Heaviside(x, 0), which is a different object.

>>> Heaviside(x) == Heaviside(x, 0)
False

It may be possible to extend the matching code to handle this, but that may not be easy to implement. Another entry should probably be added.

I ran into the following issue/error while trying to get the extrema of a simple 4th-order polynomial:

import sympy as sym

x_sym = sym.symbols("x", real=True)  # single variable
pot_sym = x_sym**4 - x_sym**2 + x_sym * 1/10  # function f
pot_prime_sym = sym.diff(pot_sym, x_sym)  # first derivative of function df/dx
extrema = sym.solve(pot_prime_sym)  # get extrema via df/dx == 0
for extremum in extrema:
    print(sym.N(extremum), " == ", sym.N(sym.simplify(extremum)))  # print values

returns:

0.050253826762553 - 0.e-23*I  ==  0.050253826762553 + 3.70576914423756e-22*I
0.680639276423668 + 0.e-23*I  ==  0.680639276423668
-0.730893103186221 + 0.e-23*I  ==  -0.730893103186221

The last two results are equal, but the first entry does seem to cause an error even if I simplify the result before returning its numeric value. There should be no complex contribution 3.70576914423756e-22*I, thus something goes wrong here. Is this a known issue, did I do something wrong, or is this a not-yet-known issue and should I open an issue on github? I am using sympy 1.7.1.

print(type(curl(H, doit=False)))

H is an Add object whose arguments are Mul objects, not a vector.
In [10]: srepr(H)
Out[10]: "Add(Mul(T.x, Function('Hx')(Symbol('x'), Symbol('y'), Symbol('z'))), Mul(T.y, Function('Hy')(Symbol('x'), Symbol('y'), Symbol('z'))), Mul(T.z, Function('Hz')(Symbol('x'), Symbol('y'), Symbol('z'))))"

Hi, I'd like to express a vector norm with indexed variables for differentiation, but I don't quite know how. Every search I came up with led to matrix norms, which aren't really in question here. The variables are indeed vectors, but I don't care about the dimensionality in this case. Formulating a difference is easy enough, but I'm not sure how to continue.

# || x_a - y_b ||
x = sp.IndexedBase('x')
a = sp.Idx('a')
y = sp.IndexedBase('y')
b = sp.Idx('b')
diff = (x[a] - y[b])

How could this be achieved? I'm open to any other formulations as well! Much appreciated.
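One possible way to formulate the norm asked about above (my own sketch, not a reply from the chat): share a single summation index between the two IndexedBase objects and wrap the squared difference in a Sum, leaving the dimension n symbolic.

```python
import sympy as sp

i = sp.symbols('i', integer=True)
n = sp.symbols('n', integer=True, positive=True)
x = sp.IndexedBase('x')
y = sp.IndexedBase('y')

# || x - y || as a Euclidean norm with symbolic dimension n
norm = sp.sqrt(sp.Sum((x[i] - y[i])**2, (i, 1, n)))

# Fixing n = 2 and substituting numbers recovers the ordinary distance
dist = norm.subs(n, 2).doit()
value = dist.subs({x[1]: 3, y[1]: 0, x[2]: 4, y[2]: 0})  # sqrt(9 + 16) = 5
```

Keeping n symbolic means the expression never commits to a concrete dimensionality, which matches the question's requirement.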
https://gitter.im/sympy/sympy?at=609be311ba27415f9b918ef0
Tobii Pro SDK and OpenSesame

Hi all, I am trying to link OpenSesame and a Tobii X2-30 eye tracker using the Tobii Pro SDK. I updated the PyGaze files as suggested here: I installed the Tobii Pro SDK using 'pip install tobii-research', and tobii-research is listed when I check this using 'pip list'. Nevertheless, I get an error when running pygaze_init:

- item-stack: experiment[run].new_pygaze_init[run]
- exception type: AttributeError
- exception message: 'module' object has no attribute 'find_all_eyetrackers'

I guess find_all_eyetrackers is a Tobii Pro SDK function, and the SDK may not be installed properly after all? Any suggestions about what could be wrong are highly appreciated! Does anyone have suggestions or tips on how to use a Tobii eye tracker with OpenSesame/PyGaze?

It seems I was missing a tobii_research script in the OpenSesame directories (an error while copying?). It is now running fine!

Do you remember which directories in OpenSesame were missing this? I am getting the same error, and installed via pip as well.

Hi, I am getting the same error. I thought that it was something related to the Python version. But, using Anaconda, I created an environment with Python 3.6, and the error remains. My eye tracker is a Tobii Pro Nano and I am working on macOS Catalina (10.15.7). Thank you 😊

Hi all, Could it be that you pip installed into a standalone Python installation instead of into OpenSesame's built-in Python? To check, run the following in the Debug Window:

import pip
pip.main(["install", "tobii-research"])
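The last reply's diagnosis, pip installing into a standalone Python rather than OpenSesame's built-in one, is easy to check. Here is a small generic sketch (nothing Tobii-specific is assumed): run it both from OpenSesame's Debug Window and from your normal shell, and compare the paths.

```python
import sys
import sysconfig

def env_info():
    """Where is this Python, and where would pip install packages for it?"""
    return {
        "interpreter": sys.executable,
        "site_packages": sysconfig.get_paths()["purelib"],
        "version": sys.version.split()[0],
    }

info = env_info()
print(info)
```

If the two environments report different site-packages directories, a plain `pip install` from the shell will never be visible to the experiment.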
https://forum.cogsci.nl/discussion/comment/18010
Scaling Based on Amazon SQS

This section shows you how to scale your Auto Scaling group in response to changing demand from an Amazon Simple Queue Service (Amazon SQS) queue. Amazon SQS offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. If you're not familiar with Amazon SQS, see the Amazon Simple Queue Service Developer Guide for more information.

There are several scenarios where you might think about scaling in response to activity in an Amazon SQS queue. Suppose that you have a web app that lets users upload images and use them online. Each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group that is configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times. The app places the raw bitmap data of the images in an Amazon SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. This architecture works well if the number of image uploads doesn't vary over time.

What happens if your upload levels change? If your uploads increase and decrease on a predictable schedule, you can specify the time and date to perform scaling activities. For more information, see Scheduled Scaling for Amazon EC2 Auto Scaling. A more dynamic way to scale your Auto Scaling group, scaling by policy, lets you define parameters that control the scaling process. For example, you can create a policy that calls for enlarging your fleet of EC2 instances whenever the average number of uploads reaches a certain level. This is useful for scaling in response to changing conditions, when you don't know when those conditions will change.

Choosing an Effective Metric and Target Value

The number of messages in your Amazon SQS queue does not solely define the number of instances needed.
In fact, the number of instances in the fleet can be driven by multiple factors, including how long it takes to process a message and the acceptable amount of latency (queue delay). The solution is to use a backlog per instance metric, with the target value being the acceptable backlog per instance to maintain. To calculate your backlog per instance, start with the Amazon SQS metric ApproximateNumberOfMessages to determine the length of the queue, and divide that number by the fleet's running capacity. To determine your target value, first calculate what your application can accept in terms of latency. Then, take the acceptable latency value and divide it by the average time that an EC2 instance takes to process a message.

To illustrate with an example, suppose the current ApproximateNumberOfMessages is 1500 and the fleet's running capacity is 10. If the average processing time is 0.1 seconds for each message and the longest acceptable latency is 10 seconds, then the acceptable backlog per instance is 10 / 0.1, which equals 100; this is the target value for your target tracking policy. Because the backlog per instance is currently at 150 (1500 / 10), your fleet scales out by five instances to maintain proportion to the target value.

The following examples create the custom metric and target tracking scaling policy that configure your Auto Scaling group to scale based on these calculations.

Configure Scaling with Amazon SQS

The following section shows you how to set up automatic scaling for an SQS queue using the AWS CLI. The procedures assume that you already have a queue (standard or FIFO), an Auto Scaling group, and EC2 instances running the application that uses the queue.

Tasks

Step 1: Create a CloudWatch Custom Metric

Create the custom calculated metric by first reading metrics from your AWS account. Then, calculate the backlog per instance metric recommended in the earlier section. Lastly, publish this number to CloudWatch at a 1-minute granularity. Wherever possible, you should scale on EC2 instance metrics with a 1-minute frequency (also known as detailed monitoring) because that ensures a faster response to utilization changes. Scaling on metrics with a 5-minute frequency can result in slower response times and scaling on stale metric data. By default, EC2 instances are enabled for basic monitoring, which means metric data for instances is available at 5-minute intervals. You can enable detailed monitoring to get metric data for instances at a 1-minute frequency. For more information, see Monitoring Your Auto Scaling Groups and Instances Using Amazon CloudWatch. To compute the calculated metric, divide the ApproximateNumberOfMessages metric by the fleet's running capacity metric.
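The two calculations above are simple enough to sketch as plain functions, using the example's numbers (the function and variable names here are my own, purely illustrative):

```python
def backlog_per_instance(approx_number_of_messages, running_capacity):
    """Queue length (ApproximateNumberOfMessages) divided by running instances."""
    return approx_number_of_messages / running_capacity

def target_backlog(acceptable_latency_s, avg_processing_time_s):
    """Acceptable latency divided by the average per-message processing time."""
    return acceptable_latency_s / avg_processing_time_s

current = backlog_per_instance(1500, 10)  # 150.0
target = target_backlog(10, 0.1)          # 100.0

# Target tracking keeps current/target near 1 by scaling capacity proportionally:
new_capacity = round(10 * current / target)  # 15 instances, i.e. scale out by 5
```

Running these reproduces the worked example: a backlog of 150 against a target of 100 calls for 15 instances, five more than the current 10.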
Publish the results at a 1-minute granularity as a CloudWatch custom metric using the AWS CLI or an API. A custom metric is defined using a metric name and namespace of your choosing. Namespaces for custom metrics cannot start with "AWS/". For more information about publishing custom metrics, see the Publish Custom Metrics topic in the Amazon CloudWatch User Guide. After the metric is published, you can find it in the CloudWatch console by searching for it using the search box. For help viewing metrics, see View Available Metrics in the Amazon CloudWatch User Guide.

Step 2: Create a Target Tracking Scaling Policy

Next, create a target tracking scaling policy that tells the Auto Scaling group to increase and decrease the number of running EC2 instances in the group dynamically when the load on the application changes. You can use a target tracking scaling policy because the scaling metric is a utilization metric that increases and decreases proportionally to the capacity of the group.

After you create your scaling policy, you may want to verify that it is working. You can test it by increasing the number of messages in your SQS queue and then verifying that your Auto Scaling group has launched an additional EC2 instance.
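Publishing the calculated number as a custom metric can be sketched with boto3's put_metric_data. In this sketch the payload is built separately so it can be inspected without AWS credentials; the namespace and metric name are my own choices, not prescribed by the documentation.

```python
def build_backlog_metric(backlog, namespace="MyBacklogNamespace"):
    """Return keyword arguments for CloudWatch put_metric_data (not sent here)."""
    if namespace.startswith("AWS/"):
        # Custom metric namespaces must not use the reserved AWS/ prefix
        raise ValueError("custom metric namespaces cannot start with 'AWS/'")
    return {
        "Namespace": namespace,
        "MetricData": [
            {"MetricName": "BacklogPerInstance", "Value": float(backlog), "Unit": "None"},
        ],
    }

# To actually publish once per minute (requires boto3 and AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_data(**build_backlog_metric(150))
```

Separating payload construction from the API call also makes the publishing step easy to unit-test.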
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
How to Create Your Own Functions in Python on the Raspberry Pi

One of the things the Raspberry Pi allows you to do in Python, and many other programming languages, is create a function. A function can receive some information from the rest of the program (one or more arguments), work on it, and then send back a result. Before you can use a function, you have to define it, which you do using a def statement. To tell Python which instructions belong in the function, you indent them underneath the def statement. Here's an example program to familiarize you with the idea of functions, and how we'll be using it:

# Example of functions
def dictionarycheck(message):
    print "I will look in the dictionary for", message
    return "hello"

dictionarycheck("test message")
result = dictionarycheck("test message2")
print "Reply is:", result

We'll talk you through that program in a moment, but here's a glimpse of what is shown onscreen when you run it:

I will look in the dictionary for test message
I will look in the dictionary for test message2
Reply is: hello

This is a short but powerful program because it tells you nearly everything you need to know about functions. As you can see, the function was defined at the start of the program, with this line:

def dictionarycheck(message):

This sets up a function with the name dictionarycheck(), but also sets it up to receive a piece of information from the rest of the program and to put it into the variable we've called message. The next line prints out a statement saying "I will look in the dictionary for" followed by the contents of the variable message. That means it prints out whatever information is sent to the function. The next line, starting with return, exits the function and sends a message back, which in our example is hello. Functions are self-contained units, so the variable message can't be used by the rest of the program (it's what's known as a local variable).
When you're writing your own functions, you should give them a job to do, and then use return to send the result back to the rest of the program. Functions aren't run until you specifically tell the program to run them, so when Python sees the function definition, it just remembers it for when it needs it later. That time comes shortly afterwards, when you issue the command:

dictionarycheck("test message")

This runs our dictionarycheck() function, and sends it the text "test message" to work with. When the function starts, Python puts "test message" into the function's variable called message, and then prints the text onscreen that contains it. The text hello is sent back by the function, but you don't have a way to pick up that message. The next code snippet shows you how you can pick up information coming back from a function. Instead of just running the function, you set a variable to be equal to its output, like this:

result = dictionarycheck("test message2")
print "Reply is:", result

When the text hello is sent back by the function, it goes into the variable result, and the main program can then print it on the screen. This simple example illustrates a few reasons why functions are a brilliant idea, and have become fundamental building blocks in many programming languages:

Functions enable you to reuse parts of your program. For example, we've used our function to display two different messages here, just by sending the function a different argument each time. When you write more sophisticated programs, being able to reuse parts of your program makes your program shorter, simpler, and faster to write.

Functions make understanding the program easier because they give a name and a structure to a set of instructions. Whenever someone sees dictionarycheck() in our program, they can make a good guess at what's going on. As you work on bigger projects, you'll find readability becomes increasingly important. It makes it easier to maintain and update your program.
You can easily find which bits of the program to change, and all the changes you need to make will be in the same part of the program. If you think of a better way to do a dictionary look-up later, you just modify the function, without disturbing the rest of the program. Functions make prototyping easier. That’s what we’ve done here: We’ve built an experimental program that takes some text and sends back a message. That’s what our finished dictionarycheck() function will do, except that this one just sends the same message back every time and the finished one will send different messages back depending on what the player said. You could build the rest of the program around this prototype to check it works, and then go back and finish the dictionarycheck() function.
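The listings in this chapter use Python 2's print statement; as a convenience (an adaptation, not part of the original article), here is the same program in Python 3 syntax, where print is a function:

```python
# Example of functions, updated for Python 3's print() function
def dictionarycheck(message):
    print("I will look in the dictionary for", message)
    return "hello"

dictionarycheck("test message")
result = dictionarycheck("test message2")
print("Reply is:", result)
```

The behavior is identical: two "I will look in the dictionary for ..." lines, followed by "Reply is: hello".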
http://www.dummies.com/how-to/content/how-to-create-your-own-functions-in-python-on-the-.navId-817451.html
Hello. I need to test a couple of Flex games and I wanted to do it with Sikuli. However, I have the following problem: a click on a button registers only when ClickDelay = 0.20 or more, but if I decrease that value, the click registers less often. For instance, with ClickDelay = 0.05 I can visually see the click, but nothing happens in the app itself. I can use 0.20, it's not a big deal, but this is not a trustworthy way of testing. I'm wondering whether I can somehow get a more reliable result (something like an "implicit click delay")? Or maybe you can suggest some other way to do the testing? Thanks a lot.

Question information
- Language: English
- Status: Answered
- For: Sikuli
- Assignee: No assignee
- Last query: 2017-08-11
- Last reply: 2017-08-11

I think making your own function instead of click() is possible:

def myClick(psmrl):
    Settings.ClickDelay = 0.20
    click(psmrl)

myClick("image.png")
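A more robust pattern than tuning a fixed delay is to retry the click until the application visibly reacts. Below is a framework-agnostic sketch (not from the thread): `do_click` and `did_react` are stand-ins for Sikuli calls such as `click(some_image)` and `exists(expected_image)`.

```python
import time

def reliable_click(do_click, did_react, retries=5, poll_delay=0.05, polls=10):
    """Click, then poll for a visible reaction; re-click if nothing happened.

    do_click and did_react are callables standing in for Sikuli's
    click(...) and exists(...) respectively.
    """
    for _ in range(retries):
        do_click()
        for _ in range(polls):
            if did_react():
                return True          # the app responded to this click
            time.sleep(poll_delay)   # wait briefly before checking again
    return False                     # gave up after all retries
```

This makes the test's success criterion explicit (the app reacted) instead of relying on a timing value that merely makes failure less likely.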
https://answers.launchpad.net/sikuli/+question/655798
XML Document Object Model

The .NET framework offers classes that can read and manipulate XML documents. These classes are located in the System.Xml namespace. .NET uses the XML Document Object Model, which is a set of classes that represents different parts of an XML document (and the document itself). The following are some of the classes offered by XML DOM.

Figure 1 – XML DOM Classes

An XML document is represented by the XmlDocument class. It contains the actual XML markup that you will be reading or manipulating. An XML document should start with a root element, which is also called the document element. This can be represented as an XmlElement. Everything inside it is its child nodes. You must also understand the concept of parent and child nodes. A parent node is a node that contains child nodes. As an example, consider this element that contains multiple elements:

<Product>
  <ProductID>001</ProductID>
  <ProductName>Shampoo</ProductName>
  <Price>10</Price>
</Product>

Here, the parent node is the Product element. It contains three child nodes, which are the ProductID, ProductName, and Price elements. If, for example, the ProductName still had more elements inside it, then it would be considered a parent node of the elements it contains. Let's take a look at some properties and methods of each of these classes.

XmlNode

The XmlNode represents any part of the XML and is the base class of other classes in the XML DOM. For example, the XmlElement class is derived from XmlNode, so it shares XmlNode's properties and methods. The following are some useful properties of the XmlNode base class.

Figure 2 – XmlNode Properties

The Attributes property of a node contains a list of attributes that a node or element contains. Each attribute is represented by the XmlAttribute class. Consider this example:

<Person name="John Smith" age="30"></Person>

This element's Attributes property contains two XmlAttribute objects that hold data for the name and age attributes.
The ChildNodes property is a collection of child nodes of the current node. Each child node is also an XmlNode object. You can test whether the node has child nodes by checking the value of the HasChildNodes property. This will return true if it has at least one child node, and false if it has none. The FirstChild and LastChild properties simply get the first and last child nodes of the current node. The PreviousSibling and NextSibling properties get the previous and next nodes of the current node.

<PreviousNode></PreviousNode>
<CurrentNode></CurrentNode>
<NextNode></NextNode>

The ParentNode property gets the reference to the parent of a child node. If no parent exists, then this property contains null.

The Name property indicates the name of the element.

<Person>Example</Person>

The Name of the above element is Person.

The InnerText property retrieves the text located between the opening and closing tags of an element. If you are working with a comment, then this property returns the text of the comment. If the element has multiple nodes inside it, the InnerText property gets the inner text of all the child nodes and combines them into a single string. Consider the following XML markup:

<Example>
  <Sample1>Text1</Sample1>
  <Sample2>Text2</Sample2>
  <Sample3>
    <Sample4>Text3</Sample4>
  </Sample3>
</Example>

The InnerText property will produce the output Text1Text2Text3. Notice that the third child element has another child node, and the InnerText property still successfully retrieves the text inside that child node. The InnerXml property is similar to InnerText, but it returns the actual XML markup inside an element. The OuterXml property is similar to InnerXml but includes the current node in the results. The OwnerDocument property returns the reference to the XmlDocument that owns the current node. The NodeType property contains a value from the System.Xml.XmlNodeType enumeration. You can tell the type of a node by using this property.
The table below lists the values of the XmlNodeType enumeration.

Figure 3 – XmlNodeType Enumeration Values

Finally, the Value property specifies the value of the node. The Value property varies for each type of node. For example, calling the Value property on a comment returns the text of the comment. The Value property of an attribute is the value of the attribute. Note that you must use InnerText or InnerXml to access the contents of an XmlElement instead of using the Value property.

The following are some useful methods of the XmlNode class.

Figure 4 – XmlNode Methods

Some of these methods will be demonstrated in later lessons about writing and reading XML documents.

XmlDocument

The XmlDocument class represents the actual XML document and all its contents. The following are some properties exclusive to this class. The following are some methods exclusive to the XmlDocument class. Notice the methods which start with Create. These methods are used to create XML DOM objects. For example, if you want to create an element for the current document, you use the CreateElement method. This is because using the DOM classes' constructors is prohibited due to their protection level. Most of these methods will be demonstrated in later lessons.

XmlElement

The XmlElement class represents a single XML element. The following are some useful properties of the XmlElement class. The following are some methods exclusive to XmlElement. The XmlText and XmlComment classes represent a text node inside an element and a comment, respectively. They are too simple to have their own sections in this lesson, so I decided not to discuss them in detail.
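The DOM model described here is not unique to .NET. As a quick, runnable cross-check of the InnerText behavior discussed above, here is the equivalent traversal using Python's xml.dom.minidom (purely illustrative; the lesson's API remains System.Xml):

```python
from xml.dom.minidom import parseString

doc = parseString(
    "<Example><Sample1>Text1</Sample1><Sample2>Text2</Sample2>"
    "<Sample3><Sample4>Text3</Sample4></Sample3></Example>"
)

def inner_text(node):
    # Concatenate all descendant text nodes, mimicking XmlNode.InnerText
    if node.nodeType == node.TEXT_NODE:
        return node.data
    return "".join(inner_text(child) for child in node.childNodes)

print(inner_text(doc.documentElement))  # Text1Text2Text3
```

The recursive walk makes it clear why the nested Sample4 text is still included: InnerText is defined over all descendants, not just direct children.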
https://compitionpoint.com/xml-document-object-model/
You have a lot of data you need to present visually, and you want to arrange that data in columns. Use an SWT table based on the Table class. SWT tables can display columns of text, images, checkboxes, and more. Here's a selection of the Table class's methods:

addSelectionListener(SelectionListener listener)
Adds the listener to the collection of listeners that are notified when the table's selection changes

deselect(int index)
Deselects the item at the given zero-relative index in the table

getSelection()
Returns an array of TableItem objects that are selected in the table

getSelectionIndex()
Returns the zero-relative index of the item which is currently selected in the table (-1 if no item is selected)

getSelectionIndices()
Returns the zero-relative indices of the items that are currently selected in the table

isSelected(int index)
Returns true if the item is selected, false otherwise

select(int index)
Selects the item at the given zero-relative index in the table

As an example (TableApp at this book's site), we'll create a simple table displaying text items that catches selection events. We'll create a new table and stock it with items using the TableItem class, then report which item has been selected in a text widget. Here are some popular TableItem class methods:

getChecked()
Returns true if the table item is checked, false otherwise

getGrayed()
Returns true if the table item is grayed, false otherwise

setChecked(boolean checked)
Sets the checked state of the checkbox for this table item

setGrayed(boolean grayed)
Sets the grayed state of the checkbox for this table item

setImage(Image image)
Sets the table item's image

setText(String text)
Sets the table item's text

Here's how to create the table in this example:

Table table = new Table(shell, SWT.BORDER | SWT.V_SCROLL | SWT.H_SCROLL);

And here's how you can stock it with TableItem objects:

for (int loopIndex = 0; loopIndex < 24; loopIndex++) {
    TableItem item = new TableItem(table, SWT.NULL);
    item.setText("Item " + loopIndex);
}

All that's left is to handle item selection events, which you can do as shown in Example 9-4; you can recover the item selected with the item member of the event object passed to the handleEvent method.
package org.cookbook.ch09;

import org.eclipse.swt.*;
import org.eclipse.swt.widgets.*;

public class TableClass {
    public static void main(String[] args) {
        Display display = new Display();
        Shell shell = new Shell(display);
        shell.setSize(260, 300);
        shell.setText("Table Example");

        final Text text = new Text(shell, SWT.BORDER);
        text.setBounds(25, 240, 200, 25);

        Table table = new Table(shell, SWT.BORDER | SWT.V_SCROLL | SWT.H_SCROLL);
        for (int loopIndex = 0; loopIndex < 24; loopIndex++) {
            TableItem item = new TableItem(table, SWT.NULL);
            item.setText("Item " + loopIndex);
        }
        table.setBounds(25, 25, 200, 200);

        table.addListener(SWT.Selection, new Listener() {
            public void handleEvent(Event event) {
                text.setText("You selected " + event.item);
            }
        });

        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch())
                display.sleep();
        }
        display.dispose();
    }
}

The results appear in Figure 9-9. When you select an item in the table, the application indicates which item was selected. That's fine up to a point, but this rudimentary example just gets us started with tables (in fact, this simple version looks much like a simple list widget). To add columns, check marks, images, and more, see the following recipes.

By default, tables allow only single selections. To allow multiple selections, create the table with the SWT.MULTI style instead of the SWT.SINGLE style. In Eclipse 3.0, the SWT table widget supports setting the foreground and background colors of individual cells. In addition, the Table widget enables you to set the font for a row or an individual cell.

See Also: Recipe 9.15 on creating table columns; Recipe 9.16 on adding check marks to table items; Recipe 9.17 on enabling and disabling table items; Recipe 9.18 on adding images to table items.
https://flylib.com/books/en/1.259.1.156/1/
2010/4/18 Łukasz Langa <lukasz at langa.pl>:
> This is not a proper reply in terms of e-mail but I registered just a second
> ago just to write this post. So, here goes, replying to Frederik's message
> from Thu Apr 08 15:01:06 2010:
>
> Are you using some virtual env thing that might move modules around,
> btw? I tried messing with the path to see if I could trick Python
> into importing the same thing twice on Windows, but failed under 2.6.
>
> This is actually quite simple. When you easy_install PIL, you get the
> "pollute the global namespace" variant (e.g. import Image). Django on the
> other hand expects the "be polite within your own namespace" variant (e.g.
> from PIL import Image). So if there's any .pth file or symbolic link that is
> supposed to cover for that, you're going to have an error:
>
> $ python
> Python 2.6.5 (r265:79063, Mar 26 2010, 16:07:38)
> [GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import _imaging
> >>> import PIL._imaging
> AccessInit: hash collision: 3 for both 1 and 1

Interesting. Can you repeat this with -v and report from where the
modules are loaded?

</F>

PS.
For reference, here's the -v output for the above commands on a 2.6.5 stock
install on Windows:

>>> import _imaging
# c:\python26\lib\encodings\cp850.pyc matches c:\python26\lib\encodings\cp850.py
import encodings.cp850 # precompiled from c:\python26\lib\encodings\cp850.pyc
import _imaging # dynamically loaded from c:\python26\lib\site-packages\PIL\_imaging.pyd
>>> import PIL._imaging
import PIL # directory c:\python26\lib\site-packages\PIL
# c:\python26\lib\site-packages\PIL\__init__.pyc matches c:\python26\lib\site-packages\PIL\__init__.py
import PIL # precompiled from c:\python26\lib\site-packages\PIL\__init__.pyc
import PIL._imaging # previously loaded (c:\python26\lib\site-packages\PIL\_imaging.pyd)

Note the last line: python correctly figures out that both imports refer to
the same module.

> The problem is, the default easy_install distribution does not provide the
> PIL.* variant whereas no stable Django version is as of yet using the global
> namespace version (see). The fact
> is that they on multiple occasions refused to make that change, I wonder
> what made them change their decision. That way or the other, it's a
> setuptools PIL packaging issue. Maybe the easiest and most compatible
> solution would be to simply include a PIL.py file along the distro with
> contents of the like:
>
> import _imaging
> import Image
> import ImageFile
> ...
>
> I don't really have the experience to make any remarks here, let alone
> decisions. So, it's up to you :)
>
> --
> Best regards,
> Łukasz Langa
> tel. +48 791 080 144
> WWW
>
> _______________________________________________
> Image-SIG maillist - Image-SIG at python.org
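The "hash collision" error in this thread is what happens when the same C extension ends up initialized under two different module names. Python's import cache (sys.modules) deduplicates by module *name*, not by file, which is why the -v output above shows the second import being recognized as "previously loaded" in the healthy case. The sketch below reproduces both behaviours using the standard-library string module as a stand-in for PIL; the string_copy name is purely illustrative and not from the thread:

```python
import importlib.util
import string
import sys

# One name, one object: repeated imports hit the sys.modules cache.
import string as string_again
assert string_again is string

# Two names, one file: Python cannot tell they are the same module, so it
# executes the file a second time and keeps two independent copies -- the
# situation that makes PIL's C extension complain.
spec = importlib.util.spec_from_file_location("string_copy", string.__file__)
string_copy = importlib.util.module_from_spec(spec)
sys.modules["string_copy"] = string_copy  # register before executing
spec.loader.exec_module(string_copy)

assert string_copy is not string  # two distinct module objects
assert string_copy.capwords("hello world") == "Hello World"  # same code, separate state
```

Under a broken .pth or symlink setup, PIL's _imaging extension gets the two-copies treatment shown in the second half, and its internal consistency check fails with the AccessInit error.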
https://mail.python.org/pipermail/image-sig/2010-April/006209.html
Frequently Asked Questions (LINQ to SQL)

The following sections answer some common issues that you might encounter when you implement LINQ. Additional issues are addressed in Troubleshooting (LINQ to SQL).

Cannot Connect

Q. I cannot connect to my database.

A. Make sure your connection string is correct and that your SQL Server instance is running. Note also that LINQ to SQL requires the Named Pipes protocol to be enabled. For more information, see Learning by Walkthroughs (LINQ to SQL).

Changes to Database Lost

Q. I made a change to data in the database, but when I reran my application, the change was no longer there.

A. Make sure that you call SubmitChanges to save results to the database.

Database Connection: Open How Long?

Updating Without Querying

Unexpected Query Results

Q. My query is returning unexpected results. How can I inspect what is occurring?

A. LINQ to SQL provides several tools for inspecting the SQL code it generates. One of the most important is Log. For more information, see Debugging Support (LINQ to SQL).

Unexpected Stored Procedure Results

Serialization Errors

Multiple DBML Files

Q. When I have multiple DBML files that share some tables in common, I get a compiler error.

A. Set the Context Namespace and Entity Namespace properties from the Object Relational Designer to a distinct value for each DBML file. This approach eliminates the name/namespace collision.

Avoiding Explicit Setting of Database-Generated Values on Insert or Update

Multiple DataLoadOptions

Q. Can I specify additional load options without overwriting the first?

A. Yes. The first is not overwritten, as in the following example:

Dim dlo As New DataLoadOptions()
dlo.LoadWith(Of Order)(Function(o As Order) o.Customer)
dlo.LoadWith(Of Order)(Function(o As Order) o.OrderDetails)

DataLoadOptions dlo = new DataLoadOptions();
dlo.LoadWith<Order>(o => o.Customer);
dlo.LoadWith<Order>(o => o.OrderDetails);

Errors Using SQL Compact 3.5

Q. I get an error when I drag tables out of a SQL Server Compact 3.5 database.

A. The Object Relational Designer does not support SQL Server Compact 3.5, although the LINQ to SQL runtime does. In this situation, you must create your own entity classes and add the appropriate attributes.

Errors in Inheritance Relationships

Q. I used the toolbox inheritance shape in the Object Relational Designer to connect two entities, but I get errors.

A. Creating the relationship is not enough. You must provide information such as the discriminator column, base class discriminator value, and derived class discriminator value.

Provider Model

Q. Is a public provider model available?

A. No public provider model is available. At this time, LINQ to SQL supports SQL Server and SQL Server Compact 3.5 only.

SQL-Injection Attacks

Changing Read-only Flag in DBML Files

Warning: If you are using the Object Relational Designer in Visual Studio, your changes might be overwritten.

APTCA

Mapping Data from Multiple Tables

Q. The data in my entity comes from multiple tables. How do I map it?

A. You can create a view in a database and map the entity to the view. LINQ to SQL generates the same SQL for views as it does for tables.

Note: The use of views in this scenario has limitations. This approach works most safely when the operations performed on Table<TEntity> are supported by the underlying view. Only you know which operations are intended. For example, most applications are read-only, and another sizeable number perform Create/Update/Delete operations only by using stored procedures against views.

Connection Pooling

Second DataContext Is Not Updated

You can also set ObjectTrackingEnabled to false, which turns off caching and change tracking. You can then retrieve the latest values every time that you query.

Cannot Call SubmitChanges in Read-only Mode

Q. When I try to call SubmitChanges in read-only mode, I get an error.

A. Read-only mode turns off the ability of the context to track changes.

See Also
Tasks: Troubleshooting (LINQ to SQL)
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/bb386929(v=vs.90)
Recently our documentation team hosted a survey on how you use VS and .NET Framework documentation. Here are a few things that I thought were interesting. I'd love to have your comments as well... Any thoughts from you on this?

It seems that the majority of developers in our community are using the latest (3.5) version of the .NET Framework. Most are also using 2.0 as well...

On the question of how you would like to see .NET version information in the docs, it seems most folks want to see it all, with a filter!
- Documentation should be specific to the .NET Framework version I am developing with
- Documentation should be cumulative (including all versions of the .NET Framework with version-specific information inline)
- Documentation should be cumulative (including all versions of the .NET Framework with the ability to filter on specific versions)

In terms of what folks use the docs for in their daily development, the .NET Framework reference is the winner by far! And how do you find information? Well, not surprisingly, web search engines win out by a high margin. Although, it does seem from the feedback that if we could improve the performance of offline Help and F1, folks wouldn't need to search online as much. Does that seem right to you?

And, as always, the verbatim comments were very helpful as well. Here are a couple I thought were valuable, both positive and constructive:
- Mostly good, mostly accurate, certainly better than most of the competition.
- First let me say that the overall quality of the documentation is very, very high. In general, Visual Studio / .NET documentation is the gold standard for technical documentation.
- I would like to be able to specify my preferred language(s) so I do not see language examples that are not relevant to my needs.
- I would like to see more tutorials - for new technology. ScottGu's blog is often a better source of information.
- Help loads way too slowly. Pressing F1 often brings up the wrong article.
Entering a search term also brings up the wrong articles. The only way to navigate help is to use Related links, or "See also" at the bottom of each article.

Another form of documentation that should not be overlooked are the assemblies themselves. I use Lutz Roeder’s Reflector as my first port of call to find out about a managed API or simply to see how something works. After that I’ll look on MSDN about that API by using the menu option in Reflector. Only then, if that approach fails, will I resort to using a search engine; typically Google. I tend to work with new/beta technologies in my line of work, so there is generally little useful documentation written at that point, so the assemblies speak volumes (Silverlight 2 beta 2 is a great example – I’m currently writing courseware for Silverlight 2 and what the docs say is what should be rather than what is – the assemblies never lie!) I hope this helps.

I just want to put out there that the current way the framework/Visual Studio documentation is organized, with the version links in the upper right hand corner legend, is awesome. It’s really nice to find something from 2.0/2005 and be able to jump directly to the 3.5/2008 version.

The first graph is interesting to me. A while back, when I reported a mistake in the 2.0 docs for Hashtable (), I was told that the 2.0 docs were being deprecated and replaced by the 3.0 docs. While that makes some sense (since everything in 2.0 is exactly the same in 3.0, which makes no sense at all), apparently it never happened, since lots of people are still using the 2.0 docs. I think this underlines a point I made in the discussion of that bug report: even though you could theoretically deprecate the 2.0 docs in favor of 3.0, most people don’t think of themselves as using 3.0, and therefore don’t think that the 3.0 docs apply to them.

it seems that the local help is very, very slow. when the web search returns results before local help, the local help is of no use.
after a while, one defaults to just having a browser open to live search or google and running the help search from that search facility. ps. to make matters worse, I find that google does a better job of finding and getting me to things at the microsoft site than the "native" web search facilities provided by MS for its own site. ouch.

I’d say that I *like* the version switch in the top right corner, I’m glad it’s there, but I do wish for something that would be better (but unlikely to happen): Let me see what changed between the versions. For example, if I’m looking at the 3.5 docs (because it’s the default) and the method I’m looking at hasn’t changed since 1.1, then the links for 2.0 and 3.0 should be ‘greyed out’ or give some other hint that there has been a behavior change. Ideally the member list (when viewing in 3.5 mode) would have little ‘new in 2.0’, ‘new in 3.0’ indicators to let me get an idea of what the scope of changes was. I say this knowing it may be a massive effort and unlikely to happen, but I consider it the best practice to look at the current docs (3.5) as I go, but able to see where I’m breaking compatibility. My background is that I am developing a 2.0 application. The 3.5 redist is too huge and the client profile doesn’t look to solve the issue for our needs, but I still like to know what is going on.

The counterexample is how Win32 documentation is done on MSDN. It is absolutely horrible to find a function that does what you want to do via PInvoke, code it, test it, then after the fact realise that you missed a tiny note about compatibility at the bottom of the page saying ‘Windows Vista’. And all your XP testers smack you on the head. I know it’s my fault for not checking the dependencies, but I do feel the Win32 docs minimize the importance of compatibility; with the .Net style I propose, I would see on the top right which platforms are supported quickly and easily.
Whoever said that the Microsoft docs are the "gold standard" is a moron. Microsoft documentation is consistently both verbose and vague, with a large dollop of just plain wrong. While I’m at it — who came up with the graph of which versions people use? And why in the world would they split the "3.5+2.0" people from the "3.5+3.0+2.0" people? Is there any use to that distinction? Once you make that change, it becomes clear that the overwhelming majority of users (a) use 3.5 (b) use earlier versions too. The way you have it presented, you just know that you have a big mishmash.

I really wish you would reply to my email with regards to VS2008 performance on XP 64-bit. If your reply bounces back, please send as plain text. Regards…

That’s true… to find some help within MS search is worse than to find out with Google. Almost always I reach a link inside MS going thru Google because searching from MS does not find what I was looking for…

Please consolidate the documentation in the following way: Put all articles for a specific .NET API call on the same page, with the most recent documentation version listed first and then older versions listed at the bottom in reverse order. Too often, the documentation has multiple pages for the same topic, each for a different product version, which means that you a) cannot find the documentation for the particular version you are interested in and b) the documentation for the most recent version generally is less complete than the older versions. I second the comment on how the existing supported .NET version documentation gets ignored when a new .NET version is released or just about ready to be released. Please update the older documentation and not let it stagnate. This is why we want all of the documentation for the same API call on the same page for different .NET framework versions.
Lastly and most importantly, the newest version of the .NET framework documentation needs to be much more than the output of the function headers (e.g., like Reflector and NDoc). This includes good sample code also.

MSDN has turned into barely usable trash; granted, all of the documents not being loaded will some day a year or so from now be found once again, eh? Let’s not forget the foolish use of framesets, which break page zoom and impose horizontal scrolling. The decision to deploy framesets has proven stupefying. The way help now functions with VS2008 is convoluted and rarely expedient. All in all, from my perspective, documentation is very very disappointing, having made what was once actually quite decent and at least tolerable into what? GFS and FUBAR come to mind when the deciding factor is usability.

MSDN is mostly pretty good. Takes some getting used to vs. say Java’s documentation. It would be nice if you had a streamlined version (no images, etc) that loaded much faster.

Overall the documentation has improved dramatically I think. I especially like having examples that actually show the key features (i.e. are not totally trivial). I feel like you should expand the info on classes to highlight which members are useful for which purposes so I don’t have to click through to a bunch of pages for the class members to find the one I want (often I find the remarks section of pages to be the most useful by far – perhaps you should move it up on the page). Possibly you can make collapsing a section for a language in the page actually uncheck that option in the language filter so that it’ll "just work" for more people.
Also – one of the more annoying things in .net is when you have a class that has a property which is some custom struct (delegate, event, enum, etc.), so that to get to the info I’m interested in (say, for example, which flag I want to set) I end up having to click through like 7 pages (search for the class in live search, click the class page, go to its members, property, that type’s definition… you get the picture). If it would just excerpt from the other page (perhaps make it ajaxy?) that would save a lot of trouble I think. Anyhow, keep up the good work.

Remember the days when you pressed F1, and context-sensitive help appeared within 1 second within the visual studio window? Why can’t we have that back? That was helpful.

I am a C++ developer, not .NET. I found that since the incorporation of all the language developer packages into .NET I am no longer able to find relevant information for C++ APIs! The help system is a total mess – even when I explicitly filter my search to Visual C++ only, I keep getting swamped by articles meant for Visual Basic, C#, .NET general concern articles, ADO, WEB scripting, and so on. I do not do databases! I do not do WEB scripting! I do not do Visual Basic, C#, or .NET development, I do C++! But I do get these articles 90% of the time! Worst of all, in many cases when I look at an article it’s not even clear where it belongs, whether it’s a C++, C#, ADO, scripting or other article – I can only guess from the apparent style being used. I don’t know how people working on all these other areas get along and whether they find it equally difficult to filter on articles that relate to their own development base. Maybe they’re better off, or maybe they’ll just avoid the VS.Net search facilities altogether? I suppose because of my difficulties finding appropriate offline documentation, I should be googling instead. However, since I am stuck in a tiny SW development team inside a big industrial company, my access to the internet is limited.
And considering how slow offline help already is, I don’t care to make it even slower by channeling it over the internet… All said and done, if I had a set of actual books containing all the documentation I would lose a lot less time digging for it!

I find the documentation itself quite good. The consistent layout makes it easy to find what you’re looking for, but often I find the help a little too vague or that the examples are too simple. It seems though that the later documentation is getting better (e.g. WPF). What I miss is the ability to go directly to a class’ interface or base class. For instance, looking at the System.Windows.Controls.Primitives.ButtonBase, I would like to be able to go to the ICommandSource, but there is no link in the ButtonBase documentation for that. One thing that irritates me though, is that the “Language Filter” gets reset from time to time. Haven’t figured out exactly when it happens, but every once in a while I have to go through the list and filter out my preferred languages. When it comes to speed, I must agree with some of the inputs here that it certainly could have been faster. I use the online version so I expect a bit of latency, but it seems that the actual rendering of the page is a bit slow. Also, it would have been a really nice feature if I could go on the online version and choose to “download this documentation locally”. Maybe with some options of what parts of the docs to download (based on namespace and/or technology) and that the download respects my language settings (that is, don’t download any VB-content when I have checked off only C#). And the offline content should then of course always be synchronized with the online version (using some kind of background sync).

I’m sorry to say that VS and .NET documentation is horrible. F1 is SLOW! It takes less time for me to type in a search term and get excellent, useful results that have often been reviewed (and corrected or expanded upon) by web site visitors.
F1 brings up poor search hits, and the hits that at first seem like they will be useful end up being vague or missing important details. It takes *far* too long to load; I can open another browser session, Google for the answer and find it while waiting for this lumbering behemoth to open up. I also find it difficult to find actual *useful* information and, often, meaningful examples. Many pages tell you how something is declared in four languages and what the data types are (which you can get from Intellisense – now that is truly an excellent idea), but give no indication of how it should be used or links to anything useful. I would describe it as "full of information, but short on knowledge".

I second Mike Johnson’s comment. In general, I do find the documentation to be mostly helpful; however, I find that the examples are often somewhat weak & I rarely take it as gospel without first checking other sources. Codeproject is a fantastic source of information and examples; MS would do well to partner with CP to use some of the community content provided there. I never use Live Search any more as it’s just too slow. When I want results, I want relevant results & I want them ASAP. By far & away the majority of developers I know or have worked with use Google when they need to search MSDN. I also have no idea why that tree view for navigation is still there. I’d love to be able to set a preference to have it never show up again. Maybe other people find it useful but I never have. It’s the first thing I toggle when I use MSDN. All in all, not too bad but could most definitely be better.

"Although, it does seem from the feedback that if we could improve performance of offline Help and F1 folks wouldn’t need to search online as much. Does that seem right to you?"

I actually prefer offline help, but the load time is horrible.
I mitigate this by opening the documentation viewer first thing every morning, but if I press F1 it has to load a new window; the end result is I rarely use F1. I used to use MSDN online for documentation. In the past, the tree view was a separate frame that was loaded once. This was awesome, and MSDN was fast. When MSDN switched to the "improved" layout (with the angry, hostile red color scheme that doesn’t match the calm blue scheme of the documentation), the frame was replaced with a tree view that has to be reloaded every time. Every. Single. Time. The page doesn’t display until the tree view has finished downloading, since it’s part of the page. This is why, after switching from offline help to online help due to having a faster connection, I’m moving back from online help to offline help. If I’m browsing several different members, I don’t want to wait 10-15 seconds per page.

The Good:
– context sensitive help is awesome
– index and searches are great (local)
– agreed, even with its deficiencies, I’d take this documentation over almost any other product documentation out there. In fact, yes, I agree it’s the "gold standard" among product/packaged documentation… to be clear, I’m not saying it’s "awesome", it’s a comparative assessment.

The Bad: While it’s easy to find the topic (local), the way it’s written needs work. This goes for online MSDN as well. When you need a separate site, i.e. ASP.NET, to make sense of documentation, well, that’s a big sign. On that note, why doesn’t VS help include MSDN blogs, ASP.Net blogs, forums and basically all things .Net related, coming from the source (Microsoft itself) in the search option??? As an example, Scott Guthrie’s blog is just, well, awesome. It’s proven to be more productive to start off at his blog (skip all the Google searches, etc.) to get clarity on some mind-twisting verbiage found in documentation.
If this was a case of "the product came out first" before the blogs, then I’ve asked in the past for a way to customize the search function in VS so that users can pick/choose their "trusted content" (as additional sources only, not a replacement for local documentation). We all think, read and comprehend differently, so it seems to me that allowing us to pick additional searchable content to search (instead of whatever is offered out of the box) makes sense? I mean, that’s what most people here are saying, right? "I use Google" this and that. Why don’t you incorporate the same web search tools into VS (which you can "point" to a specific domain)? Even offer what you already know are "trusted sources"? Like ASP.Net blogs, MSDN blogs, a host of Microsoft forums, etc.? Online MSDN is just a chore to use. There’s a comment above on using frames, which is right on. I personally think frames in this use is fine, but it’s just "stupid" in the way it works. That TOC refreshes on practically each link… negating the whole point of a frameset TOC – don’t you think? I’m actually assuming it’s a frameset… regardless, it needs work… in fact, if it’s not a frameset, then think about doing so, properly. Is there a survey anywhere for the native code/C++ side of things? I’d love to see the comments/make comment?

Mike

I find that every major change to the online help has reduced its usability. I attribute this to the following factors:

1: The amount of information has exploded along with the complexity. I remember when developers were confounded by some 800 APIs. But the documentation was well thought out, and mostly well documented. It was relatively easy to find, partially because you could browse through it and learn as you did.

2: Microsoft has let technology drive the functionality of the online help, rather than addressing end-user functionality.

3: Much of the help material now appears to be auto-generated (from developer comments?)
and less documentation expertise is included in formulating good documentation.

4: When I’m looking for something by its functionality and don’t know its name, I often find myself looking at something which sounds like it may be related, but unless I already know what it does, there is no way to figure it out from the description. In most cases the description of a function is just a rewrite of the function name. That is not helpful.

5: Examples (when available) tend to be too trivial. A good example will avoid the completely trivial while remaining simple enough to not obscure the main points. A good example may be enhanced by comments regarding special considerations (border conditions, special cases, design and developer considerations, what causes errors, etc.). Heck, why not publish the unit tests for the function…

7: I tend to do unfiltered searches, but pick from the result list what seems to be the most appropriate.

8: I much prefer my online help to be LOCAL ONLY. However, I don’t always upgrade my MSDN Library, so the local results will become outdated. I would love two simple changes:
– Provide a button that allows me to check for (and download) updates to the CURRENT HELP PAGE.
– Provide a search result button for "Check for additional search results online" which would check current result entries for updates as well as identify hits that are new. I should then be allowed to download the new and updated articles (one/selected or all).

Most points captured above. I found the most interesting to be "use the assemblies themselves as documentation". Why? Because they can’t lie. I would argue that the #1 most frustrating thing for a developer is to code something small according to the instructions (documentation) and spend days trying to figure out why it doesn’t work because a) you have no source code. b) the documentation did not completely describe the behavior of the code you are trying to use.
People may laugh, but one of the best aspects of MFC (remember that "old" framework) was that you had the source code for it, and if you coded to the doc, and it was wrong, you could at least drill into the source code to try to find the unexplained behavior. The source code may have been poorly written or optimized to the point of incomprehensibility, but at least you could invest time and gain knowledge to use in the next design. With the .NET framework, you invest time and gain opinion, which then has to be tested to determine if the opinion is factual. And although I appreciate some assemblies having pdb files online, if they aren’t ALL available, then it is next to useless.

Hmmm… The above comments seem to confirm that it will be surprising if you ever create documentation that makes everyone happy… From a personal point of view: while some people may say your documentation represents a "Gold Standard", at best that would have to be a relative comparison to the types of documentation they regularly use. From my point of view, I find the documentation style "functional" at best, generally dry and mostly terse. I never use it other than to clarify the syntax on a function I already know something about. It is useless to try and use it to "learn" about anything new… the most you may find is that a certain function may exist… then the very next thing you always do is use your favourite search engine to find real-world *useful* examples of how to use the damn thing. I know that the help manual is not meant to replace good quality books, but there is a point where you wonder why a few more USEFUL examples that explain what the functions do could not be incorporated.
There are 3 basic types of information people are looking for when they look for help:

1) Syntactical help: they know what they want to do but they need a memory jogger to clarify the syntax or a particular option (a lot of this is taken care of by intellisense now).

2) Functional support: you have a "rough" idea of what you want to do, but it is in an area that is new or unfamiliar, and therefore you need help to find a method or function to do what you want, together with a good example of how to use it. (This, in my opinion, is the number one reason why especially new programmers "reinvent" functions that are already built in… they just didn’t know a function already existed to do what they wanted… because they couldn’t find it in the documentation.)

3) Debugging support: you have a problem in the code that is not doing what you thought it should be doing… once you have checked that you haven’t done anything silly like passed the wrong parameter (syntactical support above), you’re then stuck trying to find good examples of how the function is "supposed" to be used and any "gotchas"… and help from other people who want to do the same thing, who have gone through the pain of discovering that you really should be using a completely different function to do what you want.

Unfortunately the documentation is usually only good for level 1 issues. That is not to say I have not found some very useful "nuggets of information", but it is usually the exception, not the norm.

William

Performance of offline Help is very important; offline search should be as fast as google is. I have enough RAM to keep the index in memory but it still seems to take forever. Often there are 3 or 4 pages on a subject that mostly repeat the same information that could have been combined into a single more useful page. It is as if each page was written by a different person who did not have permission to change the other pages, so they just wrote a new page.
[Original article] Some Results from Visual Studio and .NET Framework Developer Documentation Survey [Original publication date] 11
https://blogs.msdn.microsoft.com/brada/2008/08/11/some-results-from-visual-studio-and-net-framework-developer-documentation-survey/
How Deeply Should Feature Flags Be Embedded In Your Application?

At work, we've been using feature flags (otherwise known as feature toggles) in our application for about a year. The usage started out in a single place; then, it was sprinkled throughout the application, as needed. Unfortunately, we never really took a moment to think about how feature flags should be used; or, how the use of feature flags should be reflected in the application architecture. This has caused a certain degree of friction when trying to refactor code. So, I wanted to take a minute and think out loud about how deeply the concept of feature flags should be woven into the fabric of the application.

For a little background, a feature flag is a mechanism for deploying code to a production environment in such a way that it is not immediately available to all users. At work, we use LaunchDarkly with ColdFusion, which is basically a "Feature Flags as a Service" provider; but, I've also played around with rolling my own Redis-backed feature flag system in ColdFusion. Feature flags typically work by returning a True / False Boolean indicator based on some sort of identifier. This identifier might be a user's unique account identifier. But, it could just as easily be something like an IP address, a security group (ex, "Beta Users"), or a "bucket" allocation. The returned Boolean value can then be used to expose user interfaces (UI) or manage request fulfillment.

Despite the open-ended nature of a feature flag's identifier, the feature flag is almost always associated with a user or group. After all, feature flags have to behave consistently; and, they can only do that if a given user is consistently associated with the same set of feature flags on every request to the application. Randomly assigning feature flags on every request would immediately defeat the purpose of a feature flag. That said, at what point in the request life-cycle should a request be associated with a set of feature flags?
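The "bucket" allocation and consistent-assignment idea described above can be sketched with a small hash-based helper. This is a hypothetical illustration of the general technique (it is not the LaunchDarkly implementation): hashing the flag name together with the identifier means a given user always lands in the same bucket for a given flag, so the flag behaves consistently across requests without any random assignment.

```python
import hashlib

def flag_enabled(flag_name: str, identifier: str, rollout_percent: int) -> bool:
    """Deterministically assign an identifier to a rollout bucket (0-99).

    Because the bucket is derived from a hash of (flag_name, identifier),
    the same user always gets the same answer for the same flag, which is
    exactly the consistency property feature flags need.
    """
    digest = hashlib.sha256(f"{flag_name}:{identifier}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# The same user gets the same answer on every "request".
first = flag_enabled("new-checkout", "user-42", 50)
assert all(flag_enabled("new-checkout", "user-42", 50) == first for _ in range(100))
```

Note that hashing the flag name into the bucket also means different flags slice the user base differently, so a user in the 10% bucket for one flag isn't automatically in the 10% bucket for every flag.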
To frame the conversation, here's the way that I think about the layers of a web-based application. Keep in mind, of course, that this is just my own personal mental model for application design. Your application may be different; your mental model may be different; but, at least this graphic will anchor the conversation.

When I look at this graphic, the first thing that I notice is that the deeper the request goes into the application, the farther away the request gets from the User. This means that the deeper portions of the application are quite likely to have less information on which to base the feature flag assignment. A method call in the bowels of the business layer may have a "User ID"; but, what about the user's session data? Or the client IP address? Or the browser's User-Agent string? Probably not. The closer that we can move the feature flag assignment to the user's request, the more informed our assignment will be.

So, what about doing the feature flag assignment in the Workflow / Use-Case layer? I view this layer as the top layer of the "core" application. Meaning, this layer encapsulates the web-agnostic portion of the application. As such, it makes it hard to think about using this layer to perform browser-related decisions. For example, what if I wanted all "Safari Mobile" users to see a particular feature? To perform this assignment from within the use-case layer, I'd have to either break encapsulation and reach directly into the request data, which is ewww gross; or, I'd have to start passing the browser's User-Agent string into the use-case method, which also feels icky.

It seems to me, the only place to perform the most informed feature flag assignment would be in the Controller layer.
This is where you know the most about the incoming request from a user-agnostic standpoint (ex, user-agent string, IP address, HTTP cookies); but, it's also the place where we know enough about the web-application as a whole such that we can easily access session, security, and user data.

If I could go back and re-architect the way our application uses feature flags, I would ensure that all feature flag assignments were done in the Controller layer. Then, those feature flags would be passed into the Workflow / Use-Case layer as static values. The use-case layer could then marshal requests using these pre-calculated feature flags, perhaps even passing the feature flags down into the lower layers (though that starts to feel a bit funky as well). A side-effect of this controller-initiated set of feature flags would be that the internal layers of the application are easier to test. Since each subsequent layer's actions would be based on inputs, the control flow and output calculations would be much easier to reason about.

I'm still relatively new to the concept of feature flags. But, I'm experienced enough to know that we've made some unfortunate decisions in the way that we've implemented feature flags. We've made decisions that make refactoring the code harder because we've made feature flag assignments too deeply within the layers of the application. If I could go back and do it over, I would keep such decisions as high up in the application architecture as possible.

Reader Comments

Been thinking about this a great deal myself recently, as I too find it tends to litter itself across the codebase. The trouble is, its very nature is that you need access to it at all points throughout the stack: perhaps I'm adapting the UI slightly based on it, perhaps I'm locking access to a given controller/action, perhaps I'm augmenting some JSON data returned by an API... the list goes on.
My current approach is to bundle the feature flags into my authorization process, which has a helper method exposed in the controller and view layers:

<% if can?(:update, @profile) %>

This then authorises the user to perform that action, based on their role/permissions, but also against a feature flag. Accessing this deeper down the stack is still something which I'm bugged by though, as these things should not really have access to the request context, and therefore the user. Tricky.

@Robert,

Trust me, we're currently in the same boat. We have a "FeatureFlagService" that we can inject into anything. So, it's really easy for any developer to just inject that into some service and call:

if ( featureFlags.getFeature( user.id, "newLoggingThing" ) ) { .... }

The problem that I've run into, which made me realize that something was terribly wrong, was that for some of these types of operations, I had to change "user.id" to something else; something that I didn't have access to in the current execution context because I was too deep in the application. The work-around for us was to push some logic into the inner workings of the feature-flag service itself. The problem with this is that it required a database lookup, which may now be called several times in any given request depending on how many parts of the code reference feature flags. We've tried to mitigate the issue by adding internal caching for the DB request; but, clearly, the architecture is making it hard to refactor.

@Ben,

Glad to know other people have similar challenges. I think it's that whenever I find myself accessing the request context in the model it always smells a little; but without refactoring a bunch of things within the domain model you just have to suck it up. One way I've seen this being handled is to allow instantiation/injection of the 'scope' into a given model, which can then be used to perform contextual checks, so something like:
class SomeController
  bar = Bar.new(context: current_user)
end

class Bar
  def do_thing
    if feature_flags.get_feature(context.id, "newLoggingThing") ...
  end
end

This allows the context to be switchable down the line, and doesn't tie the model directly to the user class, or the way in which the user is found or authenticated, etc. With regards to performance, obviously you're dealing at massively different scales to me, but we stick this stuff in Redis rather than in a traditional DB.

@Rob,

That's very interesting. We don't really have a "domain model" that allows for new-ing objects; we mostly have a pile of Singletons that "do stuff". Basically a bunch of procedures wrapped in superficial OO. So, we can't really swap contexts. But, we can certainly pass data into the methods since the singletons are, more or less, stateless.
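The controller-initiated approach the article recommends can be sketched briefly. This is a minimal Python sketch with hypothetical names (not the author's ColdFusion code): the controller, which can see the full request context (user, user-agent, cookies), resolves every flag once and hands the use-case layer plain pre-calculated booleans, so the inner layers never touch the request context and are trivial to test.

```python
def resolve_flags(user_id, user_agent, flag_service):
    """Controller-layer helper: compute every flag once, up front,
    using all the request context that only the controller can see."""
    return {
        "newLoggingThing": flag_service.get_feature(user_id, "newLoggingThing"),
        # Browser-based targeting stays in the controller, so the
        # use-case layer never needs the User-Agent string.
        "safariMobileFeature": "Safari" in user_agent and "Mobile" in user_agent,
    }

def fulfill_request(payload, flags):
    """Use-case layer: receives flags as static values, so its behavior
    is a pure function of its inputs and is easy to unit test."""
    if flags["newLoggingThing"]:
        return f"logged:{payload}"
    return payload

class FakeFlagService:
    """Stand-in for a real flag provider, to show how testable this is."""
    def get_feature(self, user_id, name):
        return name == "newLoggingThing"

flags = resolve_flags("user-42", "Mozilla/5.0 (iPhone) Safari Mobile", FakeFlagService())
result = fulfill_request("order-1", flags)
```

Because `fulfill_request` depends only on its arguments, testing it requires no request mocking at all, which is the "easier to test" side-effect the article mentions.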
https://www.bennadel.com/blog/3172-how-deeply-should-feature-flags-be-embedded-in-your-application.htm
How to tune Hyperparameters with Python and scikit-learn

Introduction: Whenever we train a machine learning model with a classifier, there are some settings we choose before training for tuning purposes. These values are hyperparameters. We generally use them in KNN classification or in SVM. In the case of other algorithms, the weights, terms, and/or particular values that change the whole result of the model just by being changed are hyperparameters. But this is a technical thing; let us start with the basics and the coding part as well. This blog is going to explain hyperparameters with the KNN algorithm, where the number of neighbors is a hyperparameter. It also covers two different search methods for hyperparameters and which one to use.

Uses: Hyperparameters are also defined in neural networks, where the number of filters is a hyperparameter. We mostly use hyperparameters in CNNs in the case of neural networks. For example, if we are training a model on images of cats and dogs, or cars and two-wheelers, then we use a CNN to train it, and there we use hyperparameters. Also, when we work with sound data or apply KNN, GMM, or SVM type algorithms, we prefer hyperparameter tuning.

How it works: Here this blog is actually using the KNN algorithm.

- In the KNN algorithm, the hyperparameters are the number of neighbors (n_neighbors) and the similarity function, for example the distance metric.
- There are two searches over hyperparameters: grid search and randomized search.
- We define the hyperparameters in a param dictionary, as shown in the code, where we define n_neighbors and metric.
- After that, we can use either grid search or randomized search to evaluate the candidate values.
- Most of the time grid search is expensive and costly.
- In randomized search we need to wait, and the longer we wait the better the result can get, but this makes us impatient, so we reduce the number of iterations.
Coding part:

import time
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV

# first of all, the param dictionary:
params = {"n_neighbors": np.arange(1, 31, 2),
          "metric": ["euclidean", "manhattan"]}

# second step: grid search
model = KNeighborsClassifier(n_jobs=-1)  # use all CPU cores
grid = GridSearchCV(model, params)
start = time.time()
grid.fit(trainData, trainLabels)
accuracy = grid.score(testData, testLabels)

# Always choose one between grid search and randomized search.
# randomized search:
grid = RandomizedSearchCV(model, params)
start = time.time()
grid.fit(trainData, trainLabels)
accuracy = grid.score(testData, testLabels)

In the above code:

- First of all, we are importing the libraries. (Note: in current versions of scikit-learn, GridSearchCV and RandomizedSearchCV live in sklearn.model_selection; the old sklearn.grid_search module has been removed.)
- Here I am showing you both search methods, but in actual training you're supposed to use one of them according to your need.
- In the param dictionary we can see n_neighbors, which is the number of neighbors, and metric, which is the distance function to use.
- Then we fit this in grid search and wait for the output; it can take time according to the hardware of your machine.
- It is optional; you can also use randomized search for hyperparameter tuning.

Conclusion: After setting up the param dictionary, we have two choices: either go for randomized search or grid search. It totally depends on the user what is needed in that particular condition. Most of the time users prefer randomized search, unless the training data is small and can be searched exhaustively using grid search.
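The snippet above assumes trainData/trainLabels already exist. For completeness, here is a fully runnable version of the same workflow on a small built-in dataset (iris is used here purely for illustration; it is not from the original post), using the current sklearn.model_selection imports:

```python
# End-to-end hyperparameter search for KNN on a toy dataset.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# n_neighbors and the distance metric are the hyperparameters we tune.
params = {"n_neighbors": np.arange(1, 31, 2),
          "metric": ["euclidean", "manhattan"]}

# Grid search tries every combination (15 x 2 = 30 candidates here)...
grid = GridSearchCV(KNeighborsClassifier(), params, cv=3)
grid.fit(X_train, y_train)
grid_acc = grid.score(X_test, y_test)

# ...while randomized search samples a fixed number of combinations,
# which is much cheaper when the search space is large.
rand = RandomizedSearchCV(KNeighborsClassifier(), params, n_iter=10,
                          cv=3, random_state=42)
rand.fit(X_train, y_train)
rand_acc = rand.score(X_test, y_test)

print("grid best:", grid.best_params_, "accuracy:", grid_acc)
print("rand best:", rand.best_params_, "accuracy:", rand_acc)
```

On a tiny 30-candidate space like this, grid search is perfectly affordable; randomized search pays off when the parameter grid has hundreds or thousands of combinations.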
https://www.codespeedy.com/tune-hyperparameters-with-python-and-scikit-learn/
THE UNITED STATES, BRITAIN & THE COMMONWEALTH IN PROPHECY

© 2016 Church of God, a Worldwide Association, Inc. All Scripture quotations, unless otherwise indicated, are taken from the New King James Version (© 1982 by Thomas Nelson, Inc.). Used by permission. All rights reserved.

Author: Erik Jones
Contributing Writers: M. Noland Morris, Ph.D.; David Treybig
Publication Review Team: Peter Hawkins, Jack Hendren, Don Henson, Harold Rhodes, Paul Suckling
Editorial Reviewers: Mike Bennett, Clyde Kilough
Doctrine Committee: John Foster, Bruce Gore, Don Henson, David Johnson, Ralph Levy
Design: Elizabeth Glasgow

To understand what lies ahead, we must understand these nations' biblical identity and prophesied future. What does the future hold for these nations and our world?

TABLE OF CONTENTS

Introduction
I. The Amazing Story of God's Promises to Abraham
II. The Rise and Fall of Ancient Israel
   Genesis 49: A Key to Understanding Israel's End-Time Identity
   Did Ancient Israel Receive All That Was Promised to Abraham's Descendants?
   David's Throne in Prophecy
III. The Migration of the "Lost" Israelites
IV. Britain and the United States Inherit the Birthright Blessings
   God's Intervention in British and American History
V. What's Ahead and What Should You Do About It?

INTRODUCTION

"I am God, and there is none like Me, declaring the end from the beginning, and from ancient times things that are not yet done, saying, 'My counsel shall stand, and I will do all My pleasure'" (Isaiah 46:9-10).

Today's world is teetering on the brink of chaos. Wars, nuclear weapons, ethnic conflict, terrorism, belligerent nations, religious and ideological divisions, financial meltdowns, disease epidemics and many other factors make our world extremely dangerous.
Many worry about the future for their families. Thousands of years ago, God inspired the writers of the Bible to foretell a future period called "the time of the end" (Daniel 8:17; 11:35, 40; 12:4, 9)—a time of war, chaos, famine and disease. While directly affecting the entire world, it will particularly hit hard those living in the great Western nations—the United States, the Commonwealth, the nations of Western Europe and the Jewish nation of Israel. The world order, as we know it today, will be shattered, producing the greatest time of warfare and suffering in the 6,000 years of recorded history.

Many thinkers today see that a crisis is coming simply by observing world events, but it is in Bible prophecies that we find how and why this future time will come. In order to understand this frightful future, one must understand the reason for the world order that has existed for the last 200 years. Only by understanding the past and present can we understand the future.

For the past two centuries, any semblance of order has been maintained by primarily two nations—Great Britain and the United States. When threats have arisen endangering humanity—tyrants and dangerous ideologies like militarism, Nazism, Fascism, Marxism and Islamic fundamentalism—it has generally been the Anglo-Saxon nations that have fought and defeated these forces and maintained global stability. Why have these nations risen to such global prominence? And will they remain the world's most powerful nations, holding off the forces of destruction threatening the world?

In the Bible God claims that He possesses the ability to both see the future and direct the course of history. Foretelling the future is called prophecy, and prophecy, in fact, is one of the greatest themes of the Bible, comprising more than a quarter of its content!
God's claim of the authority to declare what will happen in the future—before it happens—lies in His exclusive ability and power to make it happen (Isaiah 46:10). The Bible contains prophecies that have been or are being fulfilled, as well as prophecies that are yet to be fulfilled, all witnessing to the existence and power of the true God.

This booklet focuses on one of the most amazing examples of fulfilled prophecy in the Bible. It is a prophecy that has impacted the lives of millions of people, but is understood by very few. It is largely missed or ignored even by those who profess belief in God and His Word. Yet a basic question must be answered: Could the Bible's prophecies have overlooked the great modern powers that have impacted our world so profoundly in the last 200 years—the British Empire (and Commonwealth) and the United States of America?

These great nations—the Anglo-Saxon peoples—have held immense national power, wealth and influence such as the world has never seen, in spite of many reasons why their rise to power should never have happened. Their history includes many stories of miraculous events that seem to show divine providence on their behalf. Was the rise of Great Britain and America merely good fortune—or is there more to the story?

This booklet presents the bold thesis that nearly 3,500 years ago the Bible predicted the rise of the British and American peoples to global dominance! Not all of the promises and prophecies about the descendants of the patriarch Abraham were fulfilled in the ancient nations of Israel and Judah—some were, in fact, end-time prophecies to be fulfilled in the time leading up to the second coming of Jesus Christ! Understanding these ancient prophecies not only provides fascinating insight into the whys of history and the identity of modern nations, but it provides a key to understanding other biblical prophecies that are yet to be fulfilled.
This vital key of understanding unlocks many mysteries of history and prophecy and can strengthen your faith in God's existence and the Bible's reliability. You will learn that the Bible truly is a living book, essential for understanding the world today, as well as the only authority for how we should live. You will learn that God is truly the God of history, the One who shapes events to fulfill His promises and accomplish His will (Daniel 2:21). Keep reading to discover the incredible identity of the United States and British Commonwealth of Nations in Bible prophecy! Learn about the origins and future of these nations—a future that includes both trial and hope. In the process, you will also learn what God expects of these nations—and you—today! The message of this booklet will change your life!

CHAPTER 1: THE AMAZING STORY OF GOD'S PROMISES TO ABRAHAM

"I will make you a great nation; I will bless you and make your name great" (Genesis 12:2).

[Artwork by Keith Larson: Jacob crosses his arms, giving Ephraim the primary blessing with his right hand and Manasseh the secondary blessing with his left hand.]

The Bible's beginning book, Genesis, includes foundational material essential to understanding everything that follows. Genesis opens by revealing God as the creator of the entire universe, earth and all physical life (Genesis 1). We read of God creating humankind in His image (verse 26), the first humans quickly rejecting Him (Genesis 3) and the resulting widespread evil that led God to destroy the majority of mankind through a worldwide flood (Genesis 6-7). We then see humanity build a great civilization—Babel—and again openly defy God, leading Him to disperse people throughout the earth to form separate civilizations (Genesis 11).
Interspersed in this sweeping story, we see rare individuals standing out as faithful to God in the midst of evil and corrupt societies; people like Abel, Enoch and Noah. Then Genesis 11 introduces us to one of the most influential men in history—one whose descendants continue to shape the world today. That man was Abraham.

Abram's calling

He wasn't always known as Abraham. When he's first presented, we see his name is Abram. Abram was from Ur of the Chaldeans, a commercial city located in ancient Mesopotamia (modern Iraq), but his way of living differed from those around him. God would eventually choose to begin through this godly man a unique family—a family that would grow into nations through which He would impact human history in many important ways. God did not do this haphazardly. He used a series of tests to see whether Abram would faithfully obey Him. As Abram passed each of these tests, God revealed more details about his future descendants.

The first test

We find Abram's faith first tested in Genesis 12:1: "Now the Lord had said to Abram: 'Get out of your country, from your family and from your father's house, to a land that I will show you.'" Hebrews 11:8 tells us the test of faith lay in Abram's having no idea where he was going. God had, however, promised Abram that rewards came with obedience. Among these key components of God's promise to Abram was the assurance that his descendants would attain national greatness. But this promise was conditional on Abram's obedience.

What was Abram's response? To delay? To ask for more information? To convince God to let him stay where he was? No. "So Abram departed as the Lord had spoken to him" (Genesis 12:4). Abram believed God and obeyed without question. After Abram and his household arrived in the land of Canaan, God told him that this was the land He would give to Abram's descendants. Abram responded by building an altar to the Lord.
Abram was growing in his relationship with God (verse 7) to the point that he would eventually be known as God's friend (2 Chronicles 20:7; James 2:23). As we follow the story, we will see that God expanded on His promise, adding many more details about Abram's descendants becoming a great nation. But a huge obstacle remained—Abram and his wife Sarai were childless. How could a great nation arise from a man with no offspring?

The second test

So God appeared to Abram again, revealing that "one who will come from your own body shall be your heir" (Genesis 15:4) and telling him to "look now toward heaven, and count the stars if you are able to number them. … So shall your descendants be" (verse 5). On the surface, God's promise of an heir and countless descendants through Sarai, who was old and well past her childbearing years, seemed impossible! But Abram was unfazed and passed this second test by trusting in God's ability to do what was humanly impossible. He "believed in the Lord, and He [God] accounted it to him for righteousness" (verse 6). Because of his monumental faith, God declared Abram as righteous. The Bible consistently ties the concept of righteousness to obedience to God (Psalm 119:172). Faith is belief, but that belief is verified by one's actions (James 2:20).

After Abram so confidently demonstrated his faith, God formalized His promise as "a covenant" in which He bound Himself to fulfill His promise to make Abram a "great nation" and to give his descendants the land of Canaan as an inheritance (Genesis 15:18-21). Though Abram and Sarai were faithful, they were not perfect. In Genesis 16, we find that Sarai could not see past the physical barrier of her age and infertility, so she proposed a solution: use her personal maidservant Hagar as a surrogate mother (verse 2). Abram agreed and took Hagar as a second wife (verses 2-4).
From this union came a child, Ishmael, who would become the progenitor of some of the Arab peoples.

The third test

Thirteen years after the birth of Ishmael, God appeared again to Abram. All this time, Abram had assumed that Ishmael would be his heir, the child through whom God would fulfill His promises. But God had other plans. His promise would not be fulfilled through human solutions—He was determined to fulfill His promise through a divine miracle! God reaffirmed the covenant He had made years earlier (Genesis 17:1-2), then expanded the promise even further! "As for Me, behold, My covenant is with you, and you shall be a father of many nations" (verse 4, emphasis added throughout). From Abram, God said, would come not just one nation, but many nations. Then, to impress on Abram the significance of this expanded promise, God changed his name from Abram ("exalted father") to Abraham ("father of a multitude"). He also changed Sarai's name (which means possibly "my princess" or "she that strives") to Sarah ("princess"), for she would be "a mother of nations" and "kings" would come from her and Abraham (Genesis 17:5-6, 15-16). God also added other new promises to His covenant with Abraham:

• "I will make you exceedingly fruitful."
• "I will make nations of you."
• "Kings shall come from you" (verse 6).

All of these promises described physical blessings to be fulfilled through Sarah, who now was 90 years old! "She shall be a mother of nations; kings of peoples shall be from her" (verse 16). This prospect seemed humanly implausible, but Abraham would soon understand how God is truly "the God who does wonders" (Psalm 77:14). But again, these covenant blessings were conditional—Abraham would have to "walk before Me and be blameless" (Genesis 17:1). He would also have to fulfill a third test: He and his descendants would have to be circumcised (have the male foreskin surgically removed) as "a sign of the covenant" (verse 11).
Though the prospect of undergoing this procedure as an adult must have been very uncomfortable and unpleasant, Abraham, Ishmael and the other males of his household obeyed and were circumcised (verses 23-27). To learn more about circumcision, read "The Sign of Circumcision" on the Life, Hope & Truth website.

The fourth test

With the covenant and promises sealed, all that was needed was an heir. Although the likelihood of a 90-year-old woman becoming pregnant seemed ludicrous, God reminded Abraham and Sarah, "Is anything too hard for the Lord?" (Genesis 18:14). Then, as promised, "the Lord visited Sarah as He had said, and the Lord did for Sarah as He had spoken. For Sarah conceived and bore Abraham a son in his old age" (Genesis 21:1-2). They named him Isaac, which means "he laughs," likely a reminder of Abraham and Sarah's prior amusement at the idea of their having a child. But Isaac was indeed the son through whom the promises would be fulfilled—"in Isaac your seed shall be called," God declared (verse 12). As we will see, the lost tribes of Israel would even be identified by terms derived from the name Isaac. Abraham, now 100 years old, had experienced many trials, hardships and tests throughout his lifetime, yet now he finally held the son born from the wife of his youth, Sarah. The promises now seemed assured. But God had one final test in store for Abraham—the hardest test of his life. Some years later, God told him, "Take now your son, your only son Isaac, whom you love, and go to the land of Moriah, and offer him there as a burnt offering on one of the mountains of which I shall tell you" (Genesis 22:2). One can scarcely imagine Abraham's feelings as the God he had faithfully obeyed now gave him this command. But Abraham hadn't followed God this far only to reject Him now.
He had seen God's miraculous power firsthand through Isaac's conception and birth, and had been assured that God would fulfill the promises through Isaac. So Abraham made his choice—he obeyed. Hebrews 11:17-19 clearly explains his rationale. As Abraham bound Isaac to the altar and lifted his knife, fully intent on obeying the command, "the Angel of the Lord" (which careful study shows was the One who became Jesus Christ; see our article "The Angel of the Lord") dramatically stopped him! "Abraham, Abraham," said a powerful voice. "Do not lay your hand on the lad, or do anything to him; for now I know that you fear God, since you have not withheld your son, your only son, from Me" (Genesis 22:11-12). Abraham had passed God's final test, proving his faithfulness in the ultimate trial. God then reinforced the promises as unconditional and added yet more key elements (verses 16-18). Note the two distinct components of this promise:

• Material blessings for Abraham's physical descendants. National greatness would come upon Abraham's physical descendants. Their population would grow, becoming practically innumerable, eventually possessing the gates (passages) controlling the economic and military movements of competing nations. How these birthright blessings were literally fulfilled is covered in greater detail later in this booklet.

• Spiritual blessings for all people. This second component hearkens back to God's original promise in Genesis 12:3, that in Abraham "all the families of the earth shall be blessed." In the New Testament God revealed He was pointing to the opportunity for humanity to receive salvation from sins through Jesus Christ (Galatians 3:8, 16).
Peter quoted this in Acts 3, explaining its meaning in verse 26: "To you first, God, having raised up His Servant Jesus, sent Him to bless you, in turning away every one of you from your iniquities." This was connected to the scepter promise through the royal line of David (Genesis 49:9-10; Luke 1:32; Revelation 5:5). Jesus' ministry was and continues to be one of reconciling sinners to God that we might be part of God's eternal Kingdom (Colossians 1:13, 19-23). His death made it possible for all humans to be forgiven of their sins and receive eternal life (John 3:16). This great act of grace is freely given to all who repent of their sins, have faith in and follow Jesus Christ as their Lord and Savior, are baptized, receive God's Holy Spirit and live as He commands. Those who do are called "Abraham's seed," regardless of their sex, race or nationality (Colossians 3:11; Galatians 3:28-29).

Birthright passed down to Isaac and Jacob

At Abraham's death, the unconditional birthright blessings were passed on to Isaac. Ishmael was Abraham's firstborn, but he did not receive the birthright. Instead, God decreed that it be passed on to Isaac, the son born of the union between Abraham and Sarah (Genesis 25:11; 26:2-5). Isaac and his wife Rebekah had twin boys, the firstborn named Esau, and the second Jacob (Genesis 25:25-26). Again, birth order would have given the birthright blessings to Esau and his descendants, but God intended the blessing to go to the younger Jacob (verse 23). Notice that Isaac's blessing of Jacob provides even more details about the physical birthright blessings: "Therefore may God give you of the dew of heaven [favorable weather conditions], of the fatness of the earth, and plenty of grain and wine [agricultural prosperity]. Let peoples serve you, and nations bow down to you. Be master over your brethren, and let your mother's sons bow down to you.
Cursed be everyone who curses you, and blessed be those who bless you!" (Genesis 27:28-29). These national blessings flowed from the blessing originally bestowed on Abraham (Genesis 28:4). As it passed to Isaac and then Jacob, more details were added.

One important event in Jacob's life needs special focus. Shortly before he was reunited with Esau later in life, a mysterious individual met Jacob in the dark of the night. This "Man" (actually the One who later came to earth as Jesus Christ, Hosea 12:3-4) "wrestled with him [Jacob] until the breaking of day" (Genesis 32:24). Though severely injured, Jacob persevered, insisting that the Man bless him. The blessing he received was a new name that acknowledged his growth in character. No longer would he be Jacob, meaning "supplanter," but now he would be called Israel, meaning "prevailer" or "overcomer with God" (verse 28).

[Sidebar: THE 12 SONS OF ISRAEL (JACOB). By Leah: Reuben, Simeon, Levi, Judah, Issachar, Zebulun. By Rachel: Joseph, Benjamin. By Bilhah (Rachel's handmaid): Dan, Naphtali. By Zilpah (Leah's handmaid): Gad, Asher.]

A few years later, God appeared to Israel (Jacob) and added another critically important detail to the birthright blessing: "God said to him: 'I am God Almighty. Be fruitful and multiply; a nation and a company of nations shall proceed from you, and kings shall come from your body'" (Genesis 35:11). This detail is vital to understanding the fulfillment of these blessings. Up to this point, we have seen the birthright blessing expand from "a nation"
Collectively, the descendants of Jacob’s 12 sons would be called the children of Israel or the 12 tribes of Israel. Remember, Jacob’s name was changed to Israel. So, at this stage Israel meant the 12 sons of Jacob and their descendants. By birth order, Jacob’s firstborn son was Reuben. Normally, it was the firstborn who would receive the birthright blessings. But God chose not to give Reuben the birthright blessings because of a particular sexual sin (Genesis 35:22; 1 Chronicles 5:1). Instead, God chose Joseph, the firstborn son of Jacob’s marriage with Rachel. We encourage you to read the fascinating account of Joseph’s trials and ultimate triumph in Genesis 39-45. Joseph’s story, which included betrayal by his brothers, years of trials and then eventual triumph in Egypt, foreshadows the story of his descendants in the end time. Once again, the passing of the birthright blessings to the next generation happened in a unique manner. At that time Jacob was living in Egypt and nearing his death. So Joseph brought two of his sons, Ephraim and Manasseh, to see their grandfather before he died, and it was at this visit that a remarkable series of events occurred.

Jacob adopts Joseph’s sons

After recounting how God’s blessing had been passed down from Abraham (Genesis 48:4), Jacob said to Joseph, “And now your two sons, Ephraim and Manasseh, who were born to you in the land of Egypt before I came to you in Egypt, are mine; as Reuben and Simeon, they shall be mine” (verse 5). Jacob was adopting Joseph’s two sons as his own for the purpose of the birthright. Instead of simply being passed to Joseph, the birthright blessing was divided between his two sons. Ephraim and Manasseh would not just be considered the sons of Joseph—they would also be considered the sons of Israel, or Jacob (verse 16).

Jacob crosses his arms

Joseph then brought the two boys to Jacob for the physical blessing.
THE BIRTHRIGHT BLESSING
God’s Promises to Abraham and His Descendants

A GREAT NATION

LAND AND NUMEROUS DESCENDANTS (Genesis 13:14-17)

MANY NATIONS AND KINGS (Genesis 17:4, 16): “As for Me, behold, My covenant is with you, and you shall be a father of many nations.” “And I will bless her [Sarah] and also give you a son by her; then I will bless her, and she shall be a mother of nations; kings of peoples shall be from her.”

THE GATE OF THEIR ENEMIES (Genesis 22:16-18)

AGRICULTURAL BLESSINGS AND GEOPOLITICAL DOMINANCE

NATION AND A COMPANY OF NATIONS (Genesis 35:11): “I am God Almighty. Be fruitful and multiply; a nation and a company of nations shall proceed from you, and kings shall come from your body.”

EPHRAIM: A MULTITUDE OF NATIONS; MANASSEH: A GREAT PEOPLE (Genesis 48:19): “But his father refused and said, ‘I know, my son, I know. He also shall become a people, and he also shall be great; but truly his younger brother shall be greater than he, and his descendants shall become a multitude of nations.’”

Generally a primary blessing would be bestowed by the right hand (which symbolized strength), so since Manasseh was the firstborn, Joseph situated him on Jacob’s right side and Ephraim on his left. But something unusual happened: “Israel stretched out his right hand and laid it on Ephraim’s head, who was the younger, and his left hand on Manasseh’s head, guiding his hands knowingly, for Manasseh was the firstborn” (verse 14). In essence, Jacob crossed his arms. Jacob then formally passed on the birthright blessing with these words: “Bless the lads; let my name be named upon them, and the name of my fathers Abraham and Isaac; and let them grow into a multitude in the midst of the earth” (verse 16). Note these key things from Jacob’s words:
• The name of Israel was specifically bestowed onto the sons of Joseph.
The descendants of Joseph are often referred to as “Israel” in Bible prophecies.
• The sons of Joseph would also carry the name of Abraham and Isaac, signifying that they would be the primary recipients of the birthright blessings passed down among the patriarchs.
• Only the physical blessings of national greatness were bestowed upon Ephraim and Manasseh. The promise of spiritual blessings would later be given to Judah (Genesis 49:8-10).

A nation and a multitude of nations

Joseph, assuming Jacob was confused, tried to move Jacob’s crossed arms (Genesis 48:17), saying to his father, “Not so, my father, for this one is the firstborn; put your right hand on his head.” But Jacob knew full well what he was doing (verses 18-20). God inspired Jacob to reveal keys necessary to identify Joseph’s descendants in the future:
• Manasseh’s descendants, when they fully inherited the birthright blessings, would become one single nation. This single nation would be described as great.
• Ephraim’s descendants would become a multitude, or group of nations, that would also be called great. The younger Ephraim was set before the older Manasseh, signifying that he was receiving the greater blessing. Ephraim would not only inherit more territory than Manasseh, but would receive the birthright blessings first.

INSET
GENESIS 49: A KEY TO UNDERSTANDING ISRAEL’S END-TIME IDENTITY

“Gather together, that I may tell you what shall befall you in the last days” (Genesis 49:1).

Shortly after bestowing the birthright blessings on Ephraim and Manasseh, Jacob gathered his 12 sons together. He was nearing death (Genesis 48:1-2, 21) and wanted to address them as a family. This was no ordinary farewell speech. Jacob must have grabbed their attention when he said, “Gather together, that I may tell you what shall befall you in the last days” (Genesis 49:1).
In biblical prophetic language, the “last days” refers to the era preceding Jesus Christ’s second coming (Deuteronomy 4:30; 2 Timothy 3:1; 2 Peter 3:3-4). This “last days” setting of the Genesis 49 prophecy about each of the 12 sons of Israel has compelled many history and prophecy students to examine these characteristics to try to find parallels in modern nations today.

The scepter promise

Before examining the characteristics of Joseph’s descendants, let’s consider Jacob’s prophecy to Judah, since it is crucial to understanding how God’s promises to Abraham were fulfilled: “Judah, you are he whom your brothers shall praise. … Judah is a lion’s whelp; from the prey, my son, you have gone up. He bows down, he lies down as a lion. …” This landmark prophecy shows the descendants of Judah have an important role in bringing salvation to the entire world (John 4:22; Romans 1:16). We recommend you read our Life, Hope & Truth article “All Blessed Through Abraham” to understand more about how Jesus Christ fulfilled this promise. See “David’s Throne in Prophecy” on page 46. The descendants of Judah are easily identified today, since they still bear a form of their ancestor’s name—Jews (a nickname for Judah). However, it is of utmost importance to understand the distinction between “Jews” and “Israelites.” Most people mistakenly assume the term Jews refers to all Israel, when in reality it primarily refers only to the descendants of the tribe of Judah, and others of the house of Judah, including Benjamin and Levi.

Jacob blesses his 12 sons by Harry Anderson (© by Intellectual Reserve, Inc.).

However, one should note that even though Jacob had earlier divided Joseph’s birthright blessing between his two sons, in this prophecy he still refers to Joseph, not Ephraim and Manasseh, thus keeping them unified under their father’s name.
This is a clue that Ephraim’s and Manasseh’s descendants would be linked in many ways (including culturally and linguistically), yet still be distinct national entities. Notice the details of Joseph’s blessings:

An expansionist people

“Joseph is a fruitful bough, a fruitful bough by a well; his branches run over the wall” (verse 22). As a healthy bough (or vine) rapidly spreads, Joseph’s descendants would be characterized by their expansion and would successfully colonize many lands.

Many geopolitical enemies

“The archers have bitterly grieved him, shot at him and hated him. But his bow remained in strength, and the arms of his hands were made strong by the hands of the Mighty God of Jacob” (verses 23-24). Joseph’s descendants would face many enemies who would try to destroy them—perhaps partially out of jealousy over the high standard of living the birthright blessings would bring them. But the prophecy tells us to watch for clear acts of providence preserving Joseph from these attacks.

Material fruitfulness

“By the God of your father who will help you, and by the Almighty who will bless you with blessings of heaven above, blessings of the deep that lies beneath, blessings of the breasts and of the womb” (verse 25). Joseph’s descendants would possess vast material blessings, such as land in ideal climates for agricultural production and access to natural resources beneath the earth’s surface—both of which would be key factors allowing these nations to support large populations at a high standard of living.

Separate from the other brothers

“The blessings of your father have excelled the blessings of my ancestors, up to the utmost bound of the everlasting hills. They shall be on the head of Joseph, and on the crown of the head of him who was separate from his brothers” (verse 26).
The blessings given to Joseph’s descendants would exceed those of the other brothers (Genesis 48:22), and Ephraim and Manasseh would literally be separate from their brother nations. In chapter 4 we will revisit these and other specific elements of the birthright blessings to identify the descendants of Joseph in modern times.

CHAPTER 2
THE RISE AND FALL OF ANCIENT ISRAEL

Ancient Israel reached the peak of its power under King Solomon. This artist’s image shows the queen of Sheba visiting Solomon’s court (oil painting by Edward Poynter; Wikimedia Commons).

In chapter 1 we covered key portions of the history of the family of Jacob in Egypt. Early in His relationship with Abraham, God had declared to him, “Your descendants will be strangers in a land that is not theirs, and will serve them, and they will afflict them four hundred years” (Genesis 15:13). That foreign land was Egypt, to which Jacob’s family had fled to survive a famine in their homeland of Canaan. Joseph, through divine providence, had risen to high political office in Egypt’s government and was able to feed and shelter his father and brothers and their families through the years of famine. At that point, Jacob’s descendants comprised nothing larger than a family of about 70 people (Genesis 46:27; Exodus 1:5). But God had promised that Abraham’s descendants would become a nation (Genesis 12:2). Now, given a choice portion of Egypt in which to live (Genesis 47:6), “the children of Israel were fruitful and increased abundantly, multiplied and grew exceedingly mighty; and the land was filled with them” (Exodus 1:7). Their exploding population threatened the Egyptians, who proceeded to enslave the Israelites.
By allowing this to happen, God demonstrated to Israel that they could only rise through His power and faithfulness to His covenant with Abraham—not because of any abilities of their own (Deuteronomy 7:7; 10:22).

Moses leads Israel out of Egypt

In order for the Israelites to become a sovereign nation as prophesied, God first had to free them from Egyptian slavery, which He accomplished through Moses. Through a series of remarkable miracles, God prepared Moses to be Israel’s deliverer. When he was a baby, his mother placed him in a basket and set it in the Nile River, hoping to save his life from an Egyptian edict to kill all male Israelite infants (Exodus 2:1-4). An Egyptian princess drew Moses from the water, and he was adopted into the Egyptian royal family as a prince (verses 5-10). Despite rising to prominence as an Egyptian, Moses came to identify with the suffering of his people, and when one day he saw an Egyptian beating an Israelite, he killed the Egyptian and was forced to flee to the wilderness (verses 11-15). Moses survived in the wilderness for 40 long years, until God one day spoke to him from a burning bush and appointed him to return to Egypt to lead Israel out of slavery. Though it took 10 devastating plagues to convince Pharaoh to release the Israelite slaves, God prevailed. The small family that totaled about 70 when they entered Egypt left as a nation estimated at some 2 million (Exodus 12:37)! Moses led the children of Israel to Mount Sinai, where a momentous event occurred—God and the people of Israel entered into a covenant, separate from the unconditional Abrahamic covenant. Known as the Old Covenant, this agreement between God and the burgeoning nation of Israel included God’s essential moral law—the 10 Commandments (Exodus 20). It promised physical blessings for continued obedience and curses for disobedience (Leviticus 26).
Though the fulfillment of the promises to Abraham was assured, within this covenant God decreed that if the physical nation of Israel refused to obey His law, they would be punished, including losing their land temporarily (verses 18, 33-35).

The Sabbath sign

An important element of God’s covenant with Israel was the seventh-day Sabbath, which God designated as holy time on the seventh day of creation (Genesis 2:1-3). It was now codified as the fourth of the 10 Commandments (Exodus 20:8-11). God ended His instructions with a strong statement. God clearly declared that keeping the Sabbath would be a distinctive, identifying sign for Israel. The Hebrew word translated “sign” is oth and means “a distinguishing mark” or a “banner” (Brown-Driver-Briggs Hebrew Lexicon). Just as a flag or seal identifies modern nations, the people of Israel were to be identified by their observance of the seventh-day Sabbath! In fact, God elevated this to a distinct covenant within the covenant (verse 16). Even if they disobeyed God, they would still understand who they were as long as they kept the Sabbath. If they stopped keeping the Sabbath, they would not only be punished but also lose their identity as Israel. As we will see, the Sabbath day is one reason a portion of Israel (Judah, or the Jews) has retained its identity and another portion (the 10 tribes) has lost it.

Israel takes its place among the nations

After consenting to the covenant on Mount Sinai, the people of Israel quickly forgot it and slipped into a cycle of sin, disbelief and unfaithfulness. They found themselves consigned to wandering in the wilderness for 40 years before they could inherit the Promised Land (Numbers 32:13). Finally, under Joshua’s leadership, God allowed them to enter the land of Canaan. But even after being miraculously helped by God time and again to take possession of the land, Israel continually disobeyed Him.
Over the next 300 years, a series of judges arose to deliver Israel from specific crises, but overall the people remained weak, disorganized and prone to sin (Judges 21:25). They were far from being a unified nation. Eventually, frustrated with this situation, the people asked Samuel to give them a king. They failed to see that they were rejecting God, but nevertheless He gave them their king (1 Samuel 8:7, 21-22). However, He told them that Israel’s monarchy had to be different from those of the surrounding nations. Israel’s king was to be subject to God’s law—just like everyone else in the nation (Deuteronomy 17:18-19). This principle of the rule of law would later be an identifying characteristic of the modern nations of Israel. As previously mentioned, Israel’s first king, Saul, began his reign in humility, but over time he drifted away from and disobeyed God (1 Samuel 9:2; 15:11). God then removed Saul from the throne and gave it to a young man named David, just a shepherd boy when he was anointed to be king (1 Samuel 16:11-13). Though David lacked Saul’s impressive stature, God isn’t swayed by physical appearance. He said, “For the Lord does not see as man sees; for man looks at the outward appearance, but the Lord looks at the heart” (verse 7). God saw David’s attitude, character and potential. David developed into a great king—and one of the most important figures in history. And with God’s blessing he molded the 12 tribes into one powerful nation called Israel (2 Samuel 5:1-3; 1 Chronicles 12:23, 38), establishing Jerusalem as the capital and skillfully guiding the nation into its eventual position of prominence (2 Samuel 5:6-10; Psalm 78:70-72).
With King David’s reign, certain prophecies began to be fulfilled:
• Establishing Israel as a kingdom initiated the fulfillment of God’s Genesis 12:2 promise to make Abraham’s descendants a “great nation.”
• Since David was a descendant of Judah, establishing his dynasty began to fulfill the prophecy that “the scepter shall not depart from Judah” (Genesis 49:10; Psalm 78:67-72).
Remember, early in his reign God made a special covenant with David, promising “your house and your kingdom shall be established forever before you. Your throne shall be established forever” (2 Samuel 7:16). David’s dynasty was to last forever because of his godly character (Psalm 78:72; Acts 13:22). God’s covenant did not guarantee, however, that David’s descendants would rule without problems. God determined that if the kings from David’s line rebelled against Him, they would be punished (2 Samuel 7:14; Psalm 89:30-32). As we will see, the unified kingdom would be taken from David’s descendants—and his monarchy replanted elsewhere. But the covenant specifically promised that David’s throne would continue perpetually (1 Kings 2:4; 8:25; 9:5; Psalm 89:33-37). To understand this in greater detail, read our inset chapter “David’s Throne in Prophecy.”

Solomon and Israel’s “golden age”

“By the time of David’s death, then, a carefully devised political and religious apparatus was in place. The old tribal distinctions still existed, but with David there had come at last a sense of national unity in both secular and spiritual affairs. Israel was now a full-fledged nation among the nations of the world. All the constituent elements associated with nationhood—army, political bureaucracy, and central cult—were well established” (Eugene Merrill, Kingdom of Priests, 1996, p. 284). After David’s death “the kingdom was established in the hand of Solomon” (1 Kings 2:46).
Many refer to King Solomon’s 40-year reign as ancient Israel’s “golden age” when, unified as one nation, it quickly ascended to great heights marked by distinctive characteristics such as:
• Territory. Israel reached its territorial peak, with Solomon having “dominion over all the region on this side of the River from Tiphsah even to Gaza, namely over all the kings on this side of the River; and he had peace on every side all around him” (1 Kings 4:24; see also 2 Chronicles 9:26). This vast territory extended to the Euphrates River in the north and to Ezion Geber (on the Gulf of Aqaba) and the border of Egypt in the south.
• Economic prosperity. “Judah and Israel were as numerous as the sand by the sea in multitude, eating and drinking and rejoicing” (1 Kings 4:20). Israel partnered with the Phoenicians and their maritime trading networks (1 Kings 10:21, 27; 2 Chronicles 9:27), which brought in an abundance of precious metals, and also gleaned revenue from small nations that paid annual tribute to Israel (1 Kings 10:15).
• Peace. David fought many wars to secure Israel’s borders (1 Kings 5:3-4), and Solomon enjoyed the results, with no major wars fought during his reign. In fact, from his position of strength, he negotiated peace treaties with major nations such as Egypt (1 Kings 3:1) and Tyre (1 Kings 5:12).
• Public works projects. Free from heavy military expenses, Solomon raised a huge labor force to build a permanent temple in Jerusalem (1 Kings 5:13) as well as other building projects that reinforced the kingdom’s infrastructure (1 Kings 9:15).
• International maritime trade. Solomon developed an extensive trade partnership with the Phoenician city-state of Tyre through his personal friendship with King Hiram. His ties with Tyre gave him access to the prized timber from Lebanon used in building the temple (1 Kings 5:8-10). Israel’s maritime strength was impressive.
Solomon “had merchant ships at sea with the fleet of Hiram. Once every three years the merchant ships came bringing gold, silver, ivory, apes, and monkeys” (1 Kings 10:22). They were technologically advanced enough to manufacture vessels that could withstand long ocean voyages. Israel’s and Tyre’s trade expeditions, the Bible tells us, reached all the way to Tarshish—located in modern-day Spain (2 Chronicles 9:21)—and Ophir—likely a location on the Indian subcontinent (1 Kings 9:28; 10:11). Solomon could reach Ophir by having a port at Ezion Geber (1 Kings 9:26) on the Gulf of Aqaba, providing access to India via the Red Sea, Gulf of Aden and Arabian Sea. As an interesting side note, the most prominent seafaring group within Israel was the tribe of Dan (Judges 5:17). The Danites were likely among the Israelites who embarked with the Phoenicians on these long trips, possibly establishing colonies and trading outposts in Cyprus, Greece and as far away as Ireland. Danites had a proclivity for naming geographic locations after themselves (Joshua 19:47; Judges 18:12, 29). As we will see in the next chapter, their descendants left a trail of place names that include “Dan” as they migrated throughout Europe to their present-day home in Ireland. Solomon’s 40-year rule truly was the peak of Israel’s existence as a nation. But behind the economic prosperity and peace, major problems were brewing that would soon dramatically impact the nation. First, toward the end of Solomon’s reign, adversaries began to seriously challenge Israel’s domination of the region (1 Kings 11:14-25). Within a few short years, Israel lost control of the territory that extended to the Euphrates River and other surrounding areas, such as Edom. Second, Solomon’s massive bureaucracy and taxation system created civil discontent, with his citizens burdened by the increasing levies imposed to support his building projects and governmental costs (2 Chronicles 10:4).
Third, and most important, Solomon compromised his relationship with God. The principle was clear: Israel’s kings were not above God’s law (Deuteronomy 17:18-19). Regardless, Solomon took to marrying many women from foreign nations, a practice God expressly forbade, and in an incredible exploitation of power, he accumulated 700 wives and 300 concubines (1 Kings 11:3). Eventually, undoubtedly in an attempt to appease many of them, Solomon integrated pagan religious worship into Israel (verses 4-8).

The united kingdom of Israel at its peak under Solomon’s reign.

Israel divides into two nations

Solomon’s sins, especially his infidelity and compromise with paganism, brought dire consequences on Israel. God had already decreed during David’s time that if his sons were unfaithful, the Davidic dynasty would be punished (2 Samuel 7:14; Psalm 89:30-32).

INSET
DID ANCIENT ISRAEL RECEIVE ALL THAT WAS PROMISED TO ABRAHAM’S DESCENDANTS?

Many assume that all the promises and prophecies about Israel found in Genesis (covered in chapter 1) were fulfilled in the ancient kingdom of Israel under David and Solomon. While ancient Israel saw the fulfillment of some of the promises made to Abraham’s descendants, does this mean that by the end of Solomon’s kingdom these prophecies were totally fulfilled? An honest look at all the promises found in Genesis and the description of Israel at its height (1 Kings 4:20-34) shows some major elements missing.
• Abraham was to be the “father of many nations” (Genesis 17:5). Israel was one nation, which later divided into two nations. Today, a portion of the Jews constitutes the modern state of Israel. But these can hardly be described as “many nations” or, literally, a multitude of nations. Ancient Israel could never legitimately be called a multitude of nations.
• God told Abraham that his descendants would be “as the stars of the heaven and as the sand which is on the seashore” (Genesis 22:17). Some point to the use of this phrase to describe Israel under Solomon (1 Kings 4:20) as evidence that its only fulfillment was in ancient times. But Solomon’s kingdom was only a small type of what would be fulfilled in the future. Moses prophesied that God would cause Israel to grow “a thousand times more numerous than you are, and bless you as He has promised” (Deuteronomy 1:11). Hosea (prophesying nearly two centuries after the death of Solomon) spoke of a future time when Israel “shall be as the sand of the sea” (Hosea 1:10). These passages are examples of duality in prophecy.
• Abraham’s descendants were to “possess the gate of their enemies” (Genesis 22:17)—strategic passageways that allow a nation to control the movement of other nations. Neither the Bible nor secular history shows the ancient nation of Israel possessing this kind of strategic geopolitical power over enemy nations. In fact, throughout most of its history, Israel was weak and was an open gate to its enemies (Nahum 3:13).
• Jacob’s descendants were to become “a nation and a company of nations” (Genesis 35:11). Later this promise was given specifically to Joseph’s two sons, Manasseh and Ephraim (Genesis 48:19). Ephraim and Manasseh never fulfilled these prophecies in ancient times.
• The prophecies in Genesis 49:22-26 describe Joseph’s descendants as far surpassing the rest of the tribes of Israel in power and physical blessings. Yet the time of Israel’s greatest power was under the rule of Solomon (a descendant of Judah). During ancient Israel’s pinnacle of power, Ephraim and Manasseh were merely tribes under the authority of the Davidic throne ruling from Jerusalem.
Even after the northern 10 tribes revolted, their history was mostly a record of national decline and moral decay, and they are never described as having the grand blessings promised to the descendants of Joseph. If these prophecies and promises were not fulfilled in the time of ancient Israel, there are really only two possible explanations:

Option 1: God was simply using grand hyperbolic language to describe the birthright blessings and never intended for these promises to be fulfilled literally. The problem with this idea is that it contradicts many scriptures that show God always means what He says and fulfills His word (Numbers 23:19-20; Isaiah 46:11; Titus 1:2; 2 Timothy 3:16).

Option 2: The ancient kingdom of Israel did not fulfill these promises in totality. Instead, God intended to fulfill these promises in modern times—years after the rise and fall of ancient Israel. Genesis indicates that these promises would not be fulfilled in ancient times, but “in the last days” (Genesis 49:1)—a biblical term for the era preceding the return of Jesus Christ (2 Timothy 3:1; 2 Peter 3:3). The purpose of this booklet is to show that God did fulfill these prophecies in modern times, primarily through the United States and the nations of the British Commonwealth.

“So the Lord became angry with Solomon, because his heart had turned from the Lord God of Israel. …” (1 Kings 11:9, 11-13). When Solomon died around 928 B.C., his son Rehoboam became king. Early in his reign, a delegation from the northern 10 tribes led by a man named Jeroboam (who had been prophesied in 1 Kings 11:31-35 to lead a secession) asked Rehoboam to lighten the taxation and service burdens Solomon had levied on them (1 Kings 12:4). Rehoboam, foolishly ignoring the elders’ advice, harshly responded that he would increase these burdens (verse 14). His despotic response led the northern 10 tribes to rebel. “What share have we in David?
We have no inheritance in the son of Jesse,” they declared. “To your tents, O Israel! Now, see to your own house, O David!” (verse 16). They quickly installed Jeroboam as their new king, declared themselves independent of the Davidic dynasty (verse 20), and thus formed a new nation—the kingdom of Israel. Only the tribes of Judah and Benjamin remained loyal to Rehoboam and the Davidic dynasty, which now became known as the kingdom of Judah. This was a crucial point in the history of Abraham’s descendants. At this point, the previously unified Israel divided into two entirely separate nations. From this point forward, the term Israel would primarily refer to the northern 10 tribes. The term Judah would refer to the two southern tribes—Judah and Benjamin (plus many Levites)—who stayed loyal to the Davidic dynasty. Many people to this day are confused on this important detail, assuming the terms Jew and Israelite are synonymous! One of the modern contributing factors to this confusion is that in 1948 the Jews called their new nation Israel, so in the minds of many they are one and the same. But in the Bible, Jew refers to an inhabitant of Judah (thus, a Jew could be a descendant of Judah, Benjamin or Levi who lived in Judah). In fact, the first time the term Jews is mentioned in the King James Version of the Bible, it’s describing the Jews (the nation of Judah) being at war with Israel (2 Kings 16:5-6). This distinction is vitally important to remember: All Jews are Israelites (since they are all descendants of Jacob), but not all Israelites are Jews (since Judah is only one of the 12 tribes).

After the northern 10 tribes seceded, Israel and Judah became two separate kingdoms.

Northern kingdom descends into apostasy

Israel—the northern 10 tribes—quickly fell into a pattern of dynastic instability.
“Between the reigns of Jeroboam ben Nebat and Hoshea ben Elah, the throne of the Northern Kingdom of Israel was seized by usurpers nine times within two centuries” (Tomoo Ishida, The Royal Dynasties in Ancient Israel, 1977, p. 171). Under Jeroboam, who quickly proved to be more concerned with consolidating his power than pleasing God, it didn’t take long for Israel to abandon God and His laws. Jeroboam feared that Israelites traveling to the temple in Jerusalem to worship God on the appointed feast days would become nostalgic and want to reunify with the Davidic dynasty (1 Kings 12:27). To prevent that, he instituted a counterfeit religious system in Israel, setting up two calves of gold and saying to the people, “It is too much for you to go up to Jerusalem. Here are your gods, O Israel, which brought you up from the land of Egypt” (verse 28). His new religious system included decentralized worship, a priesthood not derived from Levi (the tribe chosen by God) and a substitute Feast of Tabernacles held one month later than the time God commanded (verses 31-33). Jeroboam’s version of religion imitated, but perverted, the system God had instituted through Moses. Because of his apostasy, many Levites and a small number of Israelites who were also determined to remain faithful to God migrated to Judah (2 Chronicles 11:13-17). Many of these individuals assimilated into the Jewish nation, while some maintained their tribal identity (Luke 2:36). The northern kingdom of Israel would last only a little over 200 years after its secession from Judah and would not recover from this apostasy. Eighteen more kings followed Jeroboam, all of them described as essentially wicked idolaters. Two were noted specifically for directly worshipping Baal, a prominent false god of the Canaanites (1 Kings 16:31; 22:53).
God knew this tendency toward paganism would continue to plague Israel and decreed during Jeroboam’s reign that He would “strike Israel, as a reed is shaken in the water. He will uproot Israel from this good land which He gave to their fathers, and will scatter them beyond the River, because they have made their wooden images” (1 Kings 14:15).

Thus we have another key to identifying Israel’s descendants later in history—because they abandoned the God of Abraham, Isaac and Jacob (and His laws), they essentially became a pagan people during the centuries when they became lost to history.

God sent many prophets to the northern kingdom to warn that their national sins would bring the curses promised for these sins (Leviticus 26:14-45). The most prominent of those prophets were Elijah, Elisha, Amos and Hosea. Throughout two centuries and multiple kings, they implored Israel to repent of its idolatry and Sabbath-breaking (Hosea 2:11; 8:2-6; Amos 8:5). Through the prophet Hosea, God summed up Israel’s core spiritual problem: “They have ceased obeying the Lord” (Hosea 4:10).

Assyria’s rise and Israel’s fall

During the period shortly after Israel and Judah split, a new world power was rising to the north—Assyria (centered in modern-day northern Iraq). Assyrian King Ashurnasirpal II (883-859 B.C.) invaded and controlled the Aramaean states north of Israel, bringing them into the growing Assyrian Empire, strikingly close to Israel and Judah during the kingship of northern Israel’s King Omri (885-874 B.C.).

Omri reigned for 12 years over the northern kingdom and had many accomplishments, including establishing the strategic Samaria as Israel’s capital, conquering Moab and making peace with Judah. Archaeologists and historians have found more written in extrabiblical historical sources about Omri than any other Israelite king.
For instance, the Mesha Stele (a stone with detailed historical inscriptions by King Mesha of Moab) prominently mentions King Omri as oppressing and ruling over the Moabite kingdom.

[Figures: The Mesha Stele provides details about Omri’s reign not mentioned in the Bible; the Black Obelisk depicts Israel’s King Jehu bowing before King Shalmaneser III; a close-up of the Black Obelisk shows King Jehu bowing before King Shalmaneser III.]

His reign was so notable that surrounding nations began referring to the northern kingdom of Israel by his name. His “international significance is to be seen in the fact that Assyria, throughout its history even a century after Omri’s death, referred to Israel as Bit Humri or Bit Humria (House of Omri) and referred to her kings as Mar Huumrii (Son of Omri)” (Jack P. Lewis, Historical Backgrounds of Bible History, 1971, p. 94).

This fact is significant. First, it shows that the northern 10 tribes were not always known simply by the name Israel. Secular history often identifies Israelites by names given to them by other nations. Second, the name Bit Humri (house of Omri) has linguistic ties to other names that help identify where Israel migrated—specifically Cimmerians and Gimmiri. We will cover this in the next chapter.

Omri’s son Ahab followed him and became one of the most infamous kings in the northern kingdom’s history because of his evil ways (1 Kings 16:30). Married to the equally infamous and wicked Queen Jezebel, he reigned throughout the time of God’s prophet Elijah.

Assyria’s shadow loomed over the region until finally, in 841 B.C., Shalmaneser III invaded Israel during the reign of King Jehu and made it a vassal state. Though not recorded in the Bible, this is clearly recorded on the Black Obelisk (housed in the British Museum).
It shows an image of King Jehu bowing before Shalmaneser III and lists the tribute Jehu had brought to Assyria. This Assyrian artifact identifies Jehu as “son of Omri.” Since Jehu was not of Omri’s line, this shows that Israel’s enemies continued to identify Israel by Omri’s name. During the reign of King Jeroboam II (782-753 B.C.), the northern kingdom gained some reprieve from the encroachment of the Assyrian Empire. Though Jeroboam II was an evil king, he reigned for 41 years and temporarily increased Israel’s territory (2 Kings 14:23-28) due to a short period of Assyrian weakness. The books of Amos and Hosea were written during the reign of Jeroboam II, and in their prophecies they warned that despite the relative peace of the day, Israel’s serious national sins were about to bring on God’s punishment (Amos 3:2). These prophecies revealed Israel’s impending punishment—national defeat and captivity (Hosea 13:16; Amos 4:2; 5:27). These two prophetic books also give important clues we will examine in the next chapter to identify Israel after their experience in captivity. After Jeroboam II died in 753 B.C., Israel went into a tailspin. The northern kingdom would exist for 24 more years “of continued degeneration of the social structure and unstable leadership” (Henry Jackson Flanders, et al., People of the Covenant, 1988, p. 289). These last two decades would see six different kings rule Israel, three of whom would be assassinated. As Israel’s civil government was coming apart, Assyria was regaining its strength and imperial ambitions in the region, with the goal of controlling Syria (north of Israel), the Holy Land and Egypt. During King Menahem’s reign, the Assyrian King Tiglath-Pileser III (Pul) threatened Israel once again and was appeased only by Menahem paying tribute to stave off invasion (2 Kings 15:19-20). 
The first wave of Israel’s captivity

The Assyrian threat intensified during King Pekah’s reign (740-732 B.C.), as Tiglath-Pileser III invaded the northern portion of Israel and took the Israelite inhabitants captive to Assyria (2 Kings 15:29; 1 Chronicles 5:26). This is called the first wave of the Assyrian captivity of Israel.

Cuneiform records verify these events from the Assyrian perspective: “BetOmri [Israel] all of whose cities I had added to my territories on my former campaigns, and had left out only the city of Samaria. … The whole of Naphtali I took for Assyria. … The land of Bet-Omri, all its people their possessions I took away to Assyria” (quoted by Werner Keller, The Bible as History, 1980, p. 244).

Thousands of Israelites from the tribes of Reuben and Gad and half of Manasseh fell captive at this time (1 Chronicles 5:26). Note that the northern kingdom was called “Bet-Omri” (house of Omri)—not Israel!

Hoshea assassinated King Pekah in 732 B.C. (2 Kings 15:30) and, according to Assyrian records, became a vassal king under Assyria. Hoshea would become the last king over the northern kingdom, which by this time was extremely weakened.

Hoshea made a strategic error that would ultimately lead to the fall of the northern kingdom and the fulfillment of Amos’ and Hosea’s prophecies. Roughly six years into his reign, he tried to free Israel from the Assyrian yoke by forming an alliance with Egypt against Assyria (2 Kings 17:4). Hoshea ceased paying the required tribute to Assyria. Assyria would not tolerate this rebellion.

The final fall of the northern kingdom

Now under Shalmaneser V, Assyria again invaded Israel beginning around 724 B.C. (2 Kings 17:5). During the three-year siege of the city of Samaria, King Hoshea was captured and imprisoned (verse 4). In the late summer or early fall of 721 B.C. Assyria breached Samaria’s walls and the last stronghold of the northern kingdom of Israel fell.
“So Israel was carried away from their own land to Assyria, as it is to this day” (verse 23). Thus the northern kingdom of Israel tragically ended, with Samaria’s population now joining the thousands of Israelites already taken captive in the first wave of Assyrian captivity. The Annals of Sargon record: “I besieged and occupied the town of Samaria, and took 27,280 of its inhabitants captive” (Records of the Past, Vol. IX, 1873, p. 5).

[Map: After falling to Assyria, the inhabitants of Israel were taken into captivity and placed in different regions of the Assyrian Empire.]

This number does not represent the totality of Israelite captives taken out of the land. It strictly applied to the inhabitants of the city of Samaria. Also, often ancient records only counted adult males, excluding females and children. The first wave of Assyrian captivity had included hundreds of thousands of Israelites from northeastern Israel.

Israel’s captivity came as a result of its national sins and rejection of God (2 Kings 17:7-23; Leviticus 26:17, 25, 33). The biblical record is very clear that Assyria took captive all of the northern Israelite tribes: “The Lord was very angry with Israel, and removed them from His sight; there was none left but the tribe of Judah alone” (2 Kings 17:18).

Israel in captivity

Assyria had an interesting practice of shuffling its captives. They would remove the entire population of a nation and resettle them in another distant location, while replacing them with conquered peoples from other lands. This is exactly what happened in Israel. The northern 10 tribes were moved to Assyrian territory northeast of their former homeland and the Assyrians then transplanted conquered peoples from Babylon into the area of Samaria (2 Kings 17:24).
The new inhabitants of the land introduced their own forms of pagan worship into the area, which later became syncretized with the religion of Israel (verses 29-33). These people became known as Samaritans, who were despised by the Jews living in Judah (John 4:9).

What became of the people of the northern 10 tribes, now captives of the mighty Assyrian Empire, living in foreign cities surrounded by strangers? The Assyrian Empire lasted only another 109 years before being destroyed by the rising Babylonian Empire in 612 B.C. In the chaos and confusion of Assyria’s downfall, the Israelite captives disappeared from history—becoming known as the “lost” 10 tribes of Israel. They never returned to their homeland in Israel. They seemingly vanished. But how do scores of thousands of people simply disappear in a roughly 100-year period?

Secular historians assume that the Israelites were assimilated into the Assyrian Empire and ceased to exist. Why? Simply because historical records do not show any group in the area calling itself “Israel.” But this assumption is incorrect. The northern 10 tribes did not become absorbed into the peoples they were surrounded by. God specifically stated this would not happen! We will take a closer look at this remarkable prophecy in the next chapter.

Instead of disappearing, Hosea prophesied, “My God will cast them away, because they did not obey Him; and they shall be wanderers among the nations” (Hosea 9:17). It would be easy for these itinerants to appear as pagans because they embraced non-Israelite religions and didn’t observe the weekly or annual Sabbaths (1 Kings 12:32-33).

After the northern tribes went into captivity, the Bible turns its focus to Judah, which Babylon would take captive more than a hundred years later. The Jews, however, would retain their identity, primarily because they maintained the seventh-day Sabbath. God had decreed that the Sabbath would be an identifying sign of His people (Exodus 31:13).
Because the Jews maintained a basic knowledge of the Sabbath (and still do to this day), they continue to understand their identity as a tribe of Israel.

Though the Bible does not focus on the northern tribes after their captivity, it does not forget them. If Israel was absorbed into the nations where they were carried captive and ceased to exist as a people, why did God continue to inspire prophets with messages specifically for Israel centuries later? But in order to trace the tribes and their eventual resettlement in a new land to inherit the unfulfilled birthright blessings, from this point we must turn to secular history. Using clues found in both the Bible and secular history, we can trace the general migrations and modern identity of the lost 10 tribes of Israel today. The next chapter shows why Israel wasn’t merely assimilated into Assyria, and tells the incredible story of where they went, who they became and how they can be identified!

INSET: DAVID’S THRONE IN PROPHECY

“And she shall be a mother of nations; kings of peoples shall be from her” (Genesis 17:16).

Chapter 1 describes how the promises to the sons of Jacob were divided into two major parts: the birthright blessings (material blessings to the descendants of Joseph) and the scepter promise (kingly blessings to the descendants of Judah). Genesis 49:10 contains a key promise: “The scepter [symbol of kingship] shall not depart from Judah.” The promise of this kingly line would not be fulfilled until more than 600 years later.

After God led Israel out of Egyptian captivity through Moses, He governed Israel as a theocracy. In other words, God ruled the people directly through His law and revelation to His servants (Moses, Joshua, the judges and Samuel).
But this period came to an end when the people of Israel demanded a king like the nations around them (1 Samuel 8:5), fulfilling God’s prediction centuries earlier (Deuteronomy 17:14). A man named Saul became Israel’s first king, but he did not fulfill the scepter promise. Rather, through his rebellion against God and his misrule, Saul demonstrated that neither he nor his descendants were qualified to rule Israel (1 Samuel 13:13-14; 15:26-28). He typified, as God foretold, the destructive results of a king not being subject to God and His law (Deuteronomy 17:15-20).

God chooses David

God then led Samuel the prophet to the small village of Bethlehem and revealed the man He had chosen as the new king of Israel—David (1 Samuel 16:1-13). A descendant of Judah through Perez (Genesis 38:29; Matthew 1:2-6), David fulfilled the scepter promise that kings would descend from Abraham through the line of Judah (Genesis 17:16; 49:10).

David differed greatly from Saul. Even though he sinned at times, God declared that David was “a man after My own heart, who will do all My will” (Acts 13:22). It took some time, but David eventually became the ruler over all 12 tribes of Israel (2 Samuel 5:1-5).

The Davidic covenant established

After David established his kingship over all Israel and demonstrated his faithfulness, God entered into a covenant with David, an agreement that stands separately from the other covenants we have read about so far. Known as the “Davidic covenant,” it is recorded in 2 Samuel 7:12-16. Essentially, God said that:

• David’s dynasty would continue through his son Solomon (1 Kings 1:30, 37-39; 1 Chronicles 28:5).
• The dynasty, God emphasized three times, would last forever.
• If kings who descended from David rebelled, they would be punished, but the dynasty would not permanently end.
God reaffirmed this covenant in 1 Kings 2:4 and 1 Chronicles 22:10, and in Psalm 89:4 we read, “Your [David’s] seed I will establish forever, and build up your throne to all generations.” In other words, monarchs descended from David would continue to exist in every generation—and on into eternity! The New Testament reveals this throne will be in existence when Christ returns to earth to rule and occupy it (Luke 1:31-33).

As chapter 2 shows, Solomon led Israel to its pinnacle of national greatness as a united kingdom, but because of his disobedience, God restricted the rule of the Davidic dynasty to only a few of the tribes of Israel (1 Kings 11:9-13). When Nebuchadnezzar, the king of Babylon, invaded and destroyed Judah, its last king, Zedekiah, was captured by Babylonian forces and imprisoned until the day he died (2 Kings 25:4-7; Jeremiah 52:11).

Nebuchadnezzar wanted not only to kill King Zedekiah, but intended to destroy the Jewish monarchy completely. To that end, he gathered all of Zedekiah’s sons (the princes of Judah) and executed them in front of their father (2 Kings 25:7; Jeremiah 52:10). With all the male heirs to the throne dead, it seemed like the Davidic dynasty had ended when Zedekiah died years later.

But what about God’s promises to preserve David’s dynasty, that “the scepter shall not depart from Judah” (Genesis 49:10) and that David’s throne would “be established forever” (2 Samuel 7:16)? Were these promises and covenants broken?

Jeremiah’s mysterious commission

During Judah’s downfall a prophet named Jeremiah rose to prominence. God inspired him to record many prophecies and the history of Judah’s demise in the book bearing his name. As the nation of Judah began unraveling and before the impending murder of all the male heirs to the throne, God repeated the Davidic covenant: “David shall never lack a man to sit on the throne of the house of Israel” (Jeremiah 33:17, 20-21).
But how could God fulfill His promises to both Judah and David if the heirs were all to be destroyed by the Babylonians—something that did indeed occur? The book of Jeremiah provides a fascinating clue!

After Jerusalem fell, Nebuchadnezzar appointed a man named Gedaliah to govern the Jews who were not taken captive to Babylon. Gedaliah, however, was assassinated by a man named Ishmael, who took captive the people who had been subject to Gedaliah. Within this record is something that can easily be overlooked. Notice: “Then Ishmael carried away captive all the rest of the people who were in Mizpah, the king’s daughters and all the people who remained” (Jeremiah 41:10).

[Figures: Top, King Nebuchadnezzar killed Zedekiah’s sons in front of him, to ensure the end of the Davidic monarchy. Bottom, the Coronation Chair on which the British monarchs have been crowned since King Edward I.]

Though all the male heirs to David’s throne were eventually killed, Zedekiah had daughters who survived the Babylonian invasion! Not only that, but we will see they were close to Jeremiah himself.

Eventually, this group was rescued from Ishmael’s control by a man named Johanan (verse 16) who, fearing that Babylon would retaliate against the remnant in Judah, decided to flee with this group to Egypt despite Jeremiah’s warning that this was not God’s will (Jeremiah 42:11-16). Notice Jeremiah’s description of the group that fled to Egypt: “But Johanan the son of Kareah and all the captains of the forces took all the remnant of Judah who had returned to dwell in the land of Judah, from all nations where they had been driven—men, women, children, the king’s daughters, and … Jeremiah the prophet and Baruch the son of Neriah” (Jeremiah 43:5-6).

The book of Jeremiah mysteriously ends with him, his assistant Baruch, and King Zedekiah’s daughters in Egypt. But what happened to them?
To answer this question, we must examine the beginning of Jeremiah’s prophetic work. In the first chapter of Jeremiah, God gave him multiple commissions—he was not called simply to preach God’s words to Judah (Jeremiah 1:9); he was also given a commission that transcended that nation: “See, I have this day set you over the nations and over the kingdoms, to root out and to pull down, to destroy and to throw down, to build and to plant” (verse 10).

This is a critically important statement! Jeremiah prophesied not only of the destruction of Judah, but other nations, and was told furthermore that he was “to build and to plant” nations (plural) and kingdoms (plural)! The prophet Isaiah had actually prophesied this years earlier: “And the remnant who have escaped of the house of Judah shall again take root downward, and bear fruit upward” (Isaiah 37:31).

And in Ezekiel 17 God used a “riddle” or “parable”—a simple story that represented a deeper message—directed to “the house of Israel” to describe what would happen (verse 2). This style of writing, also called allegory, requires interpretation, and scholars have offered various explanations as to what God was indicating. What seems clear to most is that the riddle describes the rebuilding of a nation. What doesn’t seem as clear is when and how it was fulfilled. We believe this account explains how God used Jeremiah to fulfill the commission to build nations and fulfill His promise that David’s throne would continue. We further believe that the allegory of a twig cropped off and planted elsewhere indicates a new Israelite nation being built.

Here is the riddle with the apparent meanings in brackets: “Thus says the Lord God: ‘I will take also one of the highest branches [King Zedekiah] of the high cedar and set it out.
I will crop off from the topmost of its young twigs a tender one [one of Zedekiah’s daughters; the Hebrew word used here has a feminine connotation; see Deuteronomy 28:56], and will plant it on a high and prominent mountain [nation]. On the mountain height of Israel [an Israelite nation, but not the tribe of Judah] I will plant it; and it will bring forth boughs, and bear fruit, and be a majestic cedar [representing growth and prosperity]’” (verses 22-23).

Comparing this imagery to the description in Genesis 49:22 leads us to conclude that Zedekiah’s daughter would be planted in the land that would be given to one of the sons of Joseph—the primary tribe bearing the name “Israel” (Genesis 48:16).

Putting everything together, we see how God maintained His perpetual, unconditional covenant with David. God used Jeremiah to transplant one of King Zedekiah’s daughters to a new land (one ruled by Israelites) where she would be “planted” (established) and where her descendants would grow, rule and prosper—continuing the Davidic throne. God’s law for Israel decreed that if a man didn’t have a son, his inheritance would pass on to the daughters (Numbers 27:8). Through God’s miraculous intervention, Zedekiah’s daughters were spared—and through one of them God would maintain David’s ruling lineage!

Extrabiblical clues

The book of Jeremiah closes with Jeremiah, Zedekiah’s daughters and Baruch in Egypt. It’s likely that Jeremiah completed his book before departing from Egypt to fulfill the last part of his commission “to build and to plant” (Jeremiah 1:10), and we are not provided the details of how Jeremiah “planted” a daughter of Zedekiah into an Israelite nation.

Remember that Egypt was a coastal trading nation on the Mediterranean Sea. The Phoenicians had strong trade connections with Egypt and operated ships throughout the Mediterranean, with far western ports, such as Carthage and the Gates of Hercules (Gibraltar).
Jeremiah and his party could have easily boarded a Phoenician trading ship and headed westward, making their way to Celtic Spain and even eventually Celtic Ireland. Though we have no details of their journey, some speculate that Jeremiah could be identified in Irish legends as Ollam Fodhla—a seer and legislator. It is interesting that Ollam and Fodhla are similar to Hebrew words olam (meaning “antiquity” or “old”) and pala (meaning “extraordinary” or “wonderful”). Legends connect this man with another named Simon Brach (linguistically similar to the Hebrew name Baruch) and a young princess named Tea or Tara (female name that means “palm tree” in Hebrew). The meaning of this name is interesting when we compare it to the Ezekiel 17:22 prophecy of the tender twig.

Other interesting clues also link the modern-day British monarchy to the Israelite throne of David:

• The Stone of Destiny (also called the “Lia Fáil”) has been used in the coronation ceremonies of Irish, Scottish and English kings for centuries. Encyclopaedia Britannica summarizes its origin according to one Celtic legend: “The stone was once the pillow upon which the patriarch Jacob rested at Bethel when he beheld the visions of angels [see Genesis 28:10-22]. From the Holy Land it purportedly traveled to Egypt, Sicily, and Spain and reached Ireland about 700 [B.C.] to be set upon the hills of Tara, where the ancient kings of Ireland were crowned. Thence it was taken by the Celtic Scots who invaded and occupied Scotland” (“Stone of Scone”). There are other theories regarding the origin of the stone, but, if this legend is true, it seems that Jeremiah and Baruch could have transported the pillar stone from Judah.

[Figure: The coat of arms of the British monarchy bears many symbols that connect it with the throne of David.]

• The harp is also a national symbol of Ireland, which is logical since it appears David’s throne was “planted” there after it was “uprooted” from Judah.
• The coronation ceremony of English kings is based on the coronation ceremonies for King David and King Solomon. The coronation ritual consists of anointing with oil (unction), prayers based on the virtues of Old Testament kings and the recitation of Unxerunt Solomonem—the words spoken at Solomon’s coronation recorded in 1 Kings 1:39-40. English monarchs have been installed with this basic coronation formula for over a thousand years (Roy Strong, Coronation, 2005, p. 5).

• The British monarchy was one of the few major monarchies to survive the upheavals of the 18th, 19th and 20th centuries, which saw the fall of many of the royal families of Europe. It continues to endure today through the reign of Queen Elizabeth II. Though some in the United Kingdom oppose the monarchy, the majority of citizens continue to strongly support it. A 2013 poll showed that 66 percent of Britons support the monarchy (“Confidence in British Monarchy at All-Time High, Poll Shows,” The Telegraph, July 27, 2013).

The Bible declares that the throne of David will last for “all generations” (Psalm 89:4). When Jesus Christ returns, He will be given “the throne of His father David” (Luke 1:32). That throne will exist somewhere on earth when He returns. The weight of evidence supports the throne’s existence today in Great Britain—where the British monarchy (descendants of King David through a daughter of Zedekiah) currently reigns over the Israelite nation.

CHAPTER 3

THE MIGRATION OF THE “LOST” ISRAELITES

“For surely I will command, and will sift the house of Israel among all nations, as grain is sifted in a sieve; yet not the smallest grain shall fall to the ground” (Amos 9:9).
The Bible does not mention the immediate fate of the thousands of captive Israelites after their Assyrian conquerors fell to the Babylonians. They never returned to their former homeland, nor do they appear in secular historical records under the name Israel. To many, it seems that they simply vanished.

But a lack of documents from this time is not surprising. Ancient empires rarely kept detailed records of their own collapse and its impact on captive minorities, especially in a time when records were chiseled into stone or recorded on clay tablets. In fact, very few Assyrian records exist from this time. But that doesn’t mean it’s impossible to trace the “lost” Israelites. Though the Bible doesn’t track them, it does provide clues that help identify them in history.

[Figure: On the Behistun Rock in Iran, the last captive (with the pointed cap) is identified as “Skunkha, the Scythian.”]

Biblical clues to tracking the tribes

Consider these major biblical clues that help us track Israel’s post-captivity identity:

• A pagan people. When looking for the “lost” tribes of Israel, we are not looking for people worshipping the biblical God of Abraham. After seceding from Judah, the northern kingdom quickly abandoned the true God and His holy days (1 Kings 12:26-33). This descent into idolatrous paganism continued throughout their history as a sovereign nation—causing God to declare that they had “rejected knowledge,” “forgotten the law of your God” and “ceased obeying the Lord” (Hosea 4:6, 10). Even before their captivity, God testified they had “begotten pagan children” (Hosea 5:7)—meaning they were raising the next generation in paganism. And 2 Kings 17:15 shows that Israel “followed idols, became idolaters, and went after the nations who were all around them.”

• A people ignorant of their identity.
As a result of abandoning the seventh-day Sabbath, Israel lost the identifying sign God had given that He said would distinguish them as His people (Exodus 31:13; Ezekiel 20:12-13).

• A people labeled by their captors with names derived from “Bit Humri.” Israel was primarily known by other nations as “Bit Humri” (house of Omri). When on the historical trail searching for the Israelites, derivatives of “Bit Humri” and other such names with connections to their identity as Israel become very important.

• A migrating people. Instead of immediately organizing and establishing nations, the “lost” Israelites would become “wanderers among the nations” (Hosea 9:17). God also said He would “sow them among the peoples” (Zechariah 10:9), and that He would “sift the house of Israel among all nations, as grain is sifted in a sieve; yet not the smallest grain shall fall to the ground” (Amos 9:9). This indicates they would migrate through different ethnic groups and locations before emerging to fulfill the prophecies made about them. In addition, in Isaiah 49:12; Jeremiah 3:11-12; and 31:7-10 we find statements telling us that the Israelites would wander in a generally northwestern direction from the Middle East, ultimately settling in faraway lands surrounded by water.

• Prophetic descriptions of modern Israel. By identifying the modern nations that fit the Genesis 49 prophecies, we can trace their history backwards to see where these people came from.

The emergence of the Cimmerians and Scythians

The Bible tells us the Israelite captives were taken to at least two different parts of the Assyrian Empire: the north-central region below the Black Sea (Halah, Habor and the Gozan River) and the far eastern area southwest of the Caspian Sea, the cities of the Medes (2 Kings 17:6; 18:11; 1 Chronicles 5:26).
Surveying this historical location within roughly 100 years after their captivity, we find pagan tribes emerging to the west and north of Assyria, near the Black Sea and the Caspian Sea. Historians broadly label them as Scythians and Cimmerians.

Scythians first appear in historical records as nomadic tribes and clans in the eighth century B.C.—the same century in which the northern 10 tribes of Israel became captive and disappeared—and they allied with other groups in the region to fight and weaken Assyria. Historians offer various theories about how the Scythian tribes originated, but the biblical clues above help us understand that many of these tribes were actually the “lost” Israelites emerging from captivity.

The eighth century also saw the rise of another mysterious tribal group—the Cimmerians—who first appear in Assyrian records in 714 B.C. in the area now known as Turkey (adjacent to Assyria to the north and west). They appear in Assyrian records as joining “a coalition against the Assyrians” (K. Jettmar, Art of the Steppes, 1967, p. 24).

How interesting! In looking for the Israelite tribes in the historical record, we find these two groups who are antagonistic toward the Assyrians (as they naturally would be as a result of their brutal deportation and captivity), appearing at exactly the right time (eighth century B.C.), and in a logical location (adjacent to the Assyrian Empire).

In their cuneiform tablets, the Assyrians referred to the Cimmerians as the Gamira (or Gamiri), which is linguistically similar to their word for Israel—Khumri (house of Omri). Since ancient languages were primarily preserved orally, how the sounds were recorded in stone and cuneiform inscriptions often varied. It appears the Assyrian word Khumri evolved into several variants that included Humri, Humriya, Gimirraja and Ghomri.
Danish historian Anne Kristensen’s research on the Cimmerians led her to this conclusion: by Jørgen Læssøe, the Royal Danish Academy of Sciences and Letters, No. 57, 1988, pp. 126-127).

History also shows the Cimmerians were closely related to the Scythians. The Behistun Rock is a Persian historical inscription of King Darius I’s conquests, written in the Persian, Babylonian and Elamite languages. The inscription transliterates the Persian word Scythia (Saka) into the Babylonian word Gimiri (Cimmerians), showing that the Persians viewed the Scythians and Cimmerians as related people. The Cimmerians and Scythians were likely “two tribal confederations that formed within one and the same people” (Jettmar, p. 38).

LifeHopeandTruth.com 57

The Encyclopaedia Iranica backs up the assertion that the Cimmerians and Scythians were a homogeneous people (“Cimmerians”). Though the Cimmerians and the Scythians were related, history records the two peoples at war and the Scythians driving the Cimmerians around the Black Sea, forcing them into a generally westward migration.

As the Cimmerians disappear from recorded history, groups called Keltoi (by the Greeks) and Celtae (by the Romans) begin to appear migrating westward through Europe. “The earliest reference to the existence of a specifically Celtic people in documented history originates from a sixth-century B.C. sea journey” (Kevin Duffy, Who Were the Celts? 1996, p. 4). The emergence of the Celts closely aligns with the disappearance of the Cimmerians from the Black Sea region, and many historians trace the Celts’ origins to the Cimmerian and Scythian peoples of that area. For example, they identify Scythian/Celtic connections such as advanced horse-riding skills, craftsmanship and artistic work, and even their clothing styles.
They trace the Celts’ northwestern migration through the Hallstatt and La Tène cultures found in Europe, from southern Europe north through modern-day France and Germany and into the British Isles and Ireland. As they wandered throughout Europe, various Celtic tribes settled in areas where their descendants remain today: the Celtic Gauls (France), the Celtic Belgae (Belgium), the Celtic Helvetians (Switzerland) and the Celtic Pritani (Ireland and Britain). These Celtic tribes were actually distinct Israelite tribes, settling in lands that would eventually fulfill the promises made to them in Genesis 49.

A closer look at the Scythians

The name Scythians provides other linguistic links to the Israelites. Scythian was a general term given to wandering tribes who migrated from Central Asia (around modern-day Iraq, Iran and Turkey) through the Caucasus Mountains and settled north of the Black Sea. One of the prominent Scythian groups was the Saka, or Sacae, who appear in the records of Persian King Darius I, in Assyrian tablets and in the writings of the Greek historian Herodotus.

Saka or Sacae is also linguistically linked to the Israelites. Israel was prophesied to bear the name of the patriarch Isaac (Genesis 21:12; Amos 7:16). Since ancient languages, such as Hebrew, contained no written vowels, words were written only with consonants. The name Isaac would include the combination Sk or Sc. Students of the lost 10 tribes have long noticed the linguistic similarity between the hard SK (or SC) sound of Saka or Sacae and the name Isaac. The word Scythians (a derivative of Sacae) also contains the consonants SC. Tribal groups would later emerge from the Scythian tribes with names that would also have the consonants SC, such as Saxons, Scolotoi and Scots.

Images of Scythian males on gold artwork reveal they were “wholly Europoid” in appearance (Jettmar, p. 24).
The modern nations of Scotland and Scandinavia still include these consonants today.

In the Declaration of Arbroath (1320), the Scots trace their origin to the Scythians: “They [the Scots] journeyed from Greater Scythia by way of the Tyrrhenian Sea and the Pillars of Hercules, and dwelt for a long course of time in Spain.”

Much earlier in time, the Scythian tribes allied with the Medes to defeat the Assyrian Empire in 612 B.C. After Assyria fell and the Babylonian Empire dominated the region, the Medes and Scythians did not remain allies for long.

SIMILARITIES BETWEEN THE ISRAELITES AND SCYTHIANS

• Israelites: Disappear from history in the eighth century B.C. Scythians: Appear in history in the late eighth century B.C.
• Israelites: Disappear within the region of the ancient Assyrian Empire, particularly the “cities of the Medes” (2 Kings 17:6) in modern Iran. Scythians: Appear in history adjacent to the Assyrian Empire. Many first appear in history in modern Iran.
• Israelites: Abandon the religion of Israel and embrace pagan worship (2 Kings 17:15). Scythians: Are pagan peoples who worship nature and have elaborate rituals for the dead, including self-mutilation.
• Israelites: Have extensive experience with horses. King Solomon employed 12,000 Israelite horsemen in his army (2 Chronicles 1:14). Scythians: Are known as horsemen of the steppes for their extensive use and domestication of the horse.
• Israelites: Were prophesied to be “wanderers among the nations” (Hosea 9:17). Scythians: Are distinguished as nomadic wanderers who did not build established civilizations or cities.
• Israelites: Are made up of multiple tribes, each having internal clans and families, but sharing a similar culture and origin (1 Chronicles 4-8). Scythians: Are “different groups, but they had the same way of life and similar burial customs” (“Masters of Gold,” National Geographic, June 2003).
Around the beginning of the sixth century, the Medes drove the Scythians out of western Asia (the land of their captivity), causing them to migrate northwest and settle north of the Black Sea in the Eurasian steppes. For roughly the next 400 years the Scythians lived in this region, occupying the land from generally the Carpathian Mountains in the west to the Don River in the east (primarily modern-day Ukraine and southwestern Russia). In the summer of 2015, archaeologists discovered a seal engraved in Hebrew on the shores of the Black Sea in the Russian city Rostov-on-Don— the area occupied by the Scythians before they were pushed westward. When we understand that the Scythians were the wandering, “lost” Israelites, this discovery should not surprise us. To learn more, read “Ancient Hebrew Artifact Found in Russia May Confirm Biblical Prophecy.” The Scythians never formed a nation state throughout this period, but remained nomadic, horse-riding tribes unified by a single culture. Hermann Parzinger, a historian of the Scythians, provides this insight: “From ancient sources we know the names of several tribes, and they seem to be Iranian names. They were different groups, but they had the same way of life and similar burial customs” (quoted by Mike Edwards, “Masters of Gold,” National Geographic, June 2003). Remember, a large portion of the Israelites were held captive in “the land of the Medes,” which is modern-day Iran. The Scythians mainly herded sheep, horses and cattle. “The livestock provided not only their food but also leather and wool for the clothes on their backs” (Frank Trippett, The First Horsemen, 1974, p. 14). They lived in relative peace on the edges of the great Mediterranean empires that would rise and fall throughout this time—Babylon, Persia, Greece and Rome. We know very little about their everyday lives because they left behind no writing or historical records, which leads historians to believe they were an illiterate society. 
Most of what we know about them comes from their burial mounds and artistic creations. The Scythians remained a footnote in history for nearly 1,500 years until Russian archaeologists began excavating artifacts in the 20th century. British historian Tamara Talbot Rice has compiled the findings of the Russian archaeologists and early historians, particularly Herodotus, in her definitive book, The Scythians. She documents their artistry with gold, which was beyond anything expected of nomadic tribes often described as “barbarians” by the Greeks and Romans. However, the ancient Israelites had achieved a high level of cultural sophistication and were renowned for their craftsmanship, including gold and silver, especially in the times of David and Solomon (1 Kings 10; 1 Chronicles 29:1-5).

TRACING THE TRIBE OF DAN

Though not the main subject of this work, the descendants of Jacob’s son Dan warrant special attention. As we already saw, the descendants of Dan had a proclivity for naming places after their forefather Dan (Joshua 19:47; Judges 18:11-12, 29). The descendants of Dan can often be traced since they named landmarks with the consonants Dn. Geography reveals an amazingly large number of rivers, towns and even nations with the Dn signature. These include the Dnieper, Dniester, Danube and Don Rivers, and the nations of Denmark and Sweden. The majority of Dan’s descendants apparently settled in Ireland, which includes the towns of Dunshaughlin, Dunleer, Donegal and Dungloe. Study a map, and you will find dozens of other places in Ireland that are marked with the consonants Dn. One of the most common clan names in Ireland was Dunne or Dunn. Irish traditions include stories of an ancient and mystical tribe called the Tuatha de Dannan, which can literally be translated as “the tribe of Dan” or “the children of Danu.”

The Danube River runs through Budapest, Hungary.
One of the most significant points in Ms. Rice’s work concerns the artistic similarities between the Scythians, the Celts and the English. Noting the “turnout” of horse mounts, she states that both the Scythians and English placed great importance on the elaborate and sophisticated horse equipment. “Can the inhabitants of England have inherited this outlook together with the decorative elements which affected ‘Celtic’ art?” she asks (p. 74). Ms. Rice observes other examples of similarities: • Scythian artwork often contained images of large-beaked birds. Remarkably, the Anglo-Saxon Sutton Hoo treasure found in Suffolk, England (dating to A.D. 655-656) contains almost identical artwork (p. 191). • Stone slabs found in England show a carved image of a stag of “wholly Scythian character.” She wrote, “The man who carved this stone must have felt the wind blowing westward from southern Russia across Scandinavia, wafting a last flicker of inspiration from a long-dead Scythian source” (p. 192). These connections raise an important question: Are they mere coincidence, or do they show a link between the Anglo-Saxons and the Scythians? Further research of Scythian burial sites has shown evidence that they had an aristocratic social structure, with wealthy regional chieftains dominating particular areas (similar to the European feudal system of the Middle Ages). They also employed a ritualistic form of paganism where mourners for the dead “cut off parts of their ears, slashed their arms, and pierced their left hands with arrows” (Mike Edwards, “Searching for the Scythians,” National Geographic, September 1996, p. 74). This is important, because God had warned the Israelites about these pagan rituals long before (Leviticus 19:28; Deuteronomy 14:1), but they nevertheless practiced almost identical customs before they went into captivity (1 Kings 18:28). 
Historians can only offer theories as to why the Scythian domination of the steppes began to weaken in the mid-fourth century B.C. Some believe Sarmatian tribes (followed by other Asiatic tribes) began to cross the Don River, pushing the Scythians out of the Black Sea region. Others theorize climate changes killed the abundant grasslands of the steppes, forcing the Scythians to look elsewhere for fertile grazing land. Over the next 200 years, the Scythians further weakened, suffered defeat and ultimately, around A.D. 200, “vanished from the pages of history as abruptly as they had entered” (Rice, p. 178).

This map shows the general migration routes taken by the descendants of the “lost” 10 tribes after their captivity in Assyria.

But, like their Israelite ancestors, the Scythians did not simply vanish. Throughout the 200-year period of their decline, Scythian tribes were pushed out of the Black Sea area and migrated west—where they entered central Europe and reappeared in history among the Germanic tribes on the outskirts of the Roman Empire.

(Note: The term Scythian eventually took on a broader definition. Sometimes the term referred to any tribe or group of tribes that occupied the land formerly inhabited by the Scythians—which became known as Scythia. The term was also used as a general term like barbarian, describing those outside of Greco-Roman culture who didn’t speak Greek. So not every group labeled as Scythians throughout history was actually descended from Israel.)

The Scythians in the first century

Though weakened, the Scythians were still a recognizable entity at the time of the first century. The apostle Paul wrote: “There is neither Greek nor Jew, circumcised nor uncircumcised, barbarian, Scythian, slave nor free, but Christ is all and in all” (Colossians 3:11).
The context of Paul’s use of Scythian is contrasting gentile peoples with Israelites—making the point that Christ unites all Christians, regardless of nationality. It is possible that Paul was actually contrasting “barbarians” (the Roman name for tribes outside of the rule of Rome) with “Scythians”—understanding that Scythians were descendants of the 10 tribes of Israel.

In fact, the possibility is given more weight by strong New Testament evidence that the locations of the 10 tribes of Israel were known to Jesus Christ and the 12 apostles. In Matthew 10:6, Jesus commissioned His disciples to go “to the lost sheep of the house of Israel.” As we have already explained, the Jews were just one part of the 12 tribes of Israel. There is evidence that the original 12 apostles traveled widely and preached the gospel to places inhabited by descendants of Israel—possibly Ireland, Britain and other parts of northwest and central Europe.

The apostle James made an even more direct statement that implies the apostles knew where the tribes were located. James addressed his letter: “To the twelve tribes which are scattered abroad” (James 1:1). The epistle of James is known as one of the General Epistles because it is not written to just one area, but was to be distributed to a wide general audience. It appears that James knew where the 10 tribes were and intended his epistle to reach them!

The apostle John, in Revelation 7, was given a vision about a future time when 144,000 descendants of the tribes of Israel would be “sealed” (receive God’s Holy Spirit). He even lists the 12 tribes by name (verses 5-8). Not only does this prove that the 12 tribes of Israel will exist in the end times, but that John (living in the first century) understood they still existed and were identifiable.

The Jewish historian Josephus, in Antiquities of the Jews, completed around A.D.
93, provides evidence that the Jewish community of the first century knew the 10 tribes still existed and were identifiable as a people. He wrote: “The ten tribes are beyond Euphrates till now, and are an immense multitude, and not to be estimated by numbers” (The Works of Josephus, Antiquities of the Jews 11:5:2, 1987, p. 294).

God inspired these New Testament scriptures to show us that the 10 tribes continued to exist as a distinct people in the first century. The quote from Josephus is further historical evidence to provide backing to the words of Scripture.

The migrations continue

An important theme to notice when studying the migrations of the “lost” 10 tribes of Israel is that when they seemingly vanish from the historical record, other groups mysteriously appear close by. The Scythians disappear from the historical record around A.D. 200. Where did they go?

Interestingly, around A.D. 300, a period began in central Europe that German historians call the “Völkerwanderung” (“wandering of the peoples”). It was a time of mass migration of people generally known as the Germanic tribes, a general term the Romans gave to peoples from regions outside of their empire. (German, in this context, does not strictly refer to the people who make up the modern-day German nation.) The encroaching Sarmatians (modern-day Slavs), Huns (from the Far East) and climate change had begun pushing these tribes—which included the Goths, Vandals, Jutes, Franks, Angles, Saxons and Lombards—westward, bringing them into conflict with the Roman Empire. Consequently, as the Huns moved “across Europe from east to west, they slowly drove out the Celts from Germany and the central plains, and took possession of the whole district between the Alps, the Rhine, and the Baltic” (Grant Allen, Anglo Saxon Britain, 2014, Kindle edition, pp. 6-7).
Historian John Ridpath traces the origins of the Germanic tribes to the area of the Black Sea: “The course of migration which brought the Germanic race into Europe was out of Armenia, around the Black sea, to the northwest” (Great Races of Mankind, Vol. IV, 1893, p. 623). Of course, as we have already seen, the people migrating northwest out of the Black Sea region were primarily the Scythian tribes. Along the way, many of these tribes fought the Romans along their borders, which led to the weakening of the Roman Empire. As the Western Roman Empire crumbled and fell in the late fifth century (coming to an end in A.D. 476), the Romans retreated from Britannia, the isles northwest of continental Europe, which they had ruled since A.D. 43. These islands were primarily inhabited by Celts who had migrated to the islands before the Romans arrived. As the Romans retreated, the Angles, Saxons and Jutes launched a series of invasions and gradually gained control of the British Isles. By the early seventh century, the main island was nearly completely dominated by seven Anglo-Saxon kingdoms known as the Heptarchy. Our modern word England comes from Old English words that literally meant “the land of the Angles.” Who were the Anglo-Saxons? Historians differ over the origins of the Anglo-Saxons, but we know that at the same time the Scythians were seemingly vanishing from the Black Sea region, the Saxon tribes suddenly appear in northern Europe. They do not become a historical certainty until the late third century, but, “From 286 A.D. onwards we find them [Saxons] perpetually mentioned by the Roman historians as pirates infesting the North Sea” (Charles Oman, A History of England Before the Norman Conquest, 1994, p. 215). British historian Sharon Turner (1768-1847) wrote one of the definitive chronicles of these people, concluding they were descendants of the Scythians. 
“The Saxons were a German or Teutonic, that is, a Gothic or Scythian tribe,” he wrote. “And of the various Scythian nations which have been recorded, the Sakai, or Sacae, are the people from whom the descent of the Saxons may be inferred, with the least violation of probability. Sakai-suna, or the sons of the Sakai, abbreviated into Saksun, which is the same sound as Saxon, seems a reasonable etymology of the word Saxon. The Sakai, who in Latin are called Sacae, were an important branch of the Scythian nation” (History of the Anglo-Saxons From the Earliest Period to the Norman Conquest, Vol. 1, 1840, p. 59).

And, as shown earlier, a linguistic link exists between Sacae and the patriarch Isaac. If the Sc sound of Isaac truly is the origin of the Sacae, and Saxon is a derivative of this word, then the name can reasonably be interpreted as “Isaac’s sons” or the “sons of Isaac.” God promised Abraham that his descendants would carry the name of Isaac—“in Isaac your seed shall be called” (Genesis 21:12). And, like English, the Scythians’ language “undoubtedly belongs to that of the Indo-Germanic family” (George C. Swayne, Herodotus, 1870, p. 87).

Summary

God’s prophecy that the northern 10 tribes of Israel would go into Assyrian captivity as punishment for their sins came to pass. Though many today believe the Israelites were assimilated into the lands of their captors, God also prophesied that they would survive their captivity but lose their identity, becoming “wanderers among the nations” (Hosea 9:17). The Bible also provides clues that they would migrate northwest (Isaiah 49:6, 12; Jeremiah 16:14-15) and eventually resettle in a new land. This is exactly what happened. Because they lost their former tribal names, the migrating tribes of Israel are difficult to trace in history and identify today.
But by a careful study of the Bible’s clues and secular history, we can reconstruct the northwesterly migration of the 10 tribes of Israel through the record of various nomadic tribes. The Cimmerians, Scythians and Saka appeared on the fringes of the Assyrian Empire as it dissolved in the 600s B.C. The Cimmerians became the Celts and spread far and wide throughout Europe, while the Scythians settled north of the Black Sea. Eventually, the Sarmatians and Huns began marauding across the steppes, driving the Scythians out of the area. Shortly after the Scythians disappeared, the Anglo-Saxon tribes appeared and the migration period commenced throughout northwest Europe. Then, as the Romans abandoned the British Isles, the Anglo-Saxons invaded and settled these islands.

Though it took more than a thousand years of wandering, by the fifth century the descendants of Joseph were settled in their new home—the islands that would become known as Great Britain. Later, a portion of their descendants would form the United States of America. The next chapter explains the amazing story of how the promised birthright blessings (covered in chapter 1) were fulfilled through these two peoples.

CHAPTER 4

BRITAIN AND THE UNITED STATES INHERIT THE BIRTHRIGHT BLESSINGS

“Joseph is a fruitful bough, a fruitful bough by a well” (Genesis 49:22).

George Washington’s famous passage of the Delaware River on Dec. 25, 1776 (painting by Thomas Sully).

As we have seen, detailed prophecies were given regarding the descendants of Joseph in Genesis 48 and 49, but those prophecies were not fulfilled during the time of the ancient kingdom of Israel. The tribes springing from Joseph—Ephraim and Manasseh—were merely two tribes within the greater kingdom of Israel.
During those ancient times, Ephraim never became the prophesied “multitude of nations,” nor did Manasseh become the “great” nation (Genesis 48:19). Does that mean, then, that the promises about Joseph’s descendants failed? Absolutely not—God’s promises never fail (Isaiah 46:10-11)!

Remember Jacob’s key statement at the beginning of Genesis 49—these prophecies were to be fulfilled “in the last days” (Genesis 49:1). They were to be fulfilled in modern times—not ancient times. As the 10 tribes of Israel became “wanderers among the nations” (Hosea 9:17)—identified by different names (Scythians, Cimmerians, Celts, Anglo-Saxons, etc.)—God was still going to fulfill the birthright blessings He had promised. By looking at modern history, we can learn how God fulfilled those promises—primarily to the descendants of Ephraim and Manasseh.

Geographical clues

We find in the Bible two primary clues revealing where the Israelites would ultimately settle:

• To the northwest of their former homeland in the Middle East (Isaiah 49:12).
• On islands and areas with coastlands (Isaiah 41:1, 5; 51:5; Jeremiah 31:9-10).

These geographical locations are exactly where the wandering tribes migrated—the areas of northwest Europe and the British Isles.

Anglo-Saxons settle in the British Isles

Among the Anglo-Saxons who entered and dominated the British Isles beginning in the late fourth century were the descendants of Joseph. Here they would settle, grow and begin to receive the birthright blessings of Genesis. King Alfred the Great was instrumental in consolidating the various Anglo-Saxon kingdoms into one English kingdom. Later, the Hundred Years’ War between England and France, from 1337 to 1453, resulted in England developing a national identity separate from Europe and freeing it from the French influence existing since William the Conqueror had invaded and ruled England in 1066.
England thus began to disengage from continental Europe and focus on overseas colonial expansion.

British imperialism and prophecy

British exploration and colonization began in the 15th century, when the Tudor dynasty took control of the English crown. Under the rule of Queen Elizabeth I, Protestant Britain began competing with Catholic Spain for colonial domination of the New World. Their growing competition, combined with religious tensions, led King Philip II to launch the Spanish Armada to invade Britain. But a series of miraculous events led to the Armada’s defeat, and England maintained its independence from continental Europe.

In 1707 the parliaments of England and Scotland agreed to form the Kingdom of Great Britain. Shortly afterward the British began looking for resources outside their relatively small island and continued to build their naval power to protect against security threats from continental Europe. Throughout the 18th century the British and French competed over sections of North America and Asia, with Great Britain ultimately prevailing and becoming the world’s dominant imperial power. France lost the Battle of Plassey in 1757 and the French and Indian War in 1763, giving Britain dominance over both the Indian subcontinent and North America. After the French and Indian War, Great Britain’s empire was already larger than the Roman Empire, and it had still not reached its peak!

France tried once again to become the world’s dominant power under Napoleon. His growing threat actually spurred Britain to develop a more powerful army and industrialize its economy, which cemented its place as the global hegemon in the landmark event of 1815: “Napoleon’s final defeat at the battle of Waterloo left Great Britain the undisputed master of its universe” (Walter Mead, God and Gold: Britain, America, and the Making of the Modern World, 2007, p. 96).
Not only did Napoleon’s wars weaken continental Europe—propelling Great Britain past the rest of Europe in industrial production, colonial power and military power—but his sale of the Louisiana territory to the United States made possible another giant step toward fulfilling the birthright blessings (more on this later in this chapter).

By checking Spain’s and France’s international power and by developing the most powerful navy the earth had ever seen, Great Britain built “the largest Empire in the history of the world” (Pax Britannica: The Climax of an Empire, 1998, p. 3). Ironically, the British Empire did not reach its zenith until after it had lost the American colonies in the American Revolution (1775-1783). As we will see, the break between Great Britain and the North American colonies was necessary for prophecy to be fulfilled.

The greatness of the British Empire

The British Empire that emerged after the American Revolution is known historically as “the Second British Empire” and fulfilled the promise that the descendants of Ephraim would become a great “company of nations” (Genesis 35:11; see also 48:19). Based on “limited government and the rule of law, its empire increasingly relied on trade rather than dominion” (To Rule the Waves, p. xviii). It also emerged as the world’s “international policeman” (ibid., p. xix), using its powerful navy to maintain international order, open sea lanes, defend human rights and even abolish the international slave trade. Historians call this era of peace and security Britain brought to the world Pax Britannica—British peace—recalling the Pax Romana. (Of course, this is not to say that British colonialism was always perfect—it wasn’t!) Let’s look closer at how the British peoples fulfilled the prophecies given to Joseph.

The birthright blessings to Ephraim

“His descendants shall become a multitude of nations” and a “fruitful bough by a well; his branches run over the wall” (Genesis 48:19; 49:22).
This prophecy stands as one of the greatest proofs that the birthright blessings were not fulfilled in ancient Israel. The tribe of Ephraim never fulfilled this prophecy in ancient times. It was a “last days” prophecy (49:1) fulfilled in the British Empire and the Commonwealth that followed it. Ephraim was to be not just one powerful nation, but a strong group of nations. This is exactly what happened.

JOSEPH’S BIRTHRIGHT WITHHELD 2,520 YEARS

Leviticus 26 is an important chapter revealing a series of blessings and curses for Israel. The first 13 verses show the incredible blessings of prosperity and protection Israel would receive as a result of obeying God, but verses 14-45 show the curses that would come for disobedience. One of the curses actually reveals the timing of Israel’s rise after their fall in 721 B.C. Notice verse 18: “If you do not obey Me, then I will punish you seven times more for your sins.” God repeats this declaration three more times (verses 21, 24, 28).

“Seven times” refers to the intensity and duration of punishment. In the Bible a time often refers to a year (Daniel 4:32; Revelation 12:14). A prophetic year consisted of 360 days. Also, in Bible prophecy, punishment is often given using the principle of a day symbolizing a year. For instance, Israel was punished with 40 years of wandering in the wilderness because of their rebellion. God decreed the 40-year punishment based on the 40 days of the spying expedition (Numbers 14:34). Years later, Ezekiel was told to lie on his side for 390 days to represent 390 years of Judah’s sinfulness: “I have laid on you a day for each year” (Ezekiel 4:6).

Now back to the “seven times” punishment of Leviticus 26. Since a prophetic year is 360 days, using the day for a year principle, multiplying 360 times 7 gives us 2,520: 360 x 7 = 2,520. Israel went into Assyrian captivity in 721 B.C. By adding 2,520 years, we come to right at the beginning of the 1800s. Chapter 4 shows that this is the time period when the British Empire was growing and the United States was making territorial purchases that would result in it emerging as a great nation. It seems the fullness of the birthright blessings prophesied in Genesis 48:19 were withheld, because of Israel’s sin, for 2,520 years. By aligning this prophecy with history, we see that the emergence of the descendants of Joseph to national prominence was right on schedule!

The British Royal Navy’s victory in the Battle of Trafalgar ended Napoleon’s plans to invade England and affirmed British naval supremacy. In this battle, 27 British ships defeated 33 French and Spanish ships.

The British Empire was called “the empire on which the sun never sets” because its inhabitants were spread around the globe. In addition to controlling over 200 smaller colonies, at its peak the Empire included England, Wales, Scotland, Ireland, Canada, South Africa, India, Australia and New Zealand. Within its realm were “more than fifty distinct governments of various kinds” (A Survey of the British Empire, Historical, Geographical and Commercial, 1904, p. 13). The British Empire ruled over a quarter of the world’s land (over 13,000,000 square miles) and a quarter of the world’s population (over 500 million people). It was “three times as large as the whole continent of Europe, and more than a hundred times as extensive as the whole of the British Islands” (ibid., p. 17).

The Genesis 49:22 prophecy likened Joseph’s descendants to a “fruitful bough by a well; his branches run over the wall.” This describes expansion of trade (fruitful) and territory (run over the wall). Although only a small island nation, Britain’s “branches” (or influence) extended throughout the globe.
In order to connect the many lands that made up the British Empire, the British had to control the seas and, in fact, “at no other time in history has one power so completely dominated the world's oceans as Britain did in the mid-nineteenth century” (Niall Ferguson, Empire, 2002, p. 139). It accomplished this by developing the strongest navy on earth and through the “control of key chokepoints around the world ocean [which] helped assure access to far-flung colonial possessions and dominate maritime commerce” (Michael A. Morris, The Strait of Magellan, 1989, p. 23). This fulfilled the promise that Abraham's descendants would control “the gate of their enemies” (Genesis 22:17). See the chart “Strategic Sea Gates.”

Today, the descendants of Ephraim continue to be a “multitude of nations” spread throughout the world, but primarily living in the United Kingdom, Canada, South Africa, Australia and New Zealand.

“Blessings of heaven above … of the deep … of the breasts and of the womb” (Genesis 49:25). Joseph's descendants were to be given great physical blessings, including ideal climates for food production, natural resources and high fertility rates. In fact, these three promises are closely linked together, as abundant agricultural production (resulting from ideal weather conditions) and natural resources are necessary factors for a nation to sustain population growth.

The British population exploded during the time the promises to Ephraim were beginning to be fulfilled. “From 1770 the English annual growth rates began to rise powerfully and pulled well away from both France and Sweden over the period 1770 to 1815” (E.A. Wrigley and R.S. Schofield, The Population History of England 1541-1871, 1989, p. 215). The fulfillment of the other two blessings played a notable role in this population explosion.
“Agricultural productivity, proto-industrialisation, the growth of manufacturing and new mineral technologies, along with the arrival of factories, had helped the economy to industrialise” (Kenneth Morgan, “Symbiosis: Trade and the British Empire,” BBC, Feb. 17, 2011). This led to Great Britain becoming the world's greatest manufacturing nation, exporter of capital and protector of trade markets.

“By the God of your father who will help you, and by the Almighty who will bless you” (Genesis 49:25). One fascinating aspect of the history of the British Empire was the belief that by divine providence the Judeo-Christian God had granted the British their dominion. In The Expansion of England, John Robert Seeley wrote this about the unlikely nature of his small island nation coming to control the largest empire in history in an apparently unplanned way: “We seem, as it were, to have conquered and peopled half the world in a fit of absence of mind” (1883, p. 8). But when we understand the Genesis prophecies, we see that the British didn't build their empire by happenstance or accident—it resulted from the unconditional promises God made to Abraham and his descendants.

“Separate from his brothers” (Genesis 49:26). Joseph's descendants were to be separate from the descendants of the other tribes of Israel. Being an island detached from continental Europe gave the British easier access to the sea, protected them from foreign invasion and allowed them to develop a culture distinct from that of continental Europe. “Unlike every other European power, the United Kingdom had the good fortune never to suffer foreign occupation or permanent defeat,” writes Norman Davies. “In this, the British experience was much closer to that of the Americans” (The Isles: A History, p. 899).

The end of the British Empire

For over 200 years the British Empire, the largest in world history, dominated the globe.
But with the coming of the 20th century, it would begin to see its end—though the core nations of the empire would remain together in another configuration.

Germany twice challenged the British Empire, and though Germany ultimately failed, the strain of fighting two bloody and costly world wars proved to be a major factor in the empire's unraveling. With the help of the other Allied powers, Great Britain emerged from World War I victorious and larger, expanding the empire to its peak after the war by adding over 1 million square miles of territory in the Middle East and over 13 million subjects. Its expansion, however, sowed the seeds of its fall:

• The empire's immense size proved to be expensive and ungovernable.
• The enormous debt Britain incurred to fight the war and the financial burden of supporting and defending its immense empire weighed down the economy.
• British citizens grew weary of the costs of the empire.
• Many of its finest leaders died in battle.

While Germany's second attempt also failed, the convulsions and repercussions of World War II proved to be the eventual undoing of the British Empire. The British fought bravely and valiantly against Hitler's Axis alliance from 1939 to 1941, but after nearly two years of fighting, it was obvious that by itself the empire could not defeat the remilitarized Germany. Nor could it do so with the aid of the Soviet Union, which joined the war in 1941.

STRATEGIC SEA GATES

One of the birthright blessings promised to Abraham's descendants was the possession of “the gate of their enemies” (Genesis 22:17; see also Genesis 24:60). On the national level, a gate is a strategic location that allows passage for military and economic movement. The British became “the master of the seas, controlling its lanes and pathways” (Fareed Zakaria, “The Future of American Power,” Foreign Affairs, May/June 2008).

STRAIT OF GIBRALTAR (1704-present). The peninsula at the point of connection between the Atlantic Ocean and Mediterranean Sea allows Britain to essentially control entry and exit from the Mediterranean. During World War II, the British were able to keep German and Italian ships in the Mediterranean from accessing the Atlantic Ocean.

HAWAIIAN ISLANDS (1893-present). The U.S. originally annexed the island string of Hawaii as a gateway between the U.S. and Asian markets. It became a strategic naval base during the Spanish-American War and World War II.

PUERTO RICO (1899-present). The United States received this island at the end of the Spanish-American War, and the excellent port of San Juan Bay provides a strategic naval base helping the U.S. defend and control the Caribbean.

PANAMA CANAL (1914-1999). This strategic canal links the Atlantic and Pacific Oceans and saves the roughly 7,000-mile trip around the tip of South America. Roughly 14,000 ships use the canal each year. Today, the Panama Canal is controlled by Panama and administered by a Chinese company (Hutchison Whampoa).

CAPE OF GOOD HOPE (1795-1931). The southern tip of Africa connects the Atlantic and Indian Oceans, and this passage controlled European access to India and Asia through the Atlantic Ocean. The port at Cape Town was a strategic stopping point for British ships en route to India and Australia.

PORT OF DOVER. British control of this port played a vital strategic role during both World Wars, limiting German access to the Atlantic Ocean via the English Channel.

HONG KONG (1842-1997). This former British colony allowed access to Chinese goods as Hong Kong became the center of British trade and finance in Asia. It was also of strategic value to both the British and Americans during the Cold War.

GUAM (1898-present). The United States received this island after the Spanish-American War. Guam provided the U.S. a naval base in the Pacific Ocean. Guam was captured by Japan during World War II, but was taken back and remains a U.S. territory hosting multiple military bases.

SUEZ CANAL (1875-1956). Connecting the Mediterranean Sea to the Red Sea, this canal created a shorter passageway between Europe and south Asia (bypassing the 4,300-mile trip around Africa via the Cape of Good Hope). It also gave Britain easy access to oil from the Persian Gulf. During World War II, the Allies were able to keep Germany and Italy from Middle East oil and access to Japan and the Eastern Front.

STRAIT OF MALACCA (1867-1957). This strait between Malaysia and Indonesia is the main channel linking the Pacific and Indian Oceans. Strategically, this passageway allows goods from Pacific Asia (e.g., Korea, Taiwan, Japan, Hong Kong and China) to reach India and Europe. Under British rule the port of Singapore became one of the busiest and most important ports in the world.

OTHER STRATEGIC POINTS: Alexandria, Egypt; Zanzibar; Falkland Islands; Strait of Hormuz; Solomon Islands; Cyprus; Malta; and Ceylon.

British Prime Minister Winston Churchill understood that the world could only be spared from Hitler's Third Reich through the help of the United States of America. As 1941 drew to a close and Britain fought for survival, Hitler's troops were moving farther south into the Mediterranean and Africa, aiming to penetrate the “gates”—the Suez Canal (and the Middle Eastern nations controlled by Britain) and the Strait of Gibraltar—that kept Germany from connecting with its allies outside of Europe. Had these fallen to Hitler, Germany would have gained control of the Mediterranean with easy access to the Atlantic Ocean and direct supply lines with Japan. Not only would this have destroyed the British, but it would have virtually assured Germany of victory and global supremacy in place of Great Britain.

But history changed on Dec. 7, 1941, when Japan attacked the American military base at Pearl Harbor, Hawaii.
The United States' immediate entry into the war shifted the balance of power to the Allies, leading to the ultimate destruction of the Axis alliance in 1945.

World War II was an important turning point in history. It marked the beginning of the end of Ephraim's (Britain's) world dominance and the transition to Manasseh's (America's) inheritance of its full portion of the birthright blessing. The British Empire emerged from the war a shadow of its former glory—having lost over 400,000 of its population in the war, needing to rebuild its bombed-out cities, with its trade networks running at a third of their prewar level and having lost a quarter of its national wealth (David Dimbleby and David Reynolds, An Ocean Apart, 1988, p. 176). It could no longer afford to support its possessions around the globe but, more important, the British people lost their will to maintain the empire. As they turned inward, so began the dismantling of the greatest empire the world had ever seen.

The first obvious crack appeared in 1947 when Britain gave India, the “crown jewel” of the empire, its independence. In the ensuing decades Britain withdrew from many of its former possessions or transferred protection of those lands to the United States. “At its height it had covered a quarter of the world's land surface and governed around the same proportion of its population. It took just three decades to dismantle, leaving only a few scattered islands” (Ferguson, p. 301).

Though the British Empire had many flaws, it brought many blessings to the world, including defending human rights, protecting the world from tyranny, keeping sea lanes open and free to peaceful nations, and establishing parliamentary democracy and liberal capitalism in many nations. “Britain has arguably been the most successful exporter of its culture in human history,” wrote Fareed Zakaria (“The Future of American Power,” Foreign Affairs, May/June 2008).
Simply put, no people have ever influenced the world culturally and linguistically on a scale like the British-descended peoples.

Transition to the Commonwealth

Though the British Empire ended shortly after World War II, the “company of nations” that Ephraim was to become has not disappeared. The nations descended from Ephraim are still connected in a unique institution known as the Commonwealth of Nations. Formed in 1949 to give former members of the British Empire the option to maintain a formal relationship while still being given self-rule, the Commonwealth of Nations is, in fact, one of the strangest anomalies of history. Not only were the former colonies given independence in a relatively peaceful manner, but the majority voluntarily chose to remain aligned with Britain in this institution. One of the unifying factors of the Commonwealth is Queen Elizabeth II, who serves as the ceremonial head of state of a number of Commonwealth countries.

The Commonwealth currently consists of 53 member states, representing about one-third of the world's population. Today's most prominent members of the Commonwealth include the United Kingdom, Canada, South Africa, Australia and New Zealand, nations that continue to hold the diminishing birthright blessings given to Ephraim. When we read the end-time prophecies about Ephraim, these are the nations we look to.

But what about the other half of the birthright blessing—the promise that Joseph's firstborn son Manasseh would become a “great” nation?

Promises to Manasseh fulfilled in the United States

When Israel bestowed his blessing upon his grandsons, he broke tradition by crossing his hands and “set Ephraim before Manasseh” (Genesis 48:20). Only later, when the British Empire rose to prominence chronologically before the United States, could we see how this prophetically symbolic move was fulfilled. The empire also became greater in terms of territory and global dominance.
Manasseh, however, was prophesied to become “a people” and “great” (verse 19). In other words, his descendants would not rule the globe through colonial expansion, but would become a great single nation.

TERRITORY HELD BY THE UNITED STATES AND THE BRITISH EMPIRE AT THE HEIGHT OF THEIR POWER

UNITED STATES: 1. Continental United States; 2. Alaska; 3. American Samoa; 4. Guam; 5. Hawaii; 6. Midway; 7. Northern Mariana Islands; 8. Panama Canal Zone; 9. Philippines; 10. Puerto Rico; 11. U.S. Virgin Islands.

BRITISH EMPIRE
British Isles: 1. England; 2. Wales; 3. Isle of Man; 4. Scotland; 5. Ireland; 6. Channel Islands.
Europe: 7. Gibraltar; 8. Minorca; 9. Malta; 10. Cyprus.
Africa: 11. Union of South Africa; 12. South West Africa; 13. Bechuanaland (Botswana); 14. Basutoland (Lesotho); 15. Swaziland; 16. Southern Rhodesia (Zimbabwe); 17. Northern Rhodesia (Zambia); 18. Nyasaland (Malawi); 19. Tanganyika (Tanzania); 20. Zanzibar; 21. Kenya; 22. Uganda; 23. Anglo-Egyptian Sudan (Sudan); 24. Egypt; 25. British Somaliland; 26. British Cameroons (Nigeria and Cameroon); 27. Nigeria; 28. Togoland (Togo); 29. Gold Coast (Ghana); 30. Sierra Leone; 31. Gambia; 32. Ascension Island; 33. Saint Helena; 34. Tristan da Cunha; 35. Seychelles; 36. Mauritius; 37. Socotra.
Asia: 38. Aden; 39. Maldives; 40. Diego Garcia; 41. Kuwait; 42. Bahrain; 43. Pakistan; 44. India; 45. Ceylon (Sri Lanka); 46. Burma (Myanmar); 47. Singapore; 48. Malaya (Malaysia); 49. Brunei; 50. Hong Kong; 51. Weihai.
Australasia/Oceania: 52. Cocos Islands; 53. Australia; 54. Tasmania; 55. New Zealand; 56. Solomon Islands; 57. Territory of Papua; 58. New Guinea; 59. Nauru; 60. Western Samoa (Samoa); 61. Tonga; 62. Fiji; 63. Gilbert and Ellice Islands; 64. Pitcairn Islands; 65. New Hebrides (Vanuatu).
North America: 70. Canada.
Central America/South America/Caribbean: 71. Bermuda; 72. Bahamas; 73. British Honduras (Belize); 74. Mosquito Coast (Nicaragua and Honduras); 75. Cayman Islands; 76. Jamaica; 77. Turks and Caicos Islands; 78. British Virgin Islands; 79. Anguilla; 80. Antigua; 81. St. Christopher and Nevis (Saint Kitts); 82. Montserrat; 83. Dominica; 84. St. Lucia; 85. Barbados; 86. St. Vincent; 87. Grenada; 88. Trinidad and Tobago; 89. British Guiana (Guyana); 90. Falkland Islands; 91. South Georgia and the South Sandwich Islands.

The British laid the foundations for the rise of the United States in 1607 by establishing the Jamestown colony. They actually lagged behind their rivals—the Spanish, French and Dutch—all of whom established colonies before the British. As we've already pointed out, the history of Joseph's descendants contains many mysterious successes that defy logic, among which is the rise of the British in dominating North America. England was a weak island nation in the 16th and 17th centuries—much less powerful than the Spanish, Portuguese and French—and any unbiased observer during the 1500s would have predicted that North America would be dominated by Spain or Portugal. But through divine providence, the British gained control of the continent. They gained their foothold first by establishing 13 colonies along the Atlantic coastline during the 17th and 18th centuries.

Ephraim and Manasseh separate

As we've seen, Ephraim's and Manasseh's descendants were to grow into two distinct peoples. In order for that prophecy to be fulfilled, the American colonies couldn't remain subjects of the British Empire, and that's why one of the most unlikely insurrections in world history—the American Revolution—occurred in the late 1700s. Historians look back at this world-changing event and ponder how it ever happened. Few of the common factors for a revolution existed. Consider:

• The American colonists strongly identified with Britain. They considered themselves Englishmen and wanted political rights equal to those in Great Britain.
In fact, after the end of the French and Indian War in 1763, “an immense surge of British patriotism spread throughout the American colonies” (David Goldfield et al., The American Journey, 2008, p. 114).

• The revolution took place shortly after the French and Indian War (1754-1763), when Great Britain intervened to defend the American colonies against French intrusion.
• By standards of the time, Britain was one of the world's most progressive nations, “with a minimalist government and a tradition of freedom of speech, assembly, the press, and (to some extent) worship” (Paul Johnson, A History of the American People, p. 125). The colonists enjoyed freedoms rarely held by the subjects of empires throughout history.
• The colonists prospered under British rule and were not victims of economic oppression.

But, against all odds, the American Revolution happened anyway.

BIBLICAL PRINCIPLES THAT CONTRIBUTED TO BRITISH AND AMERICAN SUCCESS

The British and American peoples share a common origin, a similar history, one language and a majority religion (Protestant Christianity). They are also united by their practice of certain principles that have contributed to their national success. Many of these attributes have their origins in the Bible and the ancient nation of Israel. Here are three of those principles:

RULE OF LAW. Kings of Israel were not to govern on whim or for their own self-aggrandizement, but were to govern within the confines of God's law (Deuteronomy 17:16-20). This was a revolutionary concept in world history, as the majority of history's kings, emperors and dictators have ruled above any law. The principles of constitutional monarchy, parliamentary democracy and representative democracy in the Anglo-Saxon nations are based on this ancient biblical principle that human government is to operate under law, not above it. The governments of the ancient Israelites were expected to serve the people, administer justice and avoid corruption (Deuteronomy 16:18-20; 1 Kings 3:28; 10:9; Proverbs 29:4; Isaiah 10:12). In other words, they were to operate within the confines of law. The Magna Carta, British Common Law and the U.S. Constitution uphold this principle.

PROPERTY RIGHTS. The principle of protecting private property has strong roots in the Bible. The protection of property was enshrined in the 10 Commandments (Exodus 20:15, 17). When the Israelites entered Canaan, each tribe was given land that was to be divided among families and protected (Numbers 26:53-56; 34:1-29).

INDIVIDUAL LIBERTY. The Bible teaches that people are made in the image of God (Genesis 1:26) and are to be treated with respect. Governmental leaders were to rule with justice and fairness, and not abuse the people under their rule. The protection of private property and the freedom to make economic decisions are two ways biblical law protected individual liberty.

These principles helped the descendants of Joseph develop strong economies, free societies and a high standard of living. Spreading these principles throughout the world has resulted in blessings to other nations (Genesis 12:3). Though neither ancient Israel nor these modern nations have followed these principles faithfully, when they have, they have reaped the blessings.

After the French and Indian War, British leadership decided the colonists should bear some of the financial burden of the war and the British defense of North America. They began imposing new taxes and more closely regulated the colonial economies to benefit Great Britain. These regulations (though hardly repressive by the standards of other world empires) coincided with an era known as the Enlightenment. The philosophers promoting new ideas of personal liberty, economic freedom and the rights of individuals greatly influenced many American colonists, who used these ideas to make the case that they were being economically oppressed by Great Britain.

This led to another example of divine providence in history. As Paul Johnson described it, “The generation that emerged to lead the colonies into independence was one of the most remarkable groups of men in history” (ibid., p. 127). Without these men, the American Revolution would likely never have begun. Interestingly, as many great American leaders emerged, Great Britain suffered from poor leadership, as King George III, the British Parliament and the colonial governors all contributed to mismanaging the American protests.

If the beginning of the American Revolution was unlikely, the idea that the American colonies would actually win was even more unlikely. The colonists were mainly merchants and farmers with no trained standing army—only localized militias. The Continental Army never exceeded 20,000 men, yet it faced the most powerful empire on earth: a well-trained and disciplined British force of about 50,000 troops, helped by over 30,000 Hessian mercenaries. The Revolutionary War should have been an easy British victory.

But a number of factors and providential miracles led to the unthinkable. For example, early in the war it seemed that the British stood to easily defeat George Washington's poorly supplied and trained Continental Army. When the British invaded New York in August 1776 and quickly drove Washington back into New Jersey, he retreated with only 3,000 of his original 18,000 troops. But instead of pursuing and destroying the struggling Continental Army, the British wavered, and Washington's army survived to fight another day.
Historian Joseph Ellis points out that “if the British commanders had prosecuted the war more vigorously in its earliest stages, the Continental Army might very well have been destroyed at the start and the movement for American independence nipped in the bud” (Founding Brothers, 2000, p. 5).

On Sept. 11, 1776, British Admiral Lord Richard Howe offered to pardon all the revolutionary leaders for their treason if the colonists would retract the Declaration of Independence. The American representatives refused, despite the poor military and financial condition that made victory over the British extremely unlikely, and the war continued on.

But in what we consider another act of divine providence, Washington maintained morale among the depleted and destitute American troops and, in his famous crossing of the Delaware River on the night of Dec. 25, 1776, led them in a surprise attack on the Hessian mercenaries at Trenton. They scored a strategic, morale-boosting victory, and a week later, at the Battle of Princeton, again defeated the British. These victories “at Trenton and Princeton boosted morale and saved the American cause” (The American Journey, p. 155).

On Oct. 17, 1777, the Americans defeated British forces descending from Canada at the Battle of Saratoga, preventing them from linking up with the British army in New York City. The ensuing winter of 1777 proved harsh and trying for the Continental Army camped at Valley Forge, but the soldiers endured and gained much-needed training and discipline.

The American victory at Saratoga, though far from ensuring total victory, gave the French confidence to support the American cause. The French had little interest in the colonists' independence, but were motivated by what they saw as an opportunity to weaken Great Britain. The Spanish later joined the cause, hoping to regain Gibraltar from British control.
The Americans now had allies to provide financial and naval support. These extraordinary developments demonstrate how God can use nations to fulfill His promises. While France and Spain were trying to take elements of the birthright away from the descendants of Ephraim, they were inadvertently helping the Americans to fulfill the birthright blessing promised to Manasseh. The French support and American resilience proved to be key factors that ultimately led to the American victory, with the British surrendering on Oct. 19, 1781. Paul Johnson summarized it this way: “So the British, who had begun the war with an enormous superiority in trained men and guns and with complete control of the sea, ended it outnumbered, outgunned, and with the French ruling the waves” (A History of the American People, p. 165).

The Preliminary Articles of Peace, signed in Paris on Nov. 30, 1782, officially ended the war and gave the United States of America full independence. The treaty promised the British withdrawal of all forces from the United States, while Canada remained a part of the British Empire. Its signing completed the prophesied separation between Ephraim and Manasseh and set the stage for the fulfillment of the Genesis 48:19 prophecy: America was independent and prepared to emerge as a great people, while Great Britain was now poised to become a multitude of nations through the Second British Empire.

[Map caption: “From sea to shining sea.” This map shows how the Americans came to possess the land that would make up the “great nation” promised to Joseph's son Manasseh.]

The amazing story of American expansion

The United States did not receive the full physical birthright blessings immediately after the Revolution. In fact, those blessings would slowly mature over the next 150 years.
While Britain rose to its pinnacle as a prophesied empire throughout the 19th and early 20th centuries, America was slowly building the great nation that would fully realize its physical blessings in the mid-20th century. After the Revolution, the United States remained relatively weak. Many believed a republic could not work in a nation as large and diverse as the United States. Indeed, its competing regional interests—mainly the commercial North and the agrarian South—immediately threatened the fledgling nation, and the issue of slavery would increasingly polarize it and ultimately lead to civil war.

Added to those internal political and economic issues, the United States' territory was still essentially restricted to the Atlantic Coast, while the British, Spanish and French controlled the remainder of the continent. Jay's Treaty (signed Nov. 19, 1794, between the U.S. and Great Britain) unlocked the first key to western expansion. Though very controversial at the time, the historical ramifications of the treaty were immense, as the British agreed to evacuate all northwest forts east of the Mississippi River. The treaty also gave “most favored nation” status to the U.S. in British trade and allowed America to trade in the West Indies. Not only did this impact the economic development of America, it opened up the Ohio Valley for American expansion.

Shortly after Jay's Treaty, the U.S. signed the Treaty of San Lorenzo (Oct. 27, 1795), whereby Spain recognized the land east of the Mississippi as American territory. This essentially opened the gate to unhindered American expansion into Trans-Appalachia (land east of the Mississippi River and south of the Ohio Valley). This treaty also gave the U.S. free access to the Mississippi River and the port of New Orleans.

[Photo caption: Napoleon Bonaparte (right) hoped to build a “New France” in North America. But instead Thomas Jefferson was able to purchase the land from Napoleon on behalf of the United States. The Louisiana Purchase doubled the size of the young nation and gave it possession of some of the richest farmland in the world. (Wikimedia Commons)]

The Louisiana Purchase: America miraculously doubles in size

But Spain still controlled the strategic Mississippi River and owned the entire territory referred to as Louisiana—beginning at the port of New Orleans and extending northwestward into the Great Plains all the way to the Canadian border (see illustration). That changed on Oct. 1, 1800, when Spain secretly ceded Louisiana to France (under Napoleon Bonaparte) in the Treaty of San Ildefonso. Napoleon wanted to create a “New France” in the Louisiana territory as part of his grand plan to build a French empire rivaling the British.

President Thomas Jefferson acutely understood the threat of French control of Louisiana. He noted its strategic importance in an 1802 letter to Robert Livingston (the U.S. minister to France): “The cession of Louisiana … The day that France takes possession of New Orleans … we must marry ourselves to the British fleet and nation” (quoted in Habits of Empire: A History of American Expansion, p. 59).

But two factors—a bloody uprising in Saint-Domingue (modern Haiti) and the fear of an impending war with Great Britain—thwarted Napoleon's plans. Jefferson was determined to regain access to the Mississippi and New Orleans, which had been closed off by Spain on Oct. 16, 1802. He instructed Livingston and James Monroe to try to negotiate a deal with Napoleon to either purchase the Mississippi River and port of New Orleans or at least to broker an agreement for Americans to have guaranteed access to the strategic port and river. Congress authorized Monroe to offer up to $2 million to France.
But both Livingston and Monroe were shocked when Napoleon offered to sell the entire 530,000,000 acres of the Louisiana territory for $15 million! All parties quickly agreed to the deal and on July 4, 1803, President Jefferson announced to the nation that the United States had purchased the entire Louisiana Territory. Immediate benefits came from total control of the Mississippi River and New Orleans, allowing free and unhindered flow of trade throughout the country. But the greater benefit lay in the potential the young nation had instantly gained in doubling its size! This now guaranteed westward expansion of the American people, and ultimately, 14 additional states would form from the Louisiana Purchase. Within this territory would lie some of the most productive farmland and natural resources that were promised to the descendants of Joseph (Genesis 49:25). Over the next 50 years, the United States would gain possession of the remainder of the land that makes up the 48 connecting states. This expansion was guided by a popular belief called Manifest Destiny—the conviction that God destined America to expand westward and become a nation “from sea to shining sea.” Just like the British expansion around the globe during the period of the Second British Empire, the American expansion across the continent was unprecedented. Many of those involved in it sensed God’s hand in bringing it about. Habits of Empire: A History of American Expansion provides this summary of the nation’s historically unparalleled growth: “When the United States began its recognized existence as an independent country in 1783, it had fewer than four million people spread over less than 900,000 square miles. … Between then and 1854, the density doubled, the area tripled, and the population exploded eight or nine times over” (p. 221). 
The Civil War: the great threat to Manasseh’s blessing

LifeHopeandTruth.com 91

While Great Britain enjoyed its pinnacle of power in the 19th century, the United States found itself dealing not only with exciting development opportunities, but crucial domestic affairs that would ultimately determine its future. One of the greatest issues it faced was that of slavery. Great Britain was ahead of the United States, banning the slave trade throughout the British Empire in 1807 and prohibiting all slavery in 1834. In the United States, though, the issue grew intensely divisive as the founders and leaders of the nation continually made compromises that forced future generations to deal with the issue.

Those issues came to a head when Abraham Lincoln became president in 1860. Because of his stand against the expansion of slavery, within months all the slave states of the lower South seceded from the Union and formed the Confederate States of America. This rebellion directly threatened the fulfillment of the birthright blessings to Manasseh. Had this rebellion succeeded and the South permanently formed a separate nation, the promise of Manasseh becoming a great single nation would have failed.

Despite the South’s early military successes in the Civil War, it ultimately could not withstand the superior industrial power of the North, and the devastating war ended in April 1865. Work began immediately to reintegrate the South into the Union. The promises to Manasseh would stand, and it could now fulfill its birthright blessing free of the stain of slavery (which the U.S. abolished through the 13th Amendment on Jan. 31, 1865).

America becomes an industrial power

The triumph of the North set the nation on the course to become a major industrial power. Over the next three decades the industrial output of the United States consistently increased. Between 1865 and 1914, the U.S. gross national product grew remarkably by over 4 percent a year.
By the beginning of the 20th century, “the United States had the largest and most modern industrial economy on earth, one characterized by giant corporations undreamed of in 1865” (John Steele Gordon, An Empire of Wealth, 2004, p. 205).

Did this great industrial progress occur as a direct result of the birthright blessings promised to Joseph? Remember God’s promise in Genesis 49:25: Joseph’s descendants would receive “blessings of heaven above, blessings of the deep that lies beneath” (natural resources) and “blessings of the breast and of the womb” (population growth). Consider:

• Natural resources. The abundant natural resources controlled by the United States made industrial expansion possible. Within the American borders lie vast amounts of resources such as iron ore, timber, oil, coal and waterpower. Not only did the large iron ore deposits allow the United States to become the world’s leading producer of steel, but the energy sources (coal and oil) provided the power that spurred American industry. For instance, American steel and coal powered the booming railroad industry in the post-Civil War era, which impacted many other industries by providing efficient, quick and cheap transportation of raw materials and goods.

• Population growth. In the 40 years following the Civil War the U.S. population exploded. At war’s end the population stood at 39,818,449, but within a short 15 years it grew 25 percent to 50 million! By 1890 it had grown another 25 percent to 62.9 million, and by the turn of the century, the U.S. had over 75 million people. Three major factors fueled this explosive growth: falling infant mortality rates, higher life expectancy and mass immigration.

Could either Ephraim (Great Britain) or Manasseh (United States) have experienced such growth without the immense richness of these vast natural resources, blessings given to Joseph?
This rise of industry and population propelled the United States to the pinnacle of the birthright blessings promised to Manasseh.

From depression to world superpower

Despite the explosive economic, industrial and population growth after the Civil War, the U.S. economy grew prone to “boom and bust” cycles—times of great economic growth frequently followed by times of recession. The most extreme example came with the boom of the “Roaring Twenties,” followed by the economic crash and Great Depression of the 1930s. Nevertheless, over time the United States became the world’s wealthiest nation.

Except for its late and brief entry into World War I, the U.S. distanced itself politically, maintaining a strong isolationist mind-set and a weak military, relying on the Atlantic and Pacific Oceans for its defense. Events in Europe, however, would soon force the U.S. into actions that would change the global balance of power and propel it to superpower status.

The single most important catalyst for this shift came with World War II. The United States at first resisted joining the Allied fight against the Axis powers of Germany, Italy and Japan, putting its focus first on rebuilding and stabilizing its economy. In fact, 80 percent of Americans preferred neutrality. But everything—including the course of human history—changed on Dec. 7, 1941, when Japan launched a surprise attack on Pearl Harbor. President Franklin Roosevelt declared it a “date which will live in infamy.”

The Pearl Harbor attack pushed the United States to enter World War II, aligned with the Allied powers of Great Britain and the Soviet Union. By then France had fallen, and Britain’s forces had practically been run out of continental Europe (its army of over 300,000 amazingly spared a far worse defeat by the Dunkirk evacuation, which Winston Churchill hailed as a “miracle of deliverance”).
Though the English withstood Germany’s air assault (successfully thwarting Hitler’s plan to invade and destroy Great Britain), Britain and the Soviet Union alone could not defeat the Nazi war machine. The U.S. entry into World War II had two colossal effects on history:

1. The scales tipped in the Allies’ favor, leading to the ultimate fall of the Axis powers. The rapid mobilization of the American economy and military proved the greatest physical factor in defeating Nazi Germany. The U.S. contributed over 16 million military personnel, 296,000 planes, 102,000 tanks and 88,000 naval ships from 1942 to 1945. It also produced the atomic bomb and effectively ended the war when it destroyed Hiroshima and Nagasaki, Japan. God had foretold, “The archers have bitterly grieved him, shot at him and hated him. But his bow remained in strength, and the arms of his hands were made strong by the hands of the Mighty God of Jacob” (Genesis 49:23-24). Were not the sons of Joseph, fighting together in this most destructive war in history, fulfilling the prophecy that their descendants would be blessed with military superiority?

2. The United States ascended to economic and superpower status. The marshalling of the United States’ three greatest assets—industry, natural resources and population—not only outproduced and defeated the Axis powers, but catapulted America out of the Great Depression. It emerged as the world’s military superpower, and its financial resurgence is one of history’s greatest economic success stories. In six short years between 1939 and 1945, the U.S. gross national product rose astoundingly from $88.6 billion to $135 billion, with industrial production growing over 15 percent a year (Paul Kennedy, The Rise and Fall of the Great Powers, pp. 357-358).
“The war acted as an immense bull market,” Paul Johnson wrote, “encouraging American entrepreneurial skills to fling the country’s seemingly inexhaustible resources of materials and manpower into a bottomless pool of consumption” (A History of the American People, p. 780).

But the post–World War II era also marked a key transition in history, as the birthright blessing promised to Ephraim (Great Britain) began to decline and the blessings to Manasseh (United States) reached their zenith. Just as the 19th century belonged to the British, the 20th would be the American century.

The American century: Pinnacle of Manasseh’s birthright blessings

Until the mid-20th century, the United States was truly a nation of unrealized potential. It reached that potential only as it emerged from World War II as the world’s most powerful nation by almost every measure—militarily, economically and industrially. Let’s look closer at the prophecies to Manasseh and how they were fulfilled in the United States.

“He also shall become a people, and he also shall be great” (Genesis 48:19). Manasseh’s descendants differed from Ephraim’s in one major way: Whereas Ephraim would become a colonizing people inhabiting multiple nations around the world, Manasseh would become one single great nation. The United States put its attention on westward expansion across the North American frontier—joining this land “from sea to shining sea” as a single nation—and had little involvement in global politics. Even after World War II, when it enjoyed superpower status, “the United States did not create a new colonial empire for itself on the British model” (God and Gold, p. 112). Instead, the U.S. supported the nations it defeated in rebuilding and in creating a global order based on free trade and democracy. Even when it had the power to colonize and subjugate peoples, the U.S. historically has been uncomfortable with the idea of imperialism.
“Joseph is a fruitful bough … his branches run over the wall” (Genesis 49:22). Being rooted on a large continent containing abundant natural resources meant that the U.S. could build its base of wealth by expansion, rather than global colonization. In the post–World War II era, though, America began spreading its branches throughout the world as an informal empire, creating alliances and global trade networks to maintain world order and economic ties that further benefited it. Notice this insight from historian Niall Ferguson: “… relatively short” (Colossus: The Rise and Fall of the American Empire, 2004, p. 13). Is this not exactly what we would expect of Manasseh’s form of imperialism, based on Jacob’s prophesied distinction between Ephraim and Manasseh?

“The archers have bitterly grieved him, shot at him and hated him. But his bow remained in strength, and the arms of his hands were made strong” (Genesis 49:23-24). God was clear—enemies would lie at Joseph’s gates, but his descendants would be militarily strong and victorious during the height of their blessings. Many aggressor nations have opposed the United States throughout its history—Nazi Germany and Imperial Japan, the Soviet Union and other communist nations during the Cold War, and, more recently, radical Islamic terrorist organizations. To this point, the U.S. has been able to defeat and subdue most of these aggressors through superior military strength. In fact, it inherited Great Britain’s former role of “world policeman” after World War II.

“Blessings of heaven above, … of the deep, … of the breasts and of the womb” (Genesis 49:25). History has rarely seen a nation grow so speedily in attaining such a high level of affluence and wealth as the United States. But never has a nation come into such a vast land teeming with such abundant natural resources of coal, copper, lead, uranium, gold, iron, nickel, silver, natural gas and petroleum.
American industry exploded after World War II, using these resources to meet the consumption demands of a growing nation. Historians label the postwar population spike “the Baby Boom,” as GIs returned from the war, found stable jobs and began building families. The sustained fertility rates between 1946 and 1964 were staggering, with annual births consistently tallying over 4 million. This not only dramatically impacted the U.S. population, but also coincided with an enormous increase in the middle class, raising the American living standard and fueling a huge boom in economic growth driven by consumer consumption.

Continued Anglo-American world dominance?

History clearly chronicles the British and American dominance over the world throughout the last 200 years by nearly every measurement—economic, political, cultural and military. This did not just happen. The power and prosperity experienced by these peoples grew from the fulfillment of blessings and prophecies made thousands of years earlier.

Though the era of British geopolitical dominance ended in the 20th century, Ephraim’s descendants continue to enjoy high standards of living and remain moderately powerful on the world stage. The United States continues to stand as the world’s mightiest nation—but it is also a nation in decline by multiple measurements. Manasseh’s descendants no longer dominate the world as they did in the post–World War II era, and they are continually challenged on many fronts—by Islamic extremism, a growing China, a belligerent Russia and a feisty Europe that no longer blindly follows America’s lead.

Bible prophecy reveals that the blessings given to the United States and British-descended nations will not continue forever. In fact, many prophecies reveal that because of their sins, these nations will go through a horrible time of national punishment before the return of Jesus Christ.
Read the concluding chapter to see what prophecy says about the future of these nations. Whether or not you live in one of these nations, the prophesied events will greatly impact your life.

INSET: GOD’S INTERVENTION IN BRITISH AND AMERICAN HISTORY

In Genesis 49:23-24 Jacob prophesied that Joseph’s descendants would be hated and attacked by many enemies. God promised their “bow” (symbolic of military power) would remain “in strength, and the arms of his [Joseph’s] hands were made strong by the hands of the Mighty God of Jacob. … By the God of your father who will help you” (verses 24-25). In other words, even though frequently attacked, God would help them prevail.

This prophecy has come to pass many times in the history of the British and American peoples as both nations have faced many enemies who have tried to destroy them. Here are a few examples of God’s clear intervention on behalf of Ephraim and Manasseh.

Defeat of the Spanish Armada

In May 1588 King Philip of Spain dispatched his country’s famed armada of ships to conquer England, intending to secure Spain’s supremacy in Europe and restore Catholicism to the British Isles (at this time under the rule of the Protestant Queen Elizabeth I). On July 19, 1588, the English fleet spotted the Spanish Armada and gave chase.

As historian John Richard Green explained, “In numbers the two forces were strangely unequal; the English fleet counted only 80 vessels against the 130 which composed the Armada. In size of ships the disproportion was even greater. Fifty of the English vessels, including the squadron of Lord Howard and the craft of the volunteers, were little bigger than yachts of the present day. …

“Small, however, as the English ships were, they were in perfect trim; they sailed two feet for the Spaniards’ one.
… Closing in or drawing off as they would, the lightly-handled English vessels, which fired four shots to the Spaniard’s one, hung boldly on the rear of the great fleet as it moved along the Channel … till the Armada dropped anchor in Calais roads” (A Short History of the English People, 1874 edition, pp. 410-411).

Neither side lost ships in the first week of fighting, but on July 28 at midnight the English sent eight fire ships toward the Spanish vessels, causing many of them to cut their anchors and scatter in confusion. As the Spanish tried to regroup, the English expended all of their remaining gunpowder firing their cannons upon their enemy.

[Photo caption: King Philip II of Spain hoped to defeat England, remove Queen Elizabeth I from the throne and establish Spain as the world’s dominant colonial power. Spain’s fleet of 130 ships was defeated by 80 English ships.]

W.B. Grant summarized what then inexplicably happened: “Three great galleons had sunk, three had drifted helplessly on to the Flemish coast; but the bulk of the Spanish vessels remained. The work of destruction, however, had been left to a mightier foe than Drake.

“Supplies fell short and the English vessels were obliged to give up the chase, but the Spanish were quite unable to re-form, their last chance to do so being destroyed by a gale. The wind was so violently against them that they were forced to steer in a circuit around the British Isles, and on this journey to their home port many of the already damaged and battered vessels were driven ashore on the coasts of Scotland and Ireland” (We Have a Guardian, 1972, p. 4).

Historians recount that the Spanish lost more ships and sailors to the stormy weather than to combat. Only about half of the Spanish vessels returned to Spain, and approximately 5,000 Spaniards died. The English lost fewer than 100 men and no ships.
Had Philip’s plan worked, it would have been Spain, not England, achieving global naval dominance and colonizing North America. Had that happened, the United States of America would never have existed. But as a result of the amazing English victory, Spain fell into permanent decline as a naval power, while England further developed the navy that would eventually rule the seas.

The miracle of Dunkirk

Sometimes a miracle occurs following a major defeat. Such was the case for the British Expeditionary Force (BEF) at Dunkirk, France, during World War II. By all accounts, the British effort to help defend Europe, which had begun only months earlier in 1939, was an abject failure. Not understanding how to successfully fight the Germans, the British forces retreated to the town of Dunkirk (on the French coast). By May 24, the Germans had surrounded Calais, located just a short distance from Dunkirk, and the British, Belgian and French troops were in grave danger. Great Britain faced the terrifying reality that its entire mainland force could be killed or captured—leaving the British Isles extremely vulnerable.

Then came a most surprising turn of events. The German commander ordered a halt to the advance, which Hitler later approved. In hindsight, historians consider the order to halt one of the greatest mistakes the Germans made in the entire war and one of the greatest breaks for the Allies. The British quickly devised “Operation Dynamo,” a plan that evacuated 338,226 British, Belgian and French troops from Dunkirk between May 27 and June 4, 1940.

Completely unforeseen by both sides, two surprising weather systems developed that greatly aided the evacuation. As C.B. Mortlock reported in The Daily Telegraph on June 8, 1940: “As the story is told, two great wonders stand forth; and on them have turned the fortune of the troops. … The first was the great storm which broke over Flanders on Tuesday, 28 May.
The second was the great calm which settled on the English Channel during the days following.”

The storm allowed soldiers to march to Dunkirk from 8 to 12 miles away without having to worry about German aircraft, which were grounded by the bad weather. The calm waters on the English Channel allowed many smaller English vessels—needed to evacuate soldiers from the beach to larger transport ships—to cross the Channel and assist in the operation.

[Photo caption: With German troops closing in on them and with their backs to the English Channel, over 330,000 Allied troops were saved in what became known as “the miracle at Dunkirk.”]

In the words of Winston Churchill to the House of Commons, the experience was a “miracle of deliverance” following a “colossal military disaster.” In an article titled “Dunkirk: The Miracle of Deliverance,” The Telegraph stated: “The evacuation from Dunkirk was undoubtedly the final phase in a defeat. But, had this culminated in the BEF’s surrender and capture, it is inconceivable that Britain would have fought on. The Germans might not have invaded our Island, but instead, as Hitler always hoped, Britain would have been forced to agree [to] peace terms. The escape of the BEF followed by the failure of the Luftwaffe to win the Battle of Britain bought a precious commodity: time, allowing the British to absorb the lessons of the campaign in France and Flanders, to re-equip and retrain her Army.

“In 1939, the United States Army was ranked 17th in size in the world, after Romania. It is therefore out of the question that America could have played any part in stopping the expansion of Germany had Britain capitulated. Without Britain, and her Empire and Commonwealth, continuing to resist, Hitler could have won the war, even after the invasion of Russia.
The evacuation of the BEF at Dunkirk truly was a retreat to ultimate victory over Nazi Germany.”

Again, were the Allied troops who were evacuated from Dunkirk just lucky? Or did God influence the Germans to halt their advance short of Dunkirk, and did He bring the weather that facilitated this large-scale operation?

The miracle of D-Day

Weather also contributed mightily to the Allies’ success at the critical turning point of the war on the Western front—D-Day. This invasion of Normandy, which began June 6, 1944, was the largest amphibious invasion in military history.

Those planning the invasion determined that only a few days each month met the criteria for the conditions necessary for the invasion. General Dwight Eisenhower had selected June 5 as the operation date, but high winds, heavy seas and low clouds on June 4 required that the operation be delayed. After British meteorologists predicted that the weather would improve sufficiently to launch the operation on June 6, General Eisenhower discussed the situation with other senior personnel and gave the command to begin.

While the planners of D-Day saw their window of opportunity in the weather, the Axis powers had a different picture. The meteorological station in Paris providing weather information for the German military operations indicated several weeks of bad weather. As a result, many German commanders, confident that no invasion could take place under these conditions, took temporary leave of their posts and gave their soldiers time off.

Commenting on the amazingly cooperative weather for the Allied forces, The Times reported on Sept. 2, 1944: “On the morning of the assault the wind had moderated, and the cloud was not only well broken, but its base was at least 4,000 feet high, ideally suited for the large-scale airborne operations.
In the hour preceding the landings, when perfect conditions for pinpoint bombing were so essential, there were large areas of temporarily clear sky, and throughout the critical time medium and light bombers were unhampered.”

Even though the D-Day invasion caught the Axis powers by surprise, the operation was costly. By the day’s end, more than 9,000 Allied soldiers lay killed or wounded. Even so, the losses were less than Eisenhower had anticipated. Most importantly, a front had been established that allowed more than 100,000 soldiers to enter continental Europe and ultimately defeat Nazi Germany.

[Photo caption: Men of the 16th Infantry Regiment, U.S. 1st Infantry Division, land on Omaha Beach on the morning of June 6, 1944.]

The Allied planners and participants considered D-Day a miracle. It seems that God once again intervened in the weather on behalf of the descendants of Ephraim and Manasseh.

Coincidence or providence?

Some may consider the British and American peoples to have been simply lucky at key junctures in military history and successful because of their resources and their enemies’ mistakes. But viewed within the context of God’s promise to protect Joseph’s descendants, these events serve as a testimony to the identity of the recipients of the birthright blessings given to Joseph. God not only has the power to direct historical events, but also has control of the weather (Isaiah 46:9-10; Leviticus 26:3-4). However, despite His past intervention on behalf of these nations, the Bible also reveals a time is coming when God will remove His divine protection and blessings from the British and American peoples.

CHAPTER 5

WHAT’S AHEAD AND WHAT SHOULD YOU DO ABOUT IT?
“Cast away from you all the transgressions which you have committed, and get yourselves a new heart and a new spirit. For why should you die, O house of Israel?” (Ezekiel 18:31).

[Photo caption: Protesters burn American and British flags outside the American Embassy, in Grosvenor Square, on Sept. 11, 2010, in London, England.]

As we have shown, God has been faithful to the promises He made to Abraham over 3,500 years ago. He divided those promises into two basic parts: the birthright blessing (the material blessings of national greatness) and the spiritual blessing that would apply to all nations through Abraham’s Seed—Jesus Christ (Galatians 3:16). The opportunity to respond to God is not based on one’s race, gender or ethnicity (Galatians 3:28).

Many people today recognize the fulfillment of the promise of grace through Jesus Christ. But how God fulfilled the material and national blessings to Abraham’s descendants in the “last days”—the time just before Christ’s return—has largely remained a mystery (Genesis 49:1). Most assume these promises were fulfilled in the ancient nation of Israel, but, clearly, ancient Israel never received these blessings in their fullness. Yet God declares that He is faithful to the promises He makes—He does not lie or exaggerate (Numbers 23:19; Titus 1:2; Hebrews 6:18).

The 10 tribes of Israel went into captivity and lost their identity because of their national sins. But God knows who they are today and gives us clues to identify their descendants in modern times—and the biblical, historical and archaeological evidence points to the modern descendants of Jacob existing today in the United States, Britain and the Commonwealth nations, and other nations of northwest Europe. These nations enjoy their prosperity and physical blessings not because they are physically superior to any other people, but solely because of the faithfulness of their progenitor, Abraham, and the other patriarchs.
In fact, the Israelites ancient and modern have been far from perfect. They were punished for their sinfulness in the past and will face a similar punishment in the future.

We now come to two very important questions. What is the significance of all this information? And how should you respond?

Significance of identification

Does the identity of the Israelite nations carry any significance today? Some dismiss it as merely a historical fascination that has no relevance. Others label it as racist because misguided people have ignorantly used similar ideas to promote racism. We reiterate and emphasize that these blessings were not due to any sort of genetic or physical superiority—in fact, God emphatically called Israel “the least of all peoples” (Deuteronomy 7:7).

Again, what is the significance of knowing the identity of the modern descendants of Israel today? The answer is that this knowledge helps us understand the messages of the prophets and the future of these nations before Jesus Christ’s return. Identifying the descendants of modern Israel is an essential key to understanding many Bible prophecies. What are some of the significant prophecies God gave regarding these descendants of Abraham?

Future prophecies

By understanding the modern identity of Israel, one can understand where these nations are addressed in Bible prophecy. Instead of using their modern names in prophecies, God addressed them by their ancient names. Numerous Bible prophecies yet to be fulfilled refer to the nations or families of “Manasseh,” “Ephraim,” “Joseph,” “Jacob” and “Israel.” Understanding that the United States, Britain and the nations of the Commonwealth are the modern descendants of Manasseh and Ephraim, the grandsons of Jacob (Israel), helps us know where to look for those prophecies to unfold.
Before moving forward, we need to remember that the modern nation called Israel (located in the Middle East) is the homeland of many Jews, descendants chiefly of Judah, one of Jacob’s 12 sons (Genesis 49:8-12). As we have explained, this nation does not represent all the other sons of Jacob. When we read future prophecies of Jacob and Israel, they are usually not referring specifically to the Jewish state of Israel. The more accurate biblical identification for this state is Judah.

THREE KEYS TO UNDERSTANDING OLD TESTAMENT PROPHECIES

It’s important to understand three basic applications of prophecy when studying the Old Testament. The correct application is determined by studying the context and time frame in which the prophecies were written. Prophecies can have:

1. A historical application, usually a warning to people of impending punishment for their sins and a call to repent. Isaiah 1:1-16, for example, shows the prophet urging Judah’s kings and people to repent of their evil doings.

2. A dual application to both the ancient Israelites and their modern descendants today. The blessings for obedience and the curses for disobedience to God’s law recorded in Leviticus 26 and Deuteronomy 28 are examples of dual application. What applied to the ancient Israelites still applies today.

3. A future application. Sometimes God’s messages through His prophets came after their respective nations had already fallen, and the context of the passage shows that the prophecy was for the descendants of Israel in the end times (Genesis 49:1; Daniel 11:40).

For more keys to understanding the Bible’s prophecies, read the articles in the “Prophecy” section on LifeHopeandTruth.com.

Increasing national sin

In the past, the British and American peoples had a reputation for at least a show of morality. When the British would colonize an area, missionaries spreading Bibles and Judeo-Christian teaching always followed.
The founders of America often hearkened back to biblical principles—belief in God and Christian tenets played instrumental roles in the founding and ongoing success of the nation. In fact, Thomas Jefferson and Benjamin Franklin suggested that the Great Seal of the United States include a picture of the Israelites and Moses following the pillar of fire. For a large portion of their history, many biblical values were the standard in Anglo-Saxon nations. People commonly acknowledged God as their source of blessings, sexual immorality was generally taboo, and the 10 Commandments served as the basis for morality.

But over the course of the 20th century, these nations gradually discarded the foundation of biblical morality and increasingly embraced secular and antibiblical morality. A careful comparison of God’s commands with the conduct of the modern nations of Israel today shows that these nations are brazenly rejecting God and His ways on a national and individual level. Moral standards continue to slide in these nations, paralleling the increasing skepticism about God and the Bible and the outright rejection of His 10 Commandments as a standard of morality. Blatant examples of the breaking of God’s commandments include trampling on the seventh-day Sabbath, idolatry of many sorts, sexual immorality and the breakdown of the traditional family.

Much of this moral breakdown is traceable to the sexual revolution of the 1960s. Cohabitation before marriage is now generally accepted, abortion is legal and used as a method to escape the consequences of illicit sex, and homosexual marriage is now legal in nearly all nations of Israelite descent. Instead of setting an example of morality and goodness, it’s sadly ironic that even as many of these nations lead the world in terms of technology and innovation, they also produce a large percentage of the world’s pornography, exporting sin as entertainment.
For more information on these problems, read "Why Is God Angry With America?" on LifeHopeandTruth.com.

Instead of worshipping the true God, the modern Israelite nations practice idolatry by putting many things before God, including rampant materialism, false religion and elevating freedom of choice over biblical principles of morality. Because of their increasing sins, which are really a slap in the face of the God who has given these blessings, He has already foretold that He is going to severely, and justly, punish the nations of modern Israel to bring them to their senses. Let's now focus on prophecies directed to Israel today.

Jacob's trouble: A time of national punishment

Jeremiah 30 holds a sobering prophecy of the future for the modern descendants of Jacob. First, note the setting for this message—it was after both the northern kingdom of Israel and the southern kingdom of Judah had fallen that God told Jeremiah to record His words for the future. In verses 3-4 we read: "…" Note that God spoke of bringing both "Israel and Judah" back to the Holy Land. But only the Jews, representing the tribe of Judah, returned to this land after 70 years of captivity in Babylon. The other tribes of Israel never returned! So this has yet to occur, but when?

God then explained that prior to their coming back they would see a time of "fear, and not of peace" when people's faces will turn pale (verses 5-6). "Alas! For that day is great, so that none is like it; and it is the time of Jacob's trouble, but he shall be saved out of it" (verse 7). This prophecy was not for ancient Israel and Judah—they had already experienced their time of punishment—but for a future time of trouble on the modern nations of Israel and Judah. This "time of Jacob's trouble" will be an unprecedented time of difficulty for the Israelite nations prior to Jesus Christ's return to earth.
God will bring this punishment because the sins of Jacob's descendants will have "increased" (verse 15)—something we are seeing occur before our eyes.

In addition to his prophecies of the destruction of Jerusalem, the prophet Ezekiel also recorded many prophecies (long after the fall of ancient Israel) of a future punishment on these people. He prophesied "toward the mountains of Israel" of end-time punishment for their sins (Ezekiel 6:2). In the Bible, "mountains" typically represent governments or nations—so this prophecy is directed to the modern nations of Israel. He declared that God will "bring a sword [representing military force] against you, and I will destroy your high places" (verse 3). Part of this defeat will be the destruction of cities (verse 6). With today's widespread nuclear capabilities it's no longer inconceivable that major cities—such as London, New York, Chicago, Los Angeles, Toronto, Sydney, etc.—could be suddenly destroyed. Famine and pestilence—often the byproduct of war (verses 11-12)—will come as well. Ezekiel 7 prophesies that modern Israel will experience total military defeat (verses 14, 21, 24), widespread terror and suffering (verses 16-18) and economic collapse (verse 19).

That's not all. The book of Revelation describes a European power (called "the beast") that will economically and militarily dominate during the end times (Revelation 13:11-18; 17:12-18), replacing the United States, Britain and the Commonwealth nations as the world power before Christ's return—and will be responsible for the fall and captivity of the modern Israelite nations. Yes, as hard as it may be to imagine, the United States and other nations descended from Britain will fall during the coming Great Tribulation (Jeremiah 30:8). Read through Leviticus 26, which describes the horrifying curses to come on Israel for national disobedience.
In their stubbornness against God, the modern nations of Israel, primarily the United States, Britain and the Commonwealth nations, will face these punishments. God declared that as a result of national sins, He would "break the pride of your power" (verse 19). Though the United States, United Kingdom, Canada, South Africa, Australia, New Zealand and the state of Israel are some of the most prosperous and powerful nations on earth today, this power will be taken away. We see, in fact, that power already beginning to weaken.

A time of trouble for all people

Several other prophets also spoke of this perilous time coming to the descendants of Jacob and the peoples of all nations before the return of Christ. Daniel called it "a time of trouble, such as never was since there was a nation" (Daniel 12:1). Zephaniah referred to a "day of wrath, a day of trouble and distress" (Zephaniah 1:15). Many prophets announced the coming Day of the Lord. Isaiah added that this "indignation of the Lord" will be "against all nations" (Isaiah 34:2). Joel described it as "the great and awesome day of the Lord" (Joel 2:31).

Explaining to His disciples what would happen before His return (Matthew 24:3), Jesus cast these terrible days as a time of "great tribulation, such as has not been since the beginning of the world until this time, no, nor ever shall be. And unless those days were shortened, no flesh would be saved; but for the elect's sake those days will be shortened" (verses 21-22).

Revelation reveals times coming that will be worse than any previous war and suffering ever endured by humanity: "By these three plagues a third of mankind was killed" (Revelation 9:18). At current population levels this means over 2 billion people will die, a staggering number once unbelievable but now possible in the nuclear age. But despite this unprecedented suffering, the majority of people will stubbornly refuse to repent and turn to God (verses 20-21).
For further study on this troublous time, see the LifeHopeandTruth.com articles "Great Tribulation" and "Wrath of God."

God's message to Israel today

God's servants take these sobering prophecies very seriously. God's Church today—like the prophets, Jesus Christ and the early Church—is commissioned to preach a message of warning, repentance and hope to the people of Israel today. Long ago God revealed to Ezekiel the message His servants were to take to the nations of Israel. They were, and are, a "watchman for the house of Israel" (Ezekiel 33:7). Watchmen in Israel were posted to warn of impending threats; spiritually, God's watchmen were told to sound the warning and call people to repentance—to "warn the wicked to turn from his way" (verse 9). God declares: "I have no pleasure in the death of the wicked, but that the wicked turn from his way and live. Turn, turn from your evil ways! For why should you die, O house of Israel?" (verse 11).

This message contains two essential elements: first, a warning of the consequences of national and personal sins and, second, a plea for repentance, imploring people to turn away from sin and toward the true God. Jesus Christ preached the same message: "Jesus came to Galilee, preaching the gospel of the kingdom of God, and saying, 'The time is fulfilled, and the kingdom of God is at hand. Repent, and believe in the gospel'" (Mark 1:14-15). After His death and resurrection, His Church continued the proclamation (Acts 2:38; 3:19; 17:30; 26:20), and it is being preached through God's servants today—both to the nations of Israel and to all nations on earth.

In the end, it is not a message of "doom and gloom," but an incredible message of … (compare Micah 6:8). These prophetic messages are all about cause and effect. Blessings come from obedience; curses come from sin (Hosea 10:12-15).
God's desire is for us as His children to repent of our disobedience and then strive to obey His good and beneficial laws so we can be blessed. Someday humanity will understand, but for the moment, the reality is that humanity is only increasing in sin.

Our responsibility to repent

What about you? Even if whole nations don't repent, it is still possible for individuals to repent, turn to God and be blessed and protected! God protected righteous individuals like Noah and Lot, who lived in societies surrounded by wickedness (Genesis 6:7-8; 19:16-17). And He also shows through other prophecies that many righteous individuals will receive divine protection from the coming Great Tribulation (Luke 21:36; Revelation 3:10; 12:14).

The all-important question is: What will you do with this message? Will you respond by repenting of your sins and turning your life to humble obedience to God? Or will you be like the people during the days of Noah, who rejected Noah's preaching and lived as they wished until punishment came (2 Peter 2:5)? We sincerely urge you to listen to God, to humbly confess and repent of your sins and commit to living according to God's commandments.

Hope remains for Israel

The good news for the nations of Israel, and the entire world, is that hope lies beyond the coming end-time suffering. Jesus Christ will return to earth and save all of humanity from self-destruction. He will establish His Kingdom and rule all nations. Many descendants of the ancient Israelites will eventually repent, the Bible assures us. One of the first actions Christ will take is to destroy the "beast" power and free the Israelite nations from national captivity. He will then bring Israel back together (…; also see Psalms 14:7; 85:1-2). He will rescue them from the severe punishment of the time of "Jacob's trouble" and bring them home, back to their land of origin.
The northern 10 tribes will be reunited with Judah—healing the breach that has existed since the two kingdoms split under Rehoboam and Jeroboam. God told Ezekiel to join two sticks together as one, symbolizing a remarkable prophecy He then gave to bring comfort to the Jews and their "lost" brothers of Israel: "And I will make them one nation in the land, on the mountains of Israel; and one king shall be king over them all; they shall no longer be two nations, nor shall they ever be divided into two kingdoms again" (Ezekiel 37:22).

The prophet Isaiah also eloquently described how the people of Israel will be gathered from all over the world and miraculously returned to their homeland (Isaiah 43:2, 5-6, 14-17). Try to imagine the Israelite peoples coming from places such as the United Kingdom, Canada, the United States, New Zealand and Australia back to their homeland—finally understanding their identity and the true God! Were these just the mystical ramblings of deluded old men, as some scoff, or were these the promises of God, assurances that Judah and Israel will finally repent of their sins and be restored?

Yes, there is hope beyond the coming trouble! Though the Israelite nations will be brought down and punished, God will restore them to greatness. They will abandon their idolatrous and sinful ways and "will again obey the voice of the Lord and do all His commandments" (Deuteronomy 30:8). Furthermore, the future is so clear in God's plan that He went so far as to explain that the resurrected King David will rule over the reunited Israel (Jeremiah 30:9; Ezekiel 37:24-25). Then Jesus Himself explained that under David, the individual tribes will be ruled by the 12 apostles (Matthew 19:28; Luke 22:30). At that time, Jerusalem will become the world's capital and "out of Zion shall go forth the law, and the word of the Lord from Jerusalem" (Isaiah 2:3).
Eventually, the knowledge of God will fill the entire earth (Isaiah 11:9). People of all nations will obey and reap the blessings of obedience! The true Sabbath and holy days will be restored to Israel and will eventually be faithfully observed by all nations around the world (Isaiah 66:23; Zechariah 14:16-19), resulting in worldwide blessings and happiness—a stark reversal of the curses that have come through rejecting God's days of worship. Eventually, Israel will be united with the gentile nations who formerly enslaved them in serving and obeying the true God (…). Israel will finally fulfill its intended destiny to be a model nation, setting the example of obedience toward God, which brings such positive blessings (Isaiah 2:3; 27:6; Zechariah 8:23). The Kingdom of God only starts with Israel—God will see that it spans the globe to include all peoples and nations! This is the good news of the coming Kingdom of God—the heart of the true gospel message (Mark 1:14).

Your response

Yes, God has set on the horizon hope for all humanity, but between now and the return of Christ we seem determined to chart our own course, one that is leading to destruction! Why? Why is our world today so bent on following a path of ruin and suffering? Why are we grasping for solutions to our problems but finding no answers? What are we missing? Is the Bible right after all—that the root cause of our problems is our choosing to live by our own ideas while ignoring God and His Word?

God foretold centuries ago that the world would inevitably come to this point. The apostle Peter warned (…). Humanity has never been particularly obedient to God, but today the "scoffers" Peter spoke of are growing as never before, casting God's words behind and influencing others to do likewise! Many even ridicule the thesis of this booklet, scorning the idea that remnants of the "lost tribes of Israel" could exist in any form today. What about you?
Do you look at the handwriting on the wall of today's declining moral and spiritual conditions and fear where this is leading, wondering what it means for your family and loved ones, and what you can do about it? God gives a clear-cut answer. He pleads with all of His children today, just as He did with Israel—turn to Me, He says, seek Me, repent and change your ways! What could be more important than listening to God and seeking to draw close to Him? Fulfilled prophecy stands as a mighty witness to God's truthfulness, His power, even His existence! Why would anyone ignore the prophecies of earthshaking events yet to unfold?

Will you pay heed to the prophetic messages of the watchmen God has sent—the sober warning of the consequences for personal and national sins, and the incredible promises of the blessings for obeying Him? Will you wholeheartedly respond to God, turn to Him in heartfelt repentance of your sins and fully dedicate yourself to loving and obeying Him? God's prophecies about the world in general, and the descendants of Israel in particular, are prime examples of Jesus' words: "for wide is the gate and broad is the way that leads to destruction, and there are many who go in by it" (Matthew 7:13). That's why He fervently urged those listening to "enter by the narrow gate." Yes, it's a narrow and difficult path, He said, and "there are few who find it," but it is "the way which leads to life" (verse 14). Please, choose life!

WHAT SHOULD YOU DO NOW?

A time of great trouble is coming on all nations—that is certain. What remains to be seen is the role these prophecies will play in your life. Will you be counted worthy to escape it? Will you make the changes in your life that God desires you to make? No prophecy can answer that—only you can. Here are four immediate action steps you can take:

1. BEGIN STUDYING THE BIBLE TO LEARN WHAT GOD EXPECTS OF YOU.
Begin by studying the 10 Commandments (Exodus 20). These are basic laws that govern life. Studying and immediately implementing the 10 Commandments will bring blessings to your life (Deuteronomy 11:27; 28:1). To learn how to start applying these laws today, read our free booklet God's 10 Commandments: Still Relevant Today.

2. LEARN HOW TO PRAY AND COMMUNICATE WITH GOD. When you learn His standards and where you have fallen short (sinned), go to Him in repentance. Ask for His forgiveness and help to change. Seek genuinely to overcome sin and change your life. To learn how this process works, study our free booklet Change Your Life!

3. START KEEPING GOD'S SABBATH AND HOLY DAYS. Israel was punished for neglecting these special days. If you really want to be blessed in your life, you can't afford to make the same mistake. The true biblical Sabbath is on the seventh day of the week (from Friday sunset to Saturday sunset). The biblical holy days are observed in the spring and the fall (and do not include Easter or Christmas). To learn how to integrate God's special days into your life, read The Sabbath: A Neglected Gift From God and From Holidays to Holy Days: God's Plan for You.

4. REACH OUT. This booklet is published by the Church of God, a Worldwide Association. We have pastors all around the world who can help you. If you are heeding the warning contained in this booklet and want to change your life, we are here to help. We also welcome you to join us in this effort to proclaim this message to the nations. Visit our website to learn more about us and to contact a pastor in your area: cogwa.org.

Recommended Reading From LifeHopeandTruth.com

God's 10 Commandments: Still Relevant Today
Why is the world so violent? Why do half of Western marriages end in divorce and so many children live in single-parent families? What are so many overlooking? What is the missing key to living a happy and productive life? If you want true peace and happiness, acting on the biblical lessons in this booklet is vitally important to you!

The Sabbath: A Neglected Gift From God
From the beginning, God designed a day of rest and refreshment as a special blessing for humanity. So why do so few Christians today observe the Sabbath? In this booklet, explore the fascinating biblical story of the Sabbath and how you can enjoy the wonderful benefits of this neglected gift from God.

Change Your Life!
Do you want to see your life change, but wonder what to do? This booklet will help you identify the most important changes you can make, and it shows how these changes will make all the difference in your life! You really can experience the life God wants you to have—one full of meaning, satisfaction and joy!

Welcome to the Church of God, a Worldwide Association
Who are we? What are our beliefs, and what is our mission? Find out what we are all about in this booklet.

All of these booklets are made available free of charge by the Church of God, a Worldwide Association. To keep up with world trends and Bible prophecy, subscribe to our Insights Into News & Prophecy blog and our bimonthly magazine Discern: A Magazine of Life, Hope & Truth.
Is there an ideal or suggested method for finding multiple non-overlapping paths in a GridGraph? Preferably in a deterministic fashion. Currently it seems to me like I'm going to have to either set the pathfinder to singlethreaded only (since each path needs to calculate and then claim its path nodes in the grid completely before the next path can run) and occasionally manually push items to the front of the work queue, which will be bad for my other graphs in the scenario which don't have this restriction and could safely use multiple threads; or devise a separate work queue just for requests on the specified graphs which, while it would allow me to use multiple threads, seems potentially over-complicated for the task at hand. Hoping that I'm missing something already built in that might help with this.

For some situational context: I am using grid graphs to calculate paths for connections on a circuit-board-esque editing surface, so no two paths can intersect on any nodes, and the editor cannot determine placement of items in some situations unless it is verified that all connections to it can also find (unique) paths to the new location. Thanks!

Hi

To solve that problem optimally is unfortunately a very hard problem (NP-complete, if you know that term). It can be reduced to a multi-commodity flow problem, which can usually be solved very well using linear programming, e.g. using the simplex algorithm. However, those algorithms are well outside the scope of this package.

If you are just looking for an approximation that may or may not be optimal, you should be able to use the ITraversalProvider to make sure that future paths do not use any of the same nodes as the paths that you have already calculated. The paths have to be calculated sequentially, otherwise there is no way to ensure that they do not use the same nodes. One could calculate them in parallel speculatively and fall back to sequential calculation if there are collisions, but I don't think that would help much.
I wrote an example for this use case. Here's some copy and paste from it (it is not included in the current documentation; it will be included in the next version):

This example shows how to use the Pathfinding.ITraversalProvider interface to generate paths like on a circuit board. The image below shows a test case when using the script to calculate 4 paths on a small grid. The visualization of the paths has been improved manually using an external photo-editing application. Note that finding paths on a circuit board in an optimal way is a very hard problem (NP-complete). For further information about that, see …

Attach this script to any GameObject and fill in the 'items' array with the start and end points of each path.

See also: Utilities for turn-based games, ITraversalProvider

```csharp
using UnityEngine;
using System.Collections.Generic;
using Pathfinding;

public class CircuitBoardExample : MonoBehaviour {
    [System.Serializable]
    public class Item {
        public Transform start;
        public Transform end;
    }

    public Item[] items;

    class Blocker : ITraversalProvider {
        public HashSet<GraphNode> blockedNodes = new HashSet<GraphNode>();

        public bool CanTraverse (Path path, GraphNode node) {
            // Override the default logic of which nodes can be traversed
            return DefaultITraversalProvider.CanTraverse(path, node) && !blockedNodes.Contains(node);
        }

        public uint GetTraversalCost (Path path, GraphNode node) {
            // Use the default costs
            return DefaultITraversalProvider.GetTraversalCost(path, node);
        }
    }

    void Update () {
        var traversalProvider = new Blocker();

        for (int index = 0; index < items.Length; index++) {
            var item = items[index];

            // Create new path object
            ABPath path = ABPath.Construct(item.start.position, item.end.position);
            path.traversalProvider = traversalProvider;

            // Start calculating the path and put the path at the front of the queue
            AstarPath.StartPath(path, true);
            // Calculate the path immediately
            path.BlockUntilCalculated();

            // Make sure the remaining paths do not use the same nodes as this one
            foreach (var node in path.path) {
                traversalProvider.blockedNodes.Add(node);
            }

            // Draw the path in the scene view
            Color color = AstarMath.IntToColor(index, 0.5f);
            for (int i = 0; i < path.vectorPath.Count - 1; i++) {
                Debug.DrawLine(path.vectorPath[i], path.vectorPath[i+1], color);
            }
        }
    }
}
```

I had not considered this problem as isomorphic to MCF before; excellent observation! Thankfully I do not require an optimal solution to this, simply one that is at least somewhat performant, produces a relatively repeatable result that makes sense to the user, and of course isn't too terrible to maintain.

ITraversalProvider is precisely the kind of thing I was looking for, and I'm not sure how I never noticed it when looking through the code! I think it will be a huge improvement over my current implementation, which relies heavily on registering graph updates for execution in between each individual path (to claim the nodes by disabling their walkability) and enforcing some order to ensure that any other paths outside of the 'set' don't get computed until it is safe to do so.

I am still left with the downside of having to run single threaded, which is fine for the graph in question but less than ideal for the other graphs that do not have this requirement and could benefit from additional threads, but I think that's an acceptable tradeoff that's not worth finding a solution for at the moment.

Thank you for the very thorough response; incredibly helpful! And excellent product, by the way.

You do not have to use a single thread for all pathfinding code. The only constraint is that these paths are calculated in sequence, not in parallel. That is accomplished in the above code using the BlockUntilCalculated method call (but if you don't want to calculate everything in one frame you can use path.WaitForPath instead, using a coroutine).

Followup question for you on this! While overriding the traversal provider did allow me to get most of the functionality I wanted, I still have a couple issues.
Currently one of my last remaining annoyances relates to diagonal connections on the graph. Normally in an eight-way graph, I expect (and desire) that I should not be able to path from a node to its upper-right neighbor (offset 5, by the numbering you use in code) if offsets 1 and 2 are both already occupied. However, since I am using custom data to inform the traversal provider, I am not doing connection updates/recalculations as usual, so these paths wind up being valid and cause my "wires" to cross over each other.

So, I've been trying to figure out how best to add this logic to the traversal provider and haven't had much luck. The CanTraverse method, as expected, accepts the argument for the node being traversed "to", but I'm not seeing how to find the node being traversed "from" in the path argument. Without this, it seems impossible to reject diagonal connections. Am I missing the information somewhere, or maybe taking the wrong approach entirely? (I do have another solution involving doing the walkability/connection updates as usual, but there are other issues with that, so I'm still preferring this one for now.)

Currently this is not possible using the ITraversalProvider interface. You will have to modify the GridNode.cs class. Find this line:

```csharp
if (!path.CanTraverse(other)) continue;
```

Immediately after that, add a new call to whatever method you want to use to validate the connection. The current node is "this" and the node that we are moving to is "other".

That seemed to do it; thanks! My walkability-based solution would have required changing some of the code to make maxNearestNodeDistance graph-specific anyhow, so I suppose having to change things in the pathing classes is kind of unavoidable either way.

If I am right, the proposed solution can lead to no paths being found even when valid paths exist, because it is dependent on the order in which the paths are calculated.
In the example image, imagine one path should go from top to bottom while another path uses all the cells of one row (even if not required).
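Going back to the diagonal-crossing rule from the GridNode.cs discussion a few replies up: the check can be stated independently of Unity. The sketch below is plain Python, not A* Pathfinding Project code, and all names are invented for illustration. A diagonal step is rejected when both of the adjacent cardinal cells are already occupied, which is what stops two "wires" from crossing corner-to-corner:

```python
def diagonal_allowed(blocked, cell, dx, dy):
    """Reject a diagonal step when both adjacent cardinal cells are occupied,
    so two 'wires' cannot cross corner-to-corner."""
    assert abs(dx) == 1 and abs(dy) == 1, "only diagonal steps are checked"
    x, y = cell
    if (x + dx, y + dy) in blocked:   # the target cell itself is taken
        return False
    side_a = (x + dx, y)              # horizontal neighbour of the step
    side_b = (x, y + dy)              # vertical neighbour of the step
    return side_a not in blocked or side_b not in blocked

# A wire already occupies (1, 0) and (0, 1): stepping (0, 0) -> (1, 1)
# would cross it diagonally, so the step is rejected.
wires = {(1, 0), (0, 1)}
print(diagonal_allowed(wires, (0, 0), 1, 1))   # → False
print(diagonal_allowed(set(), (0, 0), 1, 1))   # → True
```

In the C# modification described above, this predicate would be the "new call" placed right after the CanTraverse check, with "this" supplying the current cell and "other" the diagonal neighbour.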
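The order dependence pointed out here is easy to reproduce with a toy greedy router (a sketch in plain Python, not the package's code): breadth-first search on a small grid, blocking each found path's nodes before the next request is served, in the spirit of the sequential ITraversalProvider approach discussed above. One request order strands a path whose start cell was consumed; the other order lets both succeed:

```python
from collections import deque

def bfs_path(grid_w, grid_h, start, goal, blocked):
    """Shortest 4-connected path avoiding blocked cells; None if unreachable."""
    if start in blocked or goal in blocked:
        return None
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h \
               and nxt not in blocked and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None

def route_all(requests, grid_w=5, grid_h=4):
    """Greedy sequential routing: each found path blocks its nodes for later requests."""
    blocked, results = set(), []
    for start, goal in requests:
        path = bfs_path(grid_w, grid_h, start, goal, blocked)
        results.append(path)
        if path:
            blocked.update(path)
    return results

A = ((0, 0), (4, 0))   # straight along the top row (cells are (x, y))
B = ((2, 0), (2, 2))   # starts on A's shortest path

print([p is not None for p in route_all([A, B])])  # → [True, False]
print([p is not None for p in route_all([B, A])])  # → [True, True]
```

When A goes first it greedily takes the whole top row, swallowing B's start cell; when B goes first, A simply detours around B's short vertical run. An optimal router would find both paths regardless of order, which is exactly the gap between this greedy approximation and the flow-based formulation mentioned earlier in the thread.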
As we know, HCI comes with pre-shipped content for various integration scenarios, for example SAP Cloud for Customer integration with SAP CRM or SAP Cloud for Customer integration with SAP ERP. The content uses standard IDocs, which sometimes need to be extended to cater to customer-specific requirements. In this blog post I will show how an extended IDOC can be used in place of the standard IDOC while retaining the mapping provided by SAP for the standard elements.

Assumption: you have already extended an IDOC.

Step 1: Use the report SRT_IDOC_WSDL_NS to download the IDOC definition in the form of a WSDL (see NOTE 1728487). Note: the standard report downloads the file with the extension WSDL, which should be renamed to wsdl, else HCI will not recognize the service definition. For example, in my case the file was saved with the name COD_EQUIPMENT_SAVE01.WSDL, which I renamed to COD_EQUIPMENT_SAVE01.wsdl.

Step 2: Open the wsdl file with a text editor and remove the target namespace from the schema element as shown below. Before: After:

Step 3: Replace all occurrences of tns: with an empty value.

Step 4: Add a maxOccurs attribute to the IDOC element to support bulking as in the original mapping. Save your service definition file.

Step 5: Copy the wsdl to the package src.main.resources.wsdl of the desired iFlow project.

Step 6: Finally, replace the original IDOC definition with the imported IDOC definition in the mapping editor as shown below.

Step 7: Select the request message type.

Step 8: Save your mapping and make sure there are no errors reported.

Step 9: Verify your changes and make sure the existing mappings are retained; for example, below I can see the IDOC occurrence has been set to 9999.

Thanks for sharing this useful documentation Abhinash. 🙂

This is great! I never knew how to download the updated IDOC WSDL and keep the mappings – THANKS, Abinash!!

Abinash, I think you have to change the screenshot at step 3.

Amazing blog. Very helpful.

Hi Abinash, I am still not getting it working.
Maybe you can have a quick look at my recent post. Best Regards, Florian

Great blog Abinash… Thanks 🙂

Hello Abhinash, I made the changes to the WSDL as suggested by you in the blog. While I understand that this field is non-mandatory and my scenario works without it, I am wondering why this doesn't work once I assign the WSDL to the channel. Any thoughts? Regards, Bhavesh

Hello Bhavesh, the WSDL needs to have service, port and binding information before you can assign it to the IDOC adapter (look for any async iFlow from C4C to the back-end system and check the WSDL that is attached to the sender adapter). But anyway, I would advise not to attach the WSDL to the IDOC sender, as this will result in not being able to process bulk IDocs (IDOC packages as a single message). By leaving the WSDL off the IDOC adapter we are simply skipping the IDOC message validation step, which is what we expect. Also note that if you attach the WSDL to the sender IDOC adapter, you need to send the content type as Application/x-sap.idoc in WE21, and because of a limitation with SOAP it will only support one IDOC per SOAP message. Best regards, Abinash

Hi Abinash, any experience in using extended IDocs including the mapping of extension fields for sales arrangement in SAP C4C? I've created another SCN discussion for this one: Customer replication WSDL: extension fields of sales arrangement not included. I would be happy if you guys can have a look and give me some help with that. Cheers, Sven

Hi Abinash, it's me again 😉 I'm currently struggling with the integration mapping for the communication from ERP to C4C for customer master. Somehow there are mandatory fields not mapped and the mapping is erroneous in Eclipse. I can't figure out why, because the only difference between the standard and the extended mapping is the mapping of the extension fields in both systems. Would it be possible to send the Eclipse project to you so that you can check it for the underlying issue?
Cheers Sven Ok Sven, please attach the iFlow, so that I or one of my colleague can have a look at it. Best regards, Abinash I would like to but as it seems I can only attach images, videos or URL links… How can I send or attach files like .zip? Ok, I am not too sure about that, may be you can try attaching it to a SCN thread. I would not like to provide my mail id in the forum 🙂 . Else you can also create a CSS ticket Best regards, Abinash I can totally understand that 😉 We’ve created message number 0000743490 2015. Thx for having a look at it. Cheers, Sven Hi Abinash, do you have any experience if there are special things to consider when doing this for extended DEBMAS IDOC? We are currently struggling with a xsl issue when trying to send customers from ERP to C4C. The result is this error message in HCI: Processing exchange ID-vsa522027-od-sap-biz-56865-1440790256443-20-6 in sap-map-pi:ERP_COD_BusinessPartnerERPBulkReplicateRequest{ Error = com.sap.xi.mapping.camel.XiMappingException: com.sap.aii.mappingtool.tf7.IllegalInstanceException: Cannot produce target element /ns0:BusinessPartnerERPBulkReplicateRequest/BusinessPartnerERPReplicateRequestMessage.. Cheers Sven Hi Camelot, Where you able to fix the issue? I also getting the same error. There is a custom segment in the IDOC WSDL. These fields are mapped correctly. But when we pass the IDOC from ECC to HCI I am getting the same error. Please let me know what you have done to fix it. Any help is much appreciated. Regards, Indra hello have you solved this error? i am facing the same problem @Abinash Nanda can you help as this error remains for 3 years without solving! Hi Abinash, Thanks a lot for your post. I have just one question. What do you mean by replacing all the occurrence of tns: with empty value? I have to delete from my wsdl file all the lines with “tns:” like the following one. “<xsd:element name=”E1MARC1″ type=”tns:E1MARC1.000″ minOccurs=”0″/>” Thanks in advance. Best regards. 
Maurin

Hello Maurin, No, please do not delete any lines from the WSDL. Simply replace the character string "tns:" with the empty string "".

Before: <xsd:element name="E1MARC1" type="tns:E1MARC1.000" minOccurs="0"/>
After: <xsd:element name="E1MARC1" type="E1MARC1.000" minOccurs="0"/>

Best regards, Abinash

Abinash, Thank you a lot. Have a good day. Best regards. Maurin

Good blog. Thank you Abinash Nanda

Hello Abinash, I followed all your steps, but I am facing a problem in the mapping (scr.main.resource.mapping) while adding the Target Element (ERP Customer): in the global element list I am getting the extension fields instead of DEBMAS_DEBMAS06. Please check the image below. Thanks,

Hello Tasnim, Can you please send me the WSDL that you downloaded from your Business Suite system using the transaction code SRT_IDOC_WSDL_NS? Best regards, Abinash

Hello Abinash, Thanks, the issue got resolved. I needed to modify that WSDL a little and then it was fine. Thanks

Hi Abhinash, I have downloaded my IDOC WSDL the same way as you have suggested, but while replacing this WSDL in my mapping in Eclipse I am getting this error. Please help me with this.

Hi, A few days ago I made an extension (enhancement) to the IDOC MATMA05, according to the steps on this forum (I attached the WSDL generated). I was able to upload this WSDL to HCI and deploy it to the iFlow; however, when I test the IDOC from the ERP, the following error is shown (monitoring via the WE02 transaction):

No IDoc saved in target system (SOAP HTTP) Message No. EA391

Diagnosis: The SOAP application was started in the target system. However, errors occurred in the target system, which mean that IDocs cannot be saved. The following error message was sent: An internal error occurred during message processing. For error details check the tail log in Integration Operations and the audit log.

Procedure: Inform your system administrator.
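As an aside on the tns: replacement described above: rather than editing by hand, the substitution can be scripted. A minimal sketch (the helper name is my own, not part of the original blog; point it at the WSDL downloaded via SRT_IDOC_WSDL_NS):

```python
def strip_tns(wsdl_text):
    # Replace the character string "tns:" with an empty string.
    # No lines are deleted; only the namespace prefix is removed.
    return wsdl_text.replace("tns:", "")

before = '<xsd:element name="E1MARC1" type="tns:E1MARC1.000" minOccurs="0"/>'
print(strip_tns(before))
# <xsd:element name="E1MARC1" type="E1MARC1.000" minOccurs="0"/>
```

The same one-line replacement can of course be done with any text editor's find-and-replace.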
I made the following verifications. The result is: I changed the path prefix, keeping in mind that in the newly generated WSDL the message name has changed to the following: I changed the path prefix like this: I also made the parameters.prop file modification in the iFlow in HCI, and made the connection test; the result is as follows: Does someone have the same problem? I'd appreciate your comments. Best regards.

I'm hoping that you can tell me what I'm doing wrong. I need to loop through and concatenate the E1EDKT2-TDLINES for a specific TDID. What I have is working fine as long as the TDID that I'm wanting to perform this for is the first one encountered. If it's not, the target field is being populated with the TDLINE values from the first occurrence of TDID. What am I doing wrong? This is the mapping: I have the context of TDLINE set to E1EDKT1. I greatly appreciate any assistance!!! Thank you!!

Hello Abinash, Thank you very much for this wonderful blog, which helped me achieve my requirement: inbound business activity replication from COD to CRM using a custom IDOC & message type (CRMXIF_ORDER_SAVE). But in my case we need the custom IDOC only for the inbound business activity COD to CRM, and we are expecting a confirmation message to COD from CRM using the existing standard flow & IDOC after the activity is saved with the inbound custom IDOC. Unfortunately, I am able to see 2 outbound IDOCs from the CRM system: 1) a confirmation message with the custom IDOC type, and 2) the standard IDOC, which is initiated because of change pointers. Can you suggest the best way to handle this situation? Because we are using the standard IDOC in other processes as well, deactivating the change pointer will have an impact.
https://blogs.sap.com/2015/05/06/handling-extendedcustom-idoc-in-hci-with-standard-content/
H... Happy ? ^^. I am concerned that you are still claiming that running threads is more scalable than a non-threaded approach. It is false. More flexible, yes; more scalable (or better performing), no. As per usual, I'll note that threads are best suited for CPU-bound tasks with relatively rare synchronization. Popup/micro-threads doing trivial tasks are typically short-lived with high overhead. Thread creation and accounting cost can be reduced to the bare minimum, but synchronization primitives will always be an inherent bottleneck, since they cannot operate at CPU speeds and must be serialized over the bus. As the number of micro-threads increases, the bus itself becomes a point of contention and thus the entire system loses performance. Tasks can even consume many more CPU cycles and still beat the micro-threaded model. This doesn't even get into the prevalence of multi-threaded bugs and the difficulty in debugging them.

If a long-running thread is really needed, then the asynchronous code can spawn one off deliberately (and still without blocking). If you are using threads to achieve some type of kernel protection between stacks, then I wish you'd just say that. Please don't claim that micro-threads are used for scalability when other alternatives are technically more scalable. I know this is just a rehash of our previous discussions, but it is still a relevant criticism.

Anyways, since you don't mention it, how do you handle the acknowledgement of interrupts? Is this done implicitly by the global interrupt handler? Or is this done within the thread that handles the interrupt? The reason I ask is that if you ack the interrupt right away, then the hardware could fire off another interrupt and have you create a new thread for the same device before the first thread is serviced.
Do you protect against this somehow, or do you leave it up to individual drivers?

I'm suffering from exhaustion at this point, but I'll try to respond quickly: As to the number of threads versus cores, you could distinguish between "scheduled threads" and "active threads". To minimize thrashing, a scheduled thread should not start until there is a free core, unless of course there is a deadline/priority scheduler with a pending deadline on a higher-priority thread. Hmm, I think you've got it figured out already.

"The thing is, not all interrupts are equal from that point of view. As an example, processing keyboard or mouse interrupts is nearly pure I/O." I don't think the keyboard will be a problem.

"But processing a stream of multimedia data can require significantly more number crunching, to the point where the amount of I/O is negligible, especially if DMA is being used." A surprising amount of I/O can be done without much CPU effort: copying files between disks, sending files over the network (with adapters supporting multi-address DMA and checksum). The CPU can get away with communicating memory addresses instead of actual bytes. However, in cases where the driver has to do more with the data (1000s of cycles), there is no doubt a thread is necessary to prevent latency for other pending requests.

Interestingly, the x86 interrupt hardware already has a form of prioritization without threads. Low IRQs can interrupt even when higher IRQs haven't been acked yet; the inverse is not true. So even long-running IRQs at low priority are theoretically ok, since they'll never interfere with high-priority IRQs. ...but the problem is that the driver code is no longer atomic, and therefore must be re-entrant. It becomes impossible to use a shared critical resource from two handlers: the second interrupt obviously cannot block, since doing so would block the first interrupt holding the critical resource as well.
So, while this has its place in extremely low-latency applications, I doubt it's helpful for you.

"Sorry, did not understand this one. Can you please elaborate?" Consider an async model on a single processor. Now multiply that by X processors running X async handlers.

Well, there's no denying that last point. Unless you allow userspace threads which don't require a syscall?

"I've a bit went the other way around: threads sounded like the cleanest multicore-friendly way to implement the model of processes providing services to each other which I aim at, and then I've tried to apply this model to interrupt processing and drivers." I think a clean & consistent design is a good reason to use light threads. However, part of the difficulty in providing an async userspace interface under Linux (which is still a mess) was the fact that the kernel needed threads internally to handle blocking calls, even though there were no associated userspace threads to block. It's doable, but not clean.

You hit some good points further down, but I'm so tired... I'd better not tackle them today. Edited 2011-05-21 11:20 UTC

Yup, I can't see a reason why processing IRQs on different processors couldn't just as well be done with a threaded model. The problem will be to balance the IRQ load fairly across all processors, I think. Some IRQs fire very frequently (clock), some never happen (MachineCheck), but many interrupts happen frequently at one moment and then stop happening for minutes or hours. As a silly example, sound card interrupts (buffer is full/empty) only occur when you play or record sound.

Well, it's not as if I could forbid them ^^ But looking at Tanenbaum's book (my OS development bible), they seem to have several issues.
Like, you know, the fact that you cannot use clock interrupts to distribute time fairly across them, or the fact that if a userspace thread blocks for I/O, all other userspace threads are blocked for I/O at the same time, since it's the same thread from the kernel's point of view. I'd be happy to read this ... Edited 2011-05-21 20:13 UTC

Neolander, "If by performance benefits you mention the fact that async has only one task running at a time and as such doesn't have to care about synchronization and that the cost of pending tasks is kept minimal, then yes, this model may provide that." The thing is, you are going to have async and threaded drivers running side by side. This implies that even the async drivers will need to use synchronization when accessing shared resources.

I throw out this as an example: two userspace apps read from two different files. In the threaded model, this results in two user+kernel threads blocking in the file system driver, which itself has blocked in the block device driver. Ignoring all the synchronization needed in the FS driver (and writes), the block driver (or the cache handler) must/should create a mutex around the blocks being read so that other threads requesting the same blocks are blocked until the read completes. I think such a structure would require at least two mutexes: one for the specific blocks being read, and another for the structure itself. Therefore a thread reading a cached block would only need to synchronize against one mutex, find its data, and return immediately. A thread reading an uncached block would synchronize against the structure mutex, latch onto a new or existing block-read mutex, queue the disk request (if new) and release the structure mutex. This way the structure mutex is only ever held momentarily, and the read mutex is held until the disk reads a block, after which all blocked read threads can resume. I expect that this is more or less what you'll be writing?
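The two-mutex block cache described above can be sketched outside a kernel; the following is an illustrative Python stand-in (all names invented), with a structure lock plus a per-block event playing the role of the per-read mutex:

```python
import threading

class BlockCache:
    def __init__(self, read_block):
        self._read_block = read_block    # slow device read
        self._lock = threading.Lock()    # the "structure mutex"
        self._cache = {}                 # block number -> data
        self._pending = {}               # block number -> Event ("read mutex")

    def get(self, block):
        with self._lock:                 # held only momentarily
            if block in self._cache:
                return self._cache[block]    # cached: one lock, return fast
            ev = self._pending.get(block)
            owner = ev is None
            if owner:
                ev = threading.Event()       # we will issue the disk read
                self._pending[block] = ev
        if owner:
            data = self._read_block(block)   # no locks held during the read
            with self._lock:
                self._cache[block] = data
                del self._pending[block]
            ev.set()                         # wake all latched readers
            return data
        ev.wait()                            # latch onto the existing read
        with self._lock:
            return self._cache[block]

cache = BlockCache(lambda b: "data%d" % b)
print(cache.get(7))   # first read goes to the "device"
print(cache.get(7))   # second read is served from the cache
```

This mirrors the description above: a cached read touches only the structure lock, while an uncached read holds the structure lock briefly and then blocks on the per-block read handle.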
Now, my point about async drivers is that zero synchronization is needed. It's true that requests to this driver will be forced to be serialized. However: 1) Many drivers, like the one described above, need a mutex to serialize requests anyway. (This could be mitigated by dividing the disk/structure into 4 regions so that concurrent threads are less likely to bump into each other; however, then you need another mutex to serialize the requests to disk, since I/O to one device cannot be executed from two CPUs simultaneously.) 2) If the driver's primary role is scheduling DMA I/O and twiddling memory structures with very little computation, then these tasks are not really suitable for parallelization, since the overhead of synchronization exceeds the cost of just doing the work serially.

"The assumption behind this is that in many situations, the order in which things are processed only matters in the end, when the processing results are sent to higher-level layers. Like rendering a GUI..." Yes, I won't deny that some layers will benefit from parallelism. However, propagating that parallelism into drivers which are fundamentally serial in nature will make those drivers more complex and could even slow them down. These drivers will require thread synchronization where an async model could handle its state without synchronization (more on this later for the other poster).

I'd like to emphasize that I'm not arguing against the threaded model, particularly when threads make or break the design of the paradigm. I'm just trying to suggest that sometimes there are cases where the best MT implementation performs worse than the best ST implementation, and device drivers may be one of those cases.

"though it would be difficult to envision a programming model that, with all the callbacks, would make it clear how the program flow goes." It certainly is different if you're not accustomed to it. For threads, a linear sequence of events can be visualized easily.
(the complexity arises from interaction with other threads). On the other hand, in the async model, I can look at any single event callback, look at the state of the system and determine exactly where to go from there to get to the next state. Debugging individual callbacks becomes almost trivial. It is far easier to prove correctness under the async model than the thread model.

Here is a short program which tests a multithreaded checksum. Nothing impressive here. All that's interesting are the benchmarks below. (Dual-Core Intel(R) Xeon(R) CPU E3110 @ 3.00GHz)

// twiddle.c -- Compare performance of single threaded and multithreaded code.
// gcc -O3 -o twiddle -lpthread twiddle.c
#include <stdio.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

typedef struct {
    int *mem;
    int len;
    int ret;
} WORK;

// Current time in seconds, with microsecond resolution
double Now()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + (double)tv.tv_usec / 1000000;
}

void *DoWork(void *work)
{
    // Do some arbitrary nonsense work
    WORK *w = (WORK *)work;
    int *mem = w->mem;
    int *stop = mem + w->len;
    int sum = 0;
    while (mem < stop) {
        sum += *mem;
        mem++;
    }
    w->ret = sum;
    return NULL;
}

int main(int argc, char *args[])
{
    int len = 500000000; // 500M ints, or 2GB
    int *mem = (int *)malloc(len * sizeof(int));
    WORK work[2];
    double elapse;
    pthread_t thread[2];
    int run;

    memset(mem, 1, len * sizeof(int));
    printf("Array Len = %d\n", len);
    for (run = 0; run < 3; run++) {
        // Method 1 - Do work in one shot
        elapse = Now();
        work[0].mem = mem;
        work[0].len = len;
        DoWork(work);
        elapse = Now() - elapse;
        printf("Run %d Method 1 - One shot = %d (%0.2fs)\n",
               run, work[0].ret, elapse);

        // Method 2 - Split work in half
        elapse = Now();
        work[0].mem = mem;
        work[0].len = len / 2;
        work[1].mem = mem + len / 2;
        work[1].len = len / 2;
        DoWork(work + 0);
        DoWork(work + 1);
        elapse = Now() - elapse;
        printf("Run %d Method 2 - split single thread = %d (%0.2fs)\n",
               run, work[0].ret + work[1].ret, elapse);

        // Method 3 - Split work in half, threaded
        elapse = Now();
        work[0].mem = mem;
        work[0].len = len / 2;
        work[1].mem = mem + len / 2;
        work[1].len = len / 2;
        pthread_create(thread + 0, NULL, &DoWork, work + 0);
        pthread_create(thread + 1, NULL, &DoWork, work + 1);
        pthread_join(thread[0], NULL);
        pthread_join(thread[1], NULL);
        elapse = Now() - elapse;
        printf("Run %d Method 3 - split dual thread = %d (%0.2fs)\n",
               run, work[0].ret + work[1].ret, elapse);
    } // run
}

Output:
Run 0 Method 1 - One shot = 1345479936 (0.36s)
Run 0 Method 2 - split single thread = 1345479936 (0.36s)
Run 0 Method 3 - split dual thread = 1345479936 (0.31s)
Run 1 Method 1 - One shot = 1345479936 (0.36s)
Run 1 Method 2 - split single thread = 1345479936 (0.37s)
Run 1 Method 3 - split dual thread = 1345479936 (0.33s)
Run 2 Method 1 - One shot = 1345479936 (0.37s)
Run 2 Method 2 - split single thread = 1345479936 (0.37s)
Run 2 Method 3 - split dual thread = 1345479936 (0.31s)

Now, did this achieve the speedup we'd like? Since there is virtually no synchronization between threads to cause overhead, one might have expected two threads to cut the time in half, so why didn't it? Keep in mind we haven't even introduced the need for multithreaded synchronization, which is extremely expensive. For these reasons, it's typically better to exploit parallelism at the macro level instead of parallelizing individual steps. This is usually a good thing because it's easier to do with much less synchronization. It also lends credence to my cherished view that async I/O interfaces usually can perform better than blocking threaded ones.

Again, if it's pure I/O and (almost) no computation, I admit that it's totally possible, and even frequent

When I saw your operating system articles, I thought they were the norm for OSNews, and I was intrigued by it. But honestly, the non-technical media hypefest going on here seems to be as bad as other sites. The technical articles like this don't even generate much interest (no offense). It's over people's heads here.
jal_, you're ok in my book; after all, you knew about "intel-mnemonic"! I'm just looking for some place to fit in better. I have the same problem in my offline life: I don't know people with my interests. I can't even find a relevant job to apply my skills in. Employers seem to be judging my past employers and connections (who are not impressive) instead of my personal abilities (which I think are well above the curve). Do either of you feel totally under-appreciated?

Edit: Well, my topic seems a bit ironic now, but I'll leave it! Edited 2011-05-23 17:44 UTC

Ah, these articles... Well, let's put it this way: I see OSnews as not being strictly about the gory details of OS internals. Some people just come here to read about the evolution of existing OSs and the ecosystems around them. It's like car magazines: some people who read them are actual car mechanics who could put a car together from spare parts if they had to, but most people read them simply to learn about new cars and things which car mechanics can do for them.

If you want something more OS-development-specific, I highly recommend OSdev's forum ( ). The bulk of the traffic is about people having implementation issues, in the slightly misnamed "OS development" subforum, but there are also regularly more interesting theoretical discussions in the "OS Design & Theory" and "General Programming" forums. You may prefer, like me, the constructed, easy-on-the-eyes articles of blogs to forums for daily viewing, but except for OS designers' blogs like mine, which tend to have low and irregular publishing rates, I don't know of any blog specifically targeting this subject.

Well, looking at how many hits my blog gets each time I publish something about it on OSnews... No. (see ) I don't feel like articles on the subject are under-appreciated.
I just wish I had a more regular subscriber audience, but I guess this is not going to happen, at least until I have something tastier for normal people than kernel internals to write about (like, you know, this resolution-independent and resizing-friendly tiling GUI which I care so much about).

The current version of OSnews doesn't allow you to change it anyway =p Edited 2011-05-24 12:02 UTC
http://www.osnews.com/comments/24763
Hi, I'm only relatively new to the world of BrickPi and have recently updated the firmware on my device so that it can accept the EV3 sensors which I have with my Lego Mindstorms kit. Thought I might share some code that I've put together. It uses the motors plugged into MA and MB and the EV3 touch sensor plugged into S4. Pressing the button will spin the motors forward, pressing the button again will spin the motors in reverse, and so on and so forth.

To run it, place the attached code under Desktop/BrickPi_Python/SensorExamples and call it, say, Switch-Motor.py. Run the code by issuing the following line:

sudo python Switch-Motor.py

It's only a simple example, but I'm going to look into the Infrared and the Colour Sensor next and see what I can come up with. Anyway, in the meantime, I hope this example helps others out with the EV3 sensors.

from BrickPi import *   # import BrickPi.py file to use BrickPi operations

BrickPiSetup()  # setup the serial port for communication

BrickPi.MotorEnable[PORT_A] = 1  # Enable Motor A
BrickPi.MotorEnable[PORT_B] = 1  # Enable Motor B
BrickPi.SensorType[PORT_4] = TYPE_SENSOR_EV3_TOUCH_DEBOUNCE  # Set the type of sensor at PORT_4
BrickPiSetupSensors()   # Send the properties of sensors to BrickPi

power = 200
buttonstatus = 0
buttonpress = 0

while True:
    BrickPi.MotorSpeed[PORT_A] = power  # Set the speed of MotorA (-255 to 255)
    BrickPi.MotorSpeed[PORT_B] = power  # Set the speed of MotorB (-255 to 255)
    BrickPiUpdateValues()
    r1 = BrickPi.Sensor[PORT_4]
    BrickPiUpdateValues()
    r2 = BrickPi.Sensor[PORT_4]
    BrickPiUpdateValues()
    r3 = BrickPi.Sensor[PORT_4]
    if (r1 == 1 and r2 == 1 and r3 == 1):   # The button has been pressed
        if buttonstatus == 0:   # button has been pressed since last sampling
            power = -power      # Get ready to reverse the motors
            buttonstatus = 1    # Remember the button has been pressed
            buttonpress = 1
    else:
        if buttonpress == 1:    # button released after a recorded press
            buttonstatus = 0    # Remember the button has now been released
    time.sleep(.1)
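The toggle logic in the loop above (reverse on each new press, ignore the button while it is held) can also be tested without any hardware. A small sketch of that state machine on its own (the names here are mine, not part of the BrickPi API):

```python
def make_toggle(power=200):
    # state mirrors the power/buttonstatus variables in the loop above
    state = {"power": power, "buttonstatus": 0}

    def sample(pressed):
        if pressed:
            if state["buttonstatus"] == 0:        # new press since last sample
                state["power"] = -state["power"]  # reverse the motors
                state["buttonstatus"] = 1         # remember it is held down
        else:
            state["buttonstatus"] = 0             # released; arm the next press
        return state["power"]

    return sample

sample = make_toggle()
print(sample(True))   # -200: first press reverses
print(sample(True))   # -200: still held, no change
print(sample(False))  # -200: released
print(sample(True))   # 200: next press reverses again
```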
https://forum.dexterindustries.com/t/brickpi-ev3-touch-sensor-and-motors/678
Introduction

Now, we will see how to install an assembly using the merge module, which is a tool provided by MS Visual Studio 2005. The merge module tool is used to wrap components designed especially in order to be shared later. Components can be DLL files or user control objects that are consumed, in general, by the developer who wants to use or reuse already existing components; for example, he can develop an application against given DLL files. So, using this method gives him the possibility to enjoy those DLLs' or components' services.

Assume that we want to deploy an assembly called ClassLibrary1 with the following content:

using System;
using System.Collections.Generic;
using System.Text;
using System.Windows.Forms;

namespace ClassLibrary1
{
    public class Class1
    {
        public Class1()
        {
            MessageBox.Show("You are using ClassLibrary1");
        }
    }
}

After building it, we save the project and add a new one by clicking File, New and Project.

Figure 1

After the merge module project is opened, select Module Retargetable Folder just under the File System target machine node, then right-click, select Add, and then select Project Output, as the figure shows below:

Figure 2

Select the Primary Output element in the list, representing the assembly that we want to deploy. If you want to deploy other elements, such as documentation files to support developers, or XML serialization assemblies, you can add them with the primary output.

Figure 3

Now, expand the merge module properties grid and set the author name, for example, "Me", as the figure shows below:

Figure 4

There is a property which I find very useful, by the way: the "Search Path" property, used to determine the path used to locate assemblies, files or even merge modules on the development computer.

Now, build the solution and browse to the application directory; the merge module project named "MergeModule1" is there. Browse to the debug folder and open it. A file with the *.msm extension has been created; this file represents the merge module project output. The mission is not completely accomplished, because the merge module cannot be installed directly. To install it, we must add an installer project in order to consume the merge module; in fact, the merge module can't be installed by itself.

So, add a new setup project by selecting File, then Add, then New Project, and select Setup Project.

Figure 5

Now, select Application Folder, then Add, then the Project Output menu item, and click on it.

Figure 6

Select the Merge Module 1 project in the combo box list as shown below:

Figure 7

Expand the setup properties grid and change the Author property value "." to "Me" and the Manufacturer property value "." to "Me" before building the project; otherwise, it cannot be installed later. Build the setup project. After that, select it, right-click, choose Install in the context menu and click on it.

Figure 8

The install process will be launched.

Figure 9

After the project deployment, switch to the Control Panel and open Add and Remove Programs. You can find Setup 1 among the installed programs.

Figure 10

The developer, or let us say the intermediate user, can browse to %root%\ProgramFiles\Me; there, he can find the newly installed DLL file.
http://www.c-sharpcorner.com/uploadfile/yougerthen/part-iv-step-by-step-procedure-of-how-to-install-an-assembly/
Question details:

Assignment 3-2-1: Working with UNIX Processes in C++

Assignment Instructions: You must use the Cloud9 environment for this assignment. Be sure to compile and test your program to be certain it works as expected. If you aren't sure how to compile and run a C++ program, refer to the Cloud9Compiling&RunningCPPPrograms Slideshow. When a program is complete, follow the instructions in the Cloud9DownloadingCPPFiles Slideshow for downloading the .cpp file.

Assignment Objectives: The purpose of this activity is to explore and use functions related to process creation & handling in Unix-based OSs.

Assignment Tasks: An online version of the Linux manual can be found here:. For this activity, you will mainly need to refer to the system calls section of the manual. If you need help with navigating the file system through a command line terminal, refer to this: .

Assignment Setup (0 points)

Note: You need to use the bash shell to compile and run the program. Do not use the Cloud9 GUI. You will need to download, compile, and execute a small program using your Cloud9 C++ environment.

Type the following command into the terminal window to pull the project repository from GitLab: git clone

Change directory into the newly created directory (folder) named ITSC_3146_A_3_2.

Issue the following command to compile the code: g++ Assignment_3_2.cpp Processes.cpp -o Assignment_3_2

Issue the following command to execute the program: ./Assignment_3_2

Part 1: Working With Process IDs (5 points)

Modify the getProcessID() function in the file named Processes.cpp. The function must find and store the process's own process id. The function must return the process id to the calling program. Note that the function currently returns a default value of -1.
Hint: search for "process id" in the "System Calls" section of the Linux manual.

Part 2: Working With Multiple Processes (10 points)

Modify the createNewProcess() function in the file named Processes.cpp as follows: The child process must print the message I am a child process! The child process must then return a string with the message I am bored of my parent, switching programs now The parent process must print the message I just became a parent! The parent process must then wait until the child process terminates, at which point it must return a string with the message My child process just terminated! and then terminate itself. Hint: search for "wait for process" in the "System Calls" section of the Linux manual. IMPORTANT: Do NOT type the messages, copy-paste them from here. Your output must be exact.

Part 3: Working With External Commands (15 points)

Modify the replaceProcess() function in the file named Processes.cpp as follows: The parent process will use the fork() function to create a child process. This step has been done for you. The child process must then change its memory image to a different program by using the execvp system call (). The parameter args that has been passed to the replaceProcess() function is the array of parameters to be passed to execvp, telling it which program to execute and what parameters to pass to that program. For example, the test code provided to you executes the "ls" (directory list) program with the parameter "-al" by setting the args array as follows:

char * args[3] = {(char * )"ls", (char * )"-al", NULL};

IMPORTANT: Although the test code executes the "ls" program, we must be able to change the command that execvp executes. So, DO NOT hardcode the command to be passed to execvp. Simply use the args array provided. Finally, in the parent process, you must make sure to invoke the necessary system call to wait for the child process to terminate. Once the child terminates, exit the program.
Expected Output: If you execute the code you pull from GitLab without any modifications, it will produce the following output: If you make all required changes correctly, your program should produce output similar to the following:

Below is my code, which isn't giving the correct output when I run it:

//
// Processes.cpp
// ITSC 3146
//
// Created by Bahamon, Julio on 1/12/17.
//
/*
 @file Processes.cpp
 @author student name,
 @author student name,
 @author student name,
 @description: <ADD DESCRIPTION>
 @course: ITSC 3146
 @assignment: in-class activity [n]
 */

#ifndef Processes_cpp
#define Processes_cpp

// Import required header file
#include "Processes.h"

using namespace std;

// Part 1: Working With Process IDs
pid_t getProcessID(void)
{
    // Get the pid of the process
    pid_t id = getpid();
    // return the id
    return id;
}

// Part 2: Working With Multiple Processes
// Implement method to create child process
string createNewProcess(void)
{
    // Create a child process
    pid_t id = fork(); // DO NOT CHANGE THIS LINE OF CODE
    process_id = id;

    // If the id is -1
    if (id == -1) {
        return "Error creating process";
    }
    // If the ID is 0
    else if (id == 0) {
        cout << "I am a child process! ";
        return "I am bored of my parent, switching programs now";
    }
    // Otherwise
    else {
        cout << "I just became a parent! ";
        int status = 0;
        wait(&status);
        return "My child process just terminated!";
    }
}

// Part 3: Working With External Commands
// Implement the method to replace the process
void replaceProcess(char * args[])
{
    // Create child process
    pid_t id = fork();
    // Execute with the new process
    execvp(args[0], args);
}

#endif /* TestProg_cpp */
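For intuition about Part 3, the fork / execvp / wait pattern the assignment asks for can be sketched with Python's os module (an illustration only, not the required C++ code). Note that in the C++ replaceProcess above, execvp is called unconditionally after fork, so both the parent and the child replace their memory image; the assignment instead wants only the child replaced, with the parent waiting:

```python
import os

def replace_process(args):
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested program.
        os.execvp(args[0], args)
        os._exit(127)  # only reached if execvp fails
    else:
        # Parent: wait for the child to terminate, then return.
        os.waitpid(pid, 0)

replace_process(["echo", "hello from the child"])
```

The same branch-on-fork-return-value structure carries over directly to the C++ version, using fork(), execvp(), and waitpid() from the system calls section of the manual.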
https://homework.zookal.com/questions-and-answers/assignment-321-working-with-unix-processes-in-c-assignment-instructions-162482322
Product Version = NetBeans IDE 7.2 Beta (Build 201205031832)
Operating System = Windows 7 version 6.1 running on x86
Java; VM; Vendor = 1.7.0_03
Runtime = Java HotSpot(TM) Client VM 22.1-b02

[code]
header.h
header2.h
file.h:
#include "header.h"
[/code]

While editing file.h and going to header.h, all code within the header class is greyed. Comments outside of the class specification appear normal. Editing another header file, header2.h, during the same session does not change the code contained in the class. Specifically:

[code]
header.h:
/* comments */
#ifndef HEADER_H
# define HEADER_H
<class specification>
#endif
[/code]

The comments appear normal; all code and comments from the #define to the end of the file are greyed. In file.h, references to header.h are flagged as "unable to resolve identifier" errors.

I have a similar issue in Final - NetBeans IDE 7.3. Everything between #ifndef and #endif is greyed out. Closing NetBeans does not resolve the issue. The only way I'm able to fix it is by removing all the #include statements throughout the program. It's happened 3 times today, and it's getting frustrating, since I have to do that with 9 files. Code works fine, with no errors, but it's harder to develop code. Never had this issue before today (I've been using NetBeans heavily for the past 9 months).

I can not reproduce the problem in the 8.0.2 candidate. Could you try it, please? Thanks!
https://netbeans.org/bugzilla/show_bug.cgi?id=215041
Here's an MWE of some code I'm using. I slowly whittle down an initial dataframe via slicing and some conditions until I have only the rows that I need. Each block of five rows actually represents a different object so that, as I whittle things down, if any one row in each block of five meets the criteria, I want to keep it -- this is what the loop over keep.index accomplishes. No matter what, when I'm done I can see that the final indices I want exist, but I get an error message saying "IndexError: positional indexers are out-of-bounds." What is happening here?

    import pandas as pd
    import numpy as np

    temp = np.random.rand(100, 5)
    df = pd.DataFrame(temp, columns=['First', 'Second', 'Third', 'Fourth', 'Fifth'])
    df_cut = df.iloc[10:]
    keep = df_cut.loc[(df_cut['First'] < 0.5) & (df_cut['Second'] <= 0.6)]

    new_indices_to_use = []
    for item in keep.index:
        remainder = (item % 5)
        add = np.arange(0 - remainder, 5 - remainder, 1)
        inds_to_use = item + add
        new_indices_to_use.append(inds_to_use)
    new_indices_to_use = [ind for sublist in new_indices_to_use for ind in sublist]

    final_indices_to_use = []
    for item in new_indices_to_use:
        if item not in final_indices_to_use:
            final_indices_to_use.append(item)

    final = df_cut.iloc[final_indices_to_use]

From Pandas documentation on .iloc (emphasis mine): "Pandas provides a suite of methods in order to get purely integer based indexing. The semantics follow closely python and numpy slicing. These are 0-based indexing." You're trying to use it by label, which means you need .loc

From your example:

    >>> print df_cut.iloc[89]
    ...
    Name: 99, dtype: float64

    >>> print df_cut.loc[89]
    ...
    Name: 89, dtype: float64
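The mismatch between positions and labels after slicing is the whole story here. This small self-contained demo (illustrative data, not the question's) shows why .iloc raises while .loc succeeds:

```python
import pandas as pd

df = pd.DataFrame({"x": range(100)})   # row labels 0..99
df_cut = df.iloc[10:]                  # 90 rows: labels 10..99, positions 0..89

# Position 89 is the last row; its *label* is 99:
assert df_cut.iloc[89]["x"] == 99
# Label 89 is a different row entirely (position 79):
assert df_cut.loc[89]["x"] == 89

# Label 99 exists, but position 99 does not in a 90-row frame:
try:
    df_cut.iloc[99]
except IndexError:
    print("positional indexer out-of-bounds, as in the question")
```

The question's loop builds label-based index values (taken from keep.index) and then feeds them to .iloc, which interprets them as positions; any label beyond the frame's length triggers the IndexError.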
https://codedump.io/share/7MUmZ1sp0xgf/1/quotindexerror-positional-indexers-are-out-of-boundsquot-when-they39re-demonstrably-not
CC-MAIN-2018-47
refinedweb
264
61.63
update: By relying on dictFind, this implementation makes an assumption that you'll be storing strings in your set. This'll crash if the set is encoded as an intset. Open up t_set.c and look at setTypeIsMember. You should either call this directly, or use it for inspiration. In part 1 we set up our xfind method and hooked it into Redis. Now it's time to write the actual implementation. As a reminder, our goal is to take a sorted set, subtract a set from it, and provide paging (offset/count). The lookupKeyReadOrReply function we used returned a redis object (robj), which we briefly described. robj has one other member which is of interest to us: encoding. You see, most data structures in Redis have multiple possible implementations. A sorted set, for example, can be either a skiplist or a ziplist. Based on the data being stored (the type of values or the number of values), Redis will pick one implementation instead of another. It'll also take care of converting one implementation into another as needed. Unfortunately, this makes our life more difficult. If you browse through Redis' codebase, you'll find many lines that go something like if (r->encoding == XYZ) { ... } else { ... }. In many cases, wrappers are available to abstract this implementation detail - however they aren't always publicly available. While the accessibility is something we could easily fix, we'll take a different path. Instead of coding against sorted sets and sets as generic data structures, we'll code against specific implementations. There are already some commands in Redis which do this.
For example, if you sort with a sorted set, Redis will convert it to a skiplist, which is exactly what we'll do:

    robj *zobj = lookupKeyReadOrReply(c, c->argv[1], shared.czero);
    if (zobj == NULL || checkType(c, zobj, REDIS_ZSET)) { return; }
    zsetConvert(zobj, REDIS_ENCODING_SKIPLIST);
    zset *zset = zobj->ptr;

Similarly, sets (which is what we'll diff with) use a special encoding when only storing integers. Since our set will contain strings, we can safely assume that our set is encoded as a hashtable, rather than an intset.

    sobj = lookupKeyReadOrReply(c, c->argv[2], shared.czero);
    if (sobj == NULL || checkType(c, sobj, REDIS_SET)) { return; }
    dict *diff = (dict*)sobj->ptr;

A nice thing about the skiplist implementation we are programming against is the ability to easily walk forwards or backwards through the items. For now, we'll only concern ourselves with going backwards, but it would be trivial to add support for an order (asc/desc) parameter. Our code will first get the tail of the skiplist, and then move backwards:

    zskiplist *zsl = zset->zsl;
    zskiplistNode *ln = zsl->tail;
    while(ln != NULL) {
        //todo
        ln = ln->backward;
    }

The full loop, including applying the diff, looks like:

    long found = 0, added = 0;
    while(ln != NULL) {
        robj *item = ln->obj;
        if (dictFind(diff, item) == NULL && found++ >= offset) {
            addReplyBulk(c, item);
            if (++added == count) {
                break;
            }
        }
        ln = ln->backward;
    }

This can be broken down into a few steps. First, we get the value of the current element. If this item is not in our dictionary, and we are beyond our offset, we can add the item to our reply. If we've added all that was expected, we can break. Our response needs to be prefixed with the number of values. We can't know that value upfront (it could be less than count). Redis provides a function to allocate space for the length and fill it in after the fact:

    void *replylen = addDeferredMultiBulkLength(c);
    while(ln != NULL) {
        //...
    }
    setDeferredMultiBulkLength(c, replylen, added);

And that, dear reader, is a simple (and very tailored) implementation of xdiff. There are all sorts of ways to improve this. We could add an order parameter or implement something like sort's GET feature. One last improvement we'll make is to skip directly to offset, since we know we won't find an item that fits in our page until then. Instead of pointing to the set's tail, we can simply use the zslGetElementByRank function. Our complete code looks like:

    #include "redis.h"

    zskiplistNode* zslGetElementByRank(zskiplist *zsl, unsigned long rank);

    void xdiffCommand(redisClient *c) {
        long offset, count, added = 0;
        robj *zobj, *sobj;
        zset *zset;
        dict *diff;
        void *replylen;

        /* parsing of the offset and count arguments is omitted
           from this excerpt */

        zobj = lookupKeyReadOrReply(c, c->argv[1], shared.czero);
        if (zobj == NULL || checkType(c, zobj, REDIS_ZSET)) { return; }

        sobj = lookupKeyReadOrReply(c, c->argv[2], shared.czero);
        if (sobj == NULL || checkType(c, sobj, REDIS_SET)) { return; }

        zsetConvert(zobj, REDIS_ENCODING_SKIPLIST);
        zset = zobj->ptr;
        diff = (dict*)sobj->ptr;

        long zsetlen = dictSize(zset->dict);
        zskiplistNode *ln = zslGetElementByRank(zset->zsl, zsetlen - offset);

        replylen = addDeferredMultiBulkLength(c);
        while(ln != NULL) {
            robj *item = ln->obj;
            if (dictFind(diff, item) == NULL) {
                addReplyBulk(c, item);
                if (++added == count) {
                    break;
                }
            }
            ln = ln->backward;
        }
        setDeferredMultiBulkLength(c, replylen, added);
    }

Maybe this implementation will prove too specific, but hopefully it'll provide some insight into how to build your own.
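Stripped of the Redis internals, the paging-with-diff logic is small. Here is a rough Python sketch of the same backwards walk over plain data structures (an illustration of the algorithm only, not Redis API code; the function and variable names are mine):

```python
def xdiff(sorted_items, diff, offset, count):
    """Walk sorted_items from the tail (highest score first), skip anything
    present in diff, and honor offset/count paging, mirroring the C loop."""
    result, found = [], 0
    for item in reversed(sorted_items):    # like following ln->backward
        if item in diff:
            continue                       # like dictFind(diff, item) != NULL
        if found >= offset:
            result.append(item)
            if len(result) == count:
                break
        found += 1
    return result

print(xdiff(["a", "b", "c", "d", "e"], {"d"}, 1, 2))  # ['c', 'b']
```

Note that, as in the C version, offset counts only items that survive the diff; items filtered out by the set never consume part of the page.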
https://www.openmymind.net/Writing-A-Custom-Redis-Command-In-C-Part-2/
CC-MAIN-2019-13
refinedweb
774
63.29
Author: Ken Cochrane
Fork Description: I reorganized the code, added caching, and made a few tweaks here and there.
Description: Django middleware and view decorator to detect phones and small-screen devices
Version: 0.1.6
Last Update: 9/30/2011
Requirements: Django 1.1 or newer; Django caching to be enabled if you want to cache the objects

How to use:

Using django-mobi is very simple. Simply place the mobi package into your project's path, and then do one of the following:

Using the mobi.MobileDetectionMiddleware Middleware

This middleware will scan all incoming requests to see if each one comes from a mobile device. If it does, it will set the request.mobile property to True. To use it, all you have to do is add mobi.MobileDetectionMiddleware to your MIDDLEWARE_CLASSES tuple in your settings.py.

Then in your view you can check request.mobile - if it's True then treat it like a small-screen device. If it's False then it's probably a desktop browser, or a spider or something else.

If you want to have some items not triggered by the middleware (for example iPad), then add a setting called MOBI_USER_AGENT_IGNORE_LIST and add the item to the list.

    MOBI_USER_AGENT_IGNORE_LIST = ['ipad',]

Using the mobi.MobileRedirectMiddleware Middleware

This middleware will scan all incoming requests to see if the request comes from a mobile device; if so, it will redirect the request to a different URL. This is good if you want to force all mobile traffic to a mobile-only version of your site. To use it, add mobi.MobileRedirectMiddleware to your MIDDLEWARE_CLASSES tuple in your settings.py, and also add a MOBI_REDIRECT_URL setting set to the website where you want to redirect all mobile traffic.

Not using the Middleware

If you only have certain views that need the distinction, you can choose not to search every request you receive.
All you need to do is wrap the relevant views like this:

    from mobi.decorators import detect_mobile

    @detect_mobile
    def my_mobile_view(request):
        if request.mobile:
            # do something with mobile
            pass
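Pulling the settings mentioned above into one place, a minimal settings.py fragment might look like this (the redirect URL and the middleware ordering are illustrative assumptions, not taken from the README):

```python
# settings.py -- illustrative sketch, adjust to your project
MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'mobi.MobileDetectionMiddleware',        # sets request.mobile on each request
    # or, to force all mobile traffic to another site instead:
    # 'mobi.MobileRedirectMiddleware',
)

MOBI_REDIRECT_URL = 'http://m.example.com/'  # only used by MobileRedirectMiddleware
MOBI_USER_AGENT_IGNORE_LIST = ['ipad']       # keep iPads from triggering detection
```

Use either the detection middleware (and branch on request.mobile in views) or the redirect middleware (and let it route mobile traffic wholesale), but not both for the same traffic.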
https://bitbucket.org/kencochrane/django-mobi/src/51cf1cb09961/?at=default
CC-MAIN-2015-32
refinedweb
342
57.98
> > Like maybe just having folks point a source-path to their local working
> > copies. Source-path is supposed to take precedence over anything found
> > in the library path. It slows down compilation, of course, but also
> > means you don't have to build the SWCs before testing your changes.
>
> Well, this is exactly what I do today. I isolate the problem to a
> particular project in the frameworks folder and add that project's src as
> a source path to my test project. There may be dependencies, and I work
> through adding all the relevant src directories until the compile errors
> go away. This makes it much easier for me to work on the SDK code while
> testing it without having to recompile all the swcs all the time. I
> wasn't sure what the correct/better way was.

Thanks,
Om

> -Alex
>
> On 7/25/13 6:16 AM, "Erik de Bruin" <erik@ixsoftware.nl> wrote:
>
> >There is a gap between how you build the SDK and how you make and test
> >changes to the SDK:
> >
> >(working from memory here) After you build the SDK ('ant main') you
> >run the script 'ide/constructFlexForIDE' to prepare the SDK for use in
> >Flash Builder. You then create your test application (in which you
> >will reproduce the bug) to use this newly built and prepared SDK.
> >
> >Now a question to all of you: can we make an app (or extend the
> >Installer) so the steps to prepare a system for building the SDK are
> >performed automatically?
> >
> >- I think we can download and launch the installers (not sure if we
> >can poll for completion, though)
> >- we can create an env.properties with all the paths set, bypassing
> >the need to set system-wide variables in obscure settings panes/files
> >- we can create a preset directory structure to hold the source files
> >and their dependencies (AIR SDK, playerglobal etc.)
> >- we can find and edit mm.cfg and create a FlashPlayerTrust file
> >- etc.
> >
> >Does anyone see any major obstacles that I'm overlooking?
> >
> >EdB
> >
> >On Thu, Jul 25, 2013 at 1:31 PM, Justin Mclean <justin@classsoftware.com>
> >wrote:
> >> Hi,
> >>
> >> I took some notes while fixing this bug.
> >>
> >> Any feedback and questions welcome.
> >>
> >> Bug information
> >> Note that it's marked as "easyfix" and an RTE. Generally these sorts
> >> of bugs don't take much to fix. Looking at the report you can see it's
> >> for a mobile project and Adobe Flex 4.6, so the line numbers are not
> >> going to match up to the current development branch. Search for the
> >> line of code where the error occurs; as the code may have changed,
> >> first look for the function name. You can see that the error is now on
> >> line 1581.
> >>
> >> Reproduce the Bug
> >> There is no sample code, so you need to work out how to reproduce it:
> >> create a new sample mobile project containing a horizontal spark list
> >> and run it. See if you can reproduce the issue using the 4.6 SDK. No
> >> luck. (See the JIRA issue for the code used.)
> >>
> >> Try and work out how to generate the RTE. Looking at the snapElement
> >> method, it looks like the error would only occur when scrollProperty is
> >> null, and that could happen if both canScrollHorizontally and
> >> canScrollVertically are false. It's possible that this could happen
> >> when a size change removes the scrollbars as the screen orientation
> >> changes. This is probably why the bug is hard to reproduce, as it
> >> depends on the amount of content in the list and the screen size. The
> >> easy way to simulate this is to turn off both horizontal scrolling and
> >> vertical scrolling and call the mx_internal method. Modify the code to
> >> call the method directly with both scroll bar policies off and snapping
> >> set to something other than none. Bingo, we have the RTE!
> >>
> >> protected function init(event:FlexEvent):void
> >> {
> >>     list.scroller.mx_internal::snapElement(10, false);
> >> }
> >>
> >> <s:List id="list">
> >>     <s:layout>
> >>         <s:HorizontalLayout />
> >>     </s:layout>
> >> </s:List>
> >>
> >> Test in the develop branch
> >> Change to using the latest develop branch and see if the issue still
> >> exists, and yes it does.
> >>
> >> Fix the bug
> >> To fix it, add a null check and recompile the spark project by changing
> >> to the frameworks/projects/spark directory and running ant to compile;
> >> this should take less than a minute.
> >>
> >> Clean the FB project so it picks up the framework change. Sometimes it
> >> will cache the swc and may require swapping to an old SDK and then back
> >> again. Double-check you are using the SDK you made the change in.
> >>
> >> Test the Change
> >> Run the program again and/or test the code path in the debugger to see
> >> that the issue has been fixed. Play about with the sample application
> >> to make sure nothing else has been broken.
> >>
> >> Running mustella tests
> >> A change to the scroller class could affect a lot of tests, but we can
> >> run the basic tests to make sure and assume the CI will pick up any
> >> other issues. For a change like this I wouldn't expect any issues or
> >> side effects, as the RTE would normally occur, but it's best to be safe.
> >>
> >> ./mini_run.sh tests/gumbo/components/ScrollBar
> >> ./mini_run.sh tests/gumbo/components/Scroller
> >>
> >> Both sets of tests pass as expected.
> >>
> >> [java] =====================================================
> >> [java] Passes: 122
> >> [java] Fails: 0
> >> [java] =====================================================
> >>
> >> [java] =====================================================
> >> [java] Passes: 74
> >> [java] Fails: 0
> >> [java] =====================================================
> >>
> >> Commit the change
> >> If you are a committer you can directly commit the change via a git
> >> push.
> >> If you are not a committer, you would need to generate a patch file
> >> and add it to the JIRA issue. Make sure you generate the patch from
> >> the base SDK directory, like so:
> >>
> >> git diff frameworks/projects/spark/src/spark/components/Scroller.as
> >>
> >> diff --git a/frameworks/projects/spark/src/spark/components/Scroller.as
> >> b/frameworks/projects/spark/src/spark/components/Scroller.as
> >> index 9f91412..c48222d 100644
> >> --- a/frameworks/projects/spark/src/spark/components/Scroller.as
> >> +++ b/frameworks/projects/spark/src/spark/components/Scroller.as
> >> @@ -1579,7 +1579,8 @@ public class Scroller extends SkinnableComponent
> >>          }
> >>          else
> >>          {
> >> -            viewport[scrollProperty] = snapScrollPosition;
> >> +            if (scrollProperty)
> >> +                viewport[scrollProperty] = snapScrollPosition;
> >>
> >>              return null;
> >>          }
> >>
> >> Update JIRA
> >> Mark the bug as resolved, noting down the Apache Flex versions it has
> >> been fixed in.
> >>
> >> Hope that was helpful,
> >> Justin
> >
> >
> >--
> >Ix Multimedia Software
> >
> >Jan Luykenstraat 27
> >3521 VB Utrecht
> >
> >T. 06-51952295
> >I.
> >
http://mail-archives.apache.org/mod_mbox/flex-dev/201307.mbox/%3CCACK5iZcbuj6L0WPOSHyki2iFyniK8W6xJQZ9+CXL2akC5tNNxA@mail.gmail.com%3E
CC-MAIN-2017-43
refinedweb
1,050
70.63
This chapter describes groups in MarkLogic Server, and includes the following sections: This chapter describes how to use the Admin Interface to create and configure groups. For details on how to create and configure groups programmatically, see Creating and Configuring Groups in the Scripting Administrative Tasks Guide. The basic definitions for group, host, and cluster are the following: For single-node configurations, you can only use one group at a time (because there is only one host). For clusters configurations with multiple hosts, you can have as many group configurations as makes sense in your environment. Groups allow you to have several configurations, each of which applies to a distinct set of hosts. Different configurations are often needed when different hosts perform different tasks, or when the hosts have different system capabilities (disk space, memory, and so on). In cluster configurations, a common configuration is to have one group defined for the evaluator nodes (hosts that service query requests) and another group defined for the data nodes (hosts to which forests are attached). HTTP, ODBC, XDBC, and WebDAV servers are defined at the group level and apply to all hosts within the group. Schemas and namespaces can also be defined at the group level to apply group-wide. The Configure tab of the Group Administration section of the Admin Interface enables you to define configuration information for memory settings, SMTP server settings, and other configuration settings. The values for the settings are set at installation time based on your system memory configuration at the time of the installation. For a description of each configuration option, see the Help tab of the Group Administration section of the Admin Interface. The relationships between a cluster, a group and a host in MarkLogic Server may be best illustrated with an example. In this example, each machine is set up as a host within the example cluster. 
Specifically, hosts E1, E2 and E3 belong to a group called Evaluator-Nodes. They are configured with HTTP servers and XDBC servers to run user applications. All hosts in the Evaluator-Nodes group have the same MarkLogic Server configuration. Hosts D1, D2 and D3 belong to a group called Data-Nodes. Hosts in the Data-Nodes group are configured with data forests and interact with the nodes in the Evaluator-Nodes group to service data requests. See the sections on databases, forests and hosts for details on configuring data forests. For more information about clusters, see the Scalability, Availability, and Failover Guide. If you are administering a single-host MarkLogic environment, the host is automatically added to a Default group during the installation process. You will only have one host in the group and will not be able to add other hosts to the group. The following procedures describe how to create and manage groups in MarkLogic Server: To create a new group, perform the following steps: MarkLogic Server will use this name to refer to the group. For information about auditing, including how to configure various audit events, see Auditing Events. Adding a group is a 'hot' administrative task; the changes are reflected immediately without a restart. To view the settings for a particular group, perform the following steps: You must drop all hosts assigned to a group before you can delete a group. To delete a group, perform the following steps: Deleting a group is a hot operation; the server does not need to restart to reflect your changes. To enable encrypted SSL communication between hosts in the group, set xdqp ssl enabled to true. All communications to and from hosts in the group will be secured, even if the other end of the socket is in a group that does not have SSL enabled. The SSL keys and certificates used by the hosts are automatically generated when you install or upgrade MarkLogic Server. 
No outside authority is used to sign certificates used between servers communicating over the internal XDQP connections in a cluster. Such certificates are self-signed and trusted by each server in the cluster. For details on configuring SSL communication between web browsers and App Servers, see Configuring SSL on App Servers. For details on configuring FIPS 140-2 mode for SSL communication, see OpenSSL FIPS 140-2 Mode. The following screen capture shows the options related to configuring SSL for intra-cluster XDQP communication.

The installation process configures an SMTP server based on the environment at installation time. A single SMTP server is configured for all of the hosts in a group. The SMTP configuration is used when applications use the xdmp:email function. To change the SMTP server or the SMTP timeout for the system (the time after which SMTP requests fail with an error), perform the following steps:

Changing any SMTP settings is a hot operation; the server does not need to restart to reflect your changes.

Perform the following steps to restart all the hosts in a group from the Admin Interface:

The restart operation normally completes within a few seconds. It is possible, however, for it to take longer under some conditions (for example, if the Security database needs to run recovery or if the connectivity between hosts in a cluster is slow). If it takes longer than a few seconds for MarkLogic Server to restart, then the Admin Interface might return a 503: Service Unavailable message. If you encounter this situation, wait several seconds and then reload the Admin Interface.
https://docs.marklogic.com/8.0/guide/admin/groups
CC-MAIN-2020-34
refinedweb
904
50.87
Title: Caching Concepts
Author: Shivprasad Koirala
Email: shiv_koirala@yahoo.com
Language: Caching Concepts
Level: Beginner
Description: .Net Interview Questions 4th Edition (B)?

Part 1 - SoftArchInter1.aspx
Part 2 - SoftArch2.aspx
Part 3 - SoftArch3.aspx
Part 4 - SoftArch4.aspx
UML interview questions part 1 - SoftArch5.aspx

The caching classes live in the 'System.Web.Caching' namespace. You can get a reference to the Cache object by using the Cache property of the HttpContext class in the 'System.Web' namespace, or by using the Cache property of the Page object. When you add an item to the cache, you can define dependency relationships that can force that item to be removed from the cache under specific activities of dependencies. For example, if the cache object is dependent on a file, then when the file data changes you want the cache object to be updated. Following are the supported.

    Partial Class Default_aspx
        Public Sub displayAnnouncement()
            ' ...
        End Sub

        Private Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Init
            displayAnnouncement()
        End Sub
    End Class

Following are the differences between ASP and ASP.NET session state:

• ASP session state has no inherent solution to work with Web Farms. ASP.NET session can be stored in a state server or SQL SERVER, which can support multiple servers.
• ASP session only functions when the browser supports cookies. ASP.NET session can be used with browser-side cookies or independent of them.

Session state can be stored in the following modes:

• In Proc: In this mode, session state is stored in the memory space of the Aspnet_wp.exe process. This is the default setting. If IIS reboots or the web application restarts, then session state is lost.
• State Server: In this mode, session state is serialized and stored in a separate process (Aspnet_state.exe); therefore, the state can be stored on a separate computer (a state server).
• SQL SERVER: In this mode, session state is serialized and stored in a SQL Server database.
Following are the things to remember so that SQL SERVER mode works properly:

• In SQL SERVER mode, session data is stored in a different process, so you must ensure that your objects are serializable.
• The IIS metabase (\LM\W3SVC\2) must be identical across all servers in that farm.
• By default, session objects are stored in "TempDB"; you can configure it to store them outside "TempDB" by running the Microsoft-provided SQL script.

Note: the "TempDB" database is re-created after a SQL SERVER computer reboot. If you want to maintain session state across reboots, it is best to run the SQL script and store session objects outside the "TempDB" database.

Following are the client-side options for state management:

• Hidden fields
• View state
• Hidden frames
• Cookies
• Query strings

Following are the benefits of using view state:

• No server resources are required because state is in a structure in the page code.
• Simplicity.
• States are retained automatically.
• The values in view state are hashed, compressed, and encoded, thus representing a higher state of security than hidden fields.
• View state is good for caching data in Web frame configurations because the data is cached on the client.

Following are limitations of using view state:

• Page loading and posting performance decreases when large values are stored, because view state is stored in the page.

Following are the benefits of using hidden frames:

• You can cache more than one data field.
• The ability to cache and access data items stored in different hidden forms.
• The ability to access JScript® variable values stored in different frames if they come from the same site.

The limitations of using hidden frames are:

• Hidden frames are not supported on all browsers.
• Hidden frame data can be tampered with, thus creating a security hole.

Following are the benefits of using cookies for state management:

• No server resources are required, as they are stored on the client.
• They are lightweight and simple to use.

Following are limitations of using cookies:

• Most browsers place a 4096-byte limit on the size of a cookie, although support for 8192-byte cookies is becoming more common in the new browser and client-device versions available today.
• Some users disable their browser or client device's ability to receive cookies, thereby limiting the use of cookies.
• Cookies can be tampered with, thus creating a security hole.
• Cookies can expire, thus leading to inconsistency.

Below is sample code for implementing cookies:

    Request.Cookies.Add(New HttpCookie("name", "user1"))

A query string is information sent to the server appended to the end of a page URL.

Following are the benefits of using query strings for state management:

• No server resources are required. The query string is contained in the HTTP request for a specific URL.
• All browsers support query strings.

Following are limitations of query strings:

• Query string data is directly visible to the user, thus leading to security problems.
• Most browsers and client devices impose a 255-character limit on URL length.

Sliding Expiration specifies that the cache will expire if a request is not made within a specified duration. Sliding expiration policy is useful whenever you have a large number of items that need to be cached, because this policy enables you to keep only the most frequently accessed items in memory. For example, the following code specifies that the cache will have a sliding duration of one minute. If a request is made 59 seconds after the cache is accessed, the validity of the cache would be reset to another minute.

Sometimes you will also want to be able to post to another page in your application. The Server.Transfer method can be used to move between pages; however, the URL does not change.

• Create a Web Form and insert a Button control on it using the VS .NET designer.
• Set the button's PostBackUrl property to the Web Form you want to post back to.
For instance, in this case it is "nextpage.aspx".
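Sliding expiration has a precise meaning that the prose above only gestures at: every read pushes the expiry countdown forward, so only idle entries expire. A small toy model of that semantics, written in Python as a language-neutral sketch (the real ASP.NET Cache API differs):

```python
import time

class SlidingCache:
    """Toy cache with sliding expiration: a hit resets the countdown."""
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.store = {}                      # key -> (value, last_access)

    def set(self, key, value):
        self.store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, last = entry
        if time.monotonic() - last > self.window:
            del self.store[key]              # idle too long: expired
            return None
        self.store[key] = (value, time.monotonic())  # a hit slides the window
        return value

cache = SlidingCache(window_seconds=60)
cache.set("announcement", "cached page fragment")
assert cache.get("announcement") == "cached page fragment"
```

This is exactly why the article recommends sliding expiration for large caches: frequently accessed items keep renewing themselves, while rarely used ones age out.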
http://www.codeproject.com/kb/aspnet/cacheinterview.aspx
crawl-002
refinedweb
934
64.91
Getting Started with MongoDB & MongoMapper

Clinton R. Dreisbach, Former Viget

As part of our NoSQL exploration, I've spent some time lately with MongoDB. MongoDB bills itself as a "schema-free document-oriented database." In using MongoDB, I've found it to be an easy transition from RDBMS's because of the way it organizes document-based data. Here's the basics:

MongoDB has collections of data, not tables. Unlike CouchDB, which is also a document-oriented DB, Mongo has namespaces for data. These are schema-less, so any data could go in each namespace. In my practice, I've persisted objects of one class into each collection, not unlike ActiveRecord with MySQL or any other RDBMS.

MongoDB has indexes. Even though each collection has no schema, you can still index the data in a collection based off a field. Not all documents in a collection have to have this field.

MongoDB has a query language and query profiling. While you can use JavaScript to search through a collection, like CouchDB, you also have access to a rich query language that can filter based on fields, like SQL, and filter based on the contents of embedded documents, which proves to be totally freaking awesome. Instead of a complex join, you can query for all documents in the posts collection that have an embedded comment in the last month.

Given the similarities between MongoDB and a relational database, you'd think it would be easy to use in Ruby in place of ActiveRecord, and you'd be right. John Nunemaker has created a gem called MongoMapper to work as an object mapper to MongoDB. Using MongoMapper, you can create model classes like so:

    class Book
      include MongoMapper::Document

      key :title, String, :required => true
      key :author, String
      key :published_at, Date
      key :user_id, String
      timestamps! # HECK YES

      belongs_to :user
      many :chapters
    end

You'll note several things here.
Keys are defined in the model, like in a DataMapper model, although they aren’t defining a schema, only a mapping for this particular model. (If the difference seems subtle, that’s because it is: MongoMapper in many ways lets you treat MongoDB as a relational DB.) The keys can be typecast as I’ve done, although they don’t have to be. I’ve defined relationships to other models, and MongoMapper is smart about this. In the case of many :chapters, it looks to see if the Chapter class is embeddable. If so, it will embed Chapter documents in my Book document. If not, it will store them in their own collection. Just because MongoMapper defines a document with keys, you don’t have to stick to the keys. Because collections are schema-less, you can add new attributes at will, like in this example: book = Book.new(:title => "Moby Dick 2") # => #<Book _id: , title: Moby Dick 2, author: > book.author = "Dan Brown" book.update_attributes(:author => "J.K. Rowling", :isbn => '1-2345-6789-0', :amazon_score => 1.25) book.save book = Book.find_by_title("Moby Dick 2") # => #<Book _id: 4aafe487477a51f0e8000002, # title: Moby Dick 2, # author: J.K. Rowling, # isbn: 1-2345-6789-0, # amazon_score: 1.25> You can see that I can set keys defined in the class with setters, but I can set any attribute through update_attributes. MongoMapper’s API is roughly equivalent to ActiveRecord’s, allowing you to use in a Rails application with little difficulty. The only things I’ve had to do are define human_name on model classes and define new_record? on embedded documents. The only other thing you need to know to get started with MongoMapper is how to tell it what database to use. All you have to do is set MongoMapper.connection and MongoMapper.database. 
In my sample Rails app, I’ve put a file in config/initializers/ that looks like this: db_config = YAML::load(File.read(RAILS_ROOT + "/config/database.yml")) if db_config[Rails.env] && db_config[Rails.env]['adapter'] == 'mongodb' mongo = db_config[Rails.env] MongoMapper.connection = Mongo::Connection.new(mongo['hostname']) MongoMapper.database = mongo['database'] end You can see my database.yml file for more information on setup or check out Ben Scofield’s Rails template for MongoMapper. That should get you started! I’ve really enjoyed using MongoDB so far. For further information, checkout the MongoDB Ruby driver code, the MongoMapper code, and the code for my sample app on GitHub, and look out for more upcoming posts about how we’ve used MongoDB.
https://www.viget.com/articles/getting-started-with-mongodb-mongomapper/
CC-MAIN-2022-21
refinedweb
740
65.73
Programming :: Use A Class Inside A Struct? Apr 8, 2011

Is it possible to use a class inside a struct? I keep getting a segmentation fault with this code:

Code:
struct my_struct {
    unsigned count;
    std::string msg;
[code]....

Is it not possible to declare a vector inside a C++ class? Have a look at the following code:

Code:
#include <stdio.h>
#include <iostream>
#include <malloc.h>  // malloc
#include <strings.h> // bzero
[code]....

I have an old C application in which I am trying to include some STL containers. When I use the STL container alone it works fine, but when I include it into a C struct I have segmentation fault errors. I know that it is not a good idea to mix C and C++. Considering this code:

Code:
typedef struct{
    int shmid;
    ...
    APPLSPACETYPE applSpace;
[code]...

and how to make a malloc for this issue; something like:

Code:
mem->applSpace.rData.completeGroups2zeroGroup = (map_completeGroups2zeroGroup_type *)malloc((sizeof(map_completeGroups2zeroGroup_type)+1) * sizeof(char));

How do we allocate memory for a struct? What I did was:

Code:
int main()
struct amp {
[code].....
cout << "The size of 'struct' is" << sizeof(struct amp) << "and it is located at" << struct amp* s = malloc(sizeof(struct amp)) << endl;

It gives me an error: In function 'int main()':
error: expected primary-expression before 'struct'
error: expected ';' before 'struct'

For a work project, I've got a bunch of python code from about a year ago that controls the movement of our EVI-D30 camera over a ttyUSB connection. It used to work fine on a 32-bit Fedora box, but recently we moved our whole project over to a 64-bit Gentoo server, and the same code seems to be worthless on the new platform. I didn't write the code, so I'm having trouble figuring out how to fix it. Error messages usually look like this:

Code:
File "./CameraController.py", line 172, in pan
    turn_callback(cmdStruct[0], cmdStruct[1])
File "./CameraController.py", line 147, in turn_callback
    cameras[camera].TiltUp()
[code]....

Is there any problem that might arise by having a vector as a member of a struct in C++, as follows?

ex.
struct A {
    int a;
[code]...

I have nested structures which are referenced with pointer variables. Example:

typedef struct {
    int a;
    char *b;
    int *c;
}EXP2;
[Code]...

While trying to access the value of sv.exp2->a, it throws a segmentation fault error. How do I access this variable?

I have the following structure:

typedef struct {
[code]...

I have what should be a relatively simple program (fadec.c) that maps a struct from an included header file (fadec.h) to a shared memory region, but I'm struggling accessing members in the struct from the pointer returned by shmat. Ultimately, I want to access members in the shared memory structure with a globally declared version of the struct, shm__. Not only do I not know how to accomplish that, but I can't even seem to access members of the shared struct directly from the pointer itself (the compiler complains about dereferencing a pointer to incomplete type). I'm at a loss and could use another set of eyes if you guys don't mind taking a gander:

Compile Errors:
tony-pc:/cygdrive/p/test> cc -o fadec fadec.c
fadec.c: In function 'main':
fadec.c:30: error: dereferencing pointer to incomplete type
fadec.c:31: error: dereferencing pointer to incomplete type
[Code]...

int GetTime(struct timeval tv)
{
    gettimeofday(&tv, NULL)
    printf("%d ", tv.tv_sec); /* here is the right value */
    return 0;
}
[Code]....

What happened? Does struct timeval have a self timer?

Is it possible to create a global struct variable with predefined member values? Only one instance of that struct is ever needed.

I understand that the block_device pointer *bd should get initialized. The program should produce an initialization error for *bd. The compiler is producing '->'. I am not understanding why.
[code]...
This is for kernel 2.6.29.6.

I'm trying to code a kernel module that displays process information, but I can't work out how to count opened file descriptors per task. I have been able to write a module that lists all the current process names along with their pid number in /var/log/messages. Basically, I cycle through the ring of processes using the macro for_each_process(task) and printk the comm and pid of each task. I'm trying to see how many file descriptors each task has open. I've been reading up in books and all over the internet. At first I thought I needed to access max_fds under files_struct, but that just lists the maximum number of file descriptors that can be opened per task, which by default is set at 256. I then thought about counting the elements in the fd_array. But then I learned that every task's fd_array is initially set at 32. Now I know that I need to access open_fds of type fd_set * in files_struct. open_fds is a pointer to all the open file descriptors. The problem is that I don't know how to access a pointer of type fd_set. Is there a good guide or book that really focuses on type fd_set and open_fds? Every book and resource I've read never really goes into depth on this. What is the relationship between the files struct, open_fds, and the open file descriptors in a task?

File "/home/mohit/Download/vdrift-2009-06-15/SConstruct", line 9, in <module>
scons: warning: The BoolOption() function is deprecated; use the BoolVariable() function instead.
File "/home/mohit/Download/vdrift-2009-06-15/SConstruct", line 13, in <module>
Checking for C++ header file asio.hpp... (cached) yes
Checking for C++ header file boost/bind.hpp... (cached) yes
Checking for C++ header file GL/gl.h... (cached) yes
Checking for C++ header file GL/glu.h... (cached) yes
Checking for C++ header file SDL/SDL.h... (cached) yes
[Code]........

I'm struggling with the issue of passing a vector of a class to itself, here's what state its in now...
Error messages usually look like this: Code: File "./CameraController.py", line 172, in pan turn_callback(cmdStruct[0], cmdStruct[1]) File "./CameraController.py", line 147, in turn_callback cameras[camera].TiltUp() [code].... is there any problem that might rise by by having a vector as a member of struct in c++ as follows.ex. struct A { int a; [code]... I having a nested structures which are referenced with pointer variables : example : typedef struct { int a ; char *b ; int *c ; }EXP2 ; [Code]... While trying to access the value of sv.exp2->a it throws segmentation fault error.How to acess this variable ?? I following structure typedef struct { [code]... I have what should be a relatively simple program (fadec.c) that maps a struct from an included header file (fadec.h) to a shared memory region, but Im struggling accessing members in the struct from the pointer returned by shmat. Ultimately, I want to access members in the shared memory structure with a globally declared version of the struct, shm__. Not only do I not know how to accomplish that, but I cant even seem to access members of the shared struct directly from the pointer itself (compiler complains about dereferencing pointer to incomplete type). Im at a loss and could use another set of eyes if you guys dont mind taking a gander: Compile Errors: tony-pc:/cygdrive/p/test> cc -o fadec fadec.c fadec.c: In function 'main': fadec.c:30: error: dereferencing pointer to incomplete type fadec.c:31: error: dereferencing pointer to incomplete type [Code]... int GetTime(struct timeval tv) { gettimeofday(&tv, NULL) printf("%d ", tv.tv_sec); /* here is the right value */ return 0; } [Code].... What happend? struct timeval has a self timer? Is it possible to create a global struct variable with predefined member values?Only one instance of that struct is ever needed.View 1 Replies View Related ! I understand that block_device pointer *bd sholuld get initialized. Program should produce initialization error for *bd. 
Compiler is producing '->'. I am not understanding why? [code]... This for Kernel 2.6.29.6. I'm trying to code a kernel module that displays process information. how to count opened file descriptors per task. I have been able to write a module that lists all the current process names along with their pid number in /var/log/messages. Basically, I cycle through the ring of processes using the macro for_each_process(task) and printk the comm and pid of each task. I'm trying to see how many file descriptors each task has open. I've been reading up in books and all over the internet. At first I thought I needed to access max_fds under files_struct, but that just lists the maximum number of file descriptors that can be opened per task, which by default is set at 256. I then thought about counting the elements in the fd_array. But then I learned that every task's fd_array is initially set at 32. Now I know that I need to access open_fds of type fd_set * in files_struct. open_fds is a pointer to all the open file descriptors. The problem is that I don't know how to access a pointer of type fd_set. Is there a good guide or book that really focuses on type fd_set and open_fds? Every book and resource I've read never really go into depth on this. relationship between files struct, open_fds, and the open file descriptors in task?. File "/home/mohit/Download/vdrift-2009-06-15/SConstruct", line 9, in <module> scons: warning: The BoolOption() function is deprecated; use the BoolVariable() function instead. File "/home/mohit/Download/vdrift-2009-06-15/SConstruct", line 13, in <module> Checking for C++ header file asio.hpp... (cached) yes Checking for C++ header file boost/bind.hpp... (cached) yes Checking for C++ header file GL/gl.h... (cached) yes Checking for C++ header file GL/glu.h... (cached) yes Checking for C++ header file SDL/SDL.h... (cached) yes [Code]........ I'm struggling with the issue of passing a vector of a class to itself, here's what state its in now... 
(tried many variations, but without direction). Code: #include <iostream> #include <string> [code]... I am trying to make a periodic boundary condition type function, using an existing class given to me in lecture notes, but am having some trouble! Effectively, I am trying to make an array such that, for a point in any row of a 2D matrix ("Matrix(i,j)"), the command "next_i[i]" will return "(i+1)%L", where L is the number of data points in the row. This will enable me to select a point to the right of any point in the matrix: "Matrix(next[i],j)" [Code].... My new guy has created several functioning webpages on his machine with TOMCAT 6 with Sun JDK, yet our machines use TOMCAT 5.5 with Open JDK, which his webpages don't show. Do you have any idea how to make them work? The error showing in a browser: Code: HTTP Status 500 - type Exception report description The server encountered an internal error () that prevented it from fulfilling this request. exception org.apache.jasper.JasperException: Unable to compile class for JSP: [Code]... I've had to do some code in java, a language I'm very much unfamiliarly with so please excuse my incorrect use of terms. The basic outline of my problem is I create a class object as a local within a swing button function it works fine. If I create it as a global ( with I think I need to do ) within main, then prototype it with the other swing objects at the bottom of the file when it is called it causes a host of problems. I think the easiest way is to show it.View 2 Replies View Related Okay so I'm working on a program here as I'm learning java,I have an array that is initialized with 5 objects that are hard coded. 
I have made a GUI that takes the input needed and creates an object with those values.I need to add that object to the ArrayList that I have previously made.Okay, so I have three classes, guiclass.java, main.java and gladiator.java Objects are made and defined in "gladiator".Main contains my public static void main section, launches my gui, creates my five hard coded objects, creates my ArrayList and adds my five hard coded objects to the ArrayList.Now, I need to add the object that I generated in the guiSection [action Listener]to the ArrayList that I created in my main class's public static void main string... section. Problem is my arraylist "cannot be resolved" from guiclass.View 13 Replies View Related I am looking to write a function to return an MD5 hash in Java but I don't want to us the MessageDigest class as I am using the J2ME framework which doesn't include it.View 3 Replies View Related How can I handle the situation below so that the "Fatal Error" message is not shown. It would be ideal if I could supply a default class to be used. I'd prefer to not use: ini_set() to supress the errors but actually be able to "handle" the error. Code: <?php class MyClass [code].... I have a application in C++, and now I have two class. MyDialog is the class that main function launch. In MyDialog class there are four elements and when I click over theese elements, there is a MousePressEvent that launch other class, Touchpad class. So, in some moments, I have loaded two class. My question is, how can pass a value from Touchpad class to MyDialog class, when I close (destroy) Touchpad class. In a few words, is it possible to communicate values between class?View 5 Replies View Related i need to implement a Set class(like in stl) using a vector.Here is my code that doesnt work corectly: Code: #include <iostream> #include <vector> template<class T> class Set { [Code].... 
How can I implement c_str() for a string class?View 2 Replies View Related This is a really specific question, but maybe someone can help. I'm debugging someone else's code, and they call a UDPWriter and specify an IP address and port, and I'm trying to make sure this multicast traffic goes over a certain port. How can I determine which port the UDP defaults to and change it? It's confusing to me because I'm not familiar with all the layers the OS sends traffic through before it goes through the interface. Is there maybe some simpler way to tell the OS to send multicast traffic over both interfaces? i'm a bit stuck playing with the following class setup for glut. The actual issue is with the function pointers.Basically the following code is the opengl redbook cube example slightly modified.View 2 Replies View Related I'm writing a binary search tree class to insert records and I'm stuck on the following error: Code: /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/crt1.o: In function `_start': (.text+0x20): undefined reference to `main' collect2: ld returned 1 exit status Here is the code for my class: [Code]... I really don't know how to approach this, I thought that everything was working fine but I have no idea what is wrong. Also, for clarification, main() is in a different .cpp file that #includes "tree.h"
https://linux.bigresource.com/Programming-use-a-class-inside-a-struct--ezReMWo3e.html
CC-MAIN-2021-17
refinedweb
1,915
65.73
Analytics/Systems/Cluster/Revision augmentation and denormalization

This process is the third step of the Data Lake pipeline. After this step, a fully denormalized version of user, page and revision history will be available for queries in a single folder / Hive table on Hadoop. It also means it will be available for loading into the serving layer(s), like Druid. The data sources used to build this denormalized table are the revision and archive tables from the MediaWiki databases loaded at Data Loading, plus the page and user histories reconstructed in the previous step. The resulting dataset documentation can be found in Analytics/Data Lake/Schemas/Mediawiki history.

Main functionalities of this step

- Compute values that need complex joins, like revert information, byte-diff and delete time.
- Historify fields that change in time, for example: page title, page namespace, user name, user groups, etc. A field is considered "historical" when, e.g. in the case of the page title, it holds the title the page had at the time of the given event. The opposite of historical fields are "latest" fields, which hold the value that is valid as of today.
- Union and format all data (coming from 3 different schemas) into the same denormalized schema.

Performance challenges

To historify and populate all required fields, sorting of big portions of the huge dataset (like all revisions of all projects - ~3 billion items) needs to happen. This prevented the initial algorithm from running over large wikis.

Solution

The solution that worked around the performance problems was to use the "secondary sorting" trick, a classical distributed-systems pattern for sorting big datasets. The algorithm is written in Scala-Spark and uses Resilient Distributed Datasets (RDDs) to store and process big data in a distributed architecture. We tried to decouple as much as possible the code solving scalability problems from the code solving data-oriented problems.
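The "secondary sorting" trick mentioned above can be sketched outside Spark as well: partition records by the grouping key only, then sort each partition by a composite (grouping key, ordering key), so each group's rows arrive together and in order without ever sorting the whole dataset at once. The following is a minimal plain-Python illustration of the pattern (the `page_id`/`timestamp` record fields are invented for the example; in Spark the per-partition sort would happen during the shuffle, e.g. via `repartitionAndSortWithinPartitions`, rather than in memory):

```python
from operator import itemgetter

def secondary_sort(records, num_partitions=4):
    """Group records by page_id and order each group by timestamp,
    without a single global sort (the secondary-sort pattern)."""
    # 1. Hash-partition by the grouping key only.
    partitions = [[] for _ in range(num_partitions)]
    for rec in records:
        partitions[hash(rec["page_id"]) % num_partitions].append(rec)
    # 2. Sort each partition by the composite key; this is the step a
    #    framework would fold into the shuffle instead of doing in memory.
    for part in partitions:
        part.sort(key=itemgetter("page_id", "timestamp"))
    # 3. Each page's revisions now arrive contiguously and in time order.
    return partitions

revisions = [
    {"page_id": 2, "timestamp": 3, "title": "B-v2"},
    {"page_id": 1, "timestamp": 1, "title": "A-v1"},
    {"page_id": 2, "timestamp": 1, "title": "B-v1"},
    {"page_id": 1, "timestamp": 2, "title": "A-v2"},
]
for part in secondary_sort(revisions):
    for rec in part:
        print(rec["page_id"], rec["timestamp"], rec["title"])
```

Within each partition, a single linear scan can then carry the "historical" field values forward from one revision of a page to the next, which is what makes the pattern scale to billions of items.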
https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Revision_augmentation_and_denormalization
CC-MAIN-2020-10
refinedweb
313
52.39
Arshan Khanifar (11,497 points)

What's a NullPointerException error?

I'm making the hangman game myself and I'm following along with the video, but I don't know where my problem is. This is my code:

public class Game {
  private String mAnswer;
  private String mHits;
  private String mMisses;

  public Game(String answer) {
    answer = mAnswer;
    mHits = "";
    mMisses = "";
  }

  public boolean applyGuess(char letter) {
    boolean isHit = mAnswer.indexOf(letter) >= 0;
    if (isHit) {
      mHits += letter;
    } else {
      mMisses += letter;
    }
    return isHit;
  }
}

I open up the REPL and load the game, and I make the Game class, but when I use the applyGuess method I get this error: java.lang.NullPointerException
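A NullPointerException means a method was called on a reference that is null. In the code above, the likely culprit is the reversed assignment in the constructor: `answer = mAnswer;` copies the (null) field into the parameter instead of `mAnswer = answer;`, so `mAnswer` is still null when `applyGuess` calls `mAnswer.indexOf(letter)`. The same bug pattern in Python raises AttributeError on None, Python's rough analogue of a NullPointerException (this sketch is an analogy, not the original Java):

```python
class Game:
    def __init__(self, answer):
        # Bug: the assignment runs in the wrong direction, leaving the
        # field unset (None), mirroring `answer = mAnswer;` in the Java.
        self.answer = None
        answer = self.answer  # should be: self.answer = answer

    def apply_guess(self, letter):
        # Raises AttributeError because self.answer is still None,
        # just as mAnswer.indexOf(letter) raises NullPointerException.
        return self.answer.index(letter) >= 0

game = Game("treehouse")
try:
    game.apply_guess("t")
except AttributeError as err:
    print("crashed like a NullPointerException:", err)
```

Swapping the assignment so the field receives the constructor argument makes both versions work.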
https://teamtreehouse.com/community/whats-nullpointerexception-error
CC-MAIN-2022-27
refinedweb
107
55.78
Assignment: You will write a C++ program that calculates the volume of spheres or cubes based on user input. You will write C++ functions to calculate the volumes, and you will call these functions from your main() program. First your program will ask the user to enter 's' if they want to calculate the volume of a sphere, 'c' if they want the volume of a cube, or 'q' if they want to quit. If they enter 's', the program will prompt the user to enter the radius of the sphere. Your program will pass this value to your sphereVolume() function, which will return the answer so you can then display it to the user from main(). Similarly, if they enter 'c' the program will prompt the user to enter the height of the cube, then call your cubeVolume() function to find the volume. The function will return the answer, and in main(), you will display it to the user. After displaying the answer, your program should loop to the beginning, and continue like this until the user enters 'q' to quit. Remember:

volume of a cube = height^3
volume of a sphere = (4/3) * PI * r^3

My code is:

#include <iostream>
#include <cmath>
using namespace std;

const double PI = 3.14159;

// volume of a sphere = (4/3) * PI * r^3
double sphereVolume(double radius) {
    return (4.0 / 3.0) * PI * pow(radius, 3);
}

// volume of a cube = height^3
double cubeVolume(double height) {
    return pow(height, 3);
}

int main() {
    char i = ' ';
    while (i != 'q') {
        cout << "Do you want to calculate the volume of a sphere or cube (s/c)? (q for quit) ";
        cin >> i;
        if (i == 's') {
            cout << "Enter the sphere's radius: ";
            double radius;
            cin >> radius;
            cout << "The sphere's volume is " << sphereVolume(radius) << ".\n";
        } else if (i == 'c') {
            cout << "Enter the cube's height: ";
            double height;
            cin >> height;
            cout << "The cube's volume is " << cubeVolume(height) << ".\n";
        } else if (i != 'q') {
            cout << "Invalid answer. s, c, or q please.\n";
        }
    }
    return 0;
}
https://cboard.cprogramming.com/cplusplus-programming/124691-needs-help-my-cplusplus-code-calculating-volumes-functions.html
CC-MAIN-2017-30
refinedweb
335
71.24
Download Ceptah Bridge and install it. Open an existing Microsoft Project document. Click ADD-INS > JIRA > Synchronise in the ribbon or press Shift-Ctrl-S. As there are no mappings between JIRA and Microsoft Project for the plan yet, the following message will appear. Press OK. Another message will be displayed. Press OK. The following dialog will appear. Enter the JIRA URL. Make sure that the URL includes both the protocol (http or https) and the host name. If the port is different from 80, it should be specified as well. You can open your JIRA home page in a browser and copy the URL from the address bar. Enter your JIRA login and password. If you access JIRA through a proxy server, tick 'Use proxy server' and enter the proxy server parameters. Press 'Test Connection' to check the settings. Press 'OK' to save the settings. The Project Settings Dialog will appear. You will need to complete the mandatory fields marked with * to start basic synchronisation. Take notice of the Issue Key mapping. The keys of the linked issues for MS Project tasks will be stored in the Text10 field. That is how Ceptah Bridge knows which issue needs to be synchronised with a task. This field is populated automatically during import and during synchronisation as issues get created. Select the JIRA project where issues will be created during synchronisation by default; a drop-down list with the keys of the JIRA projects available to you will open in the project settings dialog. Observe the Summary mapping. Issue Summary will be synchronised with Task Name. If the task is not on the first level, the names of the tasks it is part of will be added before the child task name and delimited with the '>' symbol.
For instance, if there is a task called '2nd Phase' that has a sub-task called 'Regional Offices', which in its turn contains a task 'Deployment', the summary for the issue linked to 'Deployment' will be '2nd Phase > Regional Offices > Deployment'. When a new issue is created during synchronisation, its summary will be set to the above described composite name. If a task has already been linked to an issue and the issue summary is different from the name composed using the task names, the summary will be reset to the latter value. The arrow on the direction button points from the side (MS Project or JIRA) that is the source of truth towards the side that should be updated during synchronisation if different. With the configuration in the picture, Summary in JIRA will be updated based on the MS Project task names if required. The direction, as well as other options, can be changed to tailor the mapping to your business process. The direction buttons are only regarded during synchronisation and only if the task is linked to an issue. Otherwise, which is when issues are imported or are being created, the direction button states are ignored. This applies to all mappings. Select the default type for new issues. Select the default priority for issues. Observe the Due Date mapping. Due date is mapped to Task Finish and will be reset to the Finish value during synchronisation because the arrow is pointing to JIRA. If you flipped the direction by clicking the button, Task Finish would be copied from Due Date instead. Observe the Assignee mapping. Assignee Name is mapped to Resource Name, and the direction is from MS Project to JIRA. That means that the issue will be assigned (or reassigned) to the JIRA user whose full name is the same as the Resource Name for the synchronised task in MS Project. Note the fact that 'Do not create issue if no assignee' is ticked. Ceptah Bridge will not create issues for the tasks that have their Resources field blank. 
For the purpose of this tutorial, please make sure that the tasks that you want to be published in JIRA have their Resources field populated with valid JIRA user names as the tasks will not be synchronised otherwise. You have the minimum set of mappings configured now. Press OK to save the settings. As the synchronisation is about to begin and the product needs to be activated to run synchronisation, the activation window will pop up. Complete the form and press Activate to continue. An Internet connection will be required for successful activation. Now Ceptah Bridge will go through the MS Project tasks and prepare a list of changes to be made. The top half of the window displays the tasks and/or issues that need to be updated or created during import or synchronisation. In our case it contains a list of issues to be created in JIRA. The bottom part provides a preview of the changes to be applied to each field of the item selected in the top list. Press 'Apply changes' to start the actual synchronisation. Ceptah Bridge will go through the prepared tasks, create issues and link them to the tasks by populating the Text10 field with issue keys. The synchronised items will be marked as done and a link to a newly created issue will be displayed for each row in the top list. Add the Text10 field to your current task view (e.g. Gantt Chart) in MS Project. Observe that Text10 contains the keys of the linked issues now. Ceptah Bridge populates the Hyperlink field with a link to the associated issue in JIRA. This is defined by the 'Hyperlink' mapping configurable on the 'Secondary' project settings tab. Click the link to open the related issue in JIRA. Change a few mapped fields in MS Project. Then run synchronisation again. Ceptah Bridge will suggest making some changes in JIRA to reflect the modifications on the Microsoft Project side. Press 'Apply Changes'. After the synchronisation finishes, you can open the issues in JIRA and see that they are in sync with tasks. 
Press 'Compare'. Ceptah Bridge will not find any differences as JIRA and MS Project are in sync now.
https://www.ceptah.com/Bridge/Tutorials/GettingStarted.aspx
CC-MAIN-2019-26
refinedweb
1,017
65.12
Oleh <address@hidden> writes: Hi Oleh, > I wrote some indentation for the tabular environment. > Let me know if it's alright. > I attach a patch and a test. I like that table formatting. Nice! As you might know, as official GNU project AUCTeX only incorporates code from people that have assigned copyright to the FSF. Are you Oleh Krehel from Ukraine? If so, then you have already done that. If not, are you willing to sign a CA? Then I'll send you the request off-list. > +(defun re-search-backward->column (str) > + (save-excursion > + (if (re-search-backward str nil t) > + (current-column) > + 0))) New functions/variable should always be namespace-prefixed. But I don't see why you need that function anyhow. If the re-search-backward doesn't find a match, then we're not in a table and calling `LaTeX-indent-tabular' shouldn't have been called in the first place. > +(defun LaTeX-indent-tabular () > + (let ((beg-col (re-search-backward->column "\\\\begin{tabular}"))) It would be good if it would work with other tabular-like environments, too. Say tabular*, tabularx, tabulary, longtable, and the math tabular-alikes array, align, eqnarray, cases, ... You should be able to figure out the nearest surrounding tabular-like environment with `LaTeX-current-environment'. > + (if (looking-at "\\\\end{tabular}") > + beg-col > + (+ 2 > + (if (looking-at "\\\\\\\\") > + beg-col > + (let ((any-col (re-search-backward->column > + "\\\\begin{tabular}\\|\\\\\\\\\\|&"))) > + (if (equal "&" (match-string-no-properties 0)) Better use `string-equal' here. > + any-col > + beg-col))))))) > + > (provide 'latex) > > ;;; latex.el ends here Bye, Tassilo
https://lists.gnu.org/archive/html/auctex/2013-10/msg00001.html
CC-MAIN-2020-34
refinedweb
259
51.24
Details Description

Woden should correctly support WSDL importing, as per the WSDL 2.0 spec Part 1, section 4.2 Importing Descriptions. wsdl:import is a namespace import. The location attribute is optional and is treated as a hint only. Currently, Woden will attempt to resolve the location attribute of a wsdl:import to a document, but will not attempt to resolve a namespace-only import. The requirement on Woden is to associate an imported namespace with ALL imported documents that have it as their targetNamespace.

Extracted from Woden weekly call minutes, 9th Jan 07 (see [1] below):

Arthur: Import relates to namespace, not location, so the WSDL content for each namespace should be represented in Woden in a 'master' description document keyed off that namespace, and any retrieved documents from that namespace should be included in that master document. This master description is stored in a 'catalog' keyed by namespace. When a wsdl:import is processed, it should try to resolve against the catalog first, before trying to retrieve a document. If the wsdl:import has no location attribute, this catalog is the way to resolve the import.

John: So if a wsdl:import cannot be resolved to anything (via this catalog or externally), it is an error? At least, it results in an error if the component model contains any components from this namespace?

Arthur: Correct.

John: Currently Woden always tries to retrieve a document based on the location attribute of a wsdl:import. It does check a cache to reuse the DescriptionElement if the document has already been retrieved, but it does not do anything special to resolve a wsdl:import without a location attribute. This catalog mechanism sounds like the solution.

[1]
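The catalog mechanism described in the minutes can be sketched as a map from namespace to already-resolved descriptions, consulted before any retrieval from the optional location hint. A minimal Python illustration follows (the class and method names here are invented for the sketch; they are not Woden's actual API):

```python
class ImportCatalog:
    """Resolve a wsdl:import by target namespace first, location hint second."""

    def __init__(self):
        # namespace -> list of descriptions merged into one 'master' entry
        self._by_namespace = {}

    def register(self, namespace, description):
        # All documents sharing a targetNamespace accumulate under one key.
        self._by_namespace.setdefault(namespace, []).append(description)

    def resolve(self, namespace, location=None, fetch=None):
        # 1. The catalog is authoritative for the namespace.
        if namespace in self._by_namespace:
            return self._by_namespace[namespace]
        # 2. The location attribute is only a hint; try it if present.
        if location is not None and fetch is not None:
            doc = fetch(location)
            if doc is not None:
                self.register(namespace, doc)
                return self._by_namespace[namespace]
        # 3. Unresolved import: per the minutes, this only becomes an
        #    error if components from this namespace are referenced later.
        return None

catalog = ImportCatalog()
catalog.register("http://example.org/ns", {"interfaces": ["Stock"]})
print(catalog.resolve("http://example.org/ns"))     # resolved via catalog
print(catalog.resolve("http://example.org/other"))  # no hint, stays unresolved
```

The key design point is step ordering: the namespace lookup happens before any document retrieval, which is exactly what lets a location-less wsdl:import still resolve.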
https://issues.apache.org/jira/browse/WODEN-124
CC-MAIN-2014-15
refinedweb
285
53.81
Error Processing

Managing the behavior of InterSystems IRIS® data platform when an error occurs is called error processing or error handling. Error processing performs one or more of the following operations:

- Correcting the condition that caused the error
- Performing some action that allows execution to resume despite the error
- Diverting the flow of execution
- Logging information about the error

InterSystems IRIS supports three types of error processing, which can be used simultaneously: the TRY-CATCH mechanism, the $ZTRAP mechanism, and %Status error processing. The preferred mechanism for ObjectScript error handling is the TRY-CATCH mechanism. The $ZTRAP mechanism is provided for a more traditional style of error handling.

The TRY-CATCH Mechanism

InterSystems IRIS supports a TRY-CATCH mechanism for handling errors. With this mechanism, you can establish delimited blocks of code, each called a TRY block; if an error occurs during a TRY block, control passes to the TRY block's associated CATCH block, which contains code for handling the exception. A TRY block can also include THROW commands; each of these commands explicitly issues an exception from within a TRY block and transfers execution to a CATCH block.

To use this mechanism in its most basic form, include a TRY block within ObjectScript code. If an exception occurs within this block, the code within the associated CATCH block is then executed. The form of a TRY-CATCH block is:

TRY {
    protected statements
}
CATCH [ErrorHandle] {
    error statements
}
further statements

where:

- The TRY command identifies a block of ObjectScript code statements enclosed in curly braces. TRY takes no arguments. This block of code is protected code for structured exception handling. If an exception occurs within a TRY block, InterSystems IRIS sets the exception properties (oref.Name, oref.Code, oref.Data, and oref.Location), $ZERROR, and $ECODE, then transfers execution to an exception handler, identified by the CATCH command. This is known as throwing an exception.
- The protected statements are ObjectScript statements that are part of normal execution. (These can include calls to the THROW command; this scenario is described in the following section.)
- The CATCH command defines an exception handler, which is a block of code to execute when an exception occurs in a TRY block.
- The ErrorHandle variable is a handle to an exception object. This can be either an exception object that InterSystems IRIS has generated in response to a runtime error or an exception object explicitly issued by invoking the THROW command (described in the next section).
- The error statements are ObjectScript statements that are invoked if there is an exception.
- The further statements are ObjectScript statements that either follow execution of the protected statements if there is no exception, or follow execution of the error statements if there is an exception and control passes out of the CATCH block.

Depending on events during execution of the protected statements, one of the following occurs:

- If an error does not occur, execution continues with the further statements that appear outside the CATCH block.
- If an error does occur, control passes into the CATCH block and the error statements are executed. Execution then depends on the contents of the CATCH block: if the CATCH block contains a THROW or GOTO command, control goes directly to the specified location; if it does not, control passes out of the CATCH block and execution continues with the further statements.

Using THROW with TRY-CATCH

InterSystems IRIS issues an implicit exception when a runtime error occurs. To issue an explicit exception, the THROW command is available. The THROW command transfers execution from the TRY block to the CATCH exception handler.
The THROW command has the syntax:

THROW expression

where expression is an instance of a class that inherits from the %Exception.AbstractException class, which InterSystems IRIS provides for exception handling. For more information on %Exception.AbstractException, see the following section.

The form of the TRY-CATCH block with a THROW is:

TRY {
    protected statements
    THROW expression
    protected statements
}
CATCH exception {
    error statements
}
further statements

where the THROW command explicitly issues an exception. The other elements of the TRY-CATCH block are as described in the previous section.

The effect of THROW depends on where the throw occurs and the argument of THROW:

- A THROW within a TRY block passes control to the CATCH block.
- A THROW within a CATCH block passes control up the execution stack to the next error handler. If the exception is a %Exception.SystemException object, the next error handler can be any type (CATCH or traditional); otherwise there must be a CATCH to handle the exception or a <NOCATCH> error will be thrown.
- If control passes into a CATCH block because of a THROW with an argument, the ErrorHandle contains the value from the argument. If control passes into a CATCH block because of a system error, the ErrorHandle is a %Exception.SystemException object. If no ErrorHandle is specified, there is no indication of why control has passed into the CATCH block.

For example, suppose there is code to divide two numbers:

div(num,div) public {
    TRY {
        SET ans=num/div
    }
    CATCH errobj {
        IF errobj.Name="<DIVIDE>" {
            SET ans=0
        } ELSE {
            THROW errobj
        }
    }
    QUIT ans
}

If a divide-by-zero error happens, the code is specifically designed to return zero as the result. For any other error, the THROW sends the error on up the stack to the next error handler.

Using $$$ThrowOnError and $$$ThrowStatus Macros

InterSystems IRIS provides macros for use with exception handling.
When invoked, these macros throw an exception object to the CATCH block. The following example invokes the $$$ThrowOnError() macro when an error status is returned by the %Prepare() method:

#Include %occStatus
TRY {
    SET myquery = "SELECT TOP 5 Name,Hipness,DOB FROM Sample.Person"
    SET tStatement = ##class(%SQL.Statement).%New()
    SET status = tStatement.%Prepare(myquery)
    $$$ThrowOnError(status)
    WRITE "%Prepare succeeded",!
    RETURN
}
CATCH sc {
    WRITE "In Catch block",!
    WRITE "error code: ",sc.Code,!
    WRITE "error location: ",sc.Location,!
    WRITE "error data:",$LISTGET(sc.Data,2),!
    RETURN
}

The following example invokes $$$ThrowStatus after testing the value of the error status returned by the %Prepare() method:

#Include %occStatus
TRY {
    SET myquery = "SELECT TOP 5 Name,Hipness,DOB FROM Sample.Person"
    SET tStatement = ##class(%SQL.Statement).%New()
    SET status = tStatement.%Prepare(myquery)
    IF ($System.Status.IsError(status)) {
        WRITE "%Prepare failed",!
        $$$ThrowStatus(status)
    } ELSE {
        WRITE "%Prepare succeeded",!
        RETURN
    }
}
CATCH sc {
    WRITE "In Catch block",!
    WRITE "error code: ",sc.Code,!
    WRITE "error location: ",sc.Location,!
    WRITE "error data:",$LISTGET(sc.Data,2),!
    RETURN
}

These system-supplied macros are further described in the "ObjectScript Macros and the Macro Preprocessor" chapter of this book.

Using the %Exception.SystemException and %Exception.AbstractException Classes

InterSystems IRIS provides the %Exception.SystemException and %Exception.AbstractException classes for use with exception handling. %Exception.SystemException inherits from the %Exception.AbstractException class and is used for system errors. For custom errors, create a class that inherits from %Exception.AbstractException. %Exception.AbstractException contains properties such as the name of the error and the location at which it occurred.
When a system error is caught within a TRY block, the system creates a new instance of the %Exception.SystemException class and places error information in that instance. When throwing a custom exception, the application programmer is responsible for populating the object with error information. An exception object has the following properties:

- Name — The error name, such as <UNDEFINED>
- Code — The error number
- Location — The label+offset^routine location of the error
- Data — Any extra data reported by the error, such as the name of the item causing the error

Other Considerations with TRY-CATCH

The following describe conditions that may arise when using a TRY-CATCH block.

QUIT within a TRY-CATCH Block

A QUIT command within a TRY or CATCH block passes control out of the block to the next statement after the TRY-CATCH as a whole.

TRY-CATCH and the Execution Stack

The TRY block does not introduce a new level in the execution stack. This means that it is not a scope boundary for NEW commands. The error statements execute at the same level as that of the error. This can produce unexpected results if there are DO commands within the protected statements and the DO target is also within the protected statements. In such cases, the $ESTACK special variable can provide information about the relative execution levels.

Using TRY-CATCH with Traditional Error Processing

TRY-CATCH error processing is compatible with $ZTRAP error traps used at different levels in the execution stack. The exception is that $ZTRAP may not be used within the protected statements of a TRY clause. User-defined errors with a THROW are limited to TRY-CATCH only. User-defined errors with the ZTRAP command may be used with any type of error processing.

%Status Error Processing

Many of the methods in the InterSystems IRIS class library return success or failure information via the %Status data type.
For example, the %Save() method, used to save an instance of a %Persistent object, returns a %Status value indicating whether or not the object was saved. Successful method execution returns a %Status of 1. Failed method execution returns %Status as an encoded string containing the error status and one or more error codes and text messages. Status text messages are localized for the language of your locale. You can use %SYSTEM.Status class methods to inspect and manipulate %Status values. InterSystems IRIS provides several options for displaying (writing) the %Status encoded string in different formats. For further details, refer to "Display (Write) Commands" in the "Commands" chapter of this manual.

In the following example, the %Prepare() fails because of an error in the myquery text: "ZOP" should be "TOP". This error is detected by the IsError() method, and other %SYSTEM.Status methods display the error code and text:

  SET myquery = "SELECT ZOP 5 Name,DOB FROM Sample.Person"
  SET tStatement = ##class(%SQL.Statement).%New()
  SET status = tStatement.%Prepare(myquery)
  IF ($System.Status.IsError(status)) {
      WRITE "%Prepare failed",!
      DO StatusError() }
  ELSE {
      WRITE "%Prepare succeeded",!
      RETURN }
StatusError()
  WRITE "Error #",$System.Status.GetErrorCodes(status),!
  WRITE $System.Status.GetOneStatusText(status,1),!
  WRITE "end of error display"
  QUIT

The following example is the same as the previous, except that the status error is detected by the $$$ISERR() macro of the %occStatus include file. $$$ISERR() (and its inverse, $$$ISOK()) checks whether or not %Status=1.
The error code is returned by the $$$GETERRORCODE() macro:

#Include %occStatus
  SET myquery = "SELECT ZOP 5 Name,DOB FROM Sample.Person"
  SET tStatement = ##class(%SQL.Statement).%New()
  SET status = tStatement.%Prepare(myquery)
  IF $$$ISERR(status) {
      WRITE "%Prepare failed",!
      DO StatusError() }
  ELSE {
      WRITE "%Prepare succeeded",!
      RETURN }
StatusError()
  WRITE "Error #",$$$GETERRORCODE(status),!
  WRITE $System.Status.GetOneStatusText(status,1),!
  WRITE "end of error display"
  QUIT

These system-supplied macros are further described in the "ObjectScript Macros and the Macro Preprocessor" chapter of this book.

Some methods, such as %New(), generate, but do not return, a %Status value. %New() either returns an oref to an instance of the class upon success, or the null string upon failure. You can retrieve the status value for methods of this type by accessing the %objlasterror system variable, as shown in the following example:

  SET session = ##class(%CSP.Session).%New()
  IF session="" {
      WRITE "session oref not created",!
      WRITE "%New error is ",!,$System.Status.GetErrorText(%objlasterror),! }
  ELSE {
      WRITE "session oref is ",session,! }

For more information, refer to the %SYSTEM.Status class.

Creating %Status Errors

You can invoke system-defined %Status errors from your own methods by using the Error() method. You specify the error number that corresponds to the error message you wish to return.

  WRITE "Here my method generates an error",!
  SET status = $System.Status.Error(20)
  WRITE $System.Status.GetErrorText(status),!

You can include %1, %2, and %3 parameters in the returned error message, as shown in the following example:

  WRITE "Here my method generates an error",!
  SET status = $System.Status.Error(214,"3","^fred","BedrockCode")
  WRITE $System.Status.GetErrorText(status),!

You can localize the error message to display in your preferred language:

  SET status = $System.Status.Error(30)
  WRITE "In English:",!
  WRITE $System.Status.GetOneStatusText(status,1,"en"),!
  WRITE "In French:",!
  WRITE $System.Status.GetOneStatusText(status,1,"fr"),!

For a list of error codes and messages (in English), refer to the "General Error Messages" chapter of the InterSystems IRIS Error Reference. You can use the generic error codes 83 and 5001 to specify a custom message that does not correspond to any of the general error messages. You can use the AppendStatus() method to create a list of multiple error messages. Then you can use GetOneErrorText() or GetOneStatusText() to retrieve individual error messages by their position in this list:

CreateCustomErrors
  SET st1 = $System.Status.Error(83,"my unique error")
  SET st2 = $System.Status.Error(5001,"my unique error")
  SET allstatus = $System.Status.AppendStatus(st1,st2)
DisplayErrors
  WRITE "All together:",!
  WRITE $System.Status.GetErrorText(allstatus),!!
  WRITE "One by one",!
  WRITE "First error format:",!
  WRITE $System.Status.GetOneStatusText(allstatus,1),!
  WRITE "Second error format:",!
  WRITE $System.Status.GetOneStatusText(allstatus,2),!

%SYSTEM.Error

The %SYSTEM.Error class is a generic error object. It can be created from a %Status error, from an exception object, a $ZERROR error, or an SQLCODE error. You can use %SYSTEM.Error class methods to convert a %Status to an exception, or to convert an exception to a %Status.

Traditional Error Processing

This section describes various aspects of traditional error processing with InterSystems IRIS. These include:

How Traditional Error Processing Works
Handling Errors with $ZTRAP
Handling Errors in an Error Handler
Processing Errors from the Terminal Prompt

How Traditional Error Processing Works

For traditional error processing, InterSystems IRIS provides the functionality so that your application can have an error handler.
An error handler processes any error that may occur while the application is running. A special variable specifies the ObjectScript commands to be executed when an error occurs. These commands may handle the error directly or may call a routine to handle it. To set up an error handler, the basic process is:

1. Create one or more routines to perform error processing.
2. Write code to perform error processing. This can be general code for the entire application or specific processing for specific error conditions. This allows you to perform customized error handling for each particular part of an application.
3. Establish one or more error handlers within your application, each using specific appropriate error processing.

If an error occurs and no error handler has been established, the behavior depends on how the InterSystems IRIS session was started:

If you signed onto InterSystems IRIS at the Terminal prompt and have not set an error trap, InterSystems IRIS displays an error message on the principal device and returns the Terminal prompt with the program stack intact. The programmer can later resume execution of the program.

If you invoked InterSystems IRIS in Application Mode and have not set an error trap, InterSystems IRIS displays an error message on the principal device and executes a HALT command.

Internal Error-Trapping Behavior

To get the full benefit of InterSystems IRIS error processing and to understand the scoping issues surrounding the $ZTRAP special variable, it is helpful to understand how InterSystems IRIS transfers control from one routine to another. InterSystems IRIS builds a data structure called a "context frame" each time any of the following occurs:

A routine calls another routine with a DO command. (This kind of frame is also known as a "DO frame.")
An XECUTE command argument causes ObjectScript code to execute. (This kind of frame is also known as an "XECUTE frame.")
A user-defined function is executed.
The frame is built on the call stack, one of the private data structures in the address space of your process. InterSystems IRIS stores the following elements in the frame for a routine:

The value of the $ZTRAP special variable (if any)
The position to return to from the subroutine

When routine A calls routine B with DO ^B, InterSystems IRIS builds a DO frame on the call stack to preserve the context of A. When routine B calls routine C, InterSystems IRIS adds a DO frame to the call stack to preserve the context of B, and so forth. If routine A in the figure above is invoked at the Terminal prompt using the DO command, then an extra DO frame, not described in the figure, exists at the base of the call stack.

Current Context Level

You can use the following to return information about the current context level:

The $STACK special variable contains the current relative stack level.
The $ESTACK special variable contains the current stack level. It can be initialized to 0 (level zero) at any user-specified point.
The $STACK function returns information about the current context and contexts that have been saved on the call stack.

The $STACK Special Variable

The $STACK special variable contains the number of frames currently saved on the call stack for your process. The $STACK value is essentially the context level number (zero-based) of the currently executing context. Therefore, when an image is started, but before any commands are processed, the value of $STACK is 0. See the $STACK special variable in the ObjectScript Reference for details.

The $ESTACK Special Variable

The $ESTACK special variable is similar to the $STACK special variable, but is more useful in error handling because you can reset it to 0 (and save its previous value) with the NEW command. Thus, a process can reset $ESTACK in a particular context to mark it as a $ESTACK level 0 context. Later, if an error occurs, error handlers can test the value of $ESTACK to unwind the call stack back to that context.
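The following sketch (label names are illustrative) shows a routine marking its own context with NEW $ESTACK so that an error handler can later report the level it unwound to:

```objectscript
Main
    NEW $ESTACK             ; $ESTACK is now 0 at this context level
    SET $ZTRAP="ErrSub"
    DO Deep
    QUIT
Deep
    DO Deeper               ; each DO adds a frame to the call stack
    QUIT
Deeper
    WRITE 1/0               ; division-by-zero error, deep in the stack
    QUIT
ErrSub
    SET $ZTRAP=""
    ; the trap unwound the stack to the level where $ZTRAP was set,
    ; which is the same level where NEW $ESTACK marked level 0
    WRITE "unwound to $ESTACK level ",$ESTACK,!
    QUIT
```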
See the $ESTACK special variable in the ObjectScript Reference for details.

The $STACK Function

The $STACK function returns information about the current context and contexts that have been saved on the call stack. For each context, the $STACK function provides the following information:

The type of context (DO, XECUTE, or user-defined function)
The entry reference and command number of the last command processed in the context
The source routine line or XECUTE string that contains the last command processed in the context
The $ECODE value of any error that occurred in the context (available only during error processing, when $ECODE is non-null)

When an error occurs, all context information is immediately saved on your process error stack. The context information is then accessible by the $STACK function until the value of $ECODE is cleared by an error handler. In other words, while the value of $ECODE is non-null, the $STACK function returns information about a context saved on the error stack rather than an active context at the same specified context level. See the $STACK function in the ObjectScript Reference for details.

Error Codes

When an error occurs, InterSystems IRIS sets the $ZERROR and $ECODE special variables to a value describing the error.

$ZERROR Value

InterSystems IRIS sets $ZERROR to a string containing:

The InterSystems IRIS error code, enclosed in angle brackets.
The label, offset, and routine name where the error occurred.
(For some errors) Additional information, such as the name of the item that caused the error.

The AsSystemError() method of the %Exception.SystemException class returns the same values in the same format as $ZERROR. The following examples show the type of messages to which $ZERROR is set when InterSystems IRIS encounters an error. In the following example, the undefined local variable abc is invoked at line offset 2 from label PrintResult of routine MyTest.
$ZERROR contains:

<UNDEFINED>PrintResult+2^MyTest *abc

The following error occurred when a non-existent class is invoked at line offset 3:

<CLASS DOES NOT EXIST>PrintResult+3^MyTest *%SYSTEM.XXQL

The following error occurred when a non-existent method of an existing class is invoked at line offset 4:

<METHOD DOES NOT EXIST>PrintResult+4^MyTest *BadMethod,%SYSTEM.SQL

You can also explicitly set the special variable $ZERROR to any string up to 128 characters; for example:

  SET $ZERROR="Any String"

The $ZERROR value is intended for use immediately following an error. Because a $ZERROR value may not be preserved across routine calls, users who wish to preserve a $ZERROR value for later use should copy it to a variable. It is strongly recommended that users set $ZERROR to the null string ("") immediately after use. See the $ZERROR special variable in the ObjectScript Reference for details. For further information on handling $ZERROR errors, refer to the %SYSTEM.Error class methods in the InterSystems Class Reference.

$ECODE Value

When an error occurs, InterSystems IRIS sets $ECODE to a comma-surrounded string containing the ANSI Standard error code that corresponds to the error. For example, when you make a reference to an undefined global variable, InterSystems IRIS sets $ECODE to the following string:

,M7,

If the error has no corresponding ANSI Standard error code, InterSystems IRIS sets $ECODE to a comma-surrounded string containing the InterSystems IRIS error code preceded by the letter Z. For example, if a process has exhausted its symbol table space, InterSystems IRIS places the error code <STORE> in the $ZERROR special variable and sets $ECODE to this string:

,ZSTORE,

After an error occurs, your error handlers can test for specific error codes by examining the value of the $ZERROR special variable or the $ECODE special variable.
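As a sketch of such a test (label names are illustrative), an error handler can use the contains operator ([) to branch on the error code embedded in $ZERROR, and $PIECE to pull out the item name after the asterisk:

```objectscript
    SET $ZTRAP="ErrHand"
    WRITE undefvar            ; raises an <UNDEFINED> error
    QUIT
ErrHand
    ; copy $ZERROR immediately, then clear it, as recommended above
    SET $ZTRAP="",err=$ZERROR,$ZERROR=""
    IF err["<UNDEFINED>" {
        WRITE "undefined variable: ",$PIECE(err,"*",2),! }
    ELSE {
        WRITE "some other error: ",err,! }
    QUIT
```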
Error handlers should examine the $ZERROR special variable rather than the $ECODE special variable for specific errors. See the $ECODE special variable in the ObjectScript Reference for details.

Handling Errors with $ZTRAP

To handle errors with $ZTRAP, you set the $ZTRAP special variable to an entry reference, specified as a quoted string, that names the location to which control is to be transferred when an error occurs. You then write $ZTRAP code at that location.

Setting $ZTRAP in a Procedure

Within a procedure, you can only set the $ZTRAP special variable to a line label (private label) within that procedure. You cannot set $ZTRAP to any external routine from within a procedure block. When displaying the $ZTRAP value, InterSystems IRIS does not return the name of the private label. Instead, it returns the offset from the top of the procedure where that private label is located. For further details see the $ZTRAP special variable in the ObjectScript Reference.

Setting $ZTRAP in a Routine

Within a routine, you can set the $ZTRAP special variable to a label in the current routine, to an external routine, or to a label within an external routine. You can only reference an external routine if the routine is not procedure block code. The following example establishes LogErr^ErrRou as the error handler. When an error occurs, InterSystems IRIS executes the code found at the LogErr label within the ^ErrRou routine:

  SET $ZTRAP="LogErr^ErrRou"

When displaying the $ZTRAP value, InterSystems IRIS displays the label name and (when appropriate) the routine name. A label name must be unique within its first 31 characters. Label names and routine names are case-sensitive. Within a routine, $ZTRAP has three forms:

SET $ZTRAP="location"
SET $ZTRAP="*location", which executes in the context in which the error occurred that invoked it.
SET $ZTRAP="^%ETN", which executes the system-supplied error routine %ETN in the context in which the error occurred that invoked it.

You cannot execute ^%ETN (or any external routine) from a procedure block. Either specify that the code is [Not ProcedureBlock], or use a routine such as the following, which invokes the %ETN entrypoint BACK^%ETN:

ClassMethod MyTest() as %Status
{
    SET $ZTRAP="Error"
    SET ans = 5/0  /* divide-by-zero error */
    WRITE "Exiting ##class(User.A).MyTest()",!
    QUIT ans
Error
    SET err=$ZERROR
    SET $ZTRAP=""
    DO BACK^%ETN
    QUIT $$$ERROR($$$CacheError,err)
}

For more information on %ETN and its entrypoints, see Logging Application Errors. For details on its use with $ZTRAP, see SET $ZTRAP=^%ETN. For further details see the $ZTRAP special variable in the ObjectScript Reference.

Writing $ZTRAP Code

The location that $ZTRAP points to can perform a variety of operations to display, log, and/or correct an error. Regardless of what error handling operations you wish to perform, the $ZTRAP code should begin by performing two tasks:

1. Set $ZTRAP to another value, either the location of an error handler or the empty string (""). (You must use SET, because you cannot KILL $ZTRAP.) This is done because if another error occurs during error handling, that error would invoke the current $ZTRAP error handler. If the current error handler is the error handler you are in, this would result in an infinite loop.

2. Set a variable to $ZERROR. If you wish to reference a $ZERROR value later in your code, refer to this variable, not $ZERROR itself. This is done because $ZERROR contains the most-recent error, and a $ZERROR value may not be preserved across routine calls, including internal routine calls. If another error occurs during error handling, the $ZERROR value would be overwritten by that new error. It is strongly recommended that users set $ZERROR to the null string ("") immediately after use.
The following example shows these essential $ZTRAP code statements:

MyErrHandler
    SET $ZTRAP=""
    SET err=$ZERROR
    /* error handling code using err as the error to be handled */

Using $ZTRAP

Each routine in an application can establish its own $ZTRAP error handler by setting $ZTRAP. When an error trap occurs, InterSystems IRIS takes the following steps:

1. Sets the special variable $ZERROR to an error message.
2. Resets the program stack to the state it was in when the error trap was set (when the SET $ZTRAP= was executed). In other words, the system removes all entries on the stack until it reaches the point at which the error trap was set. (The program stack is not reset if $ZTRAP was set to a string beginning with an asterisk (*).)
3. Resumes the program at the location specified by the value of $ZTRAP. The value of $ZTRAP remains the same.

Note: You can explicitly set the variable $ZERROR to any string up to 128 characters. Usually you would set $ZERROR to a null string, but you can set $ZERROR to a value.

Unstacking NEW Commands With Error Traps

When an error trap occurs and the program stack entries are removed, InterSystems IRIS also removes all stacked NEW commands back to the subroutine level containing the SET $ZTRAP=. However, all NEW commands executed at that subroutine level remain, regardless of whether they were added to the stack before or after $ZTRAP was set. For example:

Main
    SET A=1,B=2,C=3,D=4,E=5,F=6
    NEW A,B
    SET $ZTRAP="ErrSub"
    NEW C,D
    DO Sub1
    RETURN
Sub1()
    NEW E,F
    WRITE 6/0  // Error: division by zero
    RETURN
ErrSub()
    WRITE !,"Error is: ",$ZERROR
    WRITE
    RETURN

When the error in Sub1 activates the error trap, the former values of E and F stacked in Sub1 are removed, but A, B, C, and D remain stacked.

$ZTRAP Flow of Control Options

After a $ZTRAP error handler has been invoked to handle an error and has performed any cleanup or error logging operations, the error handler has three flow control options:

Handle the error and continue the application.
Pass control to another error handler.
Terminate the application.

Continuing the Application

After a $ZTRAP error handler has handled an error, you can continue the application by issuing a GOTO. You do not have to clear the values of the $ZERROR or $ECODE special variables to continue normal application processing. However, you should clear $ZTRAP (by setting it to the empty string) to avoid a possible infinite error handling loop if another error occurs. See "Handling Errors in an Error Handler" for more information. After completing error processing, your $ZTRAP error handler can use the GOTO command to transfer control to a predetermined restart or continuation point in your application to resume normal application processing.

When an error handler has handled an error, the $ZERROR special variable is set to a value. This value is not necessarily cleared when the error handler completes. Some routines reset $ZERROR to the null string. The $ZERROR value is overwritten when the next error occurs that invokes an error handler. For this reason, the $ZERROR value should only be accessed within the context of an error handler. If you wish to preserve this value, copy it to a variable and reference that variable, not $ZERROR itself. Accessing $ZERROR in any other context does not produce reliable results.

Passing Control to Another Error Handler

If the error condition cannot be corrected by a $ZTRAP error handler, you can use a special form of the ZTRAP command to transfer control to another error handler. The command ZTRAP $ZERROR re-signals the error condition and causes InterSystems IRIS to unwind the call stack to the next call stack level with an error handler. After InterSystems IRIS has unwound the call stack to the level of the next error handler, processing continues in that error handler. The next error handler may have been set by a $ZTRAP. The following figure shows the flow of control in $ZTRAP error handling routines.
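A sketch of this re-signaling pattern (label names are illustrative): an inner handler logs the error, then issues ZTRAP $ZERROR so the caller's handler also sees it:

```objectscript
Caller
    SET $ZTRAP="OuterErr"
    DO Worker
    QUIT
Worker
    SET $ZTRAP="InnerErr"
    WRITE 1/0               ; division-by-zero error
    QUIT
InnerErr
    SET $ZTRAP=""           ; prevent an error-handling loop
    WRITE "inner handler saw: ",$ZERROR,!
    ZTRAP $ZERROR           ; re-signal; unwinds to the Caller level
OuterErr
    SET $ZTRAP=""
    WRITE "outer handler saw: ",$ZERROR,!
    QUIT
```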
Handling Errors in a $ZTRAP Error Handler

When an error occurs in an error handler, the flow of execution depends on the type of error handler that is currently executing. If the new error occurs in a $ZTRAP error handler, InterSystems IRIS passes control to the first error handler it encounters, unwinding the call stack only if necessary. Therefore, if the $ZTRAP error handler does not clear $ZTRAP at the current stack level and another error subsequently occurs in the error handler, the $ZTRAP handler is invoked again at the same context level, causing an infinite loop. To avoid this, set $ZTRAP to another value at the beginning of the error handler.

Error Information in the $ZERROR and $ECODE Special Variables

If another error occurs during the handling of the original error, information about the second error replaces the information about the original error in the $ZERROR special variable. However, InterSystems IRIS appends the new information to the $ECODE special variable. Depending on the context level of the second error, InterSystems IRIS may append the new information to the process error stack as well. If the existing value of the $ECODE special variable is non-null, InterSystems IRIS appends the code for the new error to the current $ECODE value as a new comma piece. Error codes accrue in the $ECODE special variable until either of the following occurs:

You explicitly clear $ECODE, for example: SET $ECODE = ""

The length of $ECODE exceeds the maximum string length. Then, the next new error code replaces the current list of error codes in $ECODE.

See the $ECODE and $ZERROR special variables in the ObjectScript Reference for details. For further information on handling $ZERROR errors, refer to the %SYSTEM.Error class methods in the InterSystems Class Reference.

Forcing an Error

You can set the $ECODE special variable or use the ZTRAP command to cause an error to occur under controlled circumstances.
Setting $ECODE

You can set the $ECODE special variable to any non-null string to cause an error to occur. When your routine sets $ECODE to a non-null string, InterSystems IRIS sets $ECODE to the specified string and then generates an error condition. The $ZERROR special variable in this circumstance is set with the following error text:

<ECODETRAP>

Control then passes to error handlers as it does for normal application-level errors. You can add logic to your error handlers to check for errors caused by setting $ECODE. Your error handler can check $ZERROR for an <ECODETRAP> error (for example, $ZE["ECODETRAP"), or your error handler can check $ECODE for a particular string value that you choose.

Creating Application-Specific Errors

Keep in mind that the ANSI Standard format for $ECODE is a comma-surrounded list of one or more error codes:

Errors prefixed with "Z" are implementation-specific errors.
Errors prefixed with "U" are application-specific errors.

You can create your own error codes following the ANSI Standard by having the error handler set $ECODE to the appropriate error message prefixed with a "U":

  SET $ECODE=",Upassword expired,"

Processing Errors at the Terminal Prompt

When you generate an error after you sign onto InterSystems IRIS at the Terminal prompt with no error handler set, InterSystems IRIS takes the following steps when an error occurs in a line of code you enter:

InterSystems IRIS displays an error message on the process's principal device.
The process breaks at the call stack level where the error occurred.
The process returns the Terminal prompt.

Understanding Error Message Formats

As an error message, InterSystems IRIS displays three lines:

The entire line of source code in which the error occurred.
Below the source code line, a caret (^) points to the command that caused the error.
A line containing the contents of $ZERROR.
In the following Terminal prompt example, the second SET command has an undefined local variable error:

USER>WRITE "hello",! SET x="world" SET y=zzz WRITE x,!
hello
WRITE "hello",! SET x="world" SET y=zzz WRITE x,!
                              ^
<UNDEFINED> *zzz
USER>

In the following example, the same line of code is in a program named mytest executed from the Terminal prompt:

USER>DO ^mytest
hello
WRITE "hello",! SET x="world" SET y=zzz WRITE x,!
                              ^
<UNDEFINED>WriteOut+2^mytest *zzz
USER 2d0>

In this case, $ZERROR indicates that the error occurred in mytest at an offset of 2 lines from the label named WriteOut. Note that the prompt has changed, indicating that a new program stack level has been initiated.

Understanding the Terminal Prompt

By default, the Terminal prompt specifies the current namespace. If one or more transactions are open, it also includes the $TLEVEL transaction level count. This default prompt can be configured with different contents, as described in the ZNSPACE command documentation. The following examples show the defaults:

USER>
TL1:USER>

If an error occurs during the execution of a routine, the system saves the current program stack and initiates a new stack frame. An extended prompt appears, such as:

USER 2d0>

This extended prompt indicates that there are two entries on the program stack, the last of which is an invoking of DO (as indicated by the "d"). Note that this error placed two entries on the program stack. The next DO execution error would result in the prompt:

USER 4d0>

For a more detailed explanation, refer to Terminal Prompt Shows Program Stack Information in the "Command-line Routine Debugging" chapter.

Recovering from the Error

You can then take any of the following steps:

Issue commands from the Terminal prompt
View and modify your variables and global data
Edit the routine containing the error or any other routine
Execute other routines

Any of these steps can even cause additional errors.
After you have taken these steps, your most likely course is to either resume execution or to delete all or part of the program stack.

Resuming Execution at the Next Sequential Command

You can resume execution at the next command after the command that caused the error by entering an argumentless GOTO from the Terminal prompt:

USER>DO ^mytest
hello
WRITE "hello",! SET x="world" SET y=zzz WRITE x,!
                              ^
<UNDEFINED>WriteOut+2^mytest *zzz
USER 2d0>GOTO
world
USER>

Resuming Execution at Another Line

You can resume execution at another line by issuing a GOTO with a label argument from the Terminal prompt:

USER 2d0>GOTO ErrSect

Deleting the Program Stack

You can delete the entire program stack by issuing an argumentless QUIT command from the Terminal prompt:

USER 4d0>QUIT
USER>

Deleting Part of the Program Stack

You can issue QUIT n with an integer argument from the Terminal prompt to delete the last (or last several) program stack entries:

USER 8d0>QUIT 1
USER 7E0>QUIT 3
USER 4d0>QUIT 1
USER 3E0>QUIT 1
USER 2d0>QUIT 1
USER 1S0>QUIT 1
USER>

Note that in this example, because the program error created two program stack entries, you must be on a "d" stack entry to resume execution by issuing a GOTO. Depending on what else has occurred, a "d" stack entry may be even-numbered (USER 2d0>) or odd-numbered (USER 3d0>). By using NEW $ESTACK, you can quit to a specified program stack level:

USER 4d0>NEW $ESTACK
USER 5E1> /* more errors create more stack frames */
USER 11d7>QUIT $ESTACK
USER 4d0>

Note that the NEW $ESTACK command adds one entry to the program stack.

Logging Application Errors

InterSystems IRIS provides several ways to log an exception to an application error log:

The %ETN utility logs errors. It can be invoked as ^%ETN or using one of its entrypoints: FORE^%ETN, BACK^%ETN, or LOG^%ETN.
The %Exception.AbstractException.Log() method.

Using %ETN to Log Application Errors

The %ETN utility logs an exception to the application error log and then exits.
You can invoke %ETN (or one of its entrypoints) as a utility:

  DO ^%ETN

Or you can set the $ZTRAP special variable to %ETN (or one of its entrypoints):

  SET $ZTRAP="^%ETN"

You can specify %ETN or one of its entry points:

FORE^%ETN (foreground) logs an exception to the standard application error log, and then exits with a HALT. This invokes a rollback operation. This is the same operation as %ETN.
BACK^%ETN (background) logs an exception to the standard application error log, and then exits with a QUIT. This does not invoke a rollback operation.
LOG^%ETN logs an exception to the standard application error log, and then exits with a QUIT. This does not invoke a rollback operation.

The exception can be a standard %Exception.SystemException, or a user-defined exception. To define an exception, set $ZERROR to a meaningful value prior to calling LOG^%ETN; this value will be used as the Error Message field in the log entry. You can also pass a user-defined exception directly to LOG^%ETN:

  DO LOG^%ETN("This is my custom exception")

If you set $ZERROR to the null string (SET $ZERROR="") LOG^%ETN logs a <LOG ENTRY> error. If you set $ZERROR to <INTERRUPT> (SET $ZERROR="<INTERRUPT>") LOG^%ETN logs an <INTERRUPT LOG> error. LOG^%ETN returns a %List structure with two elements: the $HOROLOG date and the Error Number. The following example uses the recommended coding practice of immediately copying $ZERROR into a variable:

  SET err=$ZERROR
  /* error handling code */
  SET rtn = $$LOG^%ETN(err)
  WRITE "logged error date: ",$LIST(rtn,1),!
  WRITE "logged error number: ",$LIST(rtn,2)

Calling LOG^%ETN or BACK^%ETN automatically increases the available process memory, does the work, and then restores the original $ZSTORAGE value.
However, if you call LOG^%ETN or BACK^%ETN following a <STORE> error, restoring the original $ZSTORAGE value might trigger another <STORE> error. For this reason, the system retains the increased available memory when these %ETN entry points are invoked for a <STORE> error.

Using the Management Portal to View Application Error Logs

From the Management Portal, select System Operation, then System Logs, then Application Error Log. This displays the Namespace list of those namespaces that have application error logs. You can use the header to sort the list.

Select Dates for a namespace to display the dates for which there are application error logs, along with the number of errors recorded for each date. You can use the headers to sort the list and Filter to match a string against the Date and Quantity values.

Select Errors for a date to display the errors for that date. Error # integers are assigned to errors in chronological order; Error # *COM is a user comment applied to all errors for that date. You can use the headers to sort the list and Filter to match a string.

Select Details for an error to open an Error Details window that displays state information at the time of the error, including special variable values and Stacks details. You can specify a user comment for an individual error.

The Namespaces, Dates, and Errors listings include checkboxes that allow you to delete the error log for the corresponding error or errors. Check what you wish to delete, then select the Delete button.

Using %ERN to View Application Error Logs

The %ERN utility examines application errors recorded by the %ETN error trap utility. %ERN returns all errors logged for the current namespace. Take the following steps to use the %ERN utility.

At the Terminal prompt, enter DO ^%ERN. The name of the utility is case-sensitive; responses to prompts within the utility are not. At any prompt you may enter ? to list syntax options for the prompt, or ?L to list all of the defined values. You may use the Enter key to exit to the previous level.

For Date: at this prompt, enter the date on which the errors occurred. You can use any date format that is accepted by the %DATE utility; if you omit the year, the current year is assumed. The utility returns the date and the number of errors logged for that date. Alternatively, you can retrieve lists of errors from this prompt using the following syntax:

- ?L lists all dates on which errors occurred, most recent first, with the number of errors logged. The (T) column indicates how many days ago, with (T) = today and (T-7) = seven days ago. If a user comment is defined for all of the day's errors, it is shown in square brackets. After listing, %ERN re-displays the For Date: prompt; you can enter a date or T-n.
- [text lists all errors that contain the substring text.
- <text lists all errors that contain the substring text in the error name component.
- ^text lists all errors that contain the substring text in the error location component. After listing, %ERN re-displays the For Date: prompt; enter a date.

Error: at this prompt, supply the integer number of the error you want to examine: 1 for the first error of the day, 2 for the second, and so on. Or enter a question mark (?) for a list of available responses. The utility displays the following information about the error: the error name, error location, time, system variable values, and the line of code executed at the time of the error. You can specify an * at the Error: prompt for comments; * displays the current user-specified comment applied to all of the errors of that day, and then prompts you to supply a new comment to replace the existing comment for all of these errors.

Variable: at this prompt you can specify numerous options for information about variables.
If you specify the name of a local variable (unsubscripted or subscripted), %ERN returns the stack level and value of that variable (if defined), and all its descendent nodes. You cannot specify a global variable, process-private variable, or special system variable. You may enter ? to list other syntax options for the Variable: prompt.

- *A: when specified at the Variable: prompt, displays the Device: prompt; press Return to display results on the current Terminal device.
- *V: when specified at the Variable: prompt, displays the Variable(s): prompt. At this prompt, specify an unsubscripted local variable or a comma-separated list of unsubscripted local variables; subscripted variables are rejected. %ERN then displays the Device: prompt; press Return to display results on the current Terminal device. %ERN returns the value of each specified variable (if defined) and all its descendent nodes.
- *L: when specified at the Variable: prompt, loads the variables into the current partition. It loads all private variables (as public) and then all public variables that don't conflict with the loaded private variables.
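Putting the pieces together, the following is a minimal sketch (the routine name and the failing line are hypothetical) of trapping an application error so that %ETN records it for later inspection with %ERN:

```
MYAPP ; hypothetical application routine
    SET $ZTRAP="^%ETN"  ; route any runtime error to the %ETN error trap
    SET x=1/0           ; <DIVIDE> error: %ETN logs it to the application error log
    QUIT
```

After the error is trapped, DO ^%ERN in the same namespace lists it under today's date.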
Haven’t updated the forum in a few versions so here’s the last updates

[code:31wux592]
27/10/08 4.19.09 – Development branch release
- Fixed reverb settings being ignored. Bug introduced in 4.19.08.
- Fixed oscillator producing incorrect tone when switching to triangle tone.

23/10/08 4.19.08 – Development branch release
- Wii – SIMD optimised software mixer, resulting in ~15% performance improvement.
- PS3 – Added ‘spursmode’ member to FMOD_PS3_EXTRADRIVERDATA. This can be set to the appropriate FMOD_PS3_SPURSMODE enum to allocate the SPURS task with context memory.
- Fixed an occasional crash in System::update when nonblocking loads are in progress and FMOD_INIT_ENABLE_PROFILE is on.

4.19.07 – Development branch release
- Added getMemoryInfo function to all public classes, both low-level and event system. Use this function to get detailed information on the memory usage of any FMOD object.
- Added new public interface file fmod_memoryinfo.h
- Added FMOD_CHANNEL_CALLBACKTYPE_OCCLUSION. When FMOD calculates an occlusion value based on geometry, this callback will be triggered so that the user has a chance to modify it.
- PS3 – Added backwards playback support for PCM data.
- Event API – Added FMOD_EVENTPROPERTY_TYPE enum.
- Event API – Added "type" parameter to Event::getPropertyInfo.
- Event API – Added FMOD_EVENT_CALLBACKTYPE_OCCLUSION. This wraps the channel version of the callback for events.
- FMOD_CHANNEL_CALLBACK parameters changed from unsigned int to void * to allow better 64bit compatibility.
- Channel::setCallback changed to remove ‘command’ parameter and per-callback-type callback capability. This isn’t needed and will save memory.
- Event API – FMOD_EVENT_INFO.memoryused is now deprecated. Use Event::getMemoryInfo instead.
- Event API – FMOD_EVENT_SYSTEMINFO.eventmemory, instancememory, dspmemory are now deprecated. Use getMemoryInfo functions instead.
9/10/08 4.19.06 – Development branch release
- Event API – Added FMOD_EVENT_USERDSP flag to tell FMOD that you plan to add your own DSP units to an event at runtime.
- Fixed floating point divide by zero error when loading certain midi files.
- Fixed broken embedded loop points in sounds with more than one subsound.
- PS3 – Fixed possible memory leak when too many dsp units are allocated.

2/10/08 4.19.05 – Development branch release
- PS3 – FMOD now only uses 1 SPURS Task / SPU Thread for both mixing and mpeg stream decoding.
- PS3 – 512k memory saving when using SPURS mode due to context memory no longer being needed.
- Mixer optimizations. Removed dsp node management logic; PS3 now has no mixer code executed on the PPU.
- Memory optimizations. Significant memory reductions.
- Event API – Large memory optimization. Event instances now shed over 500 bytes per instance, saving possibly hundreds of kilobytes in a normal game project.
- Wii – Enabled streaming sounds to be played through the WiiMote speaker.
- Win – Vista (WASAPI) now supports output as mono or stereo when the control panel is set to a higher number of channels, i.e. quad, 5.1, etc.
- Profiler – Added support for graphing channel and codec usage.
- FSBankEx – Fixed crackling when exporting 6 channel files.
- PS3 – Removed ‘spu_priority_streamer’ and ‘spu_priority_at3’ members from FMOD_PS3_EXTRADRIVERDATA struct.
- PS3 – ‘spu_priority_mixer’ renamed to ‘spu_thread_priority’ in FMOD_PS3_EXTRADRIVERDATA struct.
[/code:31wux592]

This thread is for discussion / bug reports for the current release. Download the current release from the front page at -

brett asked 10 years ago
Microsoft SharePoint 2010 Administration Cookbook

Over 90 simple but incredibly effective recipes to administer your Microsoft SharePoint 2010 applications with this book and eBook (For more resources on Microsoft SharePoint, see here.)

Introduction

SharePoint 2010 has been architected to be a proactive system that provides many tools to the administrators. The goal is to catch issues before they occur. If they do occur, the system should give the administrator the capability to debug them with the least amount of resistance. One example of this is the new logging database. It collects information from disparate servers and collates it into the database. For instance, the Unified Logging Service (ULS) logs collect information that is useful in troubleshooting issues. These logs are found on every SharePoint server, and they are collected into the logging database from all servers, along with the event logs. This makes the logging database a valuable tool. It is must-have knowledge for SharePoint administrators and is covered in one of the recipes.

Reporting is another area where SharePoint 2010 has been given focus. Reports are more robust and present better information down at the site level. This gives administrators a better idea about how their site is being utilized, what users are searching for, and where functionality is lacking.

When it comes to monitoring and being proactive, SharePoint offers another level of service — self-correcting health monitoring. SharePoint 2010 health monitoring jobs have the ability to uncover issues and report them, and SharePoint is then able to automatically correct the issue (in some cases).

Finally, SharePoint 2010 delivers a tool that can give details on the performance of a page. Previously, we had to use a tool such as Microsoft Visual Round Trip Analyzer; this is now a built-in capability of the infrastructure. The last recipe in this chapter shows how to use this tool.
The monitoring and reporting capabilities combined empower the administrator to be proactive with regard to the health of the SharePoint farm. These capabilities can be leveraged with other SharePoint functions such as alerts, so the team managing the SharePoint farm can stay well versed in the performance of the installation.

Accessing the SharePoint 2010 logging database

As mentioned in the introduction, the SharePoint 2010 logging database is a major enhancement to monitoring, debugging, and protecting the health of the farm. By default, the database is called WSS_Logging. This database should be the starting point for administrators to collect and analyze information. In this recipe, we will access the database and run a view (that is already installed) against it.

Getting ready

You must have farm-level administrative permissions to the Central Administration site. You must also have read and execute permissions to the WSS_Logging database in order to open and execute views.

How to do it...

- Open up SQL Server Management Studio.
- When asked for authentication, log in to the correct instance where SharePoint is running using your Windows authentication credentials. If SQL authentication is the preferred method of connecting, use the appropriate user ID/password.
- Navigate to the WSS_Logging database and click on the plus sign to expand it.
- Under the toolbar at the top, click on the New Query button.
- In the new query window, type in the following query: Select * from RequestUsage.
- Click Execute. Results are populated in the window pane below the query, as seen in the following screenshot:

How it works...

In the above recipe a view called RequestUsage was executed. This is an out of the box view that provides site usage information, such as the referring URL, the browser being used, the site ID, the web ID, the server URL, the request type, and when the request was made.
The logging database contains, but is not limited to, the following:

- It is a place where information is aggregated from across the farm. For instance, all ULS logs, from every SharePoint server, are collected within this database.
- There are 26 views installed by default. However, the purpose of this database is to give administrators and developers a place to log information based on processes. These are typically custom processes.
- Views can be created to meet an organization's needs.

There's more...

The location of the logging database is not a setting that can be configured through the user interface in Central Administration. Because of all the data that is collected in this database, it can grow quite large. Additionally, as SharePoint-integrated applications are created, developers can utilize this database to communicate issues. Therefore, due to size and usage, it is a wise idea to move the database to another physical location such as a dedicated disk. This can be done only via PowerShell, using the following command:

Set-SPUsageApplication -DatabaseServer <DB Server Name> -DatabaseName <DB Name> [-DatabaseUsername <User Name>] [-DatabasePassword <Password>]

More info

The ULS logs are present on every WFE. It is important for an administrator to know where to find these logs manually. They are located at the following location: \Common Files\Microsoft Shared\Web Server Extensions\14\Logs.

Configuring what gets logged

The SharePoint 2010 logging database covered in the previous recipe captures a wide range of usage and health information. In this recipe, we will cover how to change what gets captured and put into the logging database.

Getting ready

You must have farm-level administrative permissions to the Central Administration site.

How to do it...

- Open the SharePoint 2010 Central Administration website.
- Click Monitoring.
- Under the Reporting section, click Configure usage and health data collection.
- The following form appears for configuration. Fill in the following details:
  - Usage Data Collection: This is enabled by default.
  - Event Selection: These are specific events that are being logged. Use the check box to enable or disable them.
  - Usage data collection settings: In this section, the location of the ULS logs is set. There is also a setting to limit the size of the log file.
  - Health data collection: This is enabled by default.
  - Log Collection Schedule: The administrator has the ability to change the schedule.
- Modify the settings in step 4 and click OK.

There's more...

The logging information is retained for a period of 14 days by default. Using PowerShell you can change this parameter with the following command:

Set-SPUsageDefinition -Identity <GUID> [-Enable] -DaysRetained 14

Editing rule definitions in the health analyzer

SharePoint 2010 has a built-in health analyzer that acts as a best practice analyzer. The health analyzer will report whether or not the farm is compliant with each predefined health rule. The health analyzer builds upon the best practice analyzer from Microsoft Office SharePoint Server 2007. There are roughly 65 rules that are categorized as follows:

- Security
- Performance
- Configuration
- Availability

Each rule is run by a timer job, and each rule has a specific purpose such as checking application pool memory, checking how security is configured on the farm, or checking drive space. In Central Administration, it is possible to edit existing rules in order to meet the needs of your organization. Changes can be made to the scheduled execution of the job. On the ribbon, there is an option named Run Now that will execute the rule immediately. The rules are available out of the box and are meant to allow you to be proactive. In this recipe, we will modify one of the existing health analyzer rules.
Getting ready

You must have farm-level administrative permissions to the Central Administration site.

How to do it...

- Open the SharePoint 2010 Central Administration website.
- Click Monitoring.
- Under the Health Analyzer section, click Review rule definitions.
- Under the category Security, click The server farm account should not be used for other services.
- A form pops up, which contains the parameters of the rule. The left-most ribbon button is Edit Item; click the button. The following screenshot appears:
- Change the Schedule to Daily, from the default value of Weekly.
- You must also manually change the value of the parameter Version. Change it to 2.0, from the present 1.0.
- Click Save.

How it works...

The health analyzer rule definitions are run via timer jobs. There are several parameters that the administrator can modify:

- Title: This is the text description of the rule.
- Scope: This is where the rule will run.
- Schedule: This is how often the rules are employed.
- Enabled: This designates the rule as active.
- Repair Automatically: When the timer job kicks off, it will check the rules. If the rule can be checked and then corrected via SharePoint best practices, it will be.
- Version: This is a manually edited text box that tracks versioning of the rules.

The page also notes who created the rule, and when the rule was last edited and by whom.

There's more...

In addition to being able to edit the rule, there are several other options, as shown by the following screenshot:

- Version History: Shows all the versions
- Alert Me: This notifies you when changes are made
- Run Now: This executes the rule

More info—adding a new health rule

Every rule that a farm installation may need cannot be covered by the out of the box health rules. For instance, consider monitoring the number of tenants in the user profile social database.
There may be a need for a governance rule that monitors this and flags the administrator when certain levels are reached; however, there is no out of the box rule available today that can help govern this. In order to implement a new health rule, code must be written that utilizes the Microsoft.SharePoint.Administration.Health namespace. Once the assembly is written, it must be placed in the Global Assembly Cache (GAC) on every machine. The new health rule must then be registered with the SharePoint Health Analyzer. The best way to do this is to create a SharePoint feature that can be activated and deactivated.

Viewing web analytics reports

Web analytics reports are an innate part of the SharePoint 2010 installation. These reports are prebuilt. They use data collected from the active SharePoint installation to present information such as number of site collections, top destinations, top pages, page views, and top referrers. Using this information, an administrator can determine the flow of traffic. This information comprises part of the story for performance monitoring. This recipe shows how to invoke the reports and how to view custom reports.

Getting ready

You must have farm-level administrative permissions to the Central Administration site.

How to do it...

- Open up the SharePoint 2010 Central Administration website.
- Click Monitoring.
- Under the Reporting section, click View Web Analytics reports.
- Choose a web application by clicking on it.
- The page that is presented contains a left-hand navigation, as shown here:

Click any of the above options and you will be presented with the appropriate report. When the report is presented, it is shown as a graph at the top and a grid at the bottom.

How it works...

The reporting data is collected from usage data per web application, per site collection, per site, and finally, per search service application.
The web analytics timer job runs as per its schedule and updates the collected information. This recipe showed how to access the reports through Central Administration. They can also be accessed at the site collection and site levels via the Site Actions drop-down list. Web analytics is now part of the services infrastructure; it is called the Web Analytics Service Application. This must be provisioned and configured similar to setting up the other services. The following diagram shows the infrastructure components of the Web Analytics Service Application:

The information is collected on the web front-end (WFE) servers into .usage files. Timer jobs kick off a process that pulls the information into a staging database, where information is kept for 24 hours. Information is then aggregated into the reporting database, where it is retained for a period of 25 months by default.

There's more...

The date ranges of the data shown can be modified. This can be achieved by clicking on the Change Settings link above the graph. The following ribbon appears:

With a click of the appropriate date button, the data will be filtered. Depending on the report, there may be filters other than date. Finally, the report can also be customized or exported to Excel. This can be done with the help of the two buttons on the right-most side of the preceding screenshot.

More info

Customized reports are also possible. There is an Administrative Report Library in Central Administration. The folders in that library contain reports written by someone in the organization. These reports may be particular to an organization's needs or audit concerns, among other things. There is a Customized Reports link on the left-hand navigation shown in a preceding screenshot; currently, it contains only Search Administration Reports.
Troubleshooting with correlation IDs

An undesirable experience for users of SharePoint is getting a nondescript message that has a big red "X" and the word "Error" in bold adjacent to it. The user has done something, but the page does not say what the error is or how to fix it. It only points them to the site administrator, that is, you. SharePoint 2010 addresses this issue with a mechanism that tracks communications between the web front-ends and the user's requests. This takes the form of a GUID called the correlation ID. Now when users get an error page, they can contact the administrator and provide the correlation ID. The administrator can then track the cause of the error using the correlation ID as a reference in the ULS logs. This recipe shows the steps to perform after the correlation ID is provided to the administrator. To induce an error with a correlation ID, we will stop the web analytics service.

Getting ready

You must have farm-level administrative permissions to the Central Administration site. This recipe uses PowerShell. You must be a member of the SharePoint_Shell_Access database role on the configuration database. You also must be a member of the WSS_ADMIN_WPG local group.

How to do it...

- In Central Administration, navigate to System Settings. Under Servers, click Manage services on server.
- Click Stop associated with Web Analytics Web Service.
- Click Monitoring on the left-hand side navigation.
- Under Reporting, click View Web Analytics reports. The following error should be shown:
- On the publishing farm server, select Start | All Programs | Microsoft SharePoint 2010 Products | SharePoint 2010 Management Shell.
- In the PowerShell command prompt, type in the following command, replacing the correlation ID with the one from your screen.
Get-SPLogEvent | ?{$_.Correlation -eq "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"}

It should produce a message from the log file that reads: There are no instances of the Web Analytics Service Application started on any server in this farm. Ensure that at least one instance is started on an application server in the farm using the Services on Server page in Central Administration.

How it works...

The PowerShell command Get-SPLogEvent does the job of retrieving all events in the log files. The | character sends the output of Get-SPLogEvent to the next command. The ? is shorthand for the Where-Object command, and the curly braces contain the "where" condition. The $_ notation represents the object being sent across the pipeline. Finally, the -eq operator represents the "equal to" condition. As a whole, this statement searches the log files for a particular correlation ID and produces the information associated with the request.

A request is traced through its lifetime. The correlation ID collects information across multiple servers, maintaining the integrity of the request. Different sites can consume information from an application server that resides in a different farm but is shared. Without the correlation ID, it would be very difficult and cumbersome to trace an error. When an error occurs, the correlation ID will have the same reference across all of the servers. This applies to WFE, application, web services, and any other components that are consumed.

There's more...

There are two other methods for looking up a correlation ID besides PowerShell:

- Using Excel (or Notepad): Log on to the web front-end server that generated the error and navigate to the location of the ULS logs. Open the log file in Excel and utilize the find and filtering capabilities of Excel to find the correlation ID.
- Utilizing the logging database: You can execute the Accessing the SharePoint 2010 logging database recipe and then look for the correlation ID.
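If the logs are large, the PowerShell lookup can also be narrowed to a time window before filtering. This variant assumes the -StartTime parameter of Get-SPLogEvent; adjust the window and the placeholder correlation ID to your situation:

```powershell
# Search only the last 20 minutes of ULS entries for the correlation ID
Get-SPLogEvent -StartTime (Get-Date).AddMinutes(-20) |
    Where-Object { $_.Correlation -eq "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } |
    Format-List Timestamp, Area, Category, Level, Message
```

Limiting the window keeps the lookup fast even when the ULS directory holds days of log files.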
More info On Microsoft's site, there is a free ULS viewer that can be utilized. This is not supported by Microsoft. It allows users to open a ULS log file and display its contents in a readable manner. It contains filtering, sorting, and many other functions that make the data readable. The ULS viewer can be found here:. Enabling the Developer Dashboard The Developer Dashboard is not just for developers who write code. It is an important tool in the arsenal of the SharePoint Administrator. Tools such as Microsoft's Visual Round Trip Analyzer are used to determine why a page is performing poorly. The downside of tools such as this is that they interrogate the page from the outside and so information such as database queries cannot be seen. We would have to use another tool such as SQL Profiler to see this information. The Developer Dashboard brings this functionality natively to SharePoint 2010. It provides information, such as how a page is built, how it is performing, what database queries are being run and for how long, at the bottom of a page in report form. Administrators can use this information to pinpoint what is happening on a page. In this recipe, we will enable the Developer Dashboard and view the report at the bottom of the page. This is done through PowerShell and can be scripted in the SharePoint environment. Getting ready In order to run PowerShell commands, you must be a member of the SharePoint_Shell_Access database role on the configuration database. You also must be a member of the WSS_ADMIN_WPG local group . How to do it... - On the publishing farm server, select Start | All Programs | Microsoft SharePoint 2010 Products | SharePoint 2010 Management Shell. - In the PowerShell command prompt, type in the following command: $db = [Microsoft.SharePoint.Administration. SPWebService]::ContentService.DeveloperDashboardSettings; $db.DisplayLevel = 'On'; $db.RequiredPermissions ='EmptyMask'; $db.TraceEnabled = $true; $db.Update() - Open a team site. 
You should see something similar to the following screenshot at the bottom of the page:

How it works...

The Developer Dashboard is a farm-wide setting. When you turn it on, the dashboard appears on page load at the bottom of the page. The first line creates a reference to the necessary web service. The RequiredPermissions parameter specifies who can see the Developer Dashboard. Setting the trace level to true creates a new link called Show or hide additional tracing information... at the bottom of the Developer Dashboard.

There's more...

By default, the Developer Dashboard is disabled. There are three modes that can be set:

- On
- Off
- OnDemand

When the OnDemand mode is specified, a button appears in the upper right-hand corner of the page as shown here:

When clicked, the Developer Dashboard is shown, and when clicked again, it disappears. This gives administrators the flexibility of enabling the Developer Dashboard without making it always visible.

More info

The Developer Dashboard is available only with Windows authentication and is not available with SQL authentication.

Summary

In this article we covered:

- Accessing the SharePoint 2010 logging database
- Configuring what gets logged
- Editing rule definitions in the health analyzer
- Viewing web analytics reports
- Troubleshooting with correlation IDs
- Enabling the Developer Dashboard

Further resources on this subject:

- Microsoft SharePoint 2010 Business Performance Enhancement [Books]
- Microsoft SharePoint 2010 Administration: Farm Governance [article]
- Interacting with Data on the SharePoint Server [article]
- Integrating Silverlight 4 with SharePoint 2010 [article]
- Using ASP.NET Controls in SharePoint [article]

About the Author: Peter Serzo

Peter Serzo is an English major from Kent State who started his technical career with EDS out of college. 20 years later, all as a consultant, he is a national speaker on SharePoint who has worked at organizations of all sizes.
His next challenge is to bring SharePoint to children and teach them. He has been working with SharePoint since 2003 in companies such as Microsoft, Ford, ADP, and many others throughout the United States. He is a Senior SharePoint Architect for High Monkey Consulting. The name refers to an old Jamaican proverb that means the higher up you go, the more responsible you must be; High Monkey takes pride in its accountability and excellence of work with regard to its clients' needs.
by Raoul-Gabriel Urma

Published July 2014

A look at eight features from eight JVM languages

The JVM languages fall into three categories: they have features that Java doesn't have, they are ports of existing languages to the JVM, or they are research languages.

The first category consists of languages that offer features—such as local type inference—that programmers can't find in Java yet. The languages we'll look at in this first category are Scala, Groovy, Xtend, Ceylon, Kotlin, and Fantom.

The second category consists of ports of existing languages to the JVM. The standard implementations of Python and Ruby use a global interpreter lock, which prevents them from fully exploiting a multicore system. However, Jython and JRuby—the Python and Ruby implementations on the JVM—get rid of this restriction by making use of Java threads instead. (You can read more about JRuby and JRubyFX in this issue's "JavaFX with Alternative Languages" article by Josh Juneau. Juneau also covers Jython extensively on his blog.) Another popular language ported to the JVM is Clojure, a dialect of Lisp, which we'll look at in this article. In addition, Oracle recently released Nashorn, a project that lets you run JavaScript on the JVM.

The third category is languages that implement new research ideas, are suited only for a specific domain, or are just experimental. The language that we'll look at in this article, X10, is designed for efficient programming for high-performance parallel computing. Another language in this category is Fortress from Oracle Labs, now discontinued.

For each language we examine, one feature is presented to give you an idea of what the language supports and how you might use it.

Scala

Scala is a statically typed programming language that fuses the object-oriented model and functional programming ideas. That means, in practice, that you can declare classes, create objects, and call methods just like you would typically do in Java. However, Scala also brings popular features from functional programming languages such as pattern matching on data structures, local type inference, persistent collections, and tuple literals.
The fusion of object-oriented and functional features lets you use the best tools from both worlds to solve a particular problem. As a result, Scala often lets programmers express algorithms more concisely than in Java.

Feature focus: pattern matching. To illustrate, take a tree structure that you would like to traverse. Listing 1 shows a simple expression language consisting of numbers and binary operations.

[Java]
class Expr { ... }
class Number extends Expr {
    int val;
    ...
}
class BinOp extends Expr {
    String opname;
    Expr left, right;
    ...
}
Listing 1

Say you're asked to write a method to simplify some expressions. For example, "5 / 1" can be simplified to "5." The tree for this expression is illustrated in Figure 1.

Figure 1

In Java, you could deconstruct this tree representation by using instanceof, as shown in Listing 2. Alternatively, a common design pattern for separating an algorithm from its domain is the visitor design pattern, which can alleviate some of the verbosity. See Listing 3.

[Java]
Expr simplifyExpression(Expr expr) {
    if (expr instanceof BinOp
            && "/".equals(((BinOp) expr).opname)
            && ((BinOp) expr).right instanceof Number
            && ... // it's all getting very clumsy
            && ...) {
        return ((BinOp) expr).left;
    }
    ... // other simplifications
}
Listing 2

[Java]
public class SimplifyExprVisitor {
    ...
    public Expr visit(BinOp e) {
        if ("/".equals(e.opname) && e.right instanceof Number && ...) {
            return e.left;
        }
        return e;
    }
}
Listing 3

However, this pattern introduces a lot of boilerplate. First, domain classes need to provide an accept method to use a visitor. You then need to implement the "visit" logic. In Scala, the same problem can be tackled using pattern matching. See Listing 4.

[Scala]
def simplifyExpression(expr: Expr): Expr = expr match {
  case BinOp("+", e, Number(0)) => e // Adding zero
  case BinOp("*", e, Number(1)) => e // Multiplying by one
  case BinOp("/", e, Number(1)) => e // Dividing by one
  case _ => expr // Can't simplify expr
}
Listing 4
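To make the visitor route's hidden cost concrete, here is a minimal runnable sketch of the accept boilerplate a visitor requires. The wiring is an assumption about how Listings 1 and 3 would be connected, not code from the article, and Number is renamed to Num to avoid clashing with java.lang.Number:

```java
// Every domain class must know about the visitor and expose accept.
interface Expr {
    Expr accept(SimplifyExprVisitor v);
}

class Num implements Expr {   // "Number" in the article
    final int val;
    Num(int val) { this.val = val; }
    public Expr accept(SimplifyExprVisitor v) { return v.visit(this); }
}

class BinOp implements Expr {
    final String opname;
    final Expr left, right;
    BinOp(String opname, Expr left, Expr right) {
        this.opname = opname;
        this.left = left;
        this.right = right;
    }
    public Expr accept(SimplifyExprVisitor v) { return v.visit(this); }
}

class SimplifyExprVisitor {
    public Expr visit(Num e) { return e; }
    public Expr visit(BinOp e) {
        // x / 1 simplifies to x
        if ("/".equals(e.opname)
                && e.right instanceof Num
                && ((Num) e.right).val == 1) {
            return e.left;
        }
        return e; // can't simplify
    }
}
```

With this in place, simplifying the tree from Figure 1 reads `new BinOp("/", five, new Num(1)).accept(new SimplifyExprVisitor())`, at the cost of an accept method in every domain class, which is exactly what the Scala match expression in Listing 4 avoids.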
Feature focus: safe navigation. Groovy has many features that let you write more-concise code compared to Java. One of them is the safe navigation operator, which prevents a NullPointerException. In Java, dealing with null can be cumbersome. For example, the following code might result in a NullPointerException if either person is null or getCar() returns null:

Insurance carInsurance = person.getCar().getInsurance();

To prevent an unintended NullPointerException, you can be defensive and add checks to prevent null dereferences, as shown in Listing 5.

[Java]
Insurance carInsurance = null;
if (person != null) {
    Car car = person.getCar();
    if (car != null) {
        carInsurance = car.getInsurance();
    }
}

Listing 5

However, the code quickly becomes ugly because of the nested checks, which also decrease the code's readability. The safe navigation operator, which is represented by ?., can help you navigate safely through potential null references:

def carInsurance = person?.getCar()?.getInsurance()

In this case, the variable carInsurance will be null if person is null, getCar() returns null, or getInsurance() returns null. However, no NullPointerException is thrown along the way.

Feature focus: homoiconicity. What differentiates Clojure from most languages is that it's a homoiconic language. That is, Clojure code is represented using the language's fundamental datatypes—for example, lists, symbols, and literals—and you can manipulate the fundamental datatypes using built-in constructs. As a consequence, Clojure code can be elegantly manipulated and transformed by reusing the built-in constructs.

Clojure has a built-in if construct. It works like this: if the condition passed as an argument evaluates to true, Clojure evaluates the first branch; otherwise, it evaluates the second branch. Let's say you want to extend the language with a new construct called unless that should work like an inverted if. In other words, if the condition that is passed as an argument evaluates to false, Clojure evaluates the first branch. Otherwise—if the argument evaluates to true—Clojure evaluates the second branch.
You should be able to call the unless construct as shown in Listing 6.

[Clojure]
(unless false
  (println "ok!!")
  (println "boo!!"))
; prints "ok!!"

(if false
  (println "boo!!")
  (println "ok!!"))
; prints "ok!!"

Listing 6

To achieve the desired result you can define a macro that transforms a call to unless to use the construct if, but with its branch arguments reversed (in other words, swap the first branch and the second branch). In Clojure, you can manipulate the code representing the branches that are passed as an argument as if it were data. See Listing 7.

[Clojure]
(defmacro unless
  "Inverted 'if'"
  [condition & branches]
  (conj (reverse branches) condition 'if))

Listing 7

In this macro definition, the symbol branches consists of a list that contains the two expressions representing the two branches to execute, (println "boo!!") and (println "ok!!"). With this list in hand, you can now produce the code for the unless construct. First, call the core function reverse on that list. You'll get a new list with the two branches swapped. You can then use the core function conj, which, when given a list, adds the remaining arguments to the front of the list. Here, you pass the if operation together with the condition to evaluate.

Feature focus: smart casts. Many developers see the Java cast feature as annoying and redundant. For an example, see Listing 8.

[Java]
if (expr instanceof Number) {
    System.out.println(((Number) expr).getValue());
}

Listing 8

Repeating the cast to Number shouldn't be necessary, because within the if block, expr has to be an instance of Number. The generality of this technique is called flow typing—type information propagates with the flow of the program. Kotlin supports smart casts. That is, you don't have to cast the expression within the if block. See Listing 9.
[Kotlin]
if (expr is Number) {
    println(expr.getValue()) // expr is automatically cast to Number
}

Listing 9

Ceylon provides a construct for defining type aliases (similar to C's typedef; for example, you could define Strings to be an alias for List<String>), flow typing (for example, no need to cast the type of an expression in a block if you've already done an instanceof check on it), union of types, and local type inference. In addition, in Ceylon you can ask certain variables or blocks of code to use dynamic typing—type checking is performed at runtime instead of compile time.

Feature focus: for comprehensions. A for comprehension can be seen as syntactic sugar for a chain of map, flatMap, and filter operations using Java SE 8 streams. For example, in Java, by combining a range and a map operation, you can generate all the numbers from 2 to 20 with a step value of 2, as shown in Listing 10.

[Java]
List<Integer> numbers = IntStream.rangeClosed(1, 10)
    .mapToObj(x -> x * 2)
    .collect(toList());

Listing 10

In Ceylon, it can be written as follows using a for comprehension:

List<Integer> numbers = [for (x in 1..10) x * 2];

Here's a more-complex example. In Java, you can generate a list of points in which the sum of the x and y coordinates is equal to 10. See Listing 11.

[Java]
List<Point> points = IntStream.rangeClosed(1, 10).boxed()
    .flatMap(x -> IntStream.rangeClosed(1, 10)
        .filter(y -> x + y == 10)
        .mapToObj(y -> new Point(x, y)))
    .collect(toList());

Listing 11

Thinking in terms of flatMap and map operations using the Stream API might be overwhelming. Instead, in Ceylon, you can write more simply, as done in the code shown in Listing 12, which produces [(1, 9), (2, 8), (3, 7), (4, 6), (5, 5), (6, 4), (7, 3), (8, 2), (9, 1)].

[Ceylon]
List<Point> points = [for (x in 1..10) for (y in 1..10) if (x + y == 10) Point(x, y)];

Listing 12

The result: Ceylon can make your code more concise.

Xtend is a statically typed object-oriented language.
One way it differs from other languages is that it compiles to pretty-printed Java code rather than JVM bytecode.

Feature focus: active annotations. Xtend provides a feature called active annotations, which is a way to do compile-time metaprogramming. In its simplest form, this feature allows you to generate code transparently, such as adding methods or fields to classes, with seamless integration in the Eclipse IDE, for example: new fields or methods will show up as members of the modified classes within the Eclipse environment. More-advanced use of this feature can generate a skeleton of design patterns such as the visitor or observer pattern. You can provide your own way to generate code using template expressions.

Here's an example to illustrate this feature in action. Given sample JSON data, you can automatically generate a domain class in your Xtend program that maps JSON properties into members. The Eclipse IDE will recognize these members, so you can use features such as type checking and autocompletion. All you have to do is wrap the JSON sample within an @Jsonized annotation. Figure 2 shows an example within the Eclipse IDE using a JSON sample representing a tweet.

Fantom is an object-oriented language with a pragmatic type system: it deliberately leaves out generics, a feature whose benefits are debated (see the link to an empirical study conducted by Parnin et al. in "Learn More"), without complicating the overall type system. In addition, Fantom provides two kinds of method invocations: one that goes through type checking at compile time (using a dot notation: .) and one that defers checking to runtime (using an arrow notation: ->).

Feature focus: immutability. Fantom encourages immutability through language constructs. For example, it supports const classes—once created, an instance is guaranteed to have no state changes. Here's how it works.
You can define a class Transaction prefixed with the const keyword:

const class Transaction {
    const Int value
}

The const keyword ensures that the class declares only fields that are immutable, so you won't be able to modify the field named value after you instantiate a Transaction. This is not much different than declaring all fields of a class final in Java. However, this feature is particularly useful with nested structures. For example, let's say the Transaction class is modified to support another field of type Location. The compiler ensures that the location field can't be reassigned and that the Location class is immutable. For instance, the code in Listing 13 is incorrect and will produce the error Const field 'location' has non-const type 'hello_0::Location'. Similarly, all classes extending a const class can be only const classes themselves.

[Fantom]
const class Transaction {
    const Int value
    const Location location := Location("Cambridge")
}

class Location {
    Str city
    new make(Str city) { this.city = city }
}

Listing 13

Feature focus: constraint types. The last language we look at, X10, lets you attach explicit constraints to types. Consider a simple Pair class, with a generated constructor:

class Pair(x: Long, y: Long){}

You can create Pair objects as follows:

val p1 : Pair = new Pair(2, 5);

However, you can also define explicit constraints (similar to contracts) on the properties of a Pair at use-site. Here, you want to ensure that p2 holds only symmetric pairs (that is, the values of x and y must be equal):

val p2 : Pair{self.x == self.y} = new Pair(2, 5);

Because x and y are different in this code example, the assignment will be reported as a compile error. However, the following code compiles without an error:

val p2 : Pair{self.x == self.y} = new Pair(5, 5);

In this article, we examined eight features from eight popular JVM languages. These languages provide many benefits, such as enabling you to write code in a more concise way, use dynamic typing, or access popular functional programming features.
I hope this article has sparked some interest in alternative languages and that it will encourage you to check out the wider JVM eco-system. Acknowledgements. I’d like to thank Alex Buckley, Richard Warburton, Andy Frank, and Sven Efftinge for their feedback.
Creating directories in C++ and Linux
Would there be an easy way to create several different directories in C++ / Linux? Here's an example: I want to save a file to a directory: /tmp/a/b/c But the directory isn't there. I want it to be created automatically. Thanks!

Printing colored characters in a Linux terminal
Can I print different colored characters to a Linux terminal that supports it? I've been using C++ but I'm not sure if it's working correctly. Also, would it support color codes?

How to create permanent cookies
How do I create permanent cookies?

Transfer values to SQL database
How do I get data from a Windows form and store it in a SQL database?

Is there a way to programmatically create an Exchange 2010 account using C#?
My friend and I have been given the task to write a program to automatically create an Exchange 2010 mailbox. Our research has told us that we need to use PowerShell but we can't find the namespace to reference. Any help would be appreciated.

I know the basics of C++ programming and I would like to go deeper into C++. Please suggest a good book on C++ for advanced topics (Windows programming, aptitude in C++, etc.).

Collection Or Reference Of Multiple Runtime Controls In FLP For Only Specific Controls
I have created two runtime textbox controls in a FlowLayoutPanel as below.
One TextBox control:
TextBox tb = new TextBox();
tb.Name = "tbox" + i.ToString();
this.FlowLayOutPanel1.Controls.Add(tb);
Another TextBox control:
TextBox tb = new TextBox();
tb.Name = "tbox" + i.ToString(); ...

CSMA/CD Simulation in C#?
How to simulate the CSMA/CD protocol with C# or any language?

How do I display session ID on the web
Hi, I am doing a login page with MySQL and ASP.NET/C#. I would like the program to run in the following sequence: the user logs in; the login page searches through the MySQL database; when the user is authenticated, it allows the user access. My concern now is how do you use the session ID to display...

C# – ComboBox with ADO.NET
A combobox is provided in the form and I want to display a particular column from a table in the database (SQL). And when I select this particular data, corresponding rows should be displayed in the text boxes provided. How should I write the code in C#?

How to use C# MAPI to connect to Exchange Server inbox
I'm looking to write a C# application that will connect to my Exchange Server and read my inbox. I'm thinking of using MAPI but I have a few questions first: Would it be possible to do it remotely? Any requirements with this? Is there a certain code I should use? Thanks!

Inserting
How to check browser activity?
Hi, I'm working on a project in which I'm storing login and logout times. What if I don't click the logout button and close my browser, or somehow my system shuts down? I cannot store the logout time, so my program will be in an error state. Tell me how to get the logout time when my program didn't click...

Display data item which starts from a particular letter from a database table in C# .NET
In C# .NET, (SELECT * FROM [tablename] WHERE firstname LIKE 'k*') — this query does not work with an MS Access database.

Detecting new emails in an Exchange Server mailbox
I currently have a mailbox that is getting alerts from several servers (from time to time). When I get this alert, I need it to perform a custom action based on the content of the email. I need to keep it as a service (C# or VB.NET) which runs from anywhere. Would there be an API for reading this?

How can I learn more about Java and scripting languages?
I got my bachelor's degree in electronics and communication engineering and continued to get my master's in networking. When I started applying for internships, I found that most of the companies require Java, C, C++, Perl, HTML and Visual Basic, but I only have C and a little bit of C++...
Get Highest and Lowest value of first 15 Minutes data

I have 1-minute CSV data. I want to get the highest and lowest value of the first 15 minutes of data on a daily basis to calculate other things.

You can create a highest and a lowest variable that reset every day. In the first 15 minutes, compare the current high to the highest and keep the larger of the two, and do the opposite for the low. The code could look like this:

    def __init__(self):
        self.highest = 0
        self.lowest = 0

    def next(self):
        if self.data.datetime.time() < datetime.time(9, 45, 0):
            if self.data.datetime.date() != self.data.datetime.date(-1):
                print("\n")
                self.highest = self.datas[0].high[0]
                self.lowest = self.datas[0].low[0]
            self.highest = max(self.datas[0].high[0], self.highest)
            self.lowest = min(self.datas[0].low[0], self.lowest)
            self.log(f"highest (high) {self.highest:7.2f} ({self.datas[0].high[0]:7.2f}), "
                     f"lowest (low) {self.lowest:7.2f} ({self.datas[0].low[0]:7.2f})")

Output:

    2020-04-22 09:31:00, highest (high) 2778.25 (2778.25), lowest (low) 2780.75 (2780.75)
    2020-04-22 09:32:00, highest (high) 2778.75 (2778.75), lowest (low) 2780.75 (2781.50)
    2020-04-22 09:33:00, highest (high) 2778.75 (2772.25), lowest (low) 2779.25 (2779.25)
    2020-04-22 09:34:00, highest (high) 2778.75 (2773.50), lowest (low) 2773.50 (2773.50)
    2020-04-22 09:35:00, highest (high) 2778.75 (2774.75), lowest (low) 2773.50 (2776.75)
    2020-04-22 09:36:00, highest (high) 2778.75 (2776.50), lowest (low) 2773.50 (2777.25)
    2020-04-22 09:37:00, highest (high) 2778.75 (2775.50), lowest (low) 2773.50 (2780.25)
    2020-04-22 09:38:00, highest (high) 2778.75 (2774.50), lowest (low) 2773.50 (2776.25)
    2020-04-22 09:39:00, highest (high) 2778.75 (2775.75), lowest (low) 2773.50 (2775.75)
    2020-04-22 09:40:00, highest (high) 2778.75 (2776.75), lowest (low) 2773.50 (2777.75)
    2020-04-22 09:41:00, highest (high) 2780.00 (2780.00), lowest (low) 2773.50 (2781.00)
    2020-04-22 09:42:00, highest (high) 2780.00 (2777.75), lowest (low) 2773.50 (2782.25)
    2020-04-22 09:43:00, highest (high) 2780.00 (2777.50), lowest (low) 2773.50 (2779.25)
    2020-04-22 09:44:00, highest (high) 2780.00 (2777.50), lowest (low) 2773.50 (2778.75)

- Eduardo De La Garza

@run-out said in Get Highest and Lowest value of first 15 Minutes data:

    if self.data.datetime.date() != self.data.datetime.date(-1):

Can you explain that line of code? When would they not be different? Thanks!

- Eduardo De La Garza

@Eduardo-De-La-Garza said in Get Highest and Lowest value of first 15 Minutes data:

    Can you explain that line of code? When would they not be different? Thanks!

One-minute data; it's checking for a new date at the beginning of the day. :)
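The same reset-and-compare idea can be checked outside of backtrader with a plain-Python sketch. The bar tuples and the 09:30 session open below are illustrative, not taken from the poster's data:

```python
import datetime

def first_15min_high_low(bars, session_open=datetime.time(9, 30)):
    """Return {date: (high, low)} over each day's first 15 minutes of bars.

    `bars` is an iterable of (timestamp, high, low) tuples, one per
    1-minute bar, in chronological order.
    """
    # End of the window: 15 minutes after the session open.
    anchor = datetime.datetime.combine(datetime.date.today(), session_open)
    cutoff = (anchor + datetime.timedelta(minutes=15)).time()

    result = {}
    for ts, high, low in bars:
        if not (session_open <= ts.time() < cutoff):
            continue  # outside the first 15 minutes
        day = ts.date()
        if day not in result:
            result[day] = (high, low)  # first bar of a new day: reset
        else:
            h, l = result[day]
            result[day] = (max(h, high), min(l, low))
    return result
```

For example, feeding it two in-window bars for one day:

    hl = first_15min_high_low([
        (datetime.datetime(2020, 4, 22, 9, 31), 2778.25, 2776.00),
        (datetime.datetime(2020, 4, 22, 9, 40), 2780.00, 2773.50),
    ])
    # hl[datetime.date(2020, 4, 22)] == (2780.0, 2773.5)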
In this chapter, we will look at a simple working example of ASP.NET MVC. We will be building a simple web app here. To create an ASP.NET MVC application, we will use Visual Studio 2015, which contains all of the features you need to create, test, and deploy an MVC Framework application.

Following are the steps to create a project using the project templates available in Visual Studio.

Step 1 − Open Visual Studio. Click the File → New → Project menu option. A New Project dialog opens.

Step 2 − From the left pane, select Templates → Visual C# → Web.

Step 3 − In the middle pane, select ASP.NET Web Application.

Step 4 − Enter the project name, MVCFirstApp, in the Name field and click OK to continue. You will see the following dialog, which asks you to set the initial content for the ASP.NET project.

Step 5 − To keep things simple, select the 'Empty' option and check the MVC checkbox in the 'Add folders and core references' section. Click OK. It will create a basic MVC project with minimal predefined content.

Once the project is created by Visual Studio, you will see a number of files and folders displayed in the Solution Explorer window. As we have created the ASP.NET MVC project from an empty project template, for the moment the application does not contain anything to run.

Step 6 − Run this application from the Debug → Start Debugging menu option and you will see a 404 Not Found error. The default browser is Internet Explorer, but you can select any browser that you have installed from the toolbar.

To remove the 404 Not Found error, we need to add a controller, which handles all the incoming requests.

Step 1 − To add a controller, right-click on the Controllers folder in the Solution Explorer and select Add → Controller. It will display the Add Scaffold dialog.

Step 2 − Select the MVC 5 Controller – Empty option and click the 'Add' button. The Add Controller dialog will appear.

Step 3 − Set the name to HomeController and click the Add button.

You will see a new C# file, HomeController.cs, in the Controllers folder, which is open for editing in Visual Studio as well.

Step 4 − To make this a working example, let's modify the controller class by changing the action method called Index using the following code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace MVCFirstApp.Controllers {
    public class HomeController : Controller {
        // GET: Home
        public string Index() {
            return "Hello World, this is ASP.Net MVC Tutorials";
        }
    }
}

Step 5 − Run this application and you will see that the browser is displaying the result of the Index action method.
I'm rebuilding something in Elixir from some code I built in C#. It was pretty hacked together, but works perfectly (although not on Linux, hence the rebuild). Essentially what it did was check some RSS feeds and see if there was any new content. This is the code:

Map historic (URL as key, post title as value)
List<string> blogfeeds

while true
    for each blog in blogfeeds
        List<RssPost> posts = getposts(blog)
        for each post in posts
            if post.url is not in historic
                dothing(post)
                historic.add(post)

I am wondering how I can do enumeration effectively in Elixir. Also, it seems that my very process of adding things to "historic" is anti-functional-programming. Obviously the first step was declaring my list of URLs, but beyond that the enumeration idea is messing with my head. Could someone help me out? Thanks.

This is a nice challenge to have and solving it will definitely give you some insight into functional programming. The solution for such problems in functional languages is usually reduce (often called fold). I will start with a short answer (and not a direct translation) but feel free to ask for a follow-up.

The approach below will typically not work in functional programming languages:

map = %{}
Enum.each [1, 2, 3], fn x -> Map.put(map, x, x) end
map

The map at the end will still be empty because we can't mutate data structures. Every time you call Map.put(map, x, x), it will return a new map. So we need to explicitly retrieve the new map after each enumeration. We can achieve this in Elixir using reduce:

map = Enum.reduce [1, 2, 3], %{}, fn x, acc -> Map.put(acc, x, x) end

Reduce will emit the result of the previous function as the accumulator for the next item. After running the code above, the variable map will be %{1 => 1, 2 => 2, 3 => 3}. For those reasons, we rarely use each on enumeration. Instead, we use the functions in the Enum module, which support a wide range of operations, eventually falling back to reduce when there is no other option.
EDIT: to answer the questions and go through a more direct translation of the code, this is what you can do to check and update the map as you go:

Enum.reduce blogs, %{}, fn blog, history ->
  posts = get_posts(blog)
  Enum.reduce posts, history, fn post, history ->
    if Map.has_key?(history, post.url) do
      # Return the history unchanged
      history
    else
      do_thing(post)
      Map.put(history, post.url, true)
    end
  end
end

In fact, a set would be better here, so let's refactor this and use a set in the process:

def traverse_blogs(blogs) do
  Enum.reduce blogs, HashSet.new, &traverse_blog/2
end

def traverse_blog(blog, history) do
  Enum.reduce get_posts(blog), history, &traverse_post/2
end

def traverse_post(post, history) do
  if post.url in history do
    # Return the history unchanged
    history
  else
    do_thing(post)
    HashSet.put(history, post.url)
  end
end
Python, for better or worse, has cemented itself as the lingua franca of data science. With its rise in popularity also comes a shift in how it is deployed, and alongside the rise of Python has been the rise of container deployments. Containers are used in data science as well, to run processes of all kinds. While data science is one popular use for Python, it is not the only one, as there are many. Regardless of how Python is used, containerizing a Python app is relatively straightforward given that Python packaging and Docker Hub provide a lot of automation to make this happen.

Python has pip, the package installer that pulls application dependencies from PyPI, the Python package index. pip can use a manifest file that lists requirements to automate this process. Docker can invoke pip on build to produce a container image that has all of the dependencies and the application using these dependencies.

Imagine you're trying to deploy the following Python code, contained in index.py. The application is a simple "Hello World" app that uses Flask, a small web framework for Python apps.

index.py

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int("5000"), debug=True)

To do so, create a text file called Dockerfile in your application's root and paste in the following code.

Dockerfile

FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./index.py

Note that the FROM directive is pointing to python:alpine3.7. This is telling Docker what base image to use for the container, and implicitly selecting the Python version to use, which in this case is 3.7. Docker Hub has base images for almost all supported versions of Python, including 2.7. This example is using Python installed on Alpine Linux, a minimalist Linux distro, which helps keep the images for Docker small.
Prefer Alpine unless there's a compelling reason to use another base image such as Debian Jessie. Also of note is the RUN directive, which calls pip and points to the requirements.txt file. This file contains a list of the dependencies that the application needs to run. Because Flask is a dependency, it is included as such in requirements.txt with a simple reference. You can also pin specific library versions in requirements.txt if you need them. The file should also be in the root of the application.

requirements.txt

flask

The remaining directives in the Dockerfile are pretty straightforward. The CMD directive tells the container what to execute to start the application. In this case, it is telling Python to run index.py. The COPY directive simply moves the application into the container image, WORKDIR sets the working directory, and EXPOSE exposes a port that is used by Flask.

To build the image, run docker build from a command line or terminal that is in the root directory of the application:

docker build --tag my-python-app .

This will "tag" the image my-python-app and build it. After it is built, you can run the image as a container:

docker run --name python-app -p 5000:5000 my-python-app

This starts the application as a container. The --name parameter names the container, and the -p parameter maps the host's port 5000 to the container's port 5000. Lastly, my-python-app refers to the image to run. After it starts, you should be able to browse to the container. How you are running Docker determines what the IP address of the application will be: Docker for Windows and Docker for Mac will be able to use 127.0.0.1. For other setups, it will be the host IP of the VM or physical machine you are running Docker on. Naturally, more complex scenarios will require more attention to detail, but the basic flow is the same for almost all Python apps. Putting it all together will enable containerized Python apps in short order!
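One refinement worth noting on the requirements.txt step: as written, each docker build pulls whatever Flask release is newest. If you want reproducible image builds, pin the versions in the file; the version number below is illustrative, not a recommendation:

```text
flask==1.0.2
```

With a pinned file, pip installs exactly that release inside the image, so rebuilding the image later cannot silently pull in a newer, possibly incompatible dependency.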
Here’s a working example using the files mentioned above.
20 October 2011 16:34 [Source: ICIS news]

VIENNA (ICIS)--Belgium-based Solvay does not expect its Epicerol (glycerin-to-epichlorohydrin) projects in Thailand and China to impact the glycerin market.

Thailand-based Vinythai, a Solvay affiliate, is using the Epicerol technology for its 100,000 tonnes/year epichlorohydrin (ECH) plant in Map Ta Phut. The facility, which is expected to start by the end of 2011, will use 110,000 tonnes/year of refined glycerin, quantities of which are already booked, said Thibaud Caulier, business development manager at Solvay. Caulier was speaking at the ICIS 8th World Oleochemicals conference.

Solvay is also planning to build another Epicerol-based ECH facility in China. "We plan to have long-term supply contracts with mechanisms in place to avoid price volatility of glycerin. Sustainability is among our selection criteria for glycerin suppliers," he added.

For 1 tonne of ECH produced using the Epicerol technology, 1.1 tonnes of refined glycerin is used as feedstock, together with hydrogen chloride. Producers typically make ECH through the allyl chloride (ALC) or allyl alcohol routes. Solvay said it is not easy for other companies to produce ECH from glycerin because of the expertise needed in various technologies. "Solvay has more than 1,000 single patent applications filed across the world for Epicerol. We have a big budget for this and we intend to enforce our patents," said Caulier.

Long-term availability of hydrogen chloride at affordable costs is also an issue for some companies that want to enter the market. "Glycerin to ECH is considered one of the most attractive valorizations of glycerin because of today's price gap between propylene and glycerin. However, this attractiveness might not be the same for other newcomers that are more exposed to risks," said Caulier. Glycerin's historical price volatility is one risk that does not help the development of new glycerin applications, he added.

"In 2008, when glycerin prices soared, this killed some of the new development projects aimed at using glycerin, as some of these applications are very cost-sensitive," said Caulier.

The two-day oleochemicals
Work Items and Custom Controls

Why Customization?

Team Foundation Server (TFS) manages the entire application lifecycle using underlying process templates. TFS provides, off the shelf, three built-in process templates for managing the general execution of a project. There are also several community-developed templates available that can be integrated into TFS. If you need more than this, you can develop templates to handle specific aspects of the workflow that supports your application development process. TFS process templates can be customized to meet your particular requirements. This customization can be done at various levels, such as:

- Creating a new process template
- Adding new work item types
- Customizing existing work items
- Customizing an existing process template
- Adding custom controls to work items

In this article, we'll take a look at how to create a new process template, and demonstrate some of these customizations with the aim of getting you started with your own customizations.

Develop New Process Template

We can define our own process and work items using a new process template. When this process template is defined, it can be based on an existing template and then modified using the Process Editor. The Process Editor is part of TFS Power Tools, which can be downloaded from the Visual Studio Gallery. To develop a new process template, we need to perform these steps:

- Open Visual Studio and connect to the TFS server.
- Navigate to Team Explorer → Settings.
- Select 'Process Template Manager' from the 'Team Project Collection' section to launch the 'Process Template Manager' window.
- Select one of the pre-defined process templates, based on which our new template will be developed.
- Click 'Download' to download the process template to the local system.
- Open the downloaded process template in Visual Studio using the Process Editor for editing.
- Select Tools → Process Editor → Process Templates → Open Process Template.
- Select the process template XML file name from the download location.

The template consists of several sections:

- Work Item Tracking: Work items related to the process and their associated properties.
- Areas and Iterations: Iterations define release management, and areas will be based on whether they are functional, non-functional or feature-wise.
- Groups and Permissions: Defines the default TFS groups such as Contributors or Readers.
- Lab: Defines the lab system details associated with lab management.
- Build: Defines the default build templates and permissions related to build for each TFS group.
- Source Control: Different policies related to version control management.
- Portal: Defines the SharePoint integration components.
- Reports: Defines the sub folders under the root reporting URL.

- To create a new process template, rename the process template and modify the different sections according to our requirement.
- Save the modified template to the local system.
- Open the Process Template Manager again and click 'Upload' to upload our new template, for example, XYZ Team Template.

This template will be available for any new project creation. Create our new project using the new template to follow a user-defined process for the project execution.

Work Item Customization

Customizing Work Item Types (WIT) Using the WITadmin Command

Visual Studio provides a command line tool called WITadmin to perform work item customization. This tool is available in the folder [drive:]\Program Files\Microsoft Visual Studio 12.0\Common7\IDE. This tool can be used for exporting a work item type definition, editing it, and importing it back.
Download the work item definition to the local system using WITadmin. The parameters are:

- exportwitd: specifies whether to export or import the WIT definition
- /collection:URL: specifies the URL of the project collection
- /p:name: provides the project name
- /n:type: provides the work item type
- /f:path: specifies the path to which the WIT definition will be downloaded

This command downloads the specified WIT definition (in the above case, the Bug template) to the specified path. We can edit the XML file using any XML editor or Visual Studio. A WIT definition consists of three sections:

- Fields: Defines the underlying database fields. Each field specifies a name, reference name, type, and whether the field is reportable or not.
- Layout: Defines the layout of the work item. Fields are represented in the layout using specific controls.
- Workflow: Defines the state transitions. Every work item has its own lifecycle, which is defined as a workflow.

Depending on the customization requirements, fields, layout and workflow can be altered. Here, a new field called Message will be added to our Bug template. Define the Message field under the Fields section, then place a control to represent this field in the layout after the Reason field. Save the Work Item Type (WIT) definition file and run the import command of the witadmin tool. Open the Bug template and observe that our new field appears after the Reason field.

Process Editor

The Process Editor has various options for working with a work item definition file. We can open the WIT directly from the server and edit it; opening the WIT from the server will reflect the changes directly on the server. If you are planning many changes, or plan to work offline, use the 'Export WIT' option to download the WIT file to the local system. Modify the local copy using 'Open WIT from File' and upload it back to the server using 'Import WIT'.
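A hedged reconstruction of the field and control definitions described here; the reference name My.Message and the help text are illustrative choices, not values from the original article:

```xml
<!-- Under the FIELDS section: the new Message field -->
<FIELD name="Message" refname="My.Message" type="String" reportable="dimension">
  <HELPTEXT>Additional message associated with this bug</HELPTEXT>
</FIELD>

<!-- In the Layout section, placed after the control bound to the Reason field -->
<Control FieldName="My.Message" Type="FieldControl" Label="Message" LabelPosition="Left" />
```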
Custom Work Item

- Open the WIT file of the existing type that is best suited to the new work item you want to create. Here, we open the Task work item and create a new custom work item called Work.
- Change the work item name to Work and modify the description.
- This creates a new work item type called Work with the same template as 'Task'.

The New Work Item menu will now display the new work item type.

Custom Fields

The Process Editor can be used to modify our new work item 'Work'. We will remove the 'Reason' and 'Blocked' fields from the existing Task template and add a new field called 'Days'.

Adding the new field:

- Click 'New' in the 'Fields' section.
- Provide the name, type, reference name, help text and reportable attributes for the new field under the Field Definition tab and click 'OK'.
- Navigate to the 'Layout' section, where the new field will replace the 'Reason' and 'Blocked' fields.
- Delete the existing controls by right-clicking each control and selecting 'Delete' from the context menu.
- Right-click the column and select 'New Control'. Bind the new control to our custom field.
- Save the work item definition and create a new work item to observe the new field.

The field can appear as a text field into which the user can enter the required value, a drop-down list from which only pre-selected values can be picked, or a combo-box where you can either enter a value from the keyboard or select from the list. To restrict the field to a list of values:

- Navigate to the 'Fields' section and double-click the custom field to open the Field Definition window.
- Navigate to the 'Rules' tab and select either the AllowedValues or the SuggestedValues rule.
- Either enter the list of values, or bind to a global list already defined at the project collection level to hold the list details.
- Save the WIT definition, open a new work item of type 'Work', and observe the drop-down displayed for the 'Days' field.
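In the WIT XML, the rule chosen in the Rules tab shows up as a child element of the field definition. A rough sketch for the Days field (the reference name and values are invented for illustration):

```xml
<FIELD name="Days" refname="My.Days" type="String" reportable="dimension">
  <HELPTEXT>Number of days of effort</HELPTEXT>
  <!-- ALLOWEDVALUES restricts input to this list; SUGGESTEDVALUES would
       instead produce a combo-box that also accepts typed-in values. -->
  <ALLOWEDVALUES>
    <LISTITEM value="1" />
    <LISTITEM value="2" />
    <LISTITEM value="3" />
  </ALLOWEDVALUES>
</FIELD>
```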
Custom Workflow

You may need to alter the workflow corresponding to an existing work item. We'll take as an example the existing Bug work item. The lifecycle of the bug is New → Approved/Committed/Done/Removed. We want to introduce a new stage after 'New' called 'Triage Approved', so the lifecycle of the work item changes to New → Triage Approved → Approved/Committed/Done/Removed.

- Open the 'Bug' work item using the Process Editor and navigate to the 'Workflow' tab.
- In the flow diagram, drag a State control from the toolbox onto the workflow and rename it 'Triage Approved'.
- Connect the new state with 'New' and the other states using the Transition control from the toolbox. The Transition control appears as an expandable box with two connections, as shown below.
- Double-click the transition to open the transition properties.
- Change the transition so that the only path out of 'New' points to the new 'Triage Approved' state.
- Now there is only one transition from New, which is to 'Triage Approved'. All other transitions start from 'Triage Approved' and return to it.
- New bugs will follow the changed lifecycle: after a bug is created it will be in the 'New' state, and from 'New' the only possible state change is to 'Triage Approved'.

Custom Work Item Control Development

TFS supports a limited set of controls that can be used in work item customizations. The three most important controls supported for field display are:

- FieldControl: a single-line text field. This is the default control used for integer, string or double data types.
- DateTimeControl: a date field that displays a date-picker for quick date selection.
- HtmlFieldControl: an HTML field with rich-text support.

Your project requirements may call for a more complex control than these. In this section, we will look at how to create such a custom control.
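Behind the scenes, the Process Editor is editing the WORKFLOW section of the WIT XML. A rough, hedged sketch of the new state and the redirected transition (the reason text is invented, and the unchanged states and transitions are elided):

```xml
<WORKFLOW>
  <STATES>
    <STATE value="New" />
    <STATE value="Triage Approved" />
    <!-- remaining states (Approved, Committed, Done, Removed) unchanged -->
  </STATES>
  <TRANSITIONS>
    <!-- The only transition out of New now targets the new state -->
    <TRANSITION from="New" to="Triage Approved">
      <REASONS>
        <DEFAULTREASON value="Approved by triage" />
      </REASONS>
    </TRANSITION>
    <!-- All other transitions start from (or return to) Triage Approved -->
  </TRANSITIONS>
</WORKFLOW>
```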
Any new control used for work item customization needs two versions: a control to support Web access, built using jQuery and HTML, and a Visual Studio edition of the control, developed using .NET code.

Web Access Control

The Web access version of a control is created using HTML tags and jQuery widgets. It can also be based on one of the core TFS controls: TFS offers many core controls which are not exposed as work item controls. We will first create a control using HTML tags, and then a control based on a core TFS control.

HTML Control

TFS has a built-in field type called PlainText, which supports more than 255 Unicode characters. We can use either the FieldControl or the HtmlFieldControl to represent a PlainText field, but neither is ideal. A PlainText field is meant to capture more than 255 characters of information, and displaying such long text in a single line is not always appropriate: FieldControl displays only a single-line text box, while HtmlFieldControl has a pre-defined minimum height and displays at around a 15-line height. We will therefore create a new custom control based on a TextArea, which can be used for PlainText fields.

Create the JavaScript file

Web access controls are based on jQuery and JavaScript. Start by defining the new control as a new module in TFS. The module declaration takes:

- The name of the new module: TFS.WorkItemTracking.Controls.TextAreaControl
- The dependent module list: modules such as the work item tracking controls, TFS core, and the common UI controls.
- The module definition: we can define the control logic as a separate method or as an anonymous method.

Now, define a constructor for the new module, which in turn calls the base constructor. Next, implement the new control logic by inheriting from WorkItemControl, and define the various methods required to handle the new control logic inside the inherit block. Define an init method to handle the initialization of the control.
The control itself is based on the HTML TextArea element. Define bind and unbind events to handle changes in the control value. The bind event registers the event handler for the change event (_onChange) and binds the data from the underlying field to the control; unbind removes the event-handler binding. this._getField() refers to the bound field, i.e. the underlying database field.

Define the change event handler to capture the new control value and assign it to the corresponding field. this._control[0] refers to the TextArea defined in the init section.

Now, define the invalidate method, which is triggered whenever the underlying field changes due to other control logic or a state change; it gets the updated value of the field and binds it to the control. Finally, register the module as a work item custom control using the TFS.WorkItemTracking.Controls.registerWorkItemControl() method.

Create the manifest file

With the control logic implemented in JavaScript, we still need to define a manifest file, which specifies the module, the namespace, the vendor, and a website for more information. The manifest file is an XML file.

Create the minified JavaScript file

Every Web access control has a debug version of the JavaScript that can be used for future enhancements, and a 'minified' version (reduced in size by removing anything not essential for execution) that is used by TFS Web access at runtime. We can create the minified version of the JavaScript using various tools; here, the Web essence Minifier extension for Visual Studio has been used.
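A sketch of the overall shape of such a control, with a stub standing in for the real TFS web-access framework so the code is self-contained and runnable; the real module registration, field API and jQuery wiring differ in detail:

```javascript
// Stub of the TFS module system, only so this sketch is self-contained.
// In a real control, TFS.module and registerWorkItemControl come from
// the TFS web-access scripts.
var TFS = {
  modules: {},
  module: function (name, deps, definition) {
    this.modules[name] = definition();
  }
};

TFS.module("TFS.WorkItemTracking.Controls.TextAreaControl",
  ["TFS.WorkItemTracking.Controls", "TFS.Core", "TFS.UI.Controls.Common"],
  function () {
    // field is a stand-in for the object returned by this._getField()
    function TextAreaControl(field) {
      this._field = field;
      this._value = "";          // stands in for the <textarea> contents
    }
    // bind: copy the underlying field value into the control; the real
    // control also attaches the _onChange handler here
    TextAreaControl.prototype.bind = function () {
      this._value = this._field.getValue();
    };
    // _onChange: push the edited control value back into the field
    TextAreaControl.prototype._onChange = function (newValue) {
      this._value = newValue;
      this._field.setValue(newValue);
    };
    // invalidate: re-read the field after it changed elsewhere
    TextAreaControl.prototype.invalidate = function () {
      this._value = this._field.getValue();
    };
    return { TextAreaControl: TextAreaControl };
  });
```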
Create the Package

We now have three files associated with the new custom control:

- the JavaScript file
- the minified version of the same JavaScript
- the manifest file

Compress all of these files into a single .zip file to create the custom control package, which can be distributed and installed in TFS Web access.

Deployment

- Open Team Web Access and navigate to the root admin section of the site, either by selecting the gear symbol in the top-right corner or by using the admin link directly.
- Click the 'Extensions' tab and click 'Install'.
- Select the zip file in the 'Install new extension' pop-up window.
- Click 'OK' to complete the installation process.
- Click the 'Enable' link to enable the new control for use.

Work Item Customization with TextAreaControl

Download the 'Bug' template using the witadmin command and open the XML file in an XML editor to customize the WIT definition. Add a new field to the Bug template, then define its control using our new custom control, TextAreaControl. Save the WIT definition file and import it to the TFS server. Now, create a new bug in the updated project and observe the new control.

Create a New Web Access Control Based on a TFS Core Control

TFS has several core components, defined in the TFS.UI.Controls.Common script file, which are not exposed at the work item level. When we create new custom controls, we can base them on these core controls, which come with defined methods and proper styles. As an example, we will create a new CheckboxListControl based on the TFS core control CheckboxList. The module declaration and constructor definition are specified as before: we define our new control module as CheckboxListControl with the dependent modules TFS.WorkItemTracking.Controls, TFS.Core and TFS.UI.Controls.Common. We then provide the init method to define the control based on the existing core control TFS.UI.Controls.Common.CheckboxList.
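Mirroring the CheckboxListControl declaration shown later in the article, the control declaration binding the Message field to the new custom control would look something like this (the field reference name is illustrative):

```xml
<!-- The Type names the custom control registered in TFS Web access -->
<Control FieldName="My.Message" Type="TextAreaControl" Label="Message" LabelPosition="Left" />
```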
We then change the character used for delimiting the selected items, and the handler for the change event. We will use a comma as the delimiter: the change event handler gets the checked values from the CheckboxList and joins them into a single string using commas, which is then assigned to the underlying field.

Next, we define the invalidate method to bind the values to the CheckboxList. getAllowedValues() returns the values bound to the field using the AllowedValues or SuggestedValues option. We use the SuggestedValues option to bind the values, which allows us to select more than one option from the list. The method also retrieves the value from the underlying field and marks the checked items in the list. Define the getControlValue method to return the checked values joined with commas, and finally register the control in the work item tracking control collection.

Package and Deploy the CheckboxListControl

The next stage is to create the minified version of the JavaScript using the minifier discussed earlier, and to define the manifest file for the CheckboxListControl. Package the files into a zip file and deploy it to Web access using the TFS Web access extensions page.

Work Item Customization with CheckboxListControl

Download the WIT definition for 'Bug' using the witadmin command and open the XML file in an XML editor to customize the WIT definition. Add a new field called Days to the Bug template along with suggested values. Now, define the control using our new custom control, CheckboxListControl:

<Control FieldName="My.Days" Type="CheckboxListControl" Label="Days" LabelPosition="Left" />

Save the WIT definition file and import it to the TFS server. Now, create a new Bug in the updated project and observe the new control.

Fallback mechanism

TFS supports different clients such as Team Web Access, Visual Studio and Microsoft Test Manager.
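The comma join/split logic at the heart of the change and invalidate handlers can be isolated into two tiny helpers; these function names are invented for illustration and are not from the article:

```javascript
// control -> field: join the checked items into one comma-delimited string
function checkedValuesToField(checkedValues) {
  return checkedValues.join(",");
}

// field -> control: split the stored string back into the items to check;
// an empty field yields no checked items
function fieldToCheckedValues(fieldValue) {
  return fieldValue ? fieldValue.split(",") : [];
}
```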
We have created the custom controls for Team Web Access, where the fields are displayed using the new control. This new control is not available in Visual Studio, which instead displays an error message. Either we develop a new custom control with the same name for Visual Studio, or we use an existing control as a fallback option. The fallback option requires you to specify a second control type in the control tag, which is used if the first control isn't available. We use the new custom control to display the Message field in Web access and use the fallback option to specify FieldControl for Visual Studio. PreferredType is the attribute that names the preferred control: if the TextAreaControl is not available, whether in Web access or in Visual Studio, the FieldControl is used to display the field. The field is then displayed using FieldControl in Visual Studio.

Visual Studio Control

Next, we develop the work item custom control to support work item management in Visual Studio. Unlike the TFS Web access control, which is deployed on the TFS server, Visual Studio controls need to be installed on each individual user's system. If most users use Web access, things are easier, because we can limit control development to the Web access control with a fallback to a default control for Visual Studio. If more users work in Visual Studio, then we must develop the control to support Visual Studio, and on each deployment of the custom control distribute it to every user for them to install on their system.

Create the Custom Control

Create a class library project in Visual Studio and add a UserControl. Add references to Microsoft.TeamFoundation.WorkItemTracking.Controls and Microsoft.TeamFoundation.WorkItemTracking.Client from the path [drive]:\Program Files\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies to the class library project.
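In the WIT XML, the fallback is expressed by putting the fallback control in Type and the custom control in PreferredType, roughly like this (the field reference name is illustrative):

```xml
<!-- Clients that have TextAreaControl installed use it (PreferredType);
     clients without it, such as Visual Studio here, fall back to FieldControl. -->
<Control FieldName="My.Message" Type="FieldControl" PreferredType="TextAreaControl"
         Label="Message" LabelPosition="Left" />
```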
In the last section we created one custom control for Web access, the CheckboxListControl; we will now create the corresponding Visual Studio control. Implement the IWorkItemControl and IWorkItemToolTip interfaces on the user control. Before moving on to the details of the interfaces, let's look at the basic implementation of the custom control. Our control is a checkbox-list control, so we define a CheckedListBox control, initialize its properties in a public constructor, and add the new control to the Controls collection.

The ToolTip and Label properties are declared in the IWorkItemToolTip interface. Label is used for displaying the label and can be implemented as an automatic property. ToolTip defines the tooltip of the control, which normally displays the HelpText associated with the field; associate the field name with the tooltip of the label.

IWorkItemControl declares the following methods and properties to handle the lifetime of the control:

- Clear(): clears the current value of the control.
- InvalidateDatasource(): invoked when there is any change in the field value, to propagate the field change to the control. Get the field values and, before setting the checked items, bind the allowed or suggested values to the control.
- FlushToDatasource(): binds the control value back to the field.
- SetSite(IServiceProvider serviceProvider): allows the platform to provide a ServiceProvider implementation to the control; we are not using it for our custom control.
- ReadOnly: defines whether the control is read-only; again, not used for our custom control.
- WorkItemFieldName: returns the work item field name corresponding to the control.
- WorkItemDatasource: defines the work item data source bound to the control.
- Properties: refers to the collection of properties; this can be used for binding the custom control options and other properties.
The control's events also handle the changes made by the control itself, to avoid redundant invalidation notifications. This completes the core implementation of the CheckboxListControl.

Create the wicc file

We need to create a metadata file, corresponding to the manifest file of our Web control, that defines the assembly and the control. This file is saved with the extension .wicc.

Package and Deploy

Now we need to package the Visual Studio class library as a Visual Studio extension or an MSI to distribute to the users. For testing, we can simply copy the dll and the wicc file to the location UserApplicationData\Local\Microsoft\Team Foundation\Work Item Tracking\Custom Controls\12.0.

Verify the new Custom Control

Open a new work item of type 'Bug' in Visual Studio, which is already bound to the CheckboxListControl as part of our Web access custom control development, and observe the checkbox control inside Visual Studio with the values updated from the Web access control.

Access Control

Access Control at Field Level

Sometimes we need to control access at the field level. This can be implemented using the rules associated with the field. In the following scenario, we have added a read-only constraint for all valid users but not for the administrators: the rule displays the field in read-only mode for all users except those in the Project Collection Administrators group. The pre-defined rule set includes other constraints such as Empty, Frozen, Required, and conditional rules using When and WhenNot; we can use these constraints to restrict field-level access for users.

Access Control at Workflow Level

Sometimes we may need to allow only one specific group to create a work item, while allowing others to edit it once created. This kind of lifecycle-level access privilege can be set at the workflow level. Open the transition properties to define the access restrictions.
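The field-level rule and the workflow-level restriction described here appear in the WIT XML roughly as follows; the group names are the built-in TFS groups, the field is illustrative, and exact placement depends on the template:

```xml
<!-- Field-level: read-only for everyone except collection administrators -->
<FIELD name="Message" refname="My.Message" type="String">
  <READONLY for="[Global]\Team Foundation Valid Users"
            not="[Global]\Project Collection Administrators" />
</FIELD>

<!-- Workflow-level: only administrators may create the work item, i.e.
     perform the initial transition into the New state -->
<TRANSITION from="" to="New"
            for="[Global]\Project Collection Administrators">
  <REASONS>
    <DEFAULTREASON value="New" />
  </REASONS>
</TRANSITION>
```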
We can restrict the creation of a particular work item using workflow transition rules. The transition rule defined in the next figure shows that this specific work item can be created only by Project Collection Administrators and not by Project Collection Build Administrators. Similarly, we can specify which groups are able to update any specific work item using transition-level management of access rights.

Conclusion

It is possible to customize TFS at different levels, beyond the standard process templates. TFS provides standard controls but allows the end user to develop new custom controls in order to meet special requirements. We can develop custom work item controls to improve the usability of both TFS Web access and Visual-Studio-based management of work items.

- [Process Editor]: Microsoft Visual Studio Team Foundation Server 2013 Power Tools
- [Web essence Minifier extension for Visual Studio]: Visual Studio Gallery: The Minifier
- [TFS 2013 WIT Custom Controls]: TFS 2013 Work Item custom Controls
https://www.red-gate.com/simple-talk/dotnet/visual-studio/customizing-team-foundation-server-2013/
We are excited to announce our recent support of .NET Standard 2.0 and ASP.NET Core 2 applications for Raygun Crash Reporting. The update is for developers needing to target the .NET Standard 2 APIs. Our new provider targets both .NET Standard 1.6 and .NET Standard 2.0, so it can be used with both .NET Core 1 and .NET Core 2 applications. At the time of writing, it is just the .NET Core provider and the ASP.NET Core provider that are .NET Core 2 compatible.

Getting started with ASP.NET Core

The easiest way to get started with Raygun and ASP.NET is to install via NuGet. Alternatively, you can edit the csproj file.

- Use NuGet to add Mindscape.Raygun4Net.AspNetCore version 6.0.0.
- Add the following code to your appsettings.json:

"RaygunSettings": { "ApiKey": "PUT_YOUR_OWN_API_KEY_HERE" }

Next, in Startup.cs:

- Add using Mindscape.Raygun4Net.AspNetCore; to your using statements.
- Add app.UseRaygun(); to the Configure method after any other exception-handling methods.
- Add services.AddRaygun(Configuration); to the ConfigureServices method.

Following these steps adds the RaygunAspNetCoreMiddleware into the middleware pipeline and allows the Raygun client to report the exception before any other error handlers take over, for example app.UseExceptionHandler() and app.UseDeveloperExceptionPage().

Looking to the future for Raygun and .NET Core

Raygun's latest .NET Core provider allows the sending of exceptions to Raygun. We will continue to add features (for example, Breadcrumbs) to the new .NET Core provider to bring it up to feature parity with Raygun's other providers. The importance of .NET Core is that it is cross-platform and is the future of .NET development. We at Raygun are moving our own infrastructure over to .NET Core, so watch this space for more updates. Also, if you haven't looked into .NET Core, it's pretty amazing; 2.1 recently shipped and is impressive.
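Putting the Startup.cs steps together, the wiring looks roughly like this; the class shape follows the standard ASP.NET Core 2 template, so treat it as a sketch rather than Raygun's canonical sample:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Mindscape.Raygun4Net.AspNetCore;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        // Configuration carries appsettings.json, including RaygunSettings
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRaygun(Configuration);   // registers the Raygun client
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Added after any other exception-handling middleware, per the
        // provider's instructions, so Raygun reports the exception first.
        app.UseRaygun();
        app.UseMvc();
    }
}
```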
https://raygun.com/blog/net_core_2-0_support/
Tim Bray published 14 Theses on this issue:

1. It is not strictly necessary for namespace documents to exist.
2. Namespaces vary widely in semantic effect.
3. Namespaces have definitive material.
4. It is good for namespace documents to exist.
5. Namespace names should not be relative URI references.
6. Namespace names should not be URNs.
7. The definitive material for a namespace is normally distributed among multiple resources.
8. Content-negotiation is not a sufficiently powerful tool for selecting definitive-material resources.
9. Namespace documents should provide a level of indirection.
10. Namespace names are frequently not dereferenced at run-time.
11. Anyone should be able to write software to process a Web resource.
12. Namespace documents should be human-readable.
13. Namespace documents should not favor the needs of any one application or application class.
14. Namespace documents should not be "schemas".

Paul Cotton
3 of 5
http://www.w3.org/2003/Talks/techplen-tagissue8/slide3-0.html
For a while I have been puzzled by the unsuitability of public and protected - i.e. the default access modifiers in many object-oriented languages - for the design and especially maintenance of APIs. They are so natural in OOP, yet so harmful for anything that needs to stay compatible for more than one version, that I have to ask: why are there no better alternatives?

I've summarized my thoughts in the Clarity of Access Modifiers essay and I'd like to bring it to the attention of LtU readers, as I am sure some of you will know the reason why we have these fuzzy modifiers. Also, some of you may know about languages that offer better alternatives with more clarity. I am looking forward to hearing your opinion. Feel free to comment here or at the page's discussion tab.

I've come to think about public/private/protected as common patterns for capability programming: the API designer can cleanly hide capabilities from the consumer and peek into the consumer's object. There are likely benefits for performance, but, overall, that's it. In this light, their limitations on overriding / late-binding are defensive measures: they help library writers enforce invariants. However, they're only a mechanism; for anything interesting, the burden of proof is on the API writer, and these modifiers just serve as coarse tools for implementing it.

Your focus seems to be on the consumer of the object: how can the consumer safely manipulate (extend) one? At this point, the problem is the lack of rich specifications; namespaces (or access modifiers) are just mechanisms, not revealing the finer requirements. The posting for Liskov's well-earned Turing award is timely: Java's only safety system is a weak type system; it's insufficient for what you want.
Java allows the expression of basic substitutions, such as that your extension doesn't return an int instead of a string, but nothing behavioral (e.g., typestate -- see J. Aldrich, or, really, Liskov's seminal paper!). The solution of subtle and ad-hoc reliance on new modifier interpretations, while perhaps helpful for those stuck with a legacy language, seems like lipstick on a pig, especially relative to modern approaches.

Deep in my mind I am a rationalistic person and I enjoy solutions with flexibility and elegance. On the other hand, right now I am seeking a clueless-proof solution: a solution that will help people make the right decision with as little understanding as possible.

Example: by default, defining the method void sayHello() in Java makes it invisible to external API users (which is good, btw.). However, as soon as your friend tells you to let him access that method, and you start to seek the simplest solution, what will you be advised? Make the method public. Wrong. Well, it works, but it also exposes you to all the maintenance issues described in my essay. Having callable around (a proposed combination of Java's public final), the likelihood that a clueless developer makes the right choice would be much higher. Hopefully.

Re. "Java allows the expression of basic substitutions, such as your extension doesn't return an int instead of a string, but nothing behavioral": this sounds interesting from a rationalistic point of view, however it might be very distant from the problems that millions of Java developers solve daily. Maybe the masses can learn how to describe behaviour as part of their type system in about ten years, but I hope they stop making these mistakes with public and protected sooner.

I'm unclear how capabilities and access modifiers relate, unless the root issue is that languages with access modifiers rarely support closures. Closures are one mechanism for building invariants in controlling capabilities; access modifiers are another.
If you go beyond object capabilities to general language capabilities (from "X can't write or read Y's fields" to "X can't write to field Y"), which I'm starting to think is more in line with how we actually write code, access modifiers are natural mechanisms for securing certain operations. The second part of your sentence suggests that, if OO languages had lambdas, they wouldn't need modifiers. Perhaps, with strenuous encoding, that's true, but what's the point? We need to extend a secure calculus to the full language, and, if we really expect people to write secure code, we should reduce all possible barriers to doing so, including forcing them to write strenuous encodings for notions like "private". There is a (large) chance that current modifiers aren't ideal in terms of language capabilities. However, I'd wager a private instance variable is often closer than "lambda" :) I've been playing with membranes where user policies can control objects or even their fields; this is just a more dynamic, less intrusive, and more advisable form of "private". What's the right way to go? I don't have a user study to say which.

...have unique solutions along these lines - although none address your specific concerns. The LtU discussion on public vs. published interfaces is probably relevant as well. Doesn't CTM have something interesting to say about this? I'll admit I haven't gotten nearly as far with that book as I should...

One of the last references in the "public vs. published" thread talks about Eiffel and its list of types allowed to access a particular attribute. Interesting. I tried to go through an Eiffel tutorial, but it does not look like a solution to my problem. First of all, it looks like in Eiffel there is no way to hide something from a subclass (at least there is an example that overrides more_sig although it is defined as {NONE}). The other problem I see is that the default value for the access modifier is "everyone", which is comfortable, but dangerous.
What I am looking for is a language that would make this explicit. Of course there can still be private, package-private, or similar modifiers; those are behind the scenes and API users don't see them. But forcing each API-visible method to choose one and only one of callable, slot or callback is the holy grail I search for. Eiffel does not seem to help with that. Does anyone have better pointers?

It sounds like you mostly just want 'final' to be the default, with only abstract methods non-final. Then 'public' and 'private' become your 'callable' and 'callback'. Note that this scheme loses the ability to provide default implementations of methods, since nothing is overridable. You might consider adding an 'overridable' keyword for this purpose. Also note that in moving to this scheme, you're going to lose the standard OO inheritance model, which is the reason the default is non-final. In Eiffel you have pre- and post-conditions which overriding methods must respect. This allows you to reason that a method will work even if the methods it relies on are overridden. Java doesn't have pre- and post-conditions, so its inheritance model seems unprincipled: you can override public methods in ways that break invariants needed by base classes, and the language provides no support for detecting this.

Sort of true. There seems to be a 1:1 mapping between the existing Java modifiers and the callable/slot/callback style (with some combinations banned). Although I am suggesting eliminating half of the existing modifiers, I do not think the language would lose any expressive power: any API can be switched to the callable/slot/callback style while keeping its original power.

Pre/post-conditions look like a good idea; my dreamed-of system would just have to enforce the existence of such checks for every overridable method. I am trying to prevent developers from accidentally making a method overridable while acting without thinking (the usual mode of the majority of developers, including me).
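To make the proposal concrete, here is a hedged Java sketch of the one correspondence the thread states explicitly, callable as public final; rendering the slot as protected abstract is this sketch's guess, not a definition from the thread:

```java
// Sketch: an API class using only "callable" and "slot" style members.
public abstract class Greeter {

    // "callable": external users may call it, nobody may override it
    // (maps to Java's public final, as stated above).
    public final String sayHello(String name) {
        return prefix() + name;
    }

    // "slot": subclasses must supply an implementation; external API
    // users never call it directly. protected abstract is an
    // illustrative rendering.
    protected abstract String prefix();
}

class FriendlyGreeter extends Greeter {
    @Override
    protected String prefix() {
        return "Hello, ";
    }
}
```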
Rejecting compilation until each overridable method has at least some pre/post-condition would force them to think about the meaning of their actions immediately, and not after a few future releases, when it is too late and one can only suffer the consequences.

Your proposed translation does not have the same properties as the original. Your default-implementation idiom requires hooking up every abstract method to its default implementation in every concrete class. Worse, in the presence of any abstract method, your scheme prevents extending a concrete class. You need two versions of every class to do standard OOP: the version where things aren't yet hooked up, which you can extend, and the hooked-up version, which you can use. This loses the sub-typing relationship between concrete classes that exists in normal OOP. I agree with your sentiment that making right things obvious and wrong things awkward is a noble goal; I'm just pointing out potentially unwanted consequences of this idea.

My viewpoint is that OOP-style per-method inheritance is bad, and my reasoning is pretty much based on the line of thought of our last two posts: you need precise types (tight pre- and post-conditions) to make it safe, but putting such tight checks everywhere is impractical.

The design of object-oriented subsystems has always entailed a focus on, and proper (and limited) disclosure of, close collaborations between classes within a subsystem (or framework). To this end, it may be useful to consider promised operations.
http://lambda-the-ultimate.org/node/3248
#include <deal.II/base/thread_management.h>

A container for task objects. Allows adding new task objects and waiting for them all together. The task objects need to have the same return type for the called function.

Note that the call to join_all() must be executed on the same thread as the calls that add subtasks. Otherwise, there might be a deadlock. In other words, a Task object should never be passed on to another task for calling the join() method.

Definition at line 1811 of file thread_management.h.

Add another task object to the collection.

Definition at line 1818 of file thread_management.h.

Wait for all tasks in the collection to finish. It is not a problem if some of them have already been waited for, i.e. you may call this function more than once, and you can also add new task objects between subsequent calls to this function if you want.

Definition at line 1831 of file thread_management.h.

List of task objects.

Definition at line 1843 of file thread_management.h.
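A typical usage sketch (not compilable without the deal.II library; the task function f is a placeholder):

```cpp
#include <deal.II/base/thread_management.h>

void f();  // some work to run as a background task (placeholder)

void run_tasks()
{
  dealii::Threads::TaskGroup<void> tasks;

  // operator+= adds another task object to the collection
  for (unsigned int i = 0; i < 4; ++i)
    tasks += dealii::Threads::new_task(&f);

  // Must be called on the same thread that added the subtasks,
  // to avoid the deadlock mentioned above.
  tasks.join_all();
}
```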
https://www.dealii.org/current/doxygen/deal.II/classThreads_1_1TaskGroup.html