Hi Sinisa,
This problem was there earlier, but I already fixed it a week back. Can you try pulling the
latest code and checking again?
-Harikrishna
On 05-Mar-2013, at 1:05 AM, Sinisa Denic <sdenic@peacebellservers.com> wrote:
> There is no return of the randomly set password at the start of a password-enabled virtual machine,
> so the user cannot log in. The VM must be stopped and the password reset again in order to get the
> actual password.
>
> This part of the code is in charge of it... it seems not to be working well; indeed, there is no
> args.password value again.
>
> file ./ui/scripts/instances.js
>
> actions: {
> start: {
> label: 'label.action.start.instance' ,
> ...
> ...
> ...
> complete: function(args) {
> if(args.password != null) {
> alert('Password of the VM is ' + args.password);
> }
> return 'label.action.start.instance';
> }
> ...
>
>
> ----- Original Message -----
>
> From: "Min Chen" <min.chen@citrix.com>
> To: cloudstack-dev@incubator.apache.org, "Rohit Yadav" <bhaisaab@apache.org>, "Sinisa
Denic" <sdenic@peacebellservers.com>
> Cc: shadowsor@gmail.com
> Sent: Thursday, February 14, 2013 9:08:39 PM
> Subject: Re: Password has been reset to undefined
>
> Filed a JIRA issue for this:
>.
> Fix has been checked into 4.1 (Commit
> 07ce770e9711ab5ddfc9382129b35c96aadc7846) and ported to master.
>
> Thanks
> -min
>
>
> On 2/14/13 10:48 AM, "Min Chen" <min.chen@citrix.com> wrote:
>
>> I found the problem for this. The reason is that in UserVmVO class,
>> "password" is defined as "transient", so it is not stored in the database. When
>> we use id to find this user vm from user_vm_view, the password set from
>> reset routine internally is lost. I will file a defect on this and provide
>> a fix today.
>>
>> Thanks
>> -min
>>
>> On 2/14/13 2:13 AM, "Rohit Yadav" <bhaisaab@apache.org> wrote:
>>
>>> Hi Sinisa, thanks for reporting this. Please file this issue on JIRA.
>>> But AFAIK the response generator is supposed to create the response,
>>> the cmd api class should not add the password. I don't remember if it
>>> was an issue, let's ask Min as she did a lot of the response
>>> generating stuff.
>>>
>>> Hi Min, can you check please?
>>>
>>> Regards.
>>>
>>> On Thu, Feb 14, 2013 at 3:36 PM, Sinisa Denic
>>> <sdenic@peacebellservers.com> wrote:
>>>> Sorry I forgot to set cs-dev mailing list in cc in my previous post so
>>>> other users can see.
>>>> I will repeat:
>>>>
>>>> In order to get password on UI after reset password for virtualmachine
>>>> in CS4.1.0-SNAPSHOT
>>>> I had to make this modification:
>>>>
>>>> --- a/api/src/org/apache/cloudstack/api/command/user/vm/ResetVMPasswordCmd.java
>>>> +++ b/api/src/org/apache/cloudstack/api/command/user/vm/ResetVMPasswordCmd.java
>>>> @@ -114,6 +114,7 @@ public class ResetVMPasswordCmd extends BaseAsyncCmd {
>>>>          if (result != null){
>>>>              UserVmResponse response = _responseGenerator.createUserVmResponse("virtualmachine", result).get(0);
>>>>              response.setResponseName(getCommandName());
>>>> +            response.setPassword(password);
>>>>              this.setResponseObject(response);
>>>>          } else {
>>>>              throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to reset vm password");
>>>>
>>>> Do you maybe know where the problem could reside, as I spent a few days
>>>> on it :(?
>>
>
In the Robot Framework user guide there is a section that describes how to pass variable files, and also how to pass arguments to them if needed.
Example:

pybot --variablefile taking_arguments.py:arg1:arg2

where taking_arguments.py contains:

IP_PREFIX = arg1

Running this fails with:

NameError: name 'arg1' is not defined
The only way to use variables in an argument file using the
--variablefile filename.py:arg1:arg2 syntax is to have your variable file implement the function
get_variables. This function will be passed the arguments you specify on the command line, and must return a dictionary of variable names and values.
For example, consider the following variable file, named "variables.py":
def get_variables(arg1, arg2):
    variables = {"argument 1": arg1,
                 "argument 2": arg2,
                }
    return variables
This file creates two robot variables, named
${argument 1} and
${argument 2}. The values for these variables will be the values of the arguments that were passed in. You might use this variable file like this:
pybot --variablefile variables.py:one:two ...
In this case, the strings "one" and "two" will be passed to
get_variables as the two arguments. These will then be associated with the two variables, resulting in
${argument 1} being set to
one and
${argument 2} being set to
two. | https://codedump.io/share/lClMjYVBTpdc/1/how-to-use-extra-arguments-passed-with-variable-file---robot-framework | CC-MAIN-2016-44 | refinedweb | 201 | 55.64 |
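A quick way to sanity-check such a variable file is to call get_variables directly in plain Python; the file contents below are just the example from the answer:

```python
# Contents of variables.py -- a variable file that accepts arguments.
def get_variables(arg1, arg2):
    variables = {"argument 1": arg1,
                 "argument 2": arg2,
                }
    return variables

# Simulate what "pybot --variablefile variables.py:one:two" does:
result = get_variables("one", "two")
print(result["argument 1"])  # one
print(result["argument 2"])  # two
```

If this prints the expected values, Robot Framework will create the same variables when the file is passed on the command line.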
Code
Just like last time, the code we're going to discuss is available on GitHub:
Goals
The mirrored memory trick allows exposing pointers to the interior buffer to the outside world, since both data and free space are always contiguous. The goal then is to create an API which makes this easy to use. In default operation, the ring buffer should also grow to accommodate newly written data. For multithreaded use, the ring buffer's size can be locked, at which point the buffer becomes thread safe for one simultaneous reader and writer. For that case, thread safety should be achieved with no locks.
API
The ring buffer is implemented in a class called
MAMirroredQueue:
@interface MAMirroredQueue : NSObject
For reading, there are three methods. One method retrieves how much data is available to be read, one method returns a pointer to the data, and one method advances the pointer:
- (size_t)availableBytes;
- (void *)readPointer;
- (void)advanceReadPointer: (size_t)howmuch;
This way a client can find out how much data it can read, it can access the data, and then when it's finished it can remove that data from the ring buffer by advancing the pointer.
For writing data, the interface is similar. However, instead of a method to query the amount of data, there's a method to simply ensure that the necessary amount of free space is available:
- (BOOL)ensureWriteSpace: (size_t)howmuch;
- (void *)writePointer;
- (void)advanceWritePointer: (size_t)howmuch;
The
ensureWriteSpace: method returns success or failure. When allocation is not locked (the default state), it will always succeed. When allocation is locked (to ensure thread safety), it will succeed only if there is enough free space in the buffer, and return
NO otherwise.
The other two methods in write API are the same as in the read API: one to retrieve the data pointer, and one to advance it once data is written.
With all this talk of locking allocations, we probably want some methods to actually manage that:
- (void)lockAllocation;
- (void)unlockAllocation;
The queue starts out unlocked. If thread-safe operation is needed, the desired amount of space can be allocated by calling
ensureWriteSpace:, and afterwards
lockAllocation ensures that the buffer doesn't get reallocated. If the buffer ever needs to be expanded in the future,
unlockAllocation followed by another
ensureWriteSpace: can be used to accomplish that.
Finally, I also wrote a pair of UNIX-like wrappers around the above functionality:
- (size_t)read: (void *)buf count: (size_t)howmuch;
- (size_t)write: (const void *)buf count: (size_t)howmuch;
These aren't necessary, and are actually inefficient because the API requires copying data into and out of the ring buffer, but an API that shares the semantics of the POSIX
read and
write calls can be nice to have.
Instance Variables
The buffer itself is described by three instance variables:
char *_buf;
size_t _bufSize;
BOOL _allocationLocked;
All of which are, I hope, self explanatory. Note that
_buf is a
char * to allow for easy pointer arithmetic, which will come in handy later. Although
gcc and
clang allow it, technically pointer arithmetic is not allowed on
void * pointers, so
char * is a convenient choice for byte-addressed entities.
In addition to the buffer, we also need a read pointer and a write pointer:
char *_readPointer;
char *_writePointer;
Utility Functions
Page size is important for this code, and it needs to be able to round numbers up and down to a multiple of the page size. I wrote two simple wrappers around mach macros which manage this:
static size_t RoundUpToPageSize(size_t n)
{
    return round_page(n);
}

static void *RoundDownToPageSize(void *ptr)
{
    return (void *)trunc_page((intptr_t)ptr);
}
The first one simply rounds a byte size up to the nearest multiple of page size, and the second one rounds a pointer down to the nearest page boundary.
Code
First up is the
dealloc method. I'm assuming
ARC, so no need to call
super. All it does is free the buffer if it's been allocated:
- (void)dealloc
{
    if(_buf)
        free_mirrored(_buf, _bufSize, MIRROR_COUNT);
}
MIRROR_COUNT is simply a
#define which describes how many mirrored copies to allocate. Interestingly, it's set to
3, not
2 as you might expect, which is why my mirrored allocator supports an arbitrary number of mirrorings instead of just hardcoding two of them. More on the reasoning for this later.
There is no initializer method, as simply having all of the instance variables set to
0 or
NULL is sufficient. The ring buffer starts out empty, and all zeroes describes that just fine.
Next up, we have the
availableBytes method. The first thing it does is subtract the read pointer from the write pointer:
- (size_t)availableBytes
{
    ptrdiff_t amount = _writePointer - _readPointer;
Normally, this would just be the number of data bytes in the buffer. However, in the event that another thread is modifying this buffer while we're computing this value, the pointers could be moving around. If they're just moving around by a read or write amount, then that's no problem. We may end up computing the old value or the new value for the number of available bytes, but either one works fine.
However, the pointers can also be moved around by the size of the buffer. When the read pointer goes into the second mirrored region, it gets reset back into the first one, and the write pointer follows. Because of this, the computed size here may be less than zero (if we see the update to the write pointer but not the read pointer), or may be greater than the buffer size (if we see the read pointer update but not the write pointer). Since we know that the number of available bytes must be between zero and the buffer size, this is easy to correct: just check for these cases, and adjust the amount accordingly:
    if(amount < 0)
        amount += _bufSize;
    else if((size_t)amount > _bufSize)
        amount -= _bufSize;

    return amount;
}
Next up, we have
readPointer. This simply returns the ivar:
- (void *)readPointer
{
    return _readPointer;
}
Next is advanceReadPointer:, which simply adds the amount to the read pointer:
- (void)advanceReadPointer: (size_t)howmuch
{
    _readPointer += howmuch;
However, it's not done here. In the event that this advanced the read pointer past the end of the first mirrored region, both it and the write pointer need to be pulled back. For the read pointer, this is simply a matter of subtracting
_bufSize. Since the write pointer can be modified by both the reader thread (with this method) and the writer thread (in
advanceWritePointer:) simultaneously, it needs to be updated using an atomic operation. I use the built-in
__sync_sub_and_fetch function to do this:
    if((size_t)(_readPointer - _buf) >= _bufSize)
    {
        _readPointer -= _bufSize;
        __sync_sub_and_fetch(&_writePointer, _bufSize);
    }
}
Next comes
ensureWriteSpace:. The first part of this is trivial: find out how much free space is available, by subtracting
[self availableBytes] from the total buffer size, and if the requested amount is less, everything is all set:
- (BOOL)ensureWriteSpace: (size_t)howmuch
{
    size_t contentLength = [self availableBytes];
    if(howmuch <= _bufSize - contentLength)
        return YES;
Otherwise, we know that the free space is not sufficient to meet the request. If allocation is locked, that's it, game over, return
NO:
    else if(_allocationLocked)
        return NO;
If allocation is not locked, then it's time to reallocate the buffer.
The first thing to do is figure out how much memory to allocate, then allocate a new buffer of that size. Recall that, because the mirrored allocator uses virtual memory tricks, it must allocate a multiple of the page size. We need at least
contentLength + howmuch memory, so the new buffer size is found by rounding that number up to the nearest page size:
size_t newBufferLength = RoundUpToPageSize(contentLength + howmuch);
Next, allocate the new buffer:
char *newBuf = allocate_mirrored(newBufferLength, MIRROR_COUNT);
Now that we have the new buffer, the code branches a bit. If there's already an existing buffer, then we're reallocating memory, and have to copy the data out of the old buffer and into the new:
if(_bufSize > 0) {
Once again, we're going to play virtual memory games. Mach provides a
vm_copy function which copies page-aligned memory without actually copying it. Instead, the pages are remapped and set up to copy on write. For this case, where we're going to immediately deallocate the old memory, this means that no data is ever actually copied, and the system just plays some virtual memory tricks to make it look like it was.
We want to copy starting from the read pointer, but because everything has to be page-aligned, the copy has to start at the beginning of the page containing the read pointer:
char *copyStart = RoundDownToPageSize(_readPointer);
Likewise, the length needs to be a multiple of the page size. Starting from
copyStart, we need to copy
_writePointer - copyStart bytes, but this needs to be rounded up fit the page size:
size_t copyLength = RoundUpToPageSize(_writePointer - copyStart);
Now that this is set, we can "copy" this data into the new buffer:
vm_copy(mach_task_self(), (vm_address_t)copyStart, copyLength, (vm_address_t)newBuf);
Now that the data is copied, we need to compute the location of the new read pointer. We copied additional bytes by rounding
_readPointer down to
copyStart. The new read pointer is equal to
newBuf plus that number of additional bytes:
char *newReadPointer = newBuf + (_readPointer - copyStart);
This spot was particularly troublesome for me when I was developing the code, so I tossed in an assert to make sure that it failed early and loudly:
if(*newReadPointer != *_readPointer) abort();
Now we can free the old buffer and reassign the read pointer:
    free_mirrored(_buf, _bufSize, MIRROR_COUNT);
    _readPointer = newReadPointer;
The write pointer is set to equal the read pointer plus the previously computed content length:
    _writePointer = _readPointer + contentLength;
}
For the case where no previous buffer exists, the code is simple: just set the read and write pointer to the beginning of the new buffer:
else
{
    _readPointer = newBuf;
    _writePointer = newBuf;
}
The new buffer is allocated, data is copied if necessary, and now all that remains is to set the
_buf and
_bufSize ivars and then return
YES to the caller:
_buf = newBuf;
_bufSize = newBufferLength;
return YES;
}
Next up, the
writePointer method, which is once again just a simple accessor:
- (void *)writePointer
{
    return _writePointer;
}
The
advanceWritePointer: method is also simple, performing an atomic add of
_writePointer:
- (void)advanceWritePointer: (size_t)howmuch
{
    __sync_add_and_fetch(&_writePointer, howmuch);
}
Note that, unlike
advanceReadPointer:, this method doesn't need any checks to wrap pointers back to the first mirrored data section. The
advanceReadPointer: method handles wrapping both read and write pointers. Since no wrapping occurs here, the write pointer can often sit in the second mirrored data section for long periods of time, but that's perfectly fine.
By having the write pointer always be ahead of the read pointer, this code avoids an annoying ambiguity. A ring buffer which uses read and write pointers usually suffers from a problem when the read and write pointers are equal. There's no simple way to distinguish between the buffer being empty (where both pointers are equal because there is no data between them) and the buffer being full (where the pointers are equal because there's no free space between them).
Ring buffer implementations typically avoid this either by prohibiting the buffer from becoming completely full, and instead defining "full" as being one unit less than true full capacity, or by using a pointer plus a count rather than using two pointers.
Neither alternative is attractive in this case. Having the buffer be artificially one byte too small is particularly painful when playing crazy virtual memory games, since it's likely that code using this ring buffer will want to deal in whole pages as well, and by losing a single byte out of the buffer, it essentially loses an entire 4kB page. Using a pointer and a count makes achieving lockless thread safety much more difficult, if not completely impossible, since the write pointer becomes a derived value computed from two other values, both of which get modified by the read thread, and which can be momentarily inconsistent with each other as seen from another thread when both are updated simultaneously.
The virtual memory games played with mirrored allocation give a third, better way: simply use two pointers, and express "full" by having the write pointer equal the read pointer plus the buffer size. It's natural, it works great due to the mirrored allocation, and it's easy to deal with.
Next up are the allocation locking methods. These simply manipulate the
_allocationLocked ivar. Nothing else needs to be done in these methods, since they just modify the behavior of
ensureWriteSpace:. Here's the code:
- (void)lockAllocation
{
    _allocationLocked = YES;
}

- (void)unlockAllocation
{
    _allocationLocked = NO;
}
Next, we have the UNIX-like compatibility wrappers. These help illustrate how the more primitive, direct API is used. The
read:count: method figures out how much to read using
availableBytes, copies data out from
readPointer, then calls
advanceReadPointer: to mark the data as read:
- (size_t)read: (void *)buf count: (size_t)howmuch
{
    size_t toRead = MIN(howmuch, [self availableBytes]);
    memcpy(buf, [self readPointer], toRead);
    [self advanceReadPointer: toRead];
    return toRead;
}
The story for
write:count: is a little more complex, as it behaves differently depending on whether allocation is locked. If allocation is locked, then it only writes as much data as will fit in the buffer's remaining space. Otherwise, it uses
ensureWriteSpace: to grow the buffer to the appropriate size, if needed:
- (size_t)write: (const void *)buf count: (size_t)howmuch
{
    if(_allocationLocked)
        howmuch = MIN(howmuch, _bufSize - [self availableBytes]);
    else
        [self ensureWriteSpace: howmuch];
The rest is straightforward. It copies the data into
writePointer, advances the write pointer, and returns the amount of data written:
    memcpy([self writePointer], buf, howmuch);
    [self advanceWritePointer: howmuch];
    return howmuch;
}
And that completes the implementation of the mirrored queue.
Thread Safety
One of the goals of this implementation is to be thread safe, for the case where there is one reader thread and one writer thread. The above code achieves this, but it's not entirely clear why it's safe. There are no locks, and the read pointer doesn't even use atomic operations to update.
First, note that the code is only thread safe for the case when allocation is locked. That means that all of the tricky reallocation code in
ensureWriteSpace: doesn't come into play. This is good, because it would be incredibly difficult to make that code thread safe without locking. Given that, we can consider
_buf and
_bufSize to be constant. The only variables which could be potentially modified by one thread while simultaneouly being read by another thread are
_readPointer and
_writePointer.
It's easiest to consider this as two separate cases. First, the reader thread needs to be correct even when the writer thread is modifying these values. Second, the writer thread needs to be correct even when the reader thread is modifying these values. If both hold, then the entire thing is correct.
Let's look at the first case, of making sure the reader thread is correct in the face of modifications from the writer thread. The writer thread only ever modifies
_writePointer. The only place where the reader thread depends on the value of
_writePointer is in
availableBytes:
ptrdiff_t amount = _writePointer - _readPointer;
This unsynchronized access is perfectly safe. There's a race condition in that it's not certain whether the read thread will see the old or new value of
_writePointer. However, it doesn't matter.
_writePointer can only increase, and the number of available bytes can likewise only increase. If it sees the old value, it still computes a number of available bytes that's correct, just slightly out of date. If it sees the new value, then so much the better. Therefore, the reader is safe.
Let's look at the writer's safety in the face of changes from the reader now. The reader can alter both pointers, so the analysis is a little more complex. The writer code also calls
availableBytes, a method nominally on the reader side, so that method has to be safe for both.
The only way that the reader thread can alter
_writePointer is with this line:
__sync_sub_and_fetch(&_writePointer, _bufSize);
Because of the mirrored structure of the underlying buffer, both the old and new values of
_writePointer are correct here.
availableBytes can deal with either value, and will return the right answer in either case. Likewise, when
writePointer returns the value, it doesn't matter whether it's the old or new value, as they are both equivalent in terms of what happens when data is written to them. Finally,
advanceWritePointer: is safe, since it also uses a
__sync builtin to modify
_writePointer, ensuring that both updates will be applied in some order, and the particular order is not important.
The only place where the writer thread uses
_readPointer is in
availableBytes. Just like the corresponding case when the writer thread modifies its pointer and the reader calls
availableBytes, it's safe for the reader thread to modify
_readPointer while the writer is calculating
availableBytes. Advancing the read pointer decreases the number of available bytes, which increases the amount of write space. If the writer thread sees the old value for
_readPointer here, it computes the older, smaller amount of write space, which is still safe, just slightly stale.
The writer thread is safe in the face of changes from the reader thread, and vice versa. Therefore, this code really is thread safe.
Triplicate
I promised that I would explain why
MAMirroredQueue allocates three mirrored copies of its buffer, and now is the time.
Normally, two copies is enough for the contiguous ring buffer. However, remember that this implementation is a little odd, even for a mirrored ring buffer, in using separate read and write pointers and allowing the write pointer to sit in the second mirrored area. This enables lockless thread safety and use of the full buffer without the strange ambiguity when the two pointers are equal. However, it also requires the existence of a third mirrored area.
The need is extremely rare, but it can happen. To start with, we need a buffer where the data portion has wrapped around to the beginning. In a normal ring buffer, that would look like this:
    read
      |
      v
+----------+
|xxx  xxxxx|
+----------+
    ^
    |
  write
In the mirrored ring buffer, it instead looks like this:
    read
      |
      v
+----------+----------+
|xxx  xxxxx|xxx  xxxxx|
+----------+----------+
               ^
               |
             write
So far so good, still. The write pointer is valid, and writing data into the empty area is correct. Now, let's say there are separate reader and writer threads manipulating this buffer simultaneously. The reader thread moves the read pointer up:
            read
              |
              v
+----------+----------+
|  x       |  x       |
+----------+----------+
               ^
               |
             write
The next step for the reader thread is to move both read and write pointers down into the first mirrored area. But! Before that can happen, let's say that the writer suddenly goes to write a bunch of data. (Remember, in the world of preemptive threading, the two threads can run in all sorts of weird orders.) Before the read thread can move any pointers around, the write thread computes the available space, writes data into the buffer, and crashes:
            read
              |
              v
+----------+----------+
|  x       |  xXXXXXXX|XX
+----------+----------+
                         ^
                         |
                       write
The writer wrote off the end of the second mirrored segment! It saw a large amount of free space available, which is correct, but the trouble is that the end of this free space can only be accessed from a write pointer residing in the first mirrored segment. Since we allow the write pointer to hang out in the second mirrored segment, there's the possibility for trouble. This is a brief race window requiring very specific circumstances to trigger, but it can happen, and it needs to be protected against.
Fortunately, this problem is easy to solve by simply allocating a third mirrored segment. In that case, the sequence looks like this:
    read
      |
      v
+----------+----------+----------+
|xxx  xxxxx|xxx  xxxxx|xxx  xxxxx|
+----------+----------+----------+
               ^
               |
             write

            read
              |
              v
+----------+----------+----------+
|  x       |  x       |  x       |
+----------+----------+----------+
               ^
               |
             write

            read
              |
              v
+----------+----------+----------+
|XXxXXXXXXX|XXxXXXXXXX|XXxXXXXXXX|
+----------+----------+----------+
                         ^
                         |
                       write
With the extra mirrored region at the end, the surplus data is written to a safe spot and everything works correctly. At this point, the read thread will jump in, shift the pointers down, and life continues as usual.
Conclusion
That wraps up our exploration of a mirrored-memory ring buffer. I hope the journey was enlightening. Oh, and the source code is all available under a standard BSD license in case you have a use for it in your own apps.
Come back next time for another exciting episode. Friday Q&A is, as always, driven by reader input, so send in your ideas for topics in the meantime.
Visual Studio debugger cannot step into QtWebkit.dll code
I'm trying to debug an issue I'm having with launching windows from JavaScript in a web view. However, I cannot step into any QtWebkit.dll code from the Visual Studio debugger using Qt 4.8.5. For example, a simple app:
@
#include <QApplication>
#include <QWebView>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QWebView webView;
    webView.load(QUrl(""));
    webView.show();

    return app.exec();
}
@
I can step into QApplication's, QUrl's, and QString's constructors. I cannot, however, step into QWebView::load or any other QtWebkit.dll code. I've tried both the pre-built binaries from qt-project.org and my own libraries built from source.
Hi,
You can't step into compiled code that was built without debug information. Use the source code to build a debug version of QtWebKit.
"Abstraction is layering ignorance on top of reality." -- Richard Gabriel
Directory structure
The Nim project's directory structure is:
Bootstrapping the compiler
Note: Add . to your PATH so that koch can be used without the ./.
Compiling the compiler is a simple matter of running:
nim c koch.nim
koch boot -d:release
For a debug version use:
nim c koch.nim
koch boot
Developing the compiler
To create a new compiler for each run, use koch temp:
koch temp c test.nim
koch temp creates a debug build of the compiler, which is useful to create stacktraces for compiler debugging.
You can of course use GDB or Visual Studio to debug the compiler (via --debuginfo --lineDir:on). However, there are also lots of procs that aid in debugging:
# dealing with PNode:
echo renderTree(someNode)
debug(someNode) # some JSON representation

# dealing with PType:
echo typeToString(someType)
debug(someType)

# dealing with PSym:
echo symbol.name.s
debug(symbol)

# pretty prints the Nim ast, but annotates symbol IDs:
echo renderTree(someNode, {renderIds})

if `??`(conf, n.info, "temp.nim"): # only output when it comes from "temp.nim"
  echo renderTree(n)

if `??`(conf, n.info, "temp.nim"): # why does it process temp.nim here?
  writeStackTrace()
These procs may not already be imported by the module you're editing. You can import them directly for debugging:
from astalgo import debug
from types import typeToString
from renderer import renderTree
from msgs import `??`
The compiler's architecture
Nim uses the classic compiler architecture: A lexer/scanner feeds tokens to a parser. The parser builds a syntax tree that is used by the code generators. This syntax tree is the interface between the parser and the code generator. It is essential to understand most of the compiler's code.
Semantic analysis is separated from parsing.
Bisecting for regressions
koch temp returns 125 as the exit code in case the compiler compilation fails. This exit code tells git bisect to skip the current commit:
git bisect start bad-commit good-commit
git bisect run ./koch temp -r c test-source.nim
You can also bisect using custom options to build the compiler; for example, if you don't need a debug version of the compiler (which runs slower), you can replace ./koch temp by an explicit compilation command, see Rebuilding.
Runtimes
Nim has two different runtimes, the "old runtime" and the "new runtime". The old runtime supports the old GCs (markAndSweep, refc, Boehm), the new runtime supports ARC/ORC. The new runtime is active when defined(nimV2).
Coding Guidelines
- We follow Nim's official style guide, see nep1.html.
- Max line length is 100 characters.
- Provide spaces around binary operators if that enhances readability.
- Use a space after a colon, but not before it.
- [deprecated] The old runtime's garbage collectors need some assembler tweaking to work. The default implementation uses C's setjmp function to store all registers on the hardware stack. It may be necessary for a new platform to replace this generic code with some assembler code.
Runtime type information
Note: This section describes the "old runtime".
Runtime type information (RTTI) is needed for several aspects of the Nim programming language:
- Garbage collection: The old GCs use the RTTI for traversing arbitrary Nim types, but usually only the marker field, which contains a proc that does the traversal.
- Complex assignments: Sequences and strings are implemented as pointers to resizeable buffers, but Nim requires copying for assignments. Apart from RTTI, the compiler also generates copy procedures as a specialization.
We already know the type information as a graph in the compiler. Thus we need to serialize this graph as RTTI for C code generation. Look at the file lib/system/hti.nim for more information.
Magics and compilerProcs
The system module contains the part of the RTL which needs support by compiler magic. The C code generator generates the C code for it, just like any other module. However, calls to some procedures like addInt are inserted by the generator. Therefore there is a table (compilerprocs) with all symbols that are marked as compilerproc.

Lambda lifting

A closure is lowered into a (proc, env) pair. For example:

type
  Env = ref object
    x: int # data

proc anon(y: int, c: Env): int =
  return y + c.x

proc add(x: int): tuple[prc, data] =
  var env: Env
  new env
  env.x = x
  result = (anon, env)

Nesting makes things more interesting:

proc add(x: int): proc (y: int): proc (z: int): int {.closure.} =
  return lambda (y: int): proc (z: int): int {.closure.} =
    return lambda (z: int): int =
      return x + y + z

var add24 = add(2)(4)
echo add24(5) #OUT 11
This should produce roughly this code:
type EnvX = ref object x: int # data EnvY = ref object y: int ex: EnvX proc lambdaZ(z: int, ey: EnvY): int = return ey.ex.x + ey.y + z proc lambdaY(y: int, ex: EnvX): tuple[prc, data: EnvY] = var ey: EnvY new ey ey.y = y ey.ex = ex result = (lambdaZ, ey) proc add(x: int): tuple[prc, data: EnvX] = var ex: Env!
Notes on type and AST representation
To be expanded.
Integer literals
In Nim, there is a redundant way to specify the type of an integer literal. First of all, it should be unsurprising that every node has a node kind. The node of an integer literal can be any of the following values:
nkIntLit, nkInt8Lit, nkInt16Lit, nkInt32Lit, nkInt64Lit, nkUIntLit, nkUInt8Lit, nkUInt16Lit, nkUInt32Lit, nkUInt64Lit
On top of that, there is also the typ field for the type. It the kind of the typ field can be one of the following ones, and it should be matching the literal kind:
tyInt, tyInt8, tyInt16, tyInt32, tyInt64, tyUInt, tyUInt8, tyUInt16, tyUInt32, tyUInt64
Then there is also the integer literal type. This is a specific type that is implicitly convertible into the requested type if the requested type can hold the value. For this to work, the type needs to know the concrete value of the literal. For example an expression 321 will be of type int literal(321). This type is implicitly convertible to all integer types and ranges that contain the value 321. That would be all builtin integer types except uint8 and int8 where 321 would be out of range. When this literal type is assigned to a new var or let variable, it's type will be resolved to just int, not int literal(321) unlike constants. A constant keeps the full int literal(321) type. Here is an example where that difference matters.
proc foo(arg: int8) = echo "def" const tmp1 = 123 foo(tmp1) # OK let tmp2 = 123 foo(tmp2) # Error
In a context with multiple overloads, the integer literal kind will always prefer the int type over all other types. If none of the overloads is of type int, then there will be an error because of ambiguity.
proc foo(arg: int) = echo "abc" proc foo(arg: int8) = echo "def" foo(123) # output: abc proc bar(arg: int16) = echo "abc" proc bar(arg: int8) = echo "def" bar(123) # Error ambiguous call
In the compiler these integer literal types are represented with the node kind nkIntLit, type kind tyInt and the member n of the type pointing back to the integer literal node in the ast containing the integer value. These are the properties that hold true for integer literal types.
n.kind == nkIntLit n.typ.kind == tyInt n.typ.n == n
Other literal types, such as uint literal(123) that would automatically convert to other integer types, but prefers to become a uint are not part of the Nim language.
In an unchecked AST, the typ field is nil. The type checker will set the typ field accordingly to the node kind. Nodes of kind nkIntLit will get the integer literal type (e.g. int literal(123)). Nodes of kind nkUIntLit will get type uint (kind tyUint), etc.
This also means that it is not possible to write a literal in an unchecked AST that will after sem checking just be of type int and not implicitly convertible to other integer types. This only works for all integer types that are not int. | https://nim-lang.github.io/Nim/intern.html | CC-MAIN-2021-39 | refinedweb | 1,302 | 64.2 |
save code problem - Hibernate
hibernate code problem String SQL_QUERY =" from Insurance...: "
+ insurance. getInsuranceName());
}
in the above code,the hibernate where clause was used and then it was iterate to fetch the values.
but my
Hibernate save()
This tutorial explain how save() works in hibernate
Problem in updating query in Hibernate - Hibernate
Problem in updating query in Hibernate Hi,
I have used a query.... Bean class is created for this set collection as well but still the values...,
Please visit the following link: class.
How to save database data into excelsheet using hibernate
How to save database data into excelsheet using hibernate How to save database data into excelsheet using hibernate firstExample not inserting data - Hibernate
hibernate firstExample not inserting data hello all ,
i followed... problem is data is not inserting into DB even though the program is executed... for more information.
Thanks.
How to save value in JSP
How to save value in JSP Employee Name Time-IN Time-OUT... 324 2012-12-12
save
i want to save dis value jsp to action ...how can i get all value ..and store..how can its values goes
image save to folder in java
image save to folder in java Hi,
I am working with java. In my application i want to give facility to user to add and change image. I use open... to overcome from that problem
Thanks and Regrads
Pushpa sheela
save data DAO
save data DAO how to save data in db
package com.tcs.ignite.dao;
import com.tcs.ignite.bean.User;
import...,Email_id,Password,Gender,Hobby,City) values('" + user.getName
Text field save as word file
Text field save as word file Dear experts At run how to save set of text field contains text into single Word file. To save in our desktop computer. Solve my problem
Eclipse hibernate problem
Eclipse hibernate problem hie..i've just started a basic hibernate application using eclipse helios.I have added the jars,configured the xml files...;Hibernate Eclipse Integration
Hibernate Tutorials
uploading problem
("insert into
filePath(fileId,filePath,fileName,fileType)
values("+id+",'"+fileP...();
}
}
}
}
%>
my problem...:
firstly....
then problem solved...
bt real problem is when i upload files fusing mozilla
How to save data to excel with a 2.1 work sheet format without changing it?
and save it to excel file then read it again .. the problem is , when data...How to save data to excel with a 2.1 work sheet format without changing... ) type , if i read it as it is it throughs an exceeption so i should save that file
Hibernate : Session Save
In this section we will discuss how session.save() works in Hibernate
server problem - Hibernate
server problem dear sir please give me best soloution how run hibernate and spring application on jboss and weblogic10.1.0 sever and compile
thanks
Save a pdf with spring as below code
Save a pdf with spring as below code
c:/report.pdf
All work in the same way, but never save the report to disk.
How to solve this problem? please reply me
How to save data - Swing AWT
How to save data Hi,
I have a problem about how to save data ,but
first before data save to dababase, data must view in jLisit or Jtable... save data from jList and Jtable(in jList or jTable data will be many).Thank's
hibernate
hibernate what is problem of tree loading
save output in multiple text files
save output in multiple text files Hi..I'm getting problem to store the output of this php code into multiple text files. Can anyone sugeest.
Here is my code:
<?php
for($i=1; $i<=100; $i++)
{
$html = file_get
Hibernate save or update method example
Hibernate save or update method example
This tutorial is an example of loading... of the Session object.
The Hibernate's saveOrUpdate method is very useful... assigned is new instance then save
It detects the change in the state
J2ee - Hibernate
J2ee I need to save datas like employee details in database, while saving i am getting exceptioin as org.hibernate.exception.GenericJDBCException... that cannot insert exampleVO into database.. please help to me to solve this problem
Dragging and dropping HTML table row to another position(In Jsp) and save the new position (values) also in database
Dragging and dropping HTML table row to another position(In Jsp) and save the new position (values) also in database Hi members,
I have one Html table in jsp page and i am iterating values (in columns of html table)from
How to Dragging and dropping HTML table row to another position(In Jsp) and save the new position (values) also in database(MySql)?
) and save the new position (values) also in database(MySql)? Hi members,
I have one Html table in jsp page and i am iterating values (in columns of html... position.I want to save the position ( new position) in database(MySql).How
Java - Hibernate
, this type of output.
----------------------------
Inserting Record
Done
Hibernate: insert into CONTACT (FIRSTNAME, LASTNAME, EMAIL, ID) values (?, ?, ?, ?)
Could... =...).
i wll be obliged.
Might be you have deploying problem.
deploy
Full path of image to save it in a folder
Full path of image to save it in a folder Sir ,I am trying to upload a file but only sends file name as parameter.
I want a code that would help me...,picPath) values(?,?) ");
ps.setString(1, title);
ps.setString(2
hibernate annotations
hibernate annotations I am facing following problem, I have created... address_.adno=?
Hibernate: insert into student_tbl (age, sname, sid) values (?, ?, ?)
Hibernate: insert into address_tbl (city, street, sid, adno) values
Java Compilation error. Hibernate code problem. Struts first example - Hibernate
Java Compilation error. Hibernate code problem. Struts first example Java Compilation error. Hibernate code problem. Struts first
How to save and get value from JSP
How to save and get value from JSP Employee Name Time... 324 2012-12-12
save
i want to save dis value jsp to action ...how can i get all value ..and store..how can its values goes
hibernate - Hibernate
hibernate Hai,This is jagadhish
I have a problem while developing...,
As per your problem exception is related to ant.Plz check the lib for Ant.
For read more information:
loop problem - Java Magazine
();
//save student ID after press enter.
//where to save... into student values('"+ID+"','"+name+"')");
}
catch(Exception e){}
break;
case 2
Hibernate advantages and disadvantages
the solution of your problem
on the web. You can even download and learn Hibernate...
and processing the result by hand is lot of work. Hibernate will
save...Hibernate advantages and disadvantages
In this section we will discuss
SQL database parameters should save at where?
SQL database parameters should save at where? Hi,
Currently the information regarding the SQL databases are hard coded into the script,
Assuming... not have any problem, but with this changes.
it meant this file will be read for every
code save word file in 10g database - SQL
code save word file in 10g database I am not having any idea to save the whole word document in Oracle 10g. Please help me. Hi Friend...) values ( ?, ?)");
//pstmt.setString(1, id);
pstmt.setString(1
Hibernate - Hibernate
Hibernate Hai this is jagadhish, while executing a program in Hibernate in Tomcat i got an error like this
HTTP Status 500....
Hopefully this will solve your problem.
Regards
Deepak Kumar
Create Bar Chart with database values
Create Bar Chart with database values
In this section, you will learn how to create a bar chart by retrieving the values from the database. For this purpose... the chart to a file specified in JPEG format. If you want to save the file in PNG
Tomcat installation problem - Hibernate Interview Questions
Tomcat installation problem Hello
Save JRadioButton Text in to database - Java Beginners
Save JRadioButton Text in to database Hello Sir I have Two JaradioButtons
1)Semwise Course
2)Yearwise Course
I want To Save Text from...) values(?)");
st.setString(1,g);
int i=st.executeUpdate
structs2 + hibernate - Hibernate
structs2 + hibernate pls suggest me to how save the pojo object in in hibernate using struts2
Facing Problem to insert Multiple Array values in database - JSP-Servlet
Facing Problem to insert Multiple Array values in database Hai... facing the problem while inserting the data in database.
iam using the MsAccess...
Manoj
Hi friend,
You Have Problem in Data Table and check
Retrieve Value from Table - Hibernate
retrieve values From database using hibernate in web Application. As I new to hibernate I couldn't find solution for this problem.. Can anyone help please.. ...://
save switch data into database - Java Beginners
save switch data into database switch(menu) {
case 1://add a student
System.out.print("Enter student ID: ");
int ID = scan.nextInt...=st.executeUpdate("insert into student values('"+ID+"','"+name+"')");
}
catch
How to upload and save a file in a folder - Java Beginners
How to upload and save a file in a folder Hi
How to upload and save a file in a folder?
for example
Agreement.jsp
File:
So when... in solving the problem :
Image Upload
Insert image from user using and save in database
Insert image from user using and save in database when i am trying to upload a image from user and trying to save into oracle9i database... = connection.prepareStatement("insert into file(file_data) values(?)");
fis = new
Problem in uploading image to to mysql database
have no problem in saving the image in the folder, my problem is it can't save... = connection.prepareStatement("insert into save_image(name, city, image) values...Problem in uploading image to to mysql database Hi, need some help
Problem in uploading image to to mysql database
have no problem in saving the image in the folder, my problem is it can't save...Problem in uploading image to to mysql database Hi, need some help... a user click the submit button the name, city and the image(Save as BLOB) must
java code problem - Java Beginners
java code problem i have created a JTable in Class1 now i need to use the same JTable in another class Class2, to edit some values. i am not getting...*;
import java.awt.event.*;
class Form extends JFrame {
JButton Edit,SAVE Getting Started
Hibernate Getting Started
Hibernate Getting Started
In this Hibernate Getting Started you will quickly download the a Hibernate
running example project and run problem - Java Server Faces Questions
jsp problem where we have to save java bean classes in jsp? ...-application.
the code like this
//to set the java bean values
// to get... the java bean values
// to get the java bean values
Skyline Problem
a side in common with the x axis and non-negative y values
the skyline of S is the line tracing the largest y value from any rectangle in S
The Problem:
Write.... Your method should solve the problem three different ways; each solution saveOrUpdate Method
Hibernate saveOrUpdate Method
In this tutorial you will learn about the saveOrUpdate method in Hibernate.
Hibernate's saveOrUpdate method inserts the new... with the existed identifier. In short this method calls the
save() method
Industrial problem
should directly save to my specified folder. and i wanna use that image in automatic Excetion - Hibernate
the same error that
Hibernate: insert into login (uname, password) values...hibernate Excetion The database returned no natively generated...://
It will be helpful for you
How to save JCombobox Selected Item in to Access Database - Java Beginners
How to save JCombobox Selected Item in to Access Database How to save JCombobox Selected Item in to Access Database Hi Friend,
Try...=stmt.executeUpdate("insert into Student(Course) values('"+st
Hibernate Isolation Query. - Hibernate
Hibernate Isolation Query. Hi,
Am Using HibernateORM with JBOSS server and SQLSERVER.I have a transaction of 20 MB, when it is getting processed... for the problem sql error - Hibernate
hibernate sql error Hibernate: insert into EMPLOYE1 (firstName, lastName, age, p-type, EMP_ID) values (?, ?, ?, 'e', ?)
Exception in thread "main...,
Please visit the following links:
Problem with arraylist
Problem with arraylist I am currently working on a java project..., 0.0911138412309322]
Match is true
The values of cluster1outputrecord.... It seems that problem
Understanding Hibernate <generator> element
Understanding Hibernate <generator> element... about hibernate
<generator> method in detail. Hibernate generator element... and it is required to set the primary key value before
calling save() method.
Here
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/85697 | CC-MAIN-2015-48 | refinedweb | 2,067 | 56.76 |
Securing OCS with ISA Server
Alan Maddison
At a Glance:
- OCS 2007 R2 Edge Server roles and topologies
- Configuring ISA Server in a 3-Leg Perimeter network
- Understanding OCS certificate requirements
- Creating a Web listener and a reverse proxy for OCS
Office Communications Server (OCS) 2007 R2 provides powerful capabilities that allow you to extend your Unified Communications infrastructure to users outside of your organization. This can have tremendous benefits. Functionality such as Web and audio/visual (A/V) conferencing, for example, can improve an organization's responsiveness and effectiveness in its daily business tasks.
If you choose to deploy OCS functionality to remote users and partners, however, you must complete two important steps. First, you will need to deploy OCS Edge Servers according to the best practices defined in the Office Communications Server 2007 R2 Security Guide. Second, you will need to provide reverse proxy access to the Edge Servers. In this article I'll take a look at using ISA Server 2006 with SP1 to secure your OCS deployment.
Edge Servers
To communicate with other OCS infrastructures and to allow remote access to your organization's network, you need to deploy one or more Edge Servers in your perimeter network (also known as a DMZ) so that users outside the firewall can gain secure access to your internal OCS deployment. OCS 2007 R2 includes three Edge Server roles, summarized in Figure 1, which also describes reverse proxy functionality.
Prior to the R2 release of Office Communications Server 2007, Microsoft supported four different Edge Topologies that provided varying levels of sophistication and scalability. In general, the topologies differed in the location of the various Edge Server roles:
- Consolidated Edge Topology
- Single-Site Edge Topology
- Scaled Single-Site Edge Topology
- Multiple Site with a Remote Site Edge Topology
With the release of R2, Microsoft now requires that all roles run on the same server, with scalability provided by hardware load balancers (the Network Load Balancing, or NLB, component available in Windows is not supported). A Consolidated Edge Topology is a cost-effective approach that has the added benefit of being the simplest to deploy and administer. This topology works for all organizations, and for the purposes of this article I'll focus on this design.
An important point to remember is that in addition to changes in OCS topology support, there have been some significant changes in the network topologies and technologies supported by R2. In particular, in R2 the A/V Edge Server role must have a public IP address on the external network interface card (NIC) due to the underlying requirement of the Simple Traversal of UDP through NAT (STUN) protocol.
Though Microsoft has been very adamant about this requirement, many administrators have ignored the documentation and have attempted to implement the OCS Edge Servers in a Network Address Translation (NAT) environment. If you implement NAT in front of your A/V Edge Servers, your users could experience intermittent connectivity problems and may not even be able to establish a connection.
Also note that as of R2, Microsoft no longer supports Destination Network Address Translation (DNAT) in an OCS environment. This means that the firewall or load balancer must be configured for Source NAT (SNAT) before you introduce R2 into your environment. Finally, your Edge Servers should not be members of an Active Directory domain.
If the Edge Servers are not configured correctly, you will only be creating more of a headache for yourself when ISA Server is thrown into the mix. Review, validate and test your Edge configuration before moving on.
Figure 2 3-Leg Perimeter deployment of Consolidated Edge Servers
Getting Ready
To deploy the OCS Edge Topology, I'll focus on a common implementation scenario in which a firewall, ISA Server in this case, is deployed in a 3-Leg Perimeter network; a simple, cost-effective approach that shares core concepts with more complex firewall topologies. In this design, the three legs correspond to the internal network, the perimeter network, and the external (Internet) network. This approach not only allows external user access to the OCS infrastructure, it also supports using ISA Server in a reverse proxy role.
Figure 2 depicts a logical view of an ISA Server 3-Leg implementation. Before beginning to configure ISA Server, it is a good idea to have already laid out the IP addressing and Fully Qualified Domain Name (FQDN) mappings for your Edge Servers and ISA Server. This will make the configuration process a lot simpler and quicker.
Figure 3 shows an example of a suitable check list. Note that I have used a private IP address range for informational purposes only; you will have to use public IP addresses.
Figure 4 shows the IP addressing information for the ISA Server, including the IP address used to publish (reverse proxy) the OCS Address Book and Content Web site used by the Web Conferencing Server.
In a Consolidated Edge Deployment, there is typically a single physical server with two NICs, one for internal traffic and one for external traffic. The NIC connected to the corporate network (internal) will have an IP address from your existing internal IP address range. The NIC that connects to remote users (external) will use at least one public (fully routable) IP address for the A/V Edge Server. Although not strictly necessary, it is simpler to also assign public IP addresses to the other Edge roles, as in Figure 4.
You will also need to create unique subnets for your public IP address space. ISA Server will need two IP addresses on its external interface (one of which will be used for the reverse proxy), and you will need four public IP addresses for your perimeter network (three for the Edge Server roles and one for the ISA Server perimeter interface). As an example, this is shown in Figure 3 as a 29-bit subnet mask (255.255.255.248), which gives you six usable IP addresses per subnet.
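The subnet arithmetic behind that statement is easy to sanity-check in code. This sketch uses Python's standard ipaddress module; the 192.168.20.0 range is a placeholder, just as the article's Figure 3 uses private addresses for illustration only.

```python
import ipaddress

# A /29 (255.255.255.248) carves out a block of 8 addresses; subtracting
# the network and broadcast addresses leaves 6 usable host addresses.
perimeter = ipaddress.ip_network("192.168.20.0/29")  # placeholder range
usable = list(perimeter.hosts())

print(len(usable))            # 6 usable addresses per /29 subnet
print(usable[0], usable[-1])  # first and last assignable hosts
```

Running the same check against your real public allocation is a quick way to confirm you have enough addresses for the three Edge roles plus the ISA Server perimeter interface.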
If you have been allocated only a small number of public IP addresses, you can use network address translation for the Access Edge Server and the Web Conferencing Edge Server roles and connect the A/V Edge Server directly to the Internet. However, this approach is a little less secure in that you are bypassing the packet inspection capabilities of ISA Server and would require a third network interface card for your Edge Server.
Configuring ISA Server 2006
To configure the 3-Leg Perimeter, you need to launch the ISA Server management console and select Networks under the Configuration node in the left-hand pane, as shown in Figure 5. Next, click on the Templates tab in the right-hand pane and select the 3-Leg Perimeter template to start the Network Template Wizard.
This process will erase any existing configuration and rules, so note that if you are not using a new system, you should take care to export the configuration, an option offered by the wizard on the second screen. Clicking Next on the second screen begins the configuration process by asking you to input the IP address ranges that define your internal network.
Figure 5 Selecting 3-Leg Perimeter template
You have a number of options to help define the internal address space. If your organization is small, simply adding the network adapter that is connected to the corporate network is sufficient. Larger organizations will need to use the other options, such as adding IP address ranges to fully define their internal address space.
After entering this information, click Next to move to the next screen to define the perimeter network address space. It is usually sufficient to add the adapter you have dedicated to the perimeter network. Next, you must select a firewall policy. Be sure to choose the "Allow Unrestricted Access" firewall policy, then click Next to go to the Summary screen where you can review your configuration. Select Finish to complete the wizard and then, to accept these changes, click Apply and then OK.
The basic 3-Leg Perimeter is functional at this point but it needs some additional configuration in order to permit network traffic between the OCS Edge Servers and external users. The first thing to note is that the default configuration of a 3-Leg Perimeter in ISA Server includes support for VPN users. If you don't need this type of remote access, remove this item.
To do so, select Firewall Policy in the left-hand pane of the ISA Management console and then right-click the rule called VPN Clients to Internal Network and select Delete. To accept the changes, choose Apply and then OK. Two rules remain: the Unrestricted Internet access rule and the Default rule.
The next step is to add the computer objects that represent your Edge Servers. Select Firewall Policy and then choose the Toolbox tab, as shown in Figure 6.
Figure 6 Firewall Policy rules
Click on the New button and select Computer from the dropdown menu. Add the name of the Edge Server, for example Access Edge, and then enter the IP address for that server. Repeat this step for the A/V Edge Server and the Web Conferencing Edge Server. When you are finished, you should see your three computer objects listed under the Computers container.
The next step is to add the protocols used by OCS to the ISA Server configuration. You will need to add two new protocols, as shown in Figure 7.
Note the asterisk in Figure 7. In OCS 2007 R2, this port requirement is necessary only if you use the A/V capabilities of OCS with a Federated Partner running Office Communications Server 2007. Remote users will not need these ports to be open.
It is worth emphasizing that it is a security best practice to only open ports that you require. For example, consider A/V conferencing with a Federated Partner, which requires UDP ports in the range 50,000–59,999. If you don't have a Federated Partner then you don't need to open these ports.
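That "open only what you need" practice can be captured as data before you touch the ISA console. In this sketch the port numbers (5061/TCP for MTLS/SIP, 443/TCP and 3478/UDP for STUN) are the commonly documented OCS 2007 R2 defaults rather than values reproduced from Figure 7, so verify them against your own deployment; the federation_only flag marks the UDP media range needed only when federating A/V with an Office Communications Server 2007 partner.

```python
# Edge Server port requirements expressed as data, so the rule set can
# be reviewed (and trimmed) before any firewall changes are made.
EDGE_PORTS = [
    {"name": "MTLS/SIP",   "proto": "TCP", "ports": "5061",        "federation_only": False},
    {"name": "STUN (TCP)", "proto": "TCP", "ports": "443",         "federation_only": False},
    {"name": "STUN (UDP)", "proto": "UDP", "ports": "3478",        "federation_only": False},
    {"name": "A/V media",  "proto": "UDP", "ports": "50000-59999", "federation_only": True},
]

def required_ports(has_federated_partner):
    """Return only the rules that actually need to be opened."""
    return [r for r in EDGE_PORTS
            if has_federated_partner or not r["federation_only"]]

# Without a Federated Partner, the 50,000-59,999 UDP range stays closed.
for rule in required_ports(has_federated_partner=False):
    print(rule["name"], rule["proto"], rule["ports"])
```

Keeping the rule set in a reviewable form like this makes it harder to accidentally leave the federation-only media range open when no partner requires it.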
In the ISA Server management console, with Firewall Policy and the Toolbox tab selected, click on Protocols, then on New and on Protocol to launch the New Protocol Definition Wizard. When the wizard launches, enter a name for the protocol, then click Next to go to the Primary Connection Information screen. On this screen click New and then add the information for the MTLS/SIP protocol, as shown in Figure 7.
To finish adding the protocol, click OK and then the Next button twice; you do not need to configure anything on the Secondary connections screen. Now click the Finish button on the Summary screen, and repeat these steps for the STUN protocol. Make sure you apply these changes to ISA Server when you have finished adding the protocols.
The Persistent Shared Object Model (PSOM) protocol (a proprietary protocol for transporting Web conferencing content) is not listed in Figure 7. This is because PSOM is used for traffic between the Web Conferencing Server and the Web Conferencing Edge Server. This traffic is sent out on the internal network card of the Edge Server.
After adding the above two protocols from Figure 7, the next step is to create the three Access rules that will permit users to connect to the Edge Server from the Internet. Once again, having this information at hand will make these configuration steps a lot simpler and less prone to error. Figure 8 shows the information you will need to create the three external access rules.
Start by selecting Firewall Policy and the Tasks tab in the ISA Server Management console and then click on Create Access Rule to launch the New Access Rule wizard. You can create the rules in any order, so let's start with the Access Edge rule. Once the wizard launches, enter a name for the rule and click Next. On the Rule Action screen, make sure that the Allow radio button is selected and click Next.
The following screen requires you to add the Protocols to which the rule applies. Click the Add button on the right-hand side of the screen to launch the Add Protocols window. Under the Common Protocols folder, select HTTPS and press Add, then select the STUN protocol—which should be listed under the User-Defined folder—and click Add. Click Close and then Next to move onto the Access Rule Sources screen.
This step requires you to select the Access Rule source, and in the case of the Access Edge rule, you will need to select the External network object by clicking Add and selecting it from the Networks folder. Then click Close and then Next to move to the Access Rule Destination screen. On this screen click Add and select the Access Edge computer object you created earlier from the Computers folder. Click the Close button and then Next to move to the User Sets screen.
Leave the All Users set default selection and click Next. Review the Summary, then click Finish to complete the process. You can now create the two remaining access rules using the information you compiled based on Figure 8. Make sure you apply these changes to ISA Server once you have finished.
A Word about Certificates
The last steps in this configuration process are focused on creating the SSL listener and creating the reverse proxy (publishing a Web application) for Office Communications Server. It's important to understand the certificate requirements of Office Communications Server and how these requirements impact ISA Server configuration.
The most important point to note here is that Office Communications Server requires the use of X.509 certificates and does not support wildcard entries in the Common Name (Subject Name). You will need to specify a Subject Alternative Name (SAN) when requesting the certificate for the Edge Servers.
Since ISA Server 2006 SP1 introduced full support for SAN certificates, this will not present any problems. With this in mind, you need to address the certificate requirements for both the internal and external interfaces of the Edge Server. If your Active Directory environment does not use a top-level domain namespace, you will need to use a certificate from a trusted third-party Certificate Authority (CA) for the external interface and a certificate from an internal enterprise CA for the internal interfaces. In this scenario, the Root Certificate for your enterprise CA must be installed on clients that will access OCS internally as well as on the ISA Server; this ensures that the necessary trust relationships are in place for the OCS configuration. The external third-party certificate should already be trusted, and therefore the Root Certificate of the third-party CA doesn't need to be installed on all clients.
The certificate required for the reverse proxy configuration corresponds to the FQDN of the OCS address book and conference content download. In Figure 4, this was ocscontent.contoso.com but can obviously be anything you choose as long as the certificate comes from a trusted third party. Of course, your SAN certificate from this third party would also include the FQDNs of your Edge Servers, samples of which you can also see in Figure 3.
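Because OCS honors only exact name matches (no wildcards), it can be worth checking your planned FQDNs against a certificate's name list before submitting the request. This is a minimal sketch: fqdn_covered is a hypothetical helper, and the SAN entries shown are illustrative names in the contoso.com namespace, not values taken from Figure 3.

```python
def fqdn_covered(fqdn, subject_name, sans):
    """Exact-match check only -- OCS does not support wildcard names,
    so a certificate covers an FQDN only if that FQDN appears literally
    in the Subject Name or the Subject Alternative Name list."""
    fqdn = fqdn.lower()
    names = [subject_name.lower()] + [s.lower() for s in sans]
    return fqdn in names

# Hypothetical SAN list covering the reverse proxy and Edge FQDNs.
sans = ["ocscontent.contoso.com", "sip.contoso.com",
        "webconf.contoso.com", "av.contoso.com"]

print(fqdn_covered("ocscontent.contoso.com", "sip.contoso.com", sans))  # True
print(fqdn_covered("meet.contoso.com", "sip.contoso.com", sans))        # False
```

A name that fails this exact-match test would fail in OCS as well, even if a wildcard certificate would nominally cover it.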
Reverse Proxy
The first step in creating a reverse proxy for OCS is to create an SSL Web Listener. To do this, click on the Firewall container in the left-hand pane and select the Toolbox tab in the right-hand pane. Select Network Object from the list, click the New button and select Web Listener from the dropdown menu. This will launch the New Web Listener wizard, which prompts you to enter a name for the Web Listener.
The name should be meaningful to you, but it doesn't necessarily have to be associated with OCS as the Web Listener could potentially have multiple uses. When you click Next, the Client Connection Security screen appears with the default option selected—Require SSL secured connections with clients. Leave the selection as is and click Next to move to the Web Listener IP Address screen, where you should check the External networks option. Next, click the Select IP Addresses button and choose the IP address that you have assigned to the reverse proxy (see Figure 4).
After clicking OK and then Next, you will see the Listener SSL Certificates screen. Select the option to Assign a certificate for each IP address and then click the Select Certificate button. Now select the certificate associated with the OCS address book and Web conference content that was discussed earlier. After selecting the certificate, click the Next button to configure Authentication Settings. Make sure that No Authentication is selected in the dropdown box and then click the Next button twice. Finally review the Summary screen and click the Finish button to complete the creation of the Web Listener.
Once the Web Listener is created, you can create the Web Site Publishing rule. To do this, click on the Firewall container and select the Tasks tab, then click on Publish Web Sites to launch the New Web Publishing Rule Wizard. The first piece of information to enter is the name of the rule, so enter a meaningful name such as OCS Content and click Next. On the Select Rule Action screen, select Allow and click Next to move to the Publishing Type screen where you will select Publish a single Web site or load balancer and then click Next.
To configure Server Connection Security, select the option to use SSL, then press Next to proceed to the Internal Publishing Details screen. Here you need to enter the internal name of the Web components server. Click Next and enter the path information, which should be /* to allow all paths. Click Next and enter the public name you created for the Address Book and Web Conference content downloads (in the example here it is ocscontent.contoso.com). Leave the other selections at the defaults and click Next.
Now select the Web Listener you created and click Next. On the Authentication Delegation screen, the dropdown box should read No delegation, but client may authenticate directly. Click Next and confirm that All Users is selected on the User Sets screen, then click Next again and then Finish to complete the process. The configuration of ISA Server is now complete.
Last Words
Extending Office Communications Server to support remote users and partners can yield tremendous benefits but requires a great deal of planning, particularly when it comes to securing your Edge Servers. ISA Server 2006 provides a robust, flexible, and scalable solution that is the ideal platform for securing Office Communications Server 2007.
Alan Maddison is a Senior Consultant specializing in Microsoft technologies with Strategic Business Systems, a division of Brocade.
Show: | https://technet.microsoft.com/en-us/library/2009.03.isa.aspx | CC-MAIN-2018-17 | refinedweb | 3,143 | 58.52 |
15.3. StringCoder - Part A¶
The following is a free response question from 2008. It was question 2 on the exam. You can see all the free response questions from past exams at.
Question 2. Consider a method of encoding and decoding words that is based on a master string. This master string
will contain all the letters of the alphabet, some possibly more than once. An example of a master string is
"sixtyzipperswerequicklypickedfromthewovenjutebag". This string and its indexes are
shown below.
An encoded string is defined by a list of string parts. A string part is defined by its starting index in the
master string and its length. For example, the string
"overeager" is encoded as the list of string parts
[ (37, 3), (14, 2), (46, 2), (9, 2) ] denoting the substrings
"ove",
"re",
"ag", and
"er".
String parts will be represented by the
StringPart class shown below.
public class StringPart { /** @param start the starting position of the substring in a master string * @param length the length of the substring in a master string */ public StringPart(int start, int length) { /* implementation not shown */ } /** @return the starting position of the substring in a master string */ public int getStart() { /* implementation not shown */ } /** @return the length of the substring in a master string */ public int getLength() { /* implementation not shown */ } // There may be other instance variables, constructors, and methods }
The class
StringCoder provides methods to encode and decode words using a given master string. When
encoding, there may be multiple matching string parts of the master string. The helper method
findPart is
provided to choose a string part within the master string that matches the beginning of a given string.
public class StringCoder { private String masterString; /** @param master the master string for the StringCoder * Precondition: the master string contains all the letters of the alphabet */ public StringCoder(String master) { masterString = master; } /** @param parts an ArrayList of string parts that are valid in the * master string * Precondition: parts.size() > 0 * @return the string obtained by concatenating the parts of the * master string */ public String decodeString(ArrayList<StringPart> parts) { /* to be implemented in part (a) */ } /** @param str the string to encode using the master string * Precondition: all of the characters in str appear in the master * string; * str.length() > 0 * @return a string part in the master string that matches the * beginning of str. * The returned string part has length at least 1. */ private StringPart findPart(String str) { /* implementation not shown */ } /** @param word the string to be encoded * Precondition: all of the characters in word appear in the * master string; * word.length() > 0 * @return an ArrayList of string parts of the master string * that can be combined to create word */ public ArrayList<StringPart> encodeString(String word) { /* to be implemented in part (b) */ } // There may be other instance variables, constructors, and methods }
15.3.1. Try and Solve It¶
Part a. Finish writing the
StringCoder method
decodeString. This method retrieves the substrings in the master
string represented by each of the
StringPart objects in parts, concatenates them in the order in
which they appear in parts, and returns the result.
The code below contains a main method for testing the
decodeString method. | https://runestone.academy/runestone/books/published/csawesome/FreeResponse/StringCoderA.html | CC-MAIN-2020-05 | refinedweb | 526 | 58.21 |
23 December 2009 21:06 [Source: ICIS news]
HOUSTON (ICIS news)--US polyvinyl chloride (PVC) producers will reduce vinyl chloride air emissions by 70 tonnes/year as a result of pollution enforcement actions taken since 2003, the Environmental Protection Agency (EPA) said on Wednesday.
The EPA said it has addressed pollution issues at 11 US PVC plants. Vinyl chloride is a human carcinogen with a reportable emissions quantity of one pound, the EPA said.
During the 2009 fiscal year, the ?xml:namespace>
Shintech agreed to spend roughly $12m (€8.4m) to reach environmental compliance at its
The EPA said Shintech would spend $4.8m to close a lagoon and a drying bed that were not designed to handle hazardous waste, implement a series of audits and reviews of its hazardous-waste handling practices, and add a treatment tank to its waste-water treatment system.
Shintech also has agreed to pay a $2.6m civil penalty to resolve environmental violations and to perform $4.7m worth of supplemental environmental projects, the EPA said.
This year was also marked by a settlement in which polymers major Invista was to pay $1.7m in civil penalties and spend between $240m and $500m to correct environmental violations at 12 facilities acquired from DuPont in 2004, the EPA said.
“By correcting these violations, Invista will reduce harmful air pollution by nearly 10,000 tons per year,” the EPA said, noting that Invista voluntarily disclosed the violations.
In fiscal year 2009, the EPA resolved voluntarily disclosed violations with 400 entities, including more than 50 resolutions that resulted in direct environmental benefits, the agency said.
“As a result of disclosures resolved this fiscal year, more than 7.4m lbs [33,566 tonnes] of hazardous waste will be treated, minimised, or properly disposed and nearly 23m of pollutants will be reduced or treated,” the EPA said.
The agency also made headway in criminal cases, it said, with 387 new environmental crime cases opened during the fiscal year, the highest number in the past five years.
A total of 200 defendants (146 individuals and 54 businesses or corporations) were charged with criminal violations during fiscal year 2009, an increase over the 176 defendants who were charged in fiscal year 2008.
Criminal defendants were assessed a total of $96m in fines and restitution, an increase over the $63.5m assessed in fiscal year 2008, the EPA said.
This included a $50m fine assessed against BP, the largest criminal fine ever assessed under the Clean Air Act, after the company plead guilty to a felony violation of the act. BP was prosecuted for conduct that resulted in the explosion on 23 March, 2005 at its
( | http://www.icis.com/Articles/2009/12/23/9321799/us-pvc-producers-to-cut-vinyl-chloride-emissions-by-70-tonnesyr.html | CC-MAIN-2013-48 | refinedweb | 444 | 51.68 |
Please explain these answers
I say its #1, because the cast is used to fit a super class into its subclass. But the answer he gives is #3. I dont understand why.
May the force of the Java be in all of us !!!
Sub s = (Sub)b;
is OK, even if b is declared to be of type Base, but only if it currently contains an object of type Sub. That's not known until runtime, and in your example, is not true. Hence, the runtime exception.
Base b = new Sub(); // OK
Sub s = (Sub)b; // OK
b = new Base(); // OK
s = (Sub)b; // ClassCastException
[ January 22, 2003: Message edited by: Greg Charles ]
Sheriff
I dont understand why you say that there is correct type in the cast exprssion Sub s=(Sub) b;
If a class B extends another class A, so that B is the subclass of A, then isnt
A a=new A();
B b = (B) a; //superclass needs a cast to be
//assigned to a subclass
always true,
just like
a=b; //subclass needs no cast to
//be assigned a superclass object
May the force of the Java be in all of us !!!
I know that a superclass can always be assigned its subclass. That is why Im confused why the following gives a runtime exception
class Base {}
class Sub extends Base {}
class Sub2 extends Base {}
public class CEx
{
public static void main(String argv[])
{
Base b=new Base();
Sub s=(Sub) b;
}
}
Can you explain why?
May the force of the Java be in all of us !!!
We can't cast you to be a Queen (of Britain), because you're not a Queen (of Britain). We *can* cast a queen of Britain to a mere mortal human.
Using Java, the notion above translates to:
Human you= new Human();
Queen elisabeth = (Qeen)you;
Translating the class names back to your example, it translates to:
Base b= new Base();
Sub s= (Sub)b;
Get it?
kind regards
ps. no offense intended towards any arbitrary queens neither to other mortals ;-)
Now, to assign that reference to a reference of type Sub you must cast the reference to the type Sub. The assigned reference could be a real Sub, but it could also be the Base part of a Sub.
However, there is a risk that it could be the Base part of a Sub2 object or just a Base object on its own.
The cast to (Sub) tells the compiler you are willing to wait until runtime to see if it is really is a Sub.
At runtime the JVM takes a look at the actual object referred to by b and says "Hey!, you told me this is a reference to a Sub object (the cast) but it's really a plain Base object, I object!". So it throws up a ClassCastException.
Sort of....
-Barry
[ January 23, 2003: Message edited by: Barry Gaunt ]
Ask a Meaningful Question and HowToAskQuestionsOnJavaRanch
Getting someone to think and try something out is much more useful than just telling them the answer.
So this queen goes up to the Palace and the guard says "Who goes there?". "S'me ducky, look at me badge". The guard is suspicious, and does a body search...
A loud exception is raised!
[ January 23, 2003: Message edited by: Barry Gaunt ]
Ask a Meaningful Question and HowToAskQuestionsOnJavaRanch
Getting someone to think and try something out is much more useful than just telling them the answer.
Say you have two animals, a dog and a bear
(represent this statement as follows)
class Animal{}// class Base
class Dog extends Animal{}// class Sub extends Base{}
class Bear extends Animal{}
A dog has its own unique features, so is a bear (represent this as follows)
Dog dog = new Dog();
Bear bear = new Bear();
And an animal could be a dog, or it could be a bear, so the ff is right;
Animal animal = new Dog(); // Base b = new Sub();
Animal animal2 = new Bear();
But a dog could not just be ANY animal (nor a bear be any animal) because it is a dog, so the ff statements are wrong;
Dog d = new Animal(); // Sub s = new Base();
Bear b = new Animal();
------------
Say, in another case, we have a new animal, 'animal3'
Animal animal3 = new Animal();// Base b = new Base();
We can not just convert this 'animal3' into a dog, like the ff:
Dog d = (Dog)animal3;// also not right is
// s = (Sub)b;
or
Bear br = (Bear)animal3;// also wrong
because, that new 'animal' (animal3) is neither a bear (not new Bear()) nor a dog (not new Dog()). It is just a new Animal();
Why can't a dog be a bear (d=br), anyway, they're both animals? Why not? But... well that is probably for a new language with different inheritance rules...
Shashank, this and the previous posts, are ways to remember some of Java's rules on inheritance. Rules that we may not readily understand or agree upon. Nevertheless, these are rules we have to abide to.
You may be treading into the 'why-is-there-such-a-rule' realm (correct me). Regarding the "real why's" of these rules (e.g. rationalizations, why such rule was adapted, etc..), probably the designers could fill in on this. For all we know, even the designers themselves may have seriously argued on why such and such rule was adopted ... and only to end up settling the issues by choosing heads or tails (kidding on this). In the course of using these rules, however, we might find the real reasons or otherwise. And its good to be in such pursuit of knowledge.
Answer is No
2) class Y is a subclass of class X. Will this compile? X myX=new Y();
Answer is Yes
I can understand these two examples.
class Y is a subclass of class X.
Y myY=new Y();
X myX=new X();
Is myX=myY legal?
Is myY=(Y)myX legal?
Is myX=new Y() legal?
Is myY=(Y)new X() legal?
For example, in the following code c can be cast to a label or to a button, because it is an instanceof label or button (by virtue of inheritance I guess), then isnt the same applicable to the example above, so that since class Y is an instanceof class X, it should be able to cast myX to a myY saying myY = (Y) myX;
Why then does myY = (Y) myX; give a runtime error? Isnt the runtime error saying that myY (and therefore class Y) is somehow not the correct type, that is 'is not a myX and therefore not an instancof class X'?? However, by virtue of inheritance and the fact that class Y is a subclass of class X, it does seem that myY should be an instanceof myX, just like c is an instanceof of Label or Button. Where am I going wrong in my understanding?
If my understanding above is not correct, and if you say that Label and Button are both subclasses of component, and that c cannot be cast to either without finding out if it is an instanceof that object,
if (c instanceof Label) Label lc=(Label) c;
then what would make c be an instanceof Label?
What then would then make myX an instance of myY?
Finally, is the (x instanceof y) operator saying that the runtime value of x is equal to y, even though x is a superclass of y?
[ January 25, 2003: Message edited by: Shashank Gokhale ]
May the force of the Java be in all of us !!!.
A cast is needed to assign a superclass to a subclass, according to the quote above, but there is also a possibility of a runtime error if the subclass is not the right type as someone said in earlier posts. So what would it take for an assignment of this type to not cause a runtime or any other error? Please give some code to clarify.
May the force of the Java be in all of us !!!
[ January 25, 2003: Message edited by: Barry Gaunt ]
Ask a Meaningful Question and HowToAskQuestionsOnJavaRanch
Getting someone to think and try something out is much more useful than just telling them the answer.
Human h = new Queen();
Queen q = (Queen)h;
h = new Thug();
q = (Queen)h; //ClassCastException
I understand that a subclass A cannot be casted to another subclass B, even if both A and B are subclasses of the same parent class. So like you have in your example a thug object cannot be cast into a queen object even though a thug and queen both inherit from the human class. Thug and queen are two different types of objects formed from different classes (Thug and Queen)
My question was that if A is a subclass of B, why is
A a=(A) new B();
not legal
and what I gather from your code is that if B is a tyoe A, then the cast is legal, otherwise it cannot be cast. The lines that indicate this are
Human h=new Queen();
Queen q=(Queen) h;
In line 1 you are saying that you want the h object of the Human class to be a reference to a Queen object. And because it is a reference to a queen object, you are able to cast the h object to a Queen object.
In the lines
Human h=new thug();
Queen q=(Queen) h; //ilegal
the object h which now refers to a thug object cannot be cast to a queen object because it is a reference to a thug object.
If you had just said
Queen q=(Queen) new Human();
you would have gotten a runtime error correct?
And if you had said
Human h=new Human();
Queen q=(Queen) h;
you would have still gotten a runtime error, correct, since in neither of these two cases is a human explicitly set to refer to a queen object?
So, in other words, casting a superclass to a subclass doesnt always work, except when the superclass is already a reference to a subclass object.
Am I right in my understanding that if B extends A
A a = B b; is always true buy
B b=(B) a; is not always true?
May the force of the Java be in all of us !!!
Put your compiler hat on: Where am I casting a Thug to a Queen? In both cases I am casting a Human reference to a Queen reference.
What if instead of h = new Thug() ( or new Queen()); I had h = makeHuman(); where makeHuman returns a Human? makeHuman() could actually return a Queen, Thug, Human, or any other subclass of Human. Whether I return a Queen or a Thug is decidable only at runtime, so the compiler must allow the casts (Queen)makeHuman() and (Thug)makeHuman(). However, when the assignment actually takes place at runtime h may not actually refer to a Queen, but a Thug. So the cast is always allowed at compile time, but at runtime execution could fail.
At compile time Queen q = (Queen)(new Human()); and Thug t = (Thug)(new Human()); are both allowed, but fail at runtime.
-Barry
[ January 26, 2003: Message edited by: Barry Gaunt ]
Ask a Meaningful Question and HowToAskQuestionsOnJavaRanch
Getting someone to think and try something out is much more useful than just telling them the answer.
At compile time Queen q = (Queen)(new Human()); and Thug t = (Thug)(new Human()); are both allowed, but fail at runtime.
Okay this is just what I understood, that doing a plain old cast of a superclass (Human) to either a Queen or a Thug would give a runtime error, unless there were preceding statements that caused the Human object to refer to a queen or thug object.
So
Human h=new Queen(); Queen q=(Queen) h;
does compile and not give a runtime error, but
Queen q=(Queen) new Human();
will compile but will also give a runtime error, correct?
May the force of the Java be in all of us !!!
Queen q=(Queen) new Human();
will compile but will also give a runtime error, correct?
That's correct. The best thing is to compile and run the code. That's how I convinced myself. But I will bet I will get it wrong in the SCJP test
-Barry
Ask a Meaningful Question and HowToAskQuestionsOnJavaRanch
Getting someone to think and try something out is much more useful than just telling them the answer.
{
public static void main(String argv[])
{}
public static native void amethod();
}
Question in Marcus green's exam 2 with answer:
The above listed code compiles properly, but why if there is a call to amethod, would it give a runtime error?
May the force of the Java be in all of us !!!
| http://www.coderanch.com/t/393141/java/java/explain-answers | CC-MAIN-2016-22 | refinedweb | 2,133 | 65.86 |
Hello,
I am currently migrating a large codebase that we use for more than a single unity project to assembly definitions and UPM.
Before this change, the project's code used to be compiled outside of unity and the compiled dlls were copied in the assets directory of the unity project. This worked well, but with recent versions of unity, it became difficult to reference unity assemblies and make the project build for a particular version of unity. With assembly definitions and UPM we are able to reuse the same code and target different unity versions with ease.
However, I am missing something which I had better control over when we were compiling our own dlls. We used to define custom conditional compilation symbols, which were global for a given project because we stored them in the respective .csproj file. We could also define these symbols conditionally using the MSBuild syntax. As we moved to asmdef, the .csproj file is no-longer maintained by us, but is being auto-generated by Unity. If we define our custom symbols there, they will be removed as Unity re-generates the csproj files.
My question is -- is it possible to define custom compilation symbols for asmdef projects.
What else happens after the compilation cycle?
0
Answers
Moving file failed
2
Answers
FileNotFoundException (System.reflection) when using SmarFox2.dll
1
Answer
Are Assembly definition files are not build system files?
1
Answer
namespace support in unity
1
Answer
EnterpriseSocial Q&A | https://answers.unity.com/questions/1707855/asmdef-and-custom-conditional-compilation-symbols.html | CC-MAIN-2021-17 | refinedweb | 247 | 56.25 |
Check if script is executed within Pythonista
Hi!
I would like to write multi-platform scripts for Pythonista on iOS, Mac and Windows.
Do you know a way how a script which is executed within Pythonista can check that it is executed within Pythonista and not on a Mac?
If you're only looking to determine which OS you're on (and not whether you're in the Pythonista IDE specifically), you can do that with:
from sys import platform print(platform)
If you're on iPhone/iPad, platform should be "iOS". If you're on macOS, you should get "darwin". Windows should be "win32" I believe.
@WyldKard
almost correct ☺️ heres what i get get from
print(platform.platform())on iPad Air2
Darwin-19.3.0-iPad5,4-64bit might be because of iPadOS?
#Edit..
stand corrected...
print(sys.platform) >>> ios
sorry 😉😊
😞 just realised i can never trust the
platformmodule again lol | https://forum.omz-software.com/topic/6299/check-if-script-is-executed-within-pythonista/5 | CC-MAIN-2022-27 | refinedweb | 152 | 66.44 |
Up until this point, the components we've created have been stateless. They have properties (aka props) that are passed in from their parent, but nothing changes about them once the components come alive. Your properties are considered immutable once they have been set. For many interactive scenarios, you don't want that. You want to be able to change aspects of your components as a result of some user interaction (or some data getting returned from a server or a billion other things!)
What we need is another way to store data on a component that goes beyond properties. We need a way to store data that can be changed. What we need is something known as state! In this tutorial we are going to learn all about it and how you can use it to create stateful components.
Onwards!
If you know how to work with properties, you totally know how to work with states...sort of. There are some differences, but they are too subtle to bore you with right now. Instead, let's just jump right in and see states in action by using them in a small example.
What we are going to do is create a simple lightning counter example:
What this example does is nothing crazy. Lightning strikes the earth's surface about 100 times a second. We have a counter that simply increments a number you see by that same amount. Let's create it.
The primary focus of this example is to see how we can work with state. There is no point in us spending a bunch of time creating the example from scratch and retracing paths that we've walked many times already. That's not the best use of anybody's time.
Instead of starting from scratch, modify an existing HTML document or create a new one with the following contents:
<!DOCTYPE html>
<html>

<head>
  <meta charset="utf-8">
  <title>Dealing with State</title>
  <script src="https://unpkg.com/react@16/umd/react.development.js"></script>
  <script src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
  <script src="https://unpkg.com/babel-standalone@6/babel.min.js"></script>
</head>

<body>
  <div id="container"></div>

  <script type="text/babel">
    class LightningCounter extends React.Component {
      render() {
        return (
          <h1>Hello!</h1>
        );
      }
    }

    class LightningCounterDisplay extends React.Component {
      render() {
        var divStyle = {
          width: 250,
          textAlign: "center",
          backgroundColor: "black",
          padding: 40,
          fontFamily: "sans-serif",
          color: "#999",
          borderRadius: 10
        };

        return (
          <div style={divStyle}>
            <LightningCounter/>
          </div>
        );
      }
    }

    ReactDOM.render(
      <LightningCounterDisplay/>,
      document.querySelector("#container")
    );
  </script>
</body>

</html>
The bulk of this component is the divStyle object that contains the styling information responsible for the cool rounded background. The render method returns a div element that wraps the LightningCounter component.
The LightningCounter component is where all the action is going to be taking place:
class LightningCounter extends React.Component { render() { return ( <h1>Hello!</h1> ); } }
This component, as it is right now, has nothing interesting going for it. It just returns the word Hello! That's OK - we'll fix this component up later.
The last thing to look at is our ReactDOM.render method:
ReactDOM.render( <LightningCounterDisplay/>, document.querySelector("#container") );
It just pushes the LightningCounterDisplay component to our container element in our DOM. That's pretty much it. The end result is you see the combination of markup from our ReactDOM.render method and the LightningCounterDisplay and LightningCounter components.
Now that we have an idea of what we are starting with, it's time to make plans for our next steps. The way our counter works is pretty simple. We are going to be using a setInterval function that calls some code every 1000 milliseconds (aka 1 second). That "some code" is going to increment a value by 100 each time it is called. Seems pretty straightforward, right?
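Before wiring any of this into React, the counting logic itself can be sketched in plain JavaScript. Everything below (the state object, the tick function) is just an illustration of the plan, not actual React code:

```javascript
// A tiny stand-in for the state our component will eventually hold.
var state = { strikes: 0 };

// The work our future timerTick function will do:
// add 100 strikes per tick.
function tick() {
  state.strikes = state.strikes + 100;
}

// setInterval(tick, 1000) would call this once every second.
// Here we simply simulate three seconds passing:
tick();
tick();
tick();

console.log(state.strikes); // 300
```

Once we move into React, the state object maps onto this.state and the tick function becomes our timerTick.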
To make this all work, we are going to be relying on three APIs that our React component exposes:

1. componentDidMount: This method gets called just after our component has finished rendering.

2. setState: This method allows us to update the value of our state object.

3. render: This method gets called automatically whenever our state changes, so what we see on screen stays up to date.
We'll see these APIs in use shortly, but I wanted to give you a preview of them so that you can spot them easily in a lineup!
We need a variable to act as our counter, and let's call this variable strikes. There are a bunch of ways to create this variable. The most obvious one is the following:
var strikes = 0; // :P
We don't want to do that, though. For our example, the strikes variable is part of our component's state. The way to do this is by creating a state object, making our strikes variable a property of it, and ensuring all of this is set up when our component gets created. The component we want to do all of this to is LightningCounter, so go ahead and add the following highlighted lines:
class LightningCounter extends React.Component {
  constructor(props, context) {
    super(props, context);

    this.state = {
      strikes: 0
    };
  }

  render() {
    return (
      <h1>Hello!</h1>
    );
  }
}
We specify our state object inside our LightningCounter component's constructor. This runs waaaay before your component gets rendered, and what we are doing is telling React to set an object containing our strikes property (initialized to 0).
If we inspect the value of our state object after this code has run, it would look something like the following:
var state = { strikes: 0 };
Before we wrap this section up, let's visualize our strikes property. In our render method, make the following highlighted change:
class LightningCounter extends React.Component {
  constructor(props, context) {
    super(props, context);

    this.state = {
      strikes: 0
    };
  }

  render() {
    return (
      <h1>{this.state.strikes}</h1>
    );
  }
}
What we've done is replaced our default Hello! text with an expression that displays the value stored by the this.state.strikes property. If you preview your example in the browser, you will see a value of 0 displayed. That's a start!
Next up is getting our timer going and incrementing our strikes property. Like we mentioned earlier, we will be using the setInterval function to increase the strikes property by 100 every second. We are going to do all of this immediately after our component has been rendered using the built-in componentDidMount method.
The code for kicking off our timer looks as follows:
class LightningCounter extends React.Component {
  constructor(props, context) {
    super(props, context);

    this.state = {
      strikes: 0
    };
  }

  componentDidMount() {
    setInterval(this.timerTick, 1000);
  }

  render() {
    return (
      <h1>{this.state.strikes}</h1>
    );
  }
}
Go ahead and add these highlighted lines to our example. Inside our componentDidMount method that gets called once our component gets rendered, we have our setInterval method that calls a timerTick function every second (or 1000 milliseconds).
We haven't defined our timerTick function, so let's fix that by adding the following highlighted lines to our code:
class LightningCounter extends React.Component {
  constructor(props, context) {
    super(props, context);

    this.state = {
      strikes: 0
    };
  }

  timerTick() {
    this.setState({
      strikes: this.state.strikes + 100
    });
  }

  componentDidMount() {
    setInterval(this.timerTick, 1000);
  }

  render() {
    return (
      <h1>{this.state.strikes}</h1>
    );
  }
}
What our timerTick function does is pretty simple. It just calls setState. The setState method comes in various flavors, but for what we are doing here, it just takes an object as its argument. This object contains all the properties you want to merge into the state object. In our case, we are specifying the strikes property and setting its value to be 100 more than what it is currently.
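If it helps to picture the merge, here is a rough plain-JavaScript model of what setState does with its object argument. This is an approximation for intuition only - real React also schedules a re-render and may batch updates:

```javascript
// Simplified model: merging an update object into state.
var state = { strikes: 0, color: "#66FFFF" };

function setState(update) {
  // Properties in update replace matching ones in state;
  // everything else is left alone.
  state = Object.assign({}, state, update);
}

setState({ strikes: state.strikes + 100 });

console.log(state.strikes); // 100
console.log(state.color);   // "#66FFFF" (untouched)
```

Notice that only the strikes property changed. Any other properties hanging out on the state object survive the update.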
Just like you've seen here, you will often end up modifying an existing state value with an updated value. The way we are getting the existing state value is by reading this.state.strikes. For performance-related reasons, React may decide to batch state updates that happen in rapid succession. This could lead to the value stored by this.state being out of sync with reality. To help with this, the setState method gives you access to the previous state object via the prevState argument.
By using that argument, our code could be made to look as follows:
this.setState((prevState) => {
  return {
    strikes: prevState.strikes + 100
  };
});
The end result is similar to what we had originally. Our strikes property is incremented by 100. The only potential change is that the value of the strikes property is guaranteed to be whatever the earlier value stored by our state object would be.
So, should you use this approach for updating your state? There are good arguments on both sides. One side argues for correctness, despite the original approach of using this.state working out fine for most real-world cases. The other side argues for keeping the code simple and not introducing additional complexity. There is no right or wrong answer here. Use whatever approach you prefer. I'm only calling this out for completeness, for you may run into the prevState approach in any React code you encounter in the wild.
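To see why batching makes the prevState form safer, here is a plain-JavaScript simulation. The batching model below is deliberately simplified and is not how React is implemented; it only illustrates the stale-read problem:

```javascript
var state = { strikes: 0 };

// Two object-style updates computed from the same stale snapshot:
// both read strikes while it is still 0.
var snapshot = state;
var objectUpdates = [
  { strikes: snapshot.strikes + 100 },
  { strikes: snapshot.strikes + 100 }
];
var merged = objectUpdates.reduce(function (acc, update) {
  return Object.assign({}, acc, update);
}, state);

console.log(merged.strikes); // 100 (one increment was lost!)

// Updater functions each receive the latest state instead.
var functionUpdates = [
  function (prevState) { return { strikes: prevState.strikes + 100 }; },
  function (prevState) { return { strikes: prevState.strikes + 100 }; }
];
var merged2 = functionUpdates.reduce(function (acc, updater) {
  return Object.assign({}, acc, updater(acc));
}, state);

console.log(merged2.strikes); // 200 (both increments applied)
```

When each update is a function, it gets handed the freshest state rather than whatever this.state happened to be when the update was written.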
There is one more thing you need to do. The timerTick function has been added to our component, but its contents aren't scoped to our component. In other words, when timerTick gets called by our timer, the this keyword won't point at our component, so accessing this.setState will throw a TypeError in the current situation. There are several solutions you can employ here - each a little frustrating in its own way. We'll look at this problem in detail later, but for now, we are going to explicitly bind our timerTick function to our component so that all of the this references resolve properly. Add the following line to our constructor:
constructor(props, context) { super(props, context); this.state = { strikes: 0 }; this.timerTick = this.timerTick.bind(this); }
Once you've done this, our timerTick function is ready to be a useful part of our component!
If you preview your app now, you'll see our strikes value start to increment by 100 every second:
Let's ignore for a moment what happens with our code. That is pretty straightforward. The interesting thing is how everything we've done ends up updating what you see on the screen. That updating has to do with this React behavior: Whenever you call setState and update something in the state object, your component's render method gets automatically called. This kicks of a cascade of render calls for any component whose output is also affected. The end result of all this is that what you see in your screen in the latest representation of your app's UI state. Keeping your data and UI in sync is one of the hardest problems with UI development, so it's nice that React takes care of this for us. It makes all of this pain of learning to use React totally worth it...almost! :P
What we have right now is just a counter that increments by 100 every second. Nothing about it screams Lightning Counter, but it does cover everything about states that I wanted you to learn right now. If you want to optionally flesh out your example to look like my version that you saw at the beginning, below is the full code for what goes inside our script tag:
class LightningCounter extends React.Component { constructor(props, context) { super(props, context); this.state = { strikes: 0 }; this.timerTick = this.timerTick.bind(this); } timerTick() { this.setState({ strikes: this.state.strikes + 100 }); } componentDidMount() { setInterval(this.timerTick, 1000); } render() { var counterStyle = { color: "#66FFFF", fontSize: 50 }; var count = this.state.strikes.toLocaleString(); return ( <h1 style={counterStyle}>{count}</h1> ); } } class LightningCounterDisplay extends React.Component { render() { var commonStyle = { margin: 0, padding: 0 }; var divStyle = { width: 250, textAlign: "center", backgroundColor: "#020202", padding: 40, fontFamily: "sans-serif", color: "#999999", borderRadius: 10 }; var textStyles = { emphasis: { fontSize: 38, ...commonStyle }, smallEmphasis: { ...commonStyle }, small: { fontSize: 17, opacity: 0.5, ...commonStyle } }; return ( <div style={divStyle}> <LightningCounter /> <h2 style={textStyles.smallEmphasis}>LIGHTNING STRIKES</h2> <h2 style={textStyles.emphasis}>WORLDWIDE</h2> <p style={textStyles.small}>(since you loaded this example)</p> </div> ); } } ReactDOM.render( <LightningCounterDisplay />, document.querySelector("#container") );
If you make your code look like everything you see above and run the example again, you will see our lightning counter example in all its cyan-colored glory. While you are at it, take a moment to look through the code to ensure you don't see too many surprises.
We just scratched the surface on what we can do to create stateful components. While using a timer to update something in our state object is cool, the real action happens when we start combining user interaction with state. So far, we've shied away from the large amount of mouse, touch, keyboard, and other related things that your components will come into contact with. We'll fix that up in the future. Along the way, you'll see us taking what we've seen about states to a whole new level! If that doesn't excite you, then I don't know what will :P
Next tutorial: Going from Data to UI! ) | https://www.kirupa.com/react/dealing_with_state.htm | CC-MAIN-2020-16 | refinedweb | 2,108 | 58.08 |
How to Optimize Your Code for Interview Questions
There are many different ways to solve interview-type algorithm questions. However given different constraints, some methods become unusable as input size grows. In this tutorial, we’ll analyze the time and space complexity for two different versions of the range summary algorithm. Through this problem, we’ll learn how to optimize code given different constraints and how to analyze the efficiency of our solution.
In this tutorial, we will learn how to optimize solutions to interview type questions by looking into two versions of the range summary algorithm in Ruby.
Let’s begin!
Want to ace your technical interview? Schedule a Technical Interview Practice Session with an expert now!
The Problem
We are given a sorted array of unique integers and must write a method that takes this array and returns an array of ranges of contiguous values. Each range is an array with the start and end integers (inclusive) of the contiguous portions of the original array (see below for a few examples).
Input: [2, 3, 6, 7, 8, 9, 14, 15, 16, 17] Output: [[2, 3], [6, 9], [14, 17]] Input: [8, 9] Output: [[8, 9]] Input: [1, 3, 5] Output: [[1, 1], [3, 3], [5, 5]]
Method 1: The Unoptimized (and Ugly) Nested While Loop Way
Let’s begin by taking a look at an unoptimized way to solve this problem to get an idea of what not to do. In this method, we use nested while loops where the outer loop tracks the index of the start of each range and the inner loop tracks the index of the end of each range. Let’s begin by walking through some pseudocode for the solution.
# Define a method, range_summary, that takes in a single array of integers, nums. # Initialize ranges, as an array with one empty range array inside of it. # Initialize index iterators, i and j. i, the index of the beginning of each range starts at 0, and j, the index of the end of each range, starts at 1. # Create while loop that iterates while i is less than the length of nums - 1. # If the last range in ranges is empty, add nums[i] to it. # Create inner while loop that iterates until j <= to the length of nums. # Set i equal to j and j equal to i + 1. # If nums[j] does not equal nums[i] + 1, # push nums[i] onto the last range in ranges. # Unless nums[j] is the last element of nums, # add an empty range array to ranges. # Break inner while loop. # Set j equal to i + 1. # Return ranges.
Now, let’s turn that pseudocode into real Ruby code.
def range_summary(nums) ranges = [[]] i = 0 j = 1 while i < nums.length - 1 ranges.last << nums[i] if ranges.last.empty? while j <= nums.length i = j j = i + 1 if nums[j] != nums[i] + 1 ranges.last << nums[i] ranges << [] if j != nums.length break end end j = i + 1 end ranges end
As we can see, iterating through the nums array while keeping track of two different indices and instantiating multiple conditionals is complicated and not very readable. When you find yourself nesting loops, that should be a sign that there is possibly a more efficient and legible method.
But how inefficient is our solution? We loop through the outer loop n – 1 times where n is the length of nums. We also iterate through the inner loop n – 1 times in the worst case (if each number in nums is separated by more than 1). In the best case, we loop through the inner loop only once if all the numbers are in order. However, keep in mind that we always want to optimize our solution to handle the worst case. Putting it all together, we can conclude that the worst case time complexity for this solutions is O((n – 1)^2), which reduces to O(n^2).
Space complexity for our solution is actually pretty good. The only two items we store are the integers for the index iterators, i and j. Therefore, the space complexity is constant.
In summary:
- Time Complexity: O(n^2)
- Space Complexity: O(1)
- Readability: 3/10
Method 2: The Elegant Each Loop Way
We’ve hinted that there is a better way to solve the range summary algorithm—so let’s see it! By using variables (first and last for each range that contains) we eliminate the need for an inner loop. Instead, we just iterate through each integer in nums and update the first and last variables. This way ends up being much more readable because we don’t have to keep track of two indices, and we use appropriate variable names. As always, let’s begin with pseudocode.
# Define method, range_summary, that takes in a single array of integers, nums. # Initialize ranges to be an empty array. # Initialize variable first to equal first value of nums. # Create and each_with_index loop, where num is the current element and i is the index. # Set variable last equal to num. # If the next element is equal to last + 1, # set last equal to the next element. # Otherwise, # add [first, last] to ranges, # and set first equal to the next element.} # Return ranges.
Let’s see it in action.
def range_summary(nums) ranges = [] first = nums.first nums.each_with_index do |num, i| last = num if nums[i + 1] == last + 1 last = nums[i + 1] else ranges << [first, last] first = nums[i + 1] end end ranges end
Off the bat, we can see that this is a more optimized solution because it is much more legible. Writing legible code significantly cuts down on development time and makes it easier to make quick updates to code bases. The
each_with_index loop iterates through n times in both the worst and best cases. By using a single loop and keeping track of the first and last elements of the ranges, we cut efficiency down from O(n^2) to O(n). Now that’s an optimized solution!
Space complexity for this solution is also constant. We only store the two variables, first and last.
In summary:
- Time Complexity: O(n)
- Space Complexity: O(1)
- Readability: 9/10
Conclusion
Now that we understand how to optimize the range summary algorithm, how do we extend what we have learned and apply it to other algorithmic problems?
First of all, we have to understand what we are trying to optimize for, whether it be time, space, readability, or cost. Many times, optimizing for one of these negatively impacts another, thus we must know which constraint is most important.
Next, we need to understand what data structures (arrays, hashes, linked lists, etc.) are available to us and the strengths and weaknesses of each. For example, searching an array for a particular value is an O(n) operation, while accessing a value by its key in a hash is an O(1) operation. Having information like this readily available will help us evaluate what the most efficient solution is for our constraints. Given an algorithmic problem, we should first always ask ourselves the following two questions:
- What are we doing with our data structures? (adding, deleting, accessing, etc.)
- Can we use a different data structure that would be better suited for this?
Lastly, we just have to practice! These are difficult problems, and the only way to really become better at writing efficient solutions is to keep doing it.
Author’s Bio
Hannah Squier is a self-taught software developer, with a background in GIS and civil engineering. As a UC Berkeley Engineering graduate and early startup employee, she has navigated many complex challenges with her technical know-how and perseverance. While preparing for her next adventure to become a full time software engineer, she writes tutorials to give back to the developer community. When she’s not coding, Hannah plays frisbee and thinks about how to make cities better places to live in. Get in touch at hannahsquier@gmail.com.
Or Become a Codementor!
Codementor is your live 1:1 expert mentor helping you in real time. | https://www.codementor.io/ruby-on-rails/tutorial/optimize-your-code-for-coding-interview | CC-MAIN-2017-43 | refinedweb | 1,358 | 64.1 |
Querying a Database: Connected Approach
Querying is the process of retrieving data from a data source such as a database. We can do queries using the SELECT statement of the SQL. You will learn how to use this statement to query a table in a database using ADO.NET classes. We will present the connected approach of querying data from the data source. The following are the basic steps you should follow when querying results with an open connection.
- Create A Connection
- Create A Command
- Create A DataReader
- Specify connection string for the Connection
- Specify Connection that will be used for the Command
- Specify the CommandText that will be executed by the Command
- Add values to command parameters (if any).
- Open the Connection
- Execute DataReader
- Read every row from the result set
- Close the Reader
- Close the Connection
With this steps, we can already query a table of data from the database. To demonstrate these steps, let’s create a new Windows Forms Application and name it QueryingDatabaseConnected. Add a ListBox control to the form and name it studentsListBox.
Figure 1
Double-click the title bar to generate an event handler for the form’s Load event. Be sure to import the System.Data.SqlClient at the top.
using System.Data.SqlClient;
Use the following code for the Load event handler.
private void Form1_Load(object sender, EventArgs e) {(); connection.Close(); }
Example 1
Line 3 creates a Connection object (Step 1). Line 4 creates a command object (Step 2). Line 5 declares a reader object (Step 3). Line 7 assigns the proper connection string for our connection (Step 4). The connection string uses the SQLEXPRESS server instance and the University database as the initial database. Line 9 assigns the Connection to be used by the Command object (Step 5). Line 10 specifies the SQL command for the Command object (Step 6). The command specifies that we get the FirstName, LastName, and Age of every record from the Students table. The SQL command assigned to the CommandText of the Command has no parameters so we can skip Step 7 which adds the required values to command parameters. We then open the connection in line 12 by using the DbConnection.Open() method (Step 8). Line 13 creates a DbDataReader instance using the DbCommand.ExecuteReader() method and we assigned the returned DbDataReader object to a variable so we can use it later (Step 9). Lines 15 to 22 iterates each row of the result set returned by executing the SELECT statement of the Command (Step 10). We used the DbDataReader.Read() method to obtain the first row of the result set. If there is at least one row available, the Read() will return true and the loop will continue. We then assigned the values of each column to their specific variables(lines 17-19). We used the indexer for the DbDataReader object that accepts a string representing the name of the column. We then converted their results to proper data types. For example, we converted the content of reader[“Age”] to integer and stored it to an integer variable. Line 21 simply adds the retrieved data to the Items of the ListBox. After the first loop, the Read() method will execute again and obtain the next row in the result set. If no more records are found, then Read()will return false and exit the loop. After exiting the loop, we closed the DataReader in line 24 (Step 11) and also the Connection in line 25 (Step 12). 
Execute the program and you will see the following output.
Figure 2
By using a while loop in our code, the Read() method was executed until no more records are found. Each record was then added to the ListBox and presented to the user.
You can simplify the code even more by taking advantage of the overloaded constructors of the DbConnection and DbCommand classes. For example, you can simply combine Step 1 and Step 4 by using the DbConnection‘s overloaded constructor that accepts the connection string.
SqlConnection connection = new SqlConnection(@"Data Source=.SQLEXPRESS;" + "Initial Catalog=University;Integrated Security=SSPI");
You can combine Steps 2, 5, 6 by using the DbCommand‘s overloaded constructor that accepts the command text and the connection to be used.
SqlCommand command = new SqlCommand( "SELECT FirstName, LastName, Age FROM Students", connection);
Another thing is that we can use the using statement when creating a connection. Consider the following modifications to the code in Example 1.
using (); }
Example 2
The whole connection process was enclosed inside the using block. On the first line of the using statement, we placed the declaration and initialization of the Connection object. This signifies the using block to immediately destroy the Connection the moment it exits the block. So if the execution reaches the closing brace of the using block, our Connection will be destroyed, and as a result, it will automatically be closed. That’s why, we can omit the call to DbCommand.Close() method. This technique is better because it ensures that the connection will be closed when you are finished using it. Note that the Connection object will no longer be accessible once the execution leaves the using block.
Another good practice is enclosing the connection process into a try catch finally block. This ensures the program that it can handle any database errors or exceptions it can encounter during runtime. Suppose for example that the connection string is wrong or the connection has timed out, then exceptions will be thrown and you need to handle them. We can start adding the try block from the line where the DbConnection.Open() was called.
try { connection.Open(); reader = command.ExecuteReader(); while (reader.Read()) { string firstName = reader["FirstName"].ToString(); string lastName = reader["LastName"].ToString(); int age = Convert.ToInt32(reader["Age"]); studentsListBox.Items.Add(String.Format("{0} {1}, {2}", firstName, lastName, age)); } } catch (SqlException e) { MessageBox.Show(e.Message); } finally { reader.Close(); connection.Close(); }
Example 3
From the moment of opening the connection to accessing the contents of each row, exceptions can be thrown so we enclosed all of them in a try block. We used the SqlException class in the catch block (line 16) which is an SQL Server provider’s version of DbException, to handle any database related errors. We then show the error message to the user using the Message property. We placed the codes that close the DataReader and the Connection (lines 22-23) inside the finally block. We did this to ensure that this codes will always execute regardless of whether exceptions were found or not. If you used a using statement as shown earlier, then you won’t have to write the code for closing the Connection. Try to get a habit of using exception handling when connecting and accessing a data source. | https://compitionpoint.com/connection-string-sql/ | CC-MAIN-2021-21 | refinedweb | 1,132 | 65.12 |
Photo by Pavel Nekoranec on Unsplash
This is a continuation of the SOLID. In the last post, I covered the O and now we will be continuing with the L. So if you didn't check the last post, feel free to click on the link:
Liskov substitution principle
Let z be a property provable about objects x of type T. Then z should be true for objects y of type S where S is a subtype of T.
In a nutshell: A has a property A, if B is a subtype of A then B has also the property A and B should behave the way A is expected to behave. And that means any derived class (B) should be able to substitute its parent class (A) without the consumer knowing it.
Example:
Let's suppose we have a car Class and two other classes Ferrari and Bugatti. Both of the cars are just..cars...sooo..they should behave like a normal car would do, right?
public class Car { public int speed; public Car(int speed) { this.speed = speed; } } public class Bugatti extends Car { public Bugatti (int speed) { super(speed); } } public class Ferrari extends Car { public Ferrari (int speed) { super(speed); } }
And now let's suppose we want to customize a car by increasing its speed, usually, we would write a method inside the Car like this:
public void increaseSpeed(Car car) { String message = "You increased the speed of " + car.getClass().getSimpleName() + " from " + car.speed; System.out.println(message); }
Please notice that increaseSpeed takes any car as a parameter.
We didn't have to implement this method in each class (Ferrari and Bugatti) because they should behave the superclass (car) would. No Complications. And that's the Liskov Substitute Pattern. We replaced the car argument with one of the subtypes each time, and nothing has changed.
Now let's check our results:
You increased the speed of Bugatti from 200 You increased the speed of Ferrari from 210
Conclusion:
The Liskov Substitution Principle extends the open/closed principle by preventing breaking client code.
Sources :
Meyer, B. (1997): Object-Oriented Software Construction, 2nd edition
Robert C. Martin “Principles of OOD”
Discussion (2)
great article!
why is the method increaseSpeed static?
I think it is more easy to explain Liskovs with non-static methods, and you should also skip the metaprogramming part.
Static methods and metaprogramming are code smells
Hi, thanks or your feedback. You're totally right!. I would adjust the static part, and I used reflection in the increaseSpeed method to only simplify the output. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/sightlessdog/revisiting-the-l-in-solid-116m | CC-MAIN-2021-10 | refinedweb | 426 | 63.59 |
Generic TLDs Threaten Name Collisions and Information Leakage
Posted by Unknown Lamer
from the turns-out-bad-practices-bite-you dept.
CowboyRobot writes "As the Internet Corporation for Assigned Names and Numbers (ICANN) continues its march toward the eventual approval of hundreds, if not more than 1,000, generic top-level domains (gTLDs), security experts warn that some of the proposed names could weaken network security at many companies. Two major issues could cause problems for companies: If domain names that are frequently used on a company's internal network — such as .corp, .mail, and .exchange — become accepted gTLDs, then organizations could inadvertently expose data and server access to the Internet. In addition, would-be attackers could easily pick up certificates for domains that are not yet assigned and cache them for use in man-in-the-middle attacks when the specific gTLD is deployed." Another way to look at it: why were they using invalid domains in the first place?
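The collision the summary warns about can be checked for mechanically: collect the top-level labels your internal names use and intersect them with the delegated-TLD list. A minimal Python sketch (the TLD set here is a small hypothetical sample, not IANA's real list, and the function name is made up):

```python
# Minimal sketch: flag internal naming suffixes that would collide with
# delegated gTLDs. The TLD set below is a hypothetical sample; a real
# audit would use IANA's published list of delegated TLDs.
DELEGATED_TLDS = {"com", "net", "org", "biz", "info", "mail", "corp"}

def colliding_suffixes(internal_names):
    """Return the top-level labels used internally that also appear
    in the delegated-TLD set."""
    suffixes = {name.rstrip(".").lower().rsplit(".", 1)[-1]
                for name in internal_names}
    return suffixes & DELEGATED_TLDS

# e.g. an AD forest rooted at .corp plus an internal .mail zone:
print(sorted(colliding_suffixes(["exchange01.corp", "printer.lan", "wiki.mail"])))
# → ['corp', 'mail']
```

Any hit means queries that leak from the LAN to public resolvers could start resolving to someone else's servers the day that gTLD is delegated.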
What's worse.. (Score:4, Insightful)
I used to work for a company where some uncommon but in-use domain names were being used on the intranet and were overriding the Internet ones. A real pain in the ass.
Re: (Score:1)
An external site (with a TLD not hidden by one of your internal TLDs) may link to domains in the external TLD which are hidden by your internal TLD. If you browse that site from your intranet, for you that link will point to the internal domain instead. Which means any interactions from the web page meant to go to the external site will instead go to the internal site.
Re: (Score:3)
Likewise, if your users are set up to use the internal domain but are external to the network, it is an easy MITM attack.
Re:Whats worse.. (Score:4, Informative)
Q: "Why were they using invalid domains in the first place?"
A: Two words: "Active Directory".

Planning a non-Internet-accessible directory infrastructure with AD's Internet namespace rooting has commonly resulted in IT departments deliberately planning for alternative, corporate-designated roots such as .corp, .labs, and .legal. I'm not saying it is right or wrong, but I ran across this frequently in years of consulting and doing pen/vuln work.
Re: (Score:1)
Nice rant about not being able to print through the VPN but I bet there are several reasons for this.
1) Some MBA decided to cut costs by cutting printing down
2) It's a management decision for whatever reason - handed down to IT
3) It's due to an idiot that doesn't know how to configure a VPN to allow printing - happens all the time
4) Company may have a requirement that all docs are PDF for review/storage reasons instead of hardcopy
Instead of ranting on /. about it, ask the IT dept why. You may be surprised...
Obviously the other IT dept has been asked. (Score:2)
Not being able to print is the tip of the iceberg. That was one example of a local resource being blocked by stupid VPN dogmatism. There are many more! Here's one: You have an end user who needs to VPN-connect from a business partner site to use a single app. You've forced all the traffic from the end user through the VPN tunnel (as advocated in the post inspiring the rant) so now the end user cannot re
Re: (Score:2)
Salesman uses laptop to connect to internal domain over company wifi in the office. Goes to Starbucks later and connects to the very same domain name on the very same laptop and application, but since it's the Starbucks wifi it goes to the wrong place.
Re: (Score:3)
Did you deliberately completely ignore what I wrote, or are you *that* stupid?
That said, you said nothing about locked down laptops and in general, BYOD is the new black. You asked why namespace separation fails and I told you. Alas, you just wanted to thump your chest and blow out massive fart clouds. Please make that intent more clear next time so you can get your troll mod and move on.
Re: (Score:3)
I heard of a place where youtube.com redirected to a feed of the office CCTV cameras and a message stating "this event has been logged".
That's why I have been giving my internal (Score:5, Insightful)
That's why I have been giving my internal domains silly names like .zyxprivnet for at least 15 years...

It would be nice to reserve some domain names for internal use, though, just like internal IP addresses.
Re:That's why I have been giving my internal (Score:5, Insightful)
I actually tried to get a TLD reserved for "RFC1918" style use about 12+ years ago: [ietf.org]
I also tried the ICANN but they weren't interested either. And when they approved stuff like .biz and .info, I got the impression they weren't really interested in improving the Internet from a technical aspect but more interested in $$$$. Did the creation of .biz etc really help the Internet that much?
Maybe others may have more success trying it now?
Re:That's why I have been giving my internal (Score:4, Interesting)
I think .biz was helpful, in that I don't trust any domain name that ends in .biz.
Re: (Score:3)
[offtopic] scary that with just your one post, I now know your name and address as they are posted at the bottom of your draft RFC [/offtopic]
Re:That's why I have been giving my internal (Score:5, Interesting)
I wonder which three letter organization ICANN will be giving .onion to :/
Re: (Score:1)
I wonder which three letter organization ICANN will be giving .onion to :/
Clearly it will be: T.H.E. because what other use would there be on the internet besides the.onion
;-)
Re: (Score:2)
I would suspect NRL [wikipedia.org], since they're the ones who sponsored the TOR project in the first place..
I've always advocated using your own FQDN for internal networks. If you own example.com, then put your internal stuff on internal.example.com - dead easy, job done. This gets even easier with Bind's RPZ functionality - you don't even need the "internal" subdomain; you can just add/replace RRs in your main domain, which is rather useful where you want different servers to handle your internal and external access (e.g. mail.example.com can point at an internal mail server when inside your LAN, and an external mail server for anyone on the internet).
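As an aside, the split-horizon setup described here (same zone name, different answers for internal and external clients) is also commonly done with BIND views rather than RPZ; a rough named.conf sketch, with placeholder network ranges and file names:

```
// Sketch of split-horizon DNS with BIND views (addresses and file
// names are placeholders). Internal clients get the internal zone
// file; everyone else gets the external one. All zones must live
// inside views once any view is defined.
view "internal" {
    match-clients { 10.0.0.0/8; localhost; };
    zone "example.com" {
        type master;
        file "zones/example.com.internal";
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "zones/example.com.external";
    };
};
```

The RPZ approach mentioned above achieves a similar effect with a single zone plus a rewrite policy, which is less duplication when only a handful of records differ.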
However, a lot of people decide to use random TLDs for this instead - in particular I've got a number of customers who, under the advice of supposidly qualified network engineers, set up their networks to operate on the .local TLD. This, of course, now becomes a problem since .local is normally used by mDNS, so we end up with conflicting names and all sorts of problems.
I would guess you're relatively safe using .localnet (since traditionally localhost is localhost.localnet) if you really must use a non-globally-unique domain name, but IMHO it solves a lot of problems in the long run if you just use a proper FQDN for everything (not least because you don't end up with naming conflicts if you merge LANs together at a later date).
Another thing to consider: if you're basing your security on reverse DNS lookups then you're an idiot, since the attacker can trivially set their reverse DNS to anything, valid or not.

Nobody is going to see asdfgqwerty.example.com/zxcvbnm and think "that's where we keep the sales notes". If you set it up at [notes.sales] they may actually have a chance to remember that.
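The usual mitigation for the reverse-DNS problem above is forward-confirmed reverse DNS (FCrDNS): look up the PTR, then resolve that name forward and require it to map back to the same IP. A toy Python sketch with the DNS lookups stubbed out as dicts (the function name is invented):

```python
# Forward-confirmed reverse DNS (FCrDNS) sketch. Real code would do
# actual PTR and A lookups; here they are stubbed with dicts so the
# logic is visible. A PTR record alone proves nothing, because the
# owner of the IP's reverse zone can put any name there.
def fcrdns_ok(ip, ptr_lookup, a_lookup):
    """True only if ip's PTR name also resolves forward to ip."""
    name = ptr_lookup.get(ip)
    if name is None:
        return False
    return ip in a_lookup.get(name, [])

ptr = {"203.0.113.9": "trusted.example.com"}   # attacker controls this zone
a   = {"trusted.example.com": ["192.0.2.10"]}  # ...but not this one
print(fcrdns_ok("203.0.113.9", ptr, a))  # → False: PTR claim not confirmed
print(fcrdns_ok("192.0.2.10", {"192.0.2.10": "trusted.example.com"}, a))  # → True
```

Even with FCrDNS, a name match should gate nothing more than logging or rate limits; it is not authentication.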
I'd hope that the average employee would know who their employer is. i.e. if you're employed by Example Ltd. you might expect everything to be under example.com... In any case, all this would usually be linked from a company-wide intranet. Your example of sticking things under [notes.sales] increases the complexity, because now your users are going to have to understand that they need to use "notes.sales" when they're inside the company's network and "notes.sales.example.com" when they're outside the network.
Re: (Score:3)
It (.local) was actually official MS advice for a long time [wikipedia.org]
I tend to think Apple made a poor choice given the pre-existence of lots of .local domains in use (default on Small Business Server 2000 from memory, and supported by [microsoft.com])
I'm more familiar with .localdomain than .localnet, but it wasn't in wide use until long after .local became popular (though to be fair I can find at least one reference to it as far back…
Re: (Score:2)
It (.local) was actually official MS advice for a long time [wikipedia.org]
I tend to think Apple made a poor choice given the pre-existence of lots of .local domains in use (default on Small Business Server 2000 from memory, and supported by [microsoft.com])
I think that both of them made a terrible choice.
Microsoft advised using a domain that (by their own admission) "At the present time, the .local domain name is not registered on the Internet." Not sure how that could ever have struck them as a bright idea. I guess MS was arrogant enough to think the rest of the world would bend to accommodate their de-facto standards rather than bothering to get them properly ratified.
Apple then went along and chose a name that they knew was already widely in use, per offi
Re: (Score:2)
Agreed. To be fair, I was just defending the "supposidly qualified network engineers" [sic].
I just find Apple's move a little more douchy given .local would have been discovered by a google at that time, probably.
Re:That's why I have been giving my internal (Score:5, Insightful)
oh, like .local ? >_>
Re: (Score:3)
.local is used in mDNS (also known as Zeroconf or Bonjour).
.localhost, however, is reserved in RFC 2606 [ietf.org].
Re: (Score:1)
The ".localhost" TLD has traditionally been statically defined in
host DNS implementations as having an A record pointing to the
loop back IP address and is reserved for such use. Any other use
would conflict with widely deployed code which assumes this use.
Seems like that won't do either.
Re:That's why I have been giving my internal (Score:5, Interesting)
No.
.local is for different usage: [ietf.org]
Sure took them a long while to reserve that too.
I proposed reserving a "RFC1918" like TLD about 12+ years ago, but there was not enough interest: [ietf.org]
I did try via the ICANN (emailed them to ask them to reserve it). But the ICANN were more interested in "yet another dotcom tld" like .biz and .info.
And I didn't have a spare USD100k lying around to apply for the TLD through ICANN, and give it to the world if I even succeeded in getting it.
Re:That's why I have been giving my internal (Score:4, Insightful)
You can use .test, .example, .localhost and .invalid. [ietf.org]
The use of these TLDs is somewhat defined and not quite similar to the "intranet"-type use you describe, but at least they're available for private use and nobody will bother you if you use, for example, ".invalid" for your internal domains.
On the other hand, why not simply use subdomains of an actual domainname you own?
If you own example.com, you could use intranet.example.com or perhaps privateserver.internal.example.com
It would be nice if something like ".intranet" could be a reserved TLD.
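Until something like ".intranet" is reserved, the RFC 2606 names are the only TLDs guaranteed never to be delegated, so an internal naming policy can at least be checked against them. A trivial Python sketch (the helper name is made up):

```python
# RFC 2606 reserves .test, .example, .invalid and .localhost; nothing
# under them will ever be delegated publicly. This hypothetical helper
# checks whether a candidate internal zone sits under one of them.
RFC2606_TLDS = {"test", "example", "invalid", "localhost"}

def is_rfc2606_reserved(zone):
    return zone.rstrip(".").lower().rsplit(".", 1)[-1] in RFC2606_TLDS

print(is_rfc2606_reserved("lab1.test"))  # → True
print(is_rfc2606_reserved("corp"))       # → False
```

A subdomain of a registered domain you own passes a different test entirely, and remains the safer choice for production networks.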
Re: (Score:2) [slashdot.org]
Re: (Score:3)
Oh also, that rfc dates back a little. Things change and I wouldn't be surprised if they created a .example top domain at some point for something like teaching purposes.
Back then, a domain couldn't start with a number and nowadays we have 2600.org.
I think we need a new RFC with some reserved prefix like
.intern
So
.internmyproject1 .internmail .internnews .internanything would be guaranteed never to be used.
Re: (Score:3)
AFAIK, it still holds.
A while back some idiots thought it would be smart to redirect all failed .com domains, so maybe example.com was also a victim of that.
But this was quickly reverted after public outcry.
Re: (Score:2)
Oh also, that rfc dates back a little. Things change and I wouldn't be surprised if they created a
.example top domain at some point for something like teaching purposes.
example.com and example.org are explicitly registered for this purpose.
Re: (Score:3, Interesting)
I do realize it's inconceivable, but some people do not own domain names. Well, I do, but they don't really match my internal naming scheme. So, my internal domain is something that wasn't valid until they came up with the stupid gTLD concept: shark species as hostname, domain "sharks" on my network and in a similar vein Kiplings Jungle Book characters as hostnames and "jungle" as domain for my parents network. This works f
Re:That's why I have been giving my internal (Score:4, Funny)
You can use
.test, .example, .localhost and .invalid. ...and nobody will bother you if you use, for example, ".invalid" for your internal domains.
Some CEOs and PHBs might
;).
Re: (Score:3)
Some CEOs and PHBs might
;).
Indeed. The proper usage these days is
.challenged.
Invalids and GIMP haters (Score:2)
Re: (Score:2)
How about: Because I don't own any... and I shouldn't need to for private use!
Re: (Score:3)
That's why I have been giving my internal domains silly names like
.zyxprivnet for at least 15 years...
zyxprivnet sounds like a cool gTLD to register... i'll get right on it.
On the other hand...
.LOCAL and .LAN are unlikely to be allowed as TLDs, since .LOCAL has prior use by Apple for Bonjour/Multicast DNS.
Also,
.INVALID and .LOCALDOMAIN are reserved private TLDs.
Why not use real domains instead? (Score:4, Informative)
It's much better to use a real domain which you actually own and will remember to renew.
Re: (Score:3)
Sometimes you work on small experimental projects where it is too bothersome to ask your big brother for a subdomain name. Example: mysmallproject.ibm.com.
You just come up with a domain name to make things more simple for people working on your LAN. example:
.zyx1999prj
You can't forget to renew them because there is no renewing authority. You just made the tld up yourself!
Re:Why not use real domains instead? (Score:4, Insightful)
Have you ever worked for IBM or any other big corporation? You will have to go through 7 levels of approval, impact analysis, cost analysis, get about 50 people involved etc. and wait several months. Nah ;-)
Note that, of course, I always create subdomains when I have control of the domain or when it is easy to get in touch with the person who does. Read: smaller companies.
Re: (Score:2)
Re: (Score:2)
> Have you ever worked for IBM or any other big corporation? You will have to go through 7 levels of approval, impact analysis, cost analysis, get about 50 people involved etc. and wait several months
I can't understand why big organizations can't delegate responsibility for subdomains so that this isn't a problem. Once an internal unit of Example Corp (example.com) goes through the internal hoops to get the foo.example.com subdomain, they ought to handle the process when someone wants bar.foo.example.com.
Re: (Score:2)
In that case, simply edit your hosts file and add your own entry for project123.ibm.com. Your first DNS server is your computer... unless you've changed the default host.conf
Re: (Score:2)
If you choose to go the
/etc/hosts file route, then you do not need a domain name at all. Host names will suffice.
On the other hand, I prefer DNS and I do not know any other way than using a zone file to cause hostnames to resolve to IP addresses. I might use the hosts file for something with at most 5 machines that need to know each other.
You need DNS and DHCP anyway for people with laptops that move around and that are not always on your network and who sometimes don't even have admin rights on their laptop.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
I don't like numbers without context . . . (Score:5, Interesting)
Currently, 25 percent of queries to the domain name system are for devices and computers that do not exist, suggesting the companies are already leaking information to the Internet
And how many of those are due to actual people as opposed to confused webcrawlers looking up dead links?
"Oh hai, a new webpage. Lookie, a link. hddp://mywobsite.youspace.com/forum/?post=1. Oh, there's nothing there.
Lookie, another link. hddp://mywobsite.youspace.com/forum/?post=2. Oh, there's nothing there
Lookie, another link. hddp://mywobsite.youspace.com/forum/?post=3. Oh, there's nothing there"
Re:I don't like numbers without context . . . (Score:5, Interesting)
True. At the same time, though, I remember that for a while my favorite site was donotreply.com, where the owner would post emails he got as a result of organizations listing email addresses in the @donotreply.com domain. Apparently, even major security firms made it easy to accidentally reply confidential information to whoever happened to own donotreply.com.
Re: (Score:1)
And on that point, Google actually have a silly number of spiders crawling deepnet links these days such as queried pages, pages needing logins and so on.
Not sure which year they started that, but it was a good while ago now. (maybe 5+ years ago)
It could easily just be Google crawlers brute-forcing things that might have existed, or may still possibly exist, or might just be down. (due to Google)
Unknown lamer unknowledgeable and lame, news at 11 (Score:4, Insightful)
why were they using invalid domains in the first place?
Because they could and nobody had warned them that ICANN was eventually going to go for a massive AOLisation of the DNS.
Even without these objections, ICANN is just fscking around (for money, it ain't cheap to sup at their table), and blaming what the rest of the world may or may not have done is not really constructive here.
Re:Unknown lamer unknowledgeable and lame, news at (Score:4)
why were they using invalid domains in the first place?
Because they could and nobody had warned them that ICANN was eventually going to go for a massive AOLisation of the DNS.
The answer is "because there are a lot of idiots passing themselves off as network engineers who actually don't have a clue". It's *never* been sane to pick arbitrary unreserved addresses in any network address space and assume they won't ever be used. And frankly I've seen this time and time again, including such crazyness as people picking arbitrary unallocated IPv4 networks to use internally instead of RFC1918 networks, and then being surprised when things start breaking after those networks have been allocated out to a third party.
Why open that can of worms at all? (Score:4, Insightful)
Seriously, the internet has reached a level of growth where ANY major change like that WILL invariably break something that grew along with it. And we didn't even reach the point yet where this alone is obviously a serious business advantage or drawback, depending on who gets certain TLDs. Who gets to have
.mail? Who gets .web? Who is the lucky dog who gets that license to print money? And, worse, to keep certain people from using it at all, preferably those that would present a competitor to them?
Who gets to use
.$well_known_name? .exchange? .office? Or how about .gates? .jackson?
If this does anything, it just opens up a new round of domain name turf wars and domain squatting. Only this time, there is no escape from the squatter. There is no $name.$land when $name.com is held for ransom.
Re: (Score:2)
What ever you pick, how ever much it cost you, someone will use their trademarks and copyrights to sue you for it, plus damages.
Re: (Score:2)
Ferrero might disagree [internatio...office.com].
But rest easy, of course they made certain to get the ".kinder" domain before ANYONE could DARE to snatch it from them.
And let's not go for funny little tidbits like Apple Computers vs. Apple Records. It's not so unlikely that people register the same trademark if it is a common name. And don't tell me there aren't many trademarked names that actually come from either normal words (where the trademark consists to a good deal of a picture, which is pretty moot when it comes to domain n
Re: (Score:2)
Just noticed the link wants a login now. Odd. But essentially it's about Ferrero losing the lawsuit for the "kinder.at" domain name to a charity organization. Use the search engine of your choice to find out details if interested.
Re: (Score:2)
Re: (Score:3)
"Who gets to have -?"
The highest bidder, of course.
Re: (Score:2)
they're opening the can of worms because for them it's actually a can of cash and can of need-to-be for otherwise useless guys.
Re: (Score:2)
Then why do WE agree to partake in the can-of-worms-opening?
Do I need a new TLD? For all I care they can keep it.
Sooo... (Score:2)
The Internet ought not evolve, because some network admins at companies don't know how to use it properly? Is that the argument? I'd say that's a rather bad argument.
Re:Sooo... (Score:5, Insightful)
The internet is critical infrastructure now.
Would you suggest changing the mains voltage for the US power grid? "Evolving" to 220v would reduce substation transformer requirements and reduce copper usage in residential construction. Or perhaps people don't know how to use electricity properly, so screw them when nothing works.
Re: (Score:2)
In my opinion, adding the TLD
.assholes and reserving it strictly for business cannot do harm.
Re: (Score:3)
I think we're saying the internet ought not evolve bug mandibles and a third arm growing out of its forehead. Arbitrary TLDs are just bad design.
Re: (Score:2)
It wasn't removed...there just aren't any more seeders.
This is a BS article and masks the real issue (Score:5, Informative)
In addition, browser completion into ".com" by default means that any typo will take you outside the company, so it's an idiotic example anyway.
The real issue is that if there are 1000 TLDs, all the companies that stupidly equate the DNS namespace with the trademark namespace will, in order to "defend their trademarks", feel they have to register their trademarks as domain names with 1000's of registrars. They don't like this.
As a pointed example, we used to maintain the top level DNS servers for free; it was a volunteer thing, and Paul Vixie did most of the work. Then the idiots at Dupont went off and registered over 400 domains in a single day, and that was it; that was too much work to expect the volunteers to do for free, and so they decided not to do so. Thereafter you paid for registration. Then people decided they could make a good profit at it: instead of paying once for a change to the TLD subdelegation record, you paid recurring fees, and the whole "let's rent domain subdelegations of TLDs instead of selling them" model was born.
So back to Dupont... 400 domains * 1000 registrars * $30 average per year = $12M
Expect legislation protecting trademarks across all TLDs to follow shortly on this whole fiasco.
Re: (Score:2)
Indeed, this needs to be an exception to trademark law as the namespace doesn't actively distinguish between similarly named companies in different lines of work. The UDRP -- warts and all -- does work for disputes if one comes up. That should be a sufficient starting place for encroachment if someone is attempting to mimic you.
Every company in America should not have to license 800000000000000000000 domain names "because TM".
-l
Re: (Score:2)
I think this is untrue - I'm pretty sure you could use Bind's RPZ functionality to do this. Although why you would is anyone's guess.
However, that doesn't seem to be what the article is talking about. The article is talking about your DNS server being nonauthoritative (and forwarding) at the . level, but authoritative for (for example) "exchange.", "corp.", etc. which is, of course, fully supported in any DNS server because thats how DNS works.
In addition, browser completion into ".com" by default means that any typo will take you outside the company, so it's an idiotic example anyway.
What browsers complete to
.com by default? Firefox, at least,
And more importantly... (Score:5, Insightful)
Just imagine if company A asks for a certificate for mail.corporate, but then uses it for industrial espionage against company B's mail.corporate server...
Re: (Score:2)
Count me in.
It's the current DNS system that's flawed, no matter what TLD's there are or not. It is time to abolish the old system.
DNS management must be decentralized, everyone who connects to the Internet should be automatically in charge of it (by running a p2p DNS search node), domain names ought to be arbitrary, free and strictly distributed on a first come, first served basis. There are plenty of working models that would prevent abuse and contrary to what some people claim security is NOT an issue (a
why were they using invalid domains in the first.. (Score:4, Informative)
Because they could. Because it was an easy solution. Because no one could imagine that ICANN would someday be so broken that
FUD (Score:2)
This is mostly FUD.
Regarding external certificates, most certification agencies (at least those that are members of the CA/Browser Forum [cabforum.org]) have stopped issuing certificates for invalid domain names for any date posterior to November 1st 2015. They put this policy in place on Nov 1st 2012. Any such certificates that might be marked as valid beyond that date will be revoked on October 1st 2016.
Now, there may be a concern with internal certificates for such domains, but that is for the internal policy o
Another way to look at it (Score:2)
Another way to look at it: why were they using invalid domains in the first place?
Another way to look at it: why are they being dependent on an external TLD structure for their security mechanism?
Nothing new (Score:1)
I'm sure major entities [wikipedia.org] already re-route [wikipedia.org] things like
.com, .net, and .org to "internal" sites on an as-needed basis.
Let the Balkanization of the Internet begin^H^H^H^H^Hcontinue.
It broke itself (Score:2)
If you have internal systems facing the internet where just using the right domain name would unveil what is inside to all the world, the one that "broke it" is you, either by designing "security" that way or choosing vendors that force you to work that way. Depending on the ignorance of the remote side is a bad security measure (or better, is a good insecurity measure).
In fact, it is probably good that something makes evident that you have an open insecure system on the internet. The bad guys (including NSA and
.local issues (Score:2)
Old news. This has been an issue for YEARS.
Microsoft used to use and even advocate
.local in many of its articles and educational documentation even after it became used by Multicast DNS / mDNS and other systems
It was only recently that they stopped, when the SSL registrars would no longer accept
.local for certificates.
I have also seen several networks using
.int for internal domains even though those were used for international organizations for a LONG time. Same as wit
Reserved TLD's (Score:2)
Stores context used per node in path planning, stores row and column position.
More...
#include <GridWorld.h>
Stores context used per node in path planning, stores row and column position.
Definition at line 25 of file GridWorld.h.
List of all members.
default constructor, initializes to invalid position for fail-fast
Definition at line 27 of file GridWorld.h.
constructor from a coordinate pair
Definition at line 29 of file GridWorld.h.
a silly function to scatter row and column across the span of size_t
Definition at line 31 of file GridWorld.h.
comparison operator to allow sorting states for faster lookup in std::set
Definition at line 38 of file GridWorld.h.
equality is used to test against goal
Definition at line 40 of file GridWorld.h.
[friend]
just for debugging
Definition at line 42 of file GridWorld.h.
column
Definition at line 45 of file GridWorld.h.
Referenced by GridWorld::expand(), hash(), GridWorld::heuristic(), operator<(), operator==(), and GridWorld::operator[]().
row
Definition at line 44 of file GridWorld.h.
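Taken together, the documented members correspond to a struct along these lines. This is a hypothetical reconstruction for illustration only: the exact field names, the "invalid position" sentinel, and the hash mixing in the actual Tekkotsu source may differ.

```cpp
#include <cstddef>

// Hypothetical sketch of GridWorld::State based on the member list above.
struct State {
    std::size_t r;  // row
    std::size_t c;  // column

    // Default constructor: initializes to an invalid position for fail-fast.
    State()
        : r(static_cast<std::size_t>(-1)), c(static_cast<std::size_t>(-1)) {}

    // Constructor from a coordinate pair.
    State(std::size_t row, std::size_t col) : r(row), c(col) {}

    // Scatter row and column across the span of size_t (assumed mixing).
    std::size_t hash() const {
        return r ^ (c << (sizeof(std::size_t) * 4));
    }

    // Ordering so states can be stored in std::set for faster lookup.
    bool operator<(const State& o) const {
        return r < o.r || (r == o.r && c < o.c);
    }

    // Equality is used to test against the goal.
    bool operator==(const State& o) const { return r == o.r && c == o.c; }
};
```

With an ordering and equality like this, `std::set<State>` works directly, and the goal test reduces to `current == goal`.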
again - Reload modules when they change
use again 'LWP::Simple';              # default import
use again 'LWP::Simple', [];          # no import
use again 'LWP::Simple', [qw(get)];   # import only get
use again 'LWP::Simple', ();          # default import (!!)
use again 'LWP::Simple', qw(get);     # import only get
use again;
require_again 'Foo::Bar';
When the perl interpreter lives for a long time, modules are likely to change during its lifetime. Especially for mod_perl applications, this module comes in handy.
use again;
A bare
use again; (that is: no import list) will export
require_again (and
use_again, which always croaks saying you should use
use again instead) into your namespace. There is no convenient way to import
require_again without importing
use_again too.
use again MODULE, [ IMPORTS ];
If you do pass arguments, the first is used with
require_again, and all remaining arguments are used to import symbols into your namespace.
When given arguments,
use again first does what
require_again does, and then imports the requested symbols.
require_again MODULE;
Loads the module if it was not loaded before, or if it has changed since the last time
require_again loaded it.
There is no license. This software was released into the public domain. Do with it what you want, but on your own risk. The author disclaims any responsibility.
Juerd <juerd@cpan.org>
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
This patch series is pending review:

- Remove Wundef warnings for specification macros
- Add _POSIX namespace SYSCONF macros to conf.list
- Use PTHREAD_DESTRUCTOR_ITERATIONS
- Use conf.list to generate spec array

Siddhesh

On Fri, Sep 19, 2014 at 03:50:03PM +0530, Siddhesh Poyarekar wrote:
> This patch set fixes Wundef warnings for all POSIX_* macros and also
> proposes a way to organize all of the sysconf variables so that
> they're generated from one place (posix/conf.list) and are hence
> typo-proof. All patches have been verified on x86_64 to ensure that
> they don't result in any significant changes in generated code.
>
> [PATCH 1/4] Remove Wundef warnings for specification macros
>
> - This is the initial patch that adds the conf.list file and macros
>   and fixes one set of warnings.
>
> [PATCH 2/4] Add _POSIX namespace SYSCONF macros to conf.list
>
> - This is the second patch that adds the POSIX namespace sysconf
>   macros to the list and fixes the remaining warnings.
>
> [PATCH 3/4] Use PTHREAD_DESTRUCTOR_ITERATIONS
>
> - This patch replaces the POSIX_THREAD_DESTRUCTOR_ITERATIONS with a
>   view to unify getconf and sysconf usage.
>
> [PATCH 4/4] Use conf.list to generate spec array
>
> - This patch removes the hand-written specs variable and replaces it
>   with an auto-generated array. The vars array can be similarly
>   replaced after adding all of its constituent variables to conf.list.
>   I'll do that once these patches are in.
Attachment:
pgpIaXBvSTxKo.pgp
Description: PGP signature
I am having trouble running a script (any script, even a simple test script as shown in the user manual). Currently I've moved to testing in the script playground until I figure out what I'm doing wrong.
My script (run in the playground):
shared.Microblend.PrintHelloWorld("Hello World")
My script in the shared library shared>Microblend>PrintHelloWorld:
def PrintHelloWorld(message):
print message
Error in console:
Traceback (most recent call last):
  File "<input>", line 1, in <module>
TypeError: ‘com.inductiveautomation.ignition.common.script.ScriptModule’ object is not callable
Ignition v7.7.2 (b2014121709)
Java: Oracle Corporation 1.8.0_11
My OS is:
Windows Server 2008 R2
I’m sure this is simple oversight. Any help would be appreciated. | https://forum.inductiveautomation.com/t/run-script-in-shared-library-gives-object-not-callable/9866 | CC-MAIN-2022-40 | refinedweb | 118 | 50.53 |
Speech to ear speaker?
Is there away to use the text to speak so that it outputs to the ear speaker and not the speaker phone?
@dwildbore, hi. I used the example below from the Pythonista docs. I am using Apple AirPods on an iPad 12.5' 2nd gen. The sound worked fine for me. When the AirPods were in, the speech came through them, and after removal the sound came out of the speaker. I didn't have a wired pair of earphones handy to see if it worked differently or not.
import speech
import time

def finish_speaking():
    # Block until speech synthesis has finished
    while speech.is_speaking():
        time.sleep(0.1)

# US English:
speech.say('Hello World', 'en_US')
finish_speaking()
# Spanish:
speech.say('Hola mundo', 'es_ES')
finish_speaking()
# German:
speech.say('Hallo Welt', 'de_DE')
finish_speaking()
I should have clarified. I mean like the speaker used during a phone call that you hold up to your ear. Good to know the ear pods work tho, i'm hoping to get a pair! thanks!
In case anyone is wondering I got it working:
from objc_util import *
import speech
import time
AVAudioSession = ObjCClass('AVAudioSession')
audioSession = AVAudioSession.sharedInstance()
oldCategory = audioSession.category()
audioSession.setCategory_error_(ns('AVAudioSessionCategoryPlayAndRecord'), None)
speech.say('hello') #like a phone call
time.sleep(1)
audioSession.setCategory_error_(ns(oldCategory), None)
speech.say('hello again') # speaker phone
29 comments:
Hi Jim-
I have your book and am using it to get a better understanding of coding using JQuery and Javascript. I have two grids on a time entry page and am trying to align them one below the other exactly column to column and am thinking I need to use something like jquery to achieve this because when PS renders the page, no matter how much I align the grids in app designer they are always off. Any ideas on how I should achieve this ? I got as far as finding the elements in firebug but cannot get the position to change.
Thanks.
George
@George, lining them up column by column will be difficult. As you saw, the tr elements have a unique ID, but the cell (td) elements do not. It is certainly possible to set both to have the same column widths using jQuery. One way to do this would be to iterate over the td's in row 1 of the first table, and compare the widths with the second table. For each column, set the width of both to the greater of the two.
@George, here is an example of iterating over row 1 in the user profile roles and then printing the width of each cell in that row:
$("#trPSROLEUSER_VW\\$0_row1").find("td").each(function() {
console.log($(this).width());
});
When using jQuery selectors, always start with something very specific with browser native support, like an ID selector, and then use find on the subset to reduce the selected elements. jQuery resolves selectors right to left, so searching for the selector "#trPSROLEUSER_VW\\$0_row1 td" is accurate, but not as efficient because the sizzle engine will iterate over all td's, reducing the set to just the ones that are children of #trPSROLEUSER_VW$0_row1.
Also notice the \\ before the $. $ has a special meaning in selectors, so you have to escape it in jQuery selectors.
Hi Jim,
How are you? This is lakshminarayana.
@Lakshminarayana, the submitAction call in the button click is a JavaScript form post. There are two ways to execute this code. The first is using jQuery selectors and .click() to trigger the original button's click. The second is to call the exact same code from your event handler. If you hide the navigation buttons within the grid's or scroll's properties, I'm not sure if the JavaScript will still work. This will perform an Ajax form post and reload the grid based on the Ajax response.
I also thought there were PeopleCode methods or functions for switching pages in a scroll. Worst case, I thought you could set the active row using PeopleCode in a FieldChange event.
Hi Jim,
Long-time reader, first-time commenter (though I believe we've been on at least one or two conference calls / e-mail threads together, through Matt).
I wanted to share the basics of the code we use to overcome this same problem within InFlight.
SharePoint's Web Parts on a container page are very much akin to PS pagelets -- autonomous and (for all intents and purposes) unaware of each other when they're rendered.
So, like you, we aim to load jQuery and the plugins once and only once.
Here's the code that would go inside each pagelet: (tags are in [] instead of <>)
[script type='text/javascript']
if (typeof INFLIGHT_LOADER_EXISTS == 'undefined') {
document.write('[script src="/InFlight/Common/js/InFlightAssetLoader.min.js"][\/script]');
}
[/script]
[script type='text/javascript']
InFlightLoadFile('/path/to/resource.1.js'); // e.g. plugin #1
.
.
.
InFlightLoadFile('/path/to/resource.N.js');
[/script]
Here's the simplified edition of the InFlightAssetLoader.js... It can be modified to support whatever flavor of lazy loading you like (though document.write has proven consistently successful for us). Also, the real script does things like allowing you to write into different spots within the DOM, but I wanted to keep this simple. The basic structure is here:
var INFLIGHT_LOADER_EXISTS = true; // note that in a real implemetation, variables like this should be namespaced, not global
var InFlightLoadedFiles = [];
var JQUERY_FILE = "/InFlight/Common/js/jquery.min.js";
var INFLIGHT_CSS = "/InFlight/Common/css/InFlightHelperStyles.css";
InFlightLoadFile(JQUERY_FILE);
InFlightLoadFile(INFLIGHT_CSS);
function InFlightLoadFile(filePath, args) {
// Isolate the file name and extension
var fileName = filePath.replace(/^.*(\\|\/|\:)/, '');
if (InFlightLoadedFiles[fileName] == null) {
// Detect if we're trying to add jQuery itself and
// look to see if the jQuery() function already exists
var REjQ = /jquery(?:\-(\d|\.)*)?(?:\.min)?\.js/;
var skipLoad = false;
if (REjQ.test(fileName)) {
skipLoad = (typeof jQuery != 'undefined');
}
if (!skipLoad) {
// We haven't loaded this script yet, so load it
var REext = /(?:\.([^.]+))?$/;
var ext = REext.exec(fileName)[1];
var isCSS = (ext === 'css');
if (isCSS) {
document.write("[link rel='stylesheet' type='text/css' href ='" + filePath + "' /]");
} else {
document.write("[script type='text/javascript' src ='" + filePath + "'][\/script]");
}
InFlightLoadedFiles[fileName] = true;
}
}
}
@James, thank you for sharing. This is a very good approach because it keeps the browser from attempting to read all that JavaScript multiple times. For sites using Pagelets with a PeopleTools 8.49 Portal and earlier, your recommendation is better than mine. The reason I qualify the PeopleTools release is because later versions of PeopleTools (8.50 or 8.51... can't remember which) switched from delivering a server-side assembled homepage to a client-side assembled homepage. The 8.5+ homepage contains pointers to pagelets and uses Ajax to parallel load pagelet content after the homepage loads. document.write is a great strategy for loading JavaScripts when a page is loading because of its blocking nature. I recommend this approach in my PeopleTools Tips book. After a page loads, however, document.write will actually replace the page. The way PeopleTools 8.5+ handles this is to check for pagelets using document.write, and then force a full homepage non-Ajax reload.
The document.write strategy with a PeopleSoft portal is great for 8.49 and earlier, but may cause performance problems with later versions of PeopleTools because it causes the homepage to stop processing and reload. You can see this in action by creating a simple HTML pagelet that has the following script:
[script type="text/javascript"]
document.write("Hello World!");
[/script]
When a PeopleTools 8.50 (or was it 8.51...) portal loads this pagelet, as soon as it hits document.write, the whole homepage will reload in "classic" mode without Ajax.
I discovered this "feature" after a PeopleTools upgrade. I had some user homepages that would "flicker" (load part way, then reload) and I wasn't sure why. I had pagelets that used document.write, which was causing a full homepage refresh. This behavior makes it so document.write continues to work, it just doesn't perform as well.
How does SharePoint handle this? Does it serve a fully loaded homepage with all WebParts and WebPart content or does it use Ajax to fetch WebPart content in parallel/asynchronously after the homepage loads?
@James, I was just looking at my collaborate schedule for tomorrow and I see you have a session in the morning. I look forward to seeing you there!
I did buy your book last week, great information. I have a question. In our Interaction Hub (9.1, 8.52.07) we have content from HCM and Financials and it's working well. We have a go-live coming up for a large group of self service users. When the content is rendered there is a return link that we would like to remove. Can you point me to what might drive that return link and how to remove it from the the HCM and Financials content? Thanks in advance.
@Randy, what type of content? Is it query content?
Hi Jim-
Trying to apply your fix and for the life of me can't get it to work. Still pretty green with javascript and jQuery. Here's the java calls that I'm using:
[script src="…"][/script]
[script src="…"][/script]
[script]
$(document).ready(function() {
    $("#demo_myHrLinks").accordion({
        heightStyle: "content",
        active: false,
        collapsible: true,
        header: 'h3',
        disabled: false,
        animated: 'bounceslide'
    });
});
[/script]
Thanks
Justin
@Justin, this post applies to the jQuery and jQuery UI files directly, not to the HTML that inserts the HTML. From what you posted in the comments, your jQuery is actually hosted by the Google Ajax API's/CDN, so you can't modify the files to take advantage of what I mentioned in this post. To apply "include protection," download the jquery files, modify them according to this post, place them in a location where your browser can access them (your PeopleSoft web server, etc), and then update your HTML to point to your versions of the JavaScript libraries.
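For anyone following along, the "include protection" wrapper amounts to a guard of the following shape. This is only a sketch: `myLib` is a tiny stand-in for the real library source (jQuery is far too large to show here), and the `initCount` bookkeeping exists purely to make the behavior visible.

```javascript
// Include-protection sketch: the library body runs only if the global does
// not exist yet, so pasting the same file into several pagelets is safe.
if (typeof myLib === "undefined") {
    // ...the full library source would go here; a stub for illustration:
    var myLib = { version: "1.0", initCount: 0 };
    myLib.initCount += 1;
}
```

When two pagelets on the same homepage each carry this file, the second copy's guard sees the already-defined global and skips the body, so the library is initialized exactly once.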
Jim,
Just wondering if you have you tried to get Raphael.js to work in a PS environment. I tried to created some dashboard using Raphael but nothing returned. I'm sure the Raphael library was loaded, I could see the code in Firebug, but just got no results back.
Thanks Gary
@Gary, no, I haven't looked into Raphael.
Hi Jim,
Any reason not to put the JS path in the header? I do it regularly as I'm usually doing alternate templates for homepage to get the look I want. In cases where only an HTML override is used I call JS from a directory on the web server. If I'm doing a WEBLIB-based template I can get JS from the database (as an HTML definition).
Another approach is to put a "hidden" pagelet on the page, where the pagelet contains a URL or source code. I think it's rather hack-ish but it works in a pinch.
Gary - though I've not done Raphael work directly I have seen it used in PS in production. It is definitely do able.
Chris
@Chris, I have jQuery in my header as well. The problem with this approach is that it doesn't help with Pagelet Wizard. Yes, if I put jQuery in the header, then Pagelet Wizard pagelets on the homepage will be able to use it, but Pagelet Wizard won't use it in the Pagelet Wizard itself. This is because of the separation of the header and component pages through frames and is why I put jQuery in my pagelet HTML and XSL as well.
@Chris, thanks, I was able to get Raphael JS working in a PS environment. The issue was with my function being called by window.onLoad(). I called the function directly from the pagelet and it worked.
Gary
I've done CSS injection into the component iFrame using JS to add another Link element to the head section. What might one need to consider in trying a similar approach with a script element? Needs a path, like to jquery on a server, or something else. Not quite the same as CSS, since style sheets are cached and we can get the URL via PeopleCode.
There are always little nuances to these nifty tricks. Just because one can doesn't mean one should, eh?
Cheers,
Chris
@Chris, right, just because you can, doesn't mean you should. In Part II of my PeopleTools Tips book I show how to insert scripts into pages. It works the same in iframes (as long as they don't violate the same-origin policy). Here is the JavaScript:
apt.files.importScript = function(parms) {
  var s = document.createElement("script");
  s.type = "text/javascript";
  s.src = parms.url;
  s.id = parms.id;
  s.defer = (parms.defer) ? parms.defer : false;
  document.getElementsByTagName("head")[0].appendChild(s);
};
Just replace the document references with the document inside the iframe.
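For an iframe, that substitution might look like the sketch below (the frame id and script URL here are hypothetical; inspect your page to find the real ones):

```javascript
// Hypothetical frame id; inspect your page to find the real one
var frame = document.getElementById("ptifrmtgtframe");
// Fall back for older browsers that only expose contentWindow
var frameDoc = frame.contentDocument || frame.contentWindow.document;

var s = frameDoc.createElement("script");
s.type = "text/javascript";
s.src = "/scripts/jquery.min.js"; // hypothetical path on your web server
frameDoc.getElementsByTagName("head")[0].appendChild(s);
```

This only works when the iframe content comes from the same origin as the parent page; otherwise the browser blocks access to contentDocument.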
Jim
We are upgrading our PeopleTools from 8.49 to 8.53. I wrote a lot of jQuery JavaScript to improve the user experience in our self-service modules running in 8.49. I've moved on and another team is dealing with the upgrade. They are having some problems with the JavaScript in 8.53. I'm wondering if you are familiar with any conflicts between jQuery and the delivered 8.53 JavaScript?
One person on that team has theorized that PeopleSoft is now using prototype.js and that there are known conflicts with jQuery. I've seen nothing to indicate that prototype.js is being used.
I'm tied up with other work, so don't know the details of the problems, but may have to dig in myself at some point to see what is going on.
Thanks for any help.
Cheers
Dan
@Dan, as always, it is good to hear from you. No, PeopleSoft is not using Prototype. In fact, 8.53 now ships with jQuery and jQuery UI. It is an older version and it isn't used by regular transaction pages (yet?), but it is now delivered as HTML definitions.
I would need more information, but just FYI, I use jQuery with 8.53 with no issues. When I switched from 8.49 to 8.50, I had to work through the fact that $(document).ready doesn't fire on transaction pages because the content is ajax'd into the browser (It didn't work on 8.50 anyway).
Thanks for your insight Jim. I will recommend looking into the $(document).ready problem. One of the errors I've seen could be the result of that.
Dan
Hi Jim,
I am creating a context menu in all PeopleSoft pages using jQuery. I have been successful when I test the pages in Firefox or Chrome. However, in Internet Explorer (8, 9, 10) I keep getting the error "SCRIPT438: Object doesn't support property or method 'defineProperty'". Because of this error, my other custom JavaScript files probably error out. I have tried every jQuery version but have not been successful. The only version where I am able to get my context menu working in IE is 1.8.2, but even then the context menu does not show up as it does in Firefox or Chrome. Do you have any idea how to resolve this?
Thanks
Nagaraj
@Nagaraj, I am not sure what is causing this. Is defineProperty part of your code or is it from a jQuery plugin? Perhaps you can place some debug code around it? Are you using the IE Developer Toolbar?
Hi Jim,
Thanks for your great posts over the years.
I am trying my hand at jQuery (a watermark on a field) and I am stuck on a small issue. The issue is: how do I refer to a PeopleSoft page field in jQuery code?
Below is a sample watermark code:
$(document).ready(function() {
  $(".PSLONGEDITBOX").attr("placeholder", "Please review the page.");
});
Here, instead of ".PSLONGEDITBOX", I want to pass a field name. I tried recordname_fieldname but it didn't work. Could you please shed some light on it?
P.S: all my fields are on level 0.
@Arpan, the easiest way to find the ID is to right-click on the element and choose inspect to view it in the browser's inspector. Typically fields have the record_field naming convention, but there are exceptions.
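For example, if the inspector showed an input with the id PSOPRDEFN_OPRDEFNDESC (a hypothetical record_field id, purely for illustration), note that jQuery needs a leading # to select by id — a class selector starts with a dot instead:

```javascript
$(document).ready(function() {
  // "#id" selects the single element with that id;
  // a bare "RECORD_FIELD" string would be treated as a tag name
  $("#PSOPRDEFN_OPRDEFNDESC").attr("placeholder", "Please review the page.");
});
```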
Hi Jim,
Thanks for your reply. But it's not an issue with the record_field naming convention. Could you please provide some sample code?
@Arpan, did you right-click on the element and inspect the HTML? That is the only way to know for sure.
Hi Jim,
I am facing a jQuery issue only in Chrome when I fire it from a PS modal window. It works fine in all other browsers from PeopleSoft except Chrome. It works fine outside of PeopleSoft pages, including in Chrome.
Any help is appreciated.
Below is my js.
<script type="text/javascript">("","css");</script>
<script></script>
<script></script>
<style>
#simplemodal-container[style] {
  position: relative;
  height: 500px;
  width: 900px;
}
</style>
<style>
#simplemodal-container[style] iframe {
  position: absolute;
  height: 500px;
  width: 900px;
}
</style>
<script type="text/javascript">
var Frame = "<iframe src='' scrolling='No' marginwidth='0' marginheight='0' hspace='0' vspace='0' style='border-style: none;'></iframe>";
var modalRan = 0;
$(document).ready(function() {
  if (modalRan == 0) {
    modalRan = 1;
    $.modal(Frame.valueOf(), {
      autoResize: true
    });
  }
});
</script>
1
Checking Your Tools
Written by Audrey Tam
You’re eager to dive into this book and create your first iOS app. If you’ve never used Xcode before, take some time to work through this chapter. You want to be sure your tools are working and learn how to use them efficiently.
Getting started
To develop iOS apps, you need a Mac with Xcode installed. If you want to run your apps on an iOS device, you need an Apple ID. And if you have an account on GitHub or similar, you’ll be able to connect to that from Xcode.
macOS
To use the SwiftUI canvas, you need a Mac running Catalina (v10.15) or later. To install Xcode, your user account must have administrator status.
Xcode
To install Xcode, you need 29 GB free space on your Mac’s drive.
➤ Open the App Store app, then search for and GET Xcode. This is a large download so it takes a while. Plenty of time to fix yourself a snack while you wait. Or, to stay in the flow, browse Chapter 12, “Apple App Development Ecosystem”.
➤ When the installation finishes, OPEN it from the App Store page:
Note: You probably have your favorite way to open a Mac application, and it will work with Xcode, too. Double-click it in Applications. Or search for it in Spotlight. Or double-click a project’s .xcodeproj file.
The first time you open Xcode after installation, you’ll see this window: Install additional required components?:
➤ Click Install and enter your Mac login password in the window that appears. This takes a little while, about enough time to make a cup of tea or coffee.
➤ When this installation process finishes, you might have to open Xcode again. The first time you open Xcode, you’ll see this Welcome window:
If you don’t want to see this window every time you open Xcode, uncheck “Show this window when Xcode launches”. You can manually open this window from the Xcode menu Window ▸ Welcome to Xcode or press Shift-Command-1. And, there is an Xcode menu item to perform each of the actions listed in this window.
Creating a new Xcode project
You’ll create a new Xcode project just for this chapter. The next chapter provides a starter project that you’ll build on for the rest of Section 1.
➤ Click Create a new Xcode project. Or, if you want to do this without the Welcome window, press Shift-Command-N or select File ▸ New ▸ Project… from the menu.
A large set of choices appears:
➤ Select iOS ▸ App and click Next. Now, you get to name your project:
- For Product Name, type FirstApp.
- Skip Team for now.
- For Organization Identifier, type the reverse-DNS of your domain name. If you don’t have a domain name, just type something that follows this pattern, like org.audrey. The grayed-out Bundle Identifier changes to your-org-id.FirstApp. When you submit your app to the App Store, this bundle identifier uniquely identifies your app.
- For Interface, select SwiftUI.
- For Life Cycle, select SwiftUI App.
- For Language, select Swift.
- Uncheck the checkboxes.
➤ Click Next. Here’s where you decide where to save your new project.
Note: If you forget where you saved a project, you can find it by selecting File ▸ Show in Finder from the Xcode menu.
➤ If you’re saving this project to a location that is not currently under source control, click the Source Control checkbox to create a local Git repository. Later in this chapter, you’ll learn how to connect this to a remote repository.
➤ Click Create. Your new project appears, displaying ContentView.swift in the editor pane.
Looks like there’s a lot going on! Don’t worry, most iOS developers know enough about Xcode to do their job, but almost no one knows how to use all of it. Plus Apple changes and adds to it every year. The best (and only) way to learn it is to jump in and start using it.
Ready, set, jump!
A quick tour of Xcode
You’ll spend most of your time working in a .swift file:
The Xcode window has three main panes: Navigator, Editor and Inspectors. When you’re viewing a SwiftUI View file in the Editor, you can view the preview canvas side-by-side with the code. When the app is running, the Debug Area opens below the Editor.
You can hide or show the navigator with the toolbar button just above it, and the same for the inspectors. The debug area has a hide button in its own toolbar. You can also hide any of these three panes by dragging its border to the edge of the Xcode window.
And all three have keyboard shortcuts:
- Hide/show Navigator: Command-0
- Hide/show Inspectors: Option-Command-0
- Hide/show Debug Area: Shift-Command-Y
Note: There’s a handy cheat sheet of Xcode keyboard shortcuts in the assets folder for this chapter. Not a complete list, just the ones that many people use.
Navigator
The Navigator has nine tabs. When the navigator pane is hidden, you can open it directly in one of its tabs by pressing Command-1 to Command-9:
- Project: Add, delete or group files. Open a file in the editor.
- Source Control: View Git repository working copies, branches, commits, tags, remotes and stashed changes.
- Symbol: Hierarchical or flat view of the named objects and methods.
- Find: Search tool.
- Issue: Build-time and runtime errors and warnings.
- Test: Create, manage and run unit and UI tests.
- Debug: Information about CPU, memory, disk and network usage while your app is running.
- Breakpoint: Add, delete, edit and manage breakpoints.
- Report: View or export reports and logs generated when you build and run the project.
The Filter field at the bottom is different for each tab. For example, the Project Filter lets you show only the files you recently worked on. This is very handy for projects with a lot of files in a deeply nested hierarchy.
Editor
When you’re working in a code file, the Editor shows the code and a Minimap. The minimap is useful for long code files with many properties and methods. You can hover the cursor over the minimap to locate a specific property, then click to go directly to it. You don’t need it for the apps in this book, so you may want to hide it via the button in the upper right corner of the editor.
When you’re working in a SwiftUI file, Option-Command-Return shows or hides the preview canvas.
The editor has browser features like tab and go back/forward. Keyboard shortcuts for tabs are the same as for web browsers: Command-T to open a new tab, Shift-Command-[ or ] to move to the previous or next tab, Command-W to close the tab and Option-click a tab’s close button to close all the other tabs. The back/forward button shows a list of previous/next files, but the keyboard shortcuts are Control-Command-right or -left arrow.
Inspectors
The Inspectors pane has three or four tabs, depending on what’s selected in the Project navigator. When this pane is hidden, you can open it directly in one of its tabs by pressing Option-Command-1 to Option-Command-4:
- File: Name, Full Path, Target Membership.
- History: Source Control log.
- Quick Help: Short form of Developer Documentation if you select a symbol in the editor.
- Attributes: Properties of the symbol you select in the editor.
The fourth tab appears when you select a file in the Project navigator. If you select a folder, you get only the first three tabs.
This quick tour just brushes the surface of what you can do in Xcode. Next, you’ll use a few of its tools while you explore your new project.
Navigation preferences
In this book, you’ll use keyboard shortcuts to examine and structure your code. Unlike the fixed keyboard shortcuts for opening navigator tabs or inspectors, you can set preferences for which shortcut does what. To avoid confusion while working through this book, you’ll set your preferences to match the instructions you’ll see.
➤ Press Command-, to open Preferences. In the Navigation tab, set:
- Command-click on Code to Selects Code Structure
- Option-click on Code to Shows Quick Help
- Navigation Style to your choice of Open in Tabs or Open in Place.
ContentView.swift
The heart of your new project is in ContentView.swift, where your new project opened. This is where you’ll lay out the initial view of your app.
➤ If ContentView.swift isn’t in the editor, select it in the Project navigator.
The first several lines are comments that identify the file and you, the creator.
import
The first line of code is an import statement:
import SwiftUI
This works just like in most programming languages. It allows your code to access everything in the built-in SwiftUI module. See what happens if it’s missing.
➤ Click on the import statement, then press Command-/.
You commented out the import statement, so compiler errors appear, complaining about View and PreviewProvider.
➤ Press Command-Z to undo.
Below the import statement are two struct definitions. A structure is a named data type that encapsulates properties and methods.
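For example, here's a minimal structure with one property and one method (the names are illustrative, not part of the project):

```swift
struct Greeting {
  // A stored property
  var name: String

  // A method that uses the property
  func message() -> String {
    "Hello, \(name)!"
  }
}

let greeting = Greeting(name: "world")
// greeting.message() is "Hello, world!"
```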
struct ContentView
The name of the first structure matches the name of the file. Nothing bad happens if they’re different, but most developers follow and expect this convention.
struct ContentView: View {
  var body: some View {
    Text("Hello, world!")
      .padding()
  }
}
Looking at ContentView: View, you might think ContentView inherits from View, but Swift structures don't have inheritance. View is a protocol, and ContentView conforms to this protocol.
The required component of the View protocol is the body computed property, which returns a View. In this case, it returns a Text view that displays the usual "Hello, world!" text.
Swift Tip: If there's only a single code statement, you don't need to explicitly use the return keyword.
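In other words, ContentView's body is shorthand for writing the return explicitly:

```swift
struct ContentView: View {
  var body: some View {
    return Text("Hello, world!") // explicit return; same result
      .padding()
  }
}
```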
The Text view has a padding modifier — an instance method of View — that adds space around the text. You can see it in this screenshot:
This also shows the Quick Help inspector for Text. If you don't want to use screen real estate for this inspector, Option-click Text in the code editor to see the same information in a pop-up window. Clicking Open in Developer Documentation opens a window with more information.
➤ Select the Text view in either the code editor or the canvas, then select the Attributes inspector. Click in the Add Modifier field and wait a short while until the modifiers menu appears:
Scrolling through this list goes on and on and on.
This inspector is useful when you want to add several modifiers to a View. If you just need to add one modifier, Control-Option-click the view to open the Attributes inspector pop-up window.
struct ContentView_Previews
Below ContentView is a ContentView_Previews structure.
struct ContentView_Previews: PreviewProvider {
  static var previews: some View {
    ContentView()
  }
}
The ContentView_Previews structure is what appears in the canvas on the right of the code editor. Again, see what happens if it's missing.
➤ Select the five lines of ContentView_Previews and press Command-/.
Without ContentView_Previews, there's nothing in the canvas.
➤ Press Command-Z to undo or, if the five lines are still selected, press Command-/ to uncomment them.
For most apps, ContentView.swift is just the starting point. Often, ContentView just defines the app's organization, orchestrating several subviews. And usually, you'll define these subviews in separate files.
Creating a new SwiftUI View file
Everything you see in a SwiftUI app is a View. Apple encourages you to create as many subviews as you need, to avoid redundancy (DRY or Don't Repeat Yourself) and organize your code to keep it manageable. The compiler takes care of creating efficient machine code, so your app's performance won't suffer.
➤ In the Project navigator, select ContentView.swift and type Command-N. Alternatively, right-click ContentView.swift then select New File… from the menu.
Xcode Tip: A new file appears in the project navigator below the currently selected file. If that’s not where you want it, drag it to where you want it to appear in the project navigator.
The new file window displays a lot of options! The one you want is iOS ▸ User Interface ▸ SwiftUI View. In Chapter 5, “Organizing Your App’s Data”, you’ll get to create a Swift File.
Naming a new SwiftUI view
➤ Select SwiftUI View then click Next. The next window lets you specify a file name. By default, the name of the new view will be the same as the file name. You'll define the ByeView in this file, so replace SwiftUIView with ByeView.
Swift Tip: Swift convention is to name types (like struct) with UpperCamelCase and properties and methods with lowerCamelCase.
This window also lets you specify where (in the project) to create your new file. The default location is usually correct: in this project, in this group (folder) and in this target.
➤ Click Create to finish creating your new file.
The template code for a SwiftUI view looks almost the same as the ContentView of a new project.
import SwiftUI

struct ByeView: View {
  var body: some View {
    Text("Hello, world!")
  }
}

struct ByeView_Previews: PreviewProvider {
  static var previews: some View {
    ByeView()
  }
}
Like ContentView, the view's body contains Text("Hello, world!"), but there's no padding.
Using your new SwiftUI view
Next, edit your new view's Text view string to look like this:
Text("Bye bye, World!")
Now, in ContentView.swift, in the code editor, delete the Text view, then type bye. Xcode suggests some auto-completions:
Notice you don't have to type the correct capitalization of ByeView.
Xcode Tip: Descriptive names for your types, properties and methods are good programming practice, and auto-completion is one way Xcode helps you do the right thing. You can also turn on spell-checking from the Xcode menu: Edit ▸ Format ▸ Spelling and Grammar ▸ Check Spelling While Typing.
Select ByeView from the list, then add parentheses, so the line looks like this:
ByeView()
You're calling the initializer of ByeView to create an instance of the view.
➤ Click Resume or press Option-Command-P to refresh the preview:
You’ll create many new SwiftUI view files and Swift files to develop the apps in this book.
What else is in your project?
The Project navigator lists several files and folders.
- FirstAppApp.swift: This file contains the code for your app’s entry point. This is what actually launches your app.
@main
struct FirstAppApp: App {
  var body: some Scene {
    WindowGroup {
      ContentView()
    }
  }
}
The @main attribute marks FirstAppApp as the app's entry point. You might be accustomed to writing a main() method to actually launch an app. The App protocol takes care of this.
The App protocol requires only a computed property named body that returns a Scene. And a Scene is a container for the root view of a view hierarchy.
For an iOS app, the default setup is a WindowGroup scene containing ContentView() as its root view. A common customization is to set different root views, depending on whether the user has logged in.
In an iOS app, the view hierarchy fills the entire display. In a macOS or iPadOS app, WindowGroup can manage multiple windows.
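For example, the login customization mentioned above might be sketched like this (isLoggedIn, LoginView and HomeView are hypothetical, not part of FirstApp):

```swift
import SwiftUI

@main
struct FirstAppApp: App {
  // Hypothetical flag, persisted across launches in UserDefaults
  @AppStorage("isLoggedIn") var isLoggedIn = false

  var body: some Scene {
    WindowGroup {
      if isLoggedIn {
        HomeView()  // hypothetical root view for signed-in users
      } else {
        LoginView() // hypothetical sign-in screen
      }
    }
  }
}
```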
- Assets.xcassets: Store your app’s images and colors here. AppIcon is a special image set for all the different sizes and resolutions of your app’s icon.
- Info.plist: This configuration property list contains information needed to launch your app. Many of the names are environment variables derived from the options you set when you created the project. Here you can find things like the app name and version number.
- Preview Content: If your views need additional code and sample data or assets while you’re developing your app, store them here. They won’t be included in the final distribution build of your app.
- Products: This is where Xcode stores your app after you build and run the project. A project can contain other products, like a Watch app or a framework.
In this list, the last two items are groups. Groups in the Project navigator appear to be folders, but they don’t necessarily match up with folders in Finder. In particular, there’s no Products folder in your project in Finder.
➤ In the Project navigator, select Products ▸ FirstApp.app, then show the File inspector:
FirstApp.app isn’t anywhere near your project files! It’s in your home directory’s hidden Library folder.
Note: Don’t rename or delete any of these files or groups. Xcode stores their path names in the project’s build settings and will flag errors if it can’t find them.
You’ll learn how to use these files in the rest of this book.
Xcode Preferences
Xcode has a huge number of preferences you can set to make your time in Xcode more productive.
Themes
You’ll be spending a lot of time working in the code editor, so you want it to look good and also help you distinguish the different components of your code. Xcode provides several preconfigured font and color themes for you to choose from or modify.
➤ Press Command-, to open Preferences, then select the Themes tab:
Go ahead and explore these. You can customize them or create your own. I’ll wait here. ;]
Matching delimiters
SwiftUI code uses a lot of nested closures. It’s really easy to mismatch your braces and parentheses. Xcode helps you find any mismatches and tries to prevent these errors from happening.
➤ In Preferences, select Text Editing ▸ Editing:
Most of the Code Completion items are super helpful. Although you can copy and paste code from this book, you should try to type the code as much as possible to learn how these aids work.
Here’s a big hint that something’s wrong or you’re typing in the wrong place: You’re expecting Xcode to suggest completions while you type, but nothing (useful) appears. When this happens, it’s usually because you’re outside the closure you need to be in.
➤ Now select the Text Editing ▸ Display tab. Check Code folding ribbon and, if you like to see them, Line numbers:
So what’s a code folding ribbon? Between the line numbers and the code, you see darker gray vertical bars. Hover your cursor over one, and it highlights the start and end braces of that closure:
Other ways to see matching delimiters:
- Option-hover over {, (, [ or a closing delimiter: Xcode highlights the start and end delimiters.
- Double-click a delimiter: Xcode selects the delimiters and their contents.
➤ Now click the bar (ribbon) to collapse (fold) those lines of code:
This can be incredibly useful when you’re trying to find your way around some complex deeply-nested code.
➤ Click the ribbon to unfold the code.
Adding accounts
You can access some Xcode features by adding login information for your Apple ID and source control accounts.
➤ In Preferences, select Accounts:
➤ Add your Apple ID. If you have a separate paid Apple Developer account, add that too.
To run your app on a device, you’ll need to select a Team. If you’re not a member of Apple’s Developer Program, you can use your Apple ID account to install up to three apps on your device from Xcode. The app works for seven days after you install it.
To add capabilities like push notifications or Apple Pay to your app, you need to set Team to a Developer Program account.
Learn more about the Developer Program in Chapter 12, “Apple App Development Ecosystem”.
- If you have an account at Bitbucket, GitHub or GitLab, add it here if you want to push your project’s local git repository to a remote repository.
Bitbucket, GitHub and GitLab accounts require a personal access token. Click the link to open the site’s token-creation page.
➤ To set up a remote repository, open the Source Control navigator (Command-2), then click the settings button and select New “FirstApp” Remote…:
➤ Select your options, then click Create:
And here it is:
Running your project
So far, you’ve relied on Preview to see what your app looks like. In the next chapter, you’ll use Live Preview to interact with your app. But some features don’t work in Live Preview, so then you need to build and run your app on a simulator. And some things will only work on an iOS device. Plus, it’s fun to have something on your iPhone that you built yourself!
The Xcode toolbar
First, a quick tour of the toolbar:
Xcode Tip: Press Option-Command-T to show or hide the toolbar. If this keyboard shortcut conflicts with another app, select the command from the Xcode View menu.
So far, you’ve only used the buttons at either end of the toolbar, to show or hide the navigator or inspector panes.
Working from left to right after the navigation pane button:
- Run button: Build and run (Command-R) the project.
- Stop button: Stop (Command-.) the running project.
- Scheme menu: This button’s label is the name of the app. Select, edit or manage schemes. Each product has a scheme. FirstApp has only one product, so it has only one scheme.
- Run destination menu: This menu defaults to its last item, currently iPod touch. Select a connected device or a simulated device to run the project.
- Activity view: A wide gray field that shows the project name, status messages and warning or error indicators.
- Library button: Label is a + sign. Opens the library of views, modifiers, code snippets and media and colors stored in Assets. Option-click this button to keep the library open.
- Code review button: If this project has a local git repository, this button shows a diff of the current version of the current file and the most recent committed version. You can choose earlier committed versions from a menu.
Now that you know where the controls are, it’s time to use some of them.
Choosing a run destination
Apple sells a lot of different iPhone models, plus iPads and even an iPod Touch. They’re all different sizes, and some have a notch. How do you know if your app looks good on every screen size?
You don’t need a complete collection of iOS devices. Xcode has several Developer Tools, and one of them is Simulator. The run destination menu lets you choose from a list of simulated devices.
➤ Click the run destination button and select iPhone 12 Pro.
➤ Refresh the preview of ContentView or ByeView:
The preview uses the run destination device by default. You can create more than one preview and set each to a different device with the previewDevice modifier.
For example:
struct ContentView_Previews: PreviewProvider {
  static var previews: some View {
    Group {
      ContentView()
      ContentView()
        .previewDevice("iPhone SE (2nd generation)")
    }
  }
}
The preview usually looks the same as your app running on a simulated or real device, but not always. If you feel the preview doesn’t match what your code is laying out, try running it on a simulator.
Note: To zoom in or out on the preview canvas, use the + or - buttons in the canvas toolbar.
Build and run
➤ Click the run button or press Command-R.
The first time you run a project on a simulated device, it starts from an “off” state, so you’ll see a loading indicator. Until you quit the Simulator app, this particular simulated device is now “awake”, so you won’t get the startup delay even if you run a different project on it.
After the simulated device starts up, the app’s launch screen appears. For FirstApp, this is just a blank screen. You’ll learn how to set up your own launch screen in Chapter 16, “Adding Assets to Your App”.
And now, your app is running!
There’s not much happening in this app, but the debug toolbar appears below the editor window. For this screenshot, I showed the debug area, selected the Debug tab in the navigator pane, then selected the CPU item.
Not stopping
Here’s a trick that will make your Xcode life a little easier.
➤ Don’t click the stop button. Yes, it’s enabled. But trust me, you’ll like this. :]
➤ In ByeView.swift, replace “Bye bye” with “Hello again”:
Text("Hello again, World!")
➤ Click the run button or press Command-R.
Up pops this message:
➤ Don’t click Stop, although that will work: The currently running process will stop, and the new process will run. And this will happen every time you forget to stop the app. It takes just a moment, but it jars a little. Every time. And it’s easy to get rid of.
➤ Check Do not show this message again, then click Stop.
The app loads with your new change. But that’s not what I want to show you.
➤ One more time: Click the run button or press Command-R.
No annoying message, no “doh!” moment, ever again! You’re welcome. ;]
Running your apps on an iOS device
Sometimes, your app doesn’t look or behave quite right on the simulated device. Running it on a real device is the final word: It might look just as you expect, or it might agree with the preview and simulator that you’ve got more work to do.
Also, there are features like motion and camera that you can’t test in a simulator. For these, you must install your app on a real device.
Apple does its best to protect its users from malicious apps. Part of this protection is ensuring Apple knows who is responsible for every app on your device. Before you can install your app from Xcode onto your device, you need to select a team (your Apple ID), to get a signing certificate from Apple.
➤ In the project page, select the target. In the Signing & Capabilities tab, check Automatically manage signing, then select your account from the Team menu:
After some activity spinning, you’ll see a Provisioning Profile and a Signing Certificate. Xcode has created these and stored the certificate in your Mac’s keychain.
Note: The Bundle Identifier of your project uses your organization name because you created it as a new project. The other apps in this book have starter projects with com.raywenderlich as the organization. If you want to run these apps on an iOS device, you need to change the organization name in the bundle ID to something that’s uniquely yours. This is because one of the authors has already signed the app with the original bundle ID, and you’re not a member of our teams.
To run this book’s apps on your iOS device, it must have iOS 14 installed. If it’s not the absolute latest update, select the project, then set its iOS Deployment Target to match your device.
➤ Connect your device to your Mac with a cable. Use an Apple cable, as other-brand cables might not work for this purpose.
Note: If your account is a paid Apple Developer account, you won’t need to do the next several steps. Running your app on your device will just work.
The first time you connect a device to your Mac, the device will ask Trust This Computer?
➤ Tap Trust, then enter the device passcode when prompted.
➤ Select your device from the run destination menu: It appears at the top, above the simulators:
➤ Unlock your device, then build and run your project. Keep your device screen active until the app launches on your device.
This is the first time you’re running an app on a device, so there are several extra steps that Apple makes you perform, mainly trying to make sure nothing nasty installs itself on your device.
➤ First, you need to allow codesign to access the certificate that Xcode stored in your keychain:
➤ Enter your password, then click Always Allow.
Next, you’ll see FirstApp’s app icon appear on the screen of your device, but this error message appears on your Mac:
Of the three possible reasons, it’s the last one that’s holding things up: its profile has not been explicitly trusted by the user. Apple really doesn’t want just anyone installing potentially malicious apps on your device. You have to say it’s OK. The problem is, there’s not much here to tell you what to do.
➤ Well, the app icon is on your device’s screen, so why not tap it to see what happens?
You can allow using these apps in Settings is a pretty minimal hint, but open Settings to see what’s there. You’ll probably never guess where to look, so here are the relevant screenshots:
➤ Tap General. Scroll down to Device Management — you can just see the start of your certificate name. Tap this item.
➤ Tap Apple Development…, then tap Trust “Apple Development… and finally, tap Trust.
You won’t need to do this again unless you delete all your apps from this device.
➤ Now close Settings and tap the FirstApp icon:
Underwhelming? Yes, well, it’s the thought that counts. ;]
What matters is, you’re now all set up to run your own projects on this device. When you really want to get something running right away, you won’t have to stop and deal with any of this Trust business.
In the following chapters, you’ll create a much more interesting app.
Key points
- The Xcode window has Navigator, Editor and Inspectors panes, a Toolbar and a Debug Area, plus a huge number of Preferences.
- You can set some navigation keyboard shortcuts in Preferences, to match the instructions in this book.
- The template project defines an
Appthat launches with
ContentView, which displays “Hello, world!” in a
Textview.
- You can view Quick Help documentation in an inspector or by using a keyboard shortcut. Or, you can open the Developer Documentation window.
- When you create a new SwiftUI view file, give it the same name as the
Viewyou’ll create in it.
- Xcode’s auto-completion, delimiter-matching, code-folding and spell-checking help you avoid errors.
- You can choose one of Xcode’s font and color themes, modify one or create your own.
- You can run your app on a simulated device or create previews of specific devices.
- You must add an Apple ID account in Xcode Preferences to run your app on an iOS device.
- The first time you run your project on an iOS device, Apple requires you to complete several “Trust” steps. | https://koenig-assets.raywenderlich.com/books/swiftui-apprentice/v1.0/chapters/1-checking-your-tools | CC-MAIN-2021-49 | refinedweb | 5,131 | 74.08 |
Jeremy Shaw <Jeremy.Shaw at linspireinc.com> writes: > I am making a wild guess because I do not have all the information in > front of me right now but would this work ? > > ... do x <- if cond >> then textInputField ... >> else return () ... Let me make another guess, probably an even wilder one, ... You have to return a common type for both branches of the if. In the code snippet from above, you either get back a handle to a text input field or () --- so the types won't fit. To unify the types of both branches, I guess you have to introduce an new "wrapper" data type that *mabye* holds a handle of an input field *m* with return value *a* and validity flag *x*. data MaybeInputField m a x = MaybeInputField (Maybe (m a x)) It is important that all "input field type constructors" take two type arguments---one for the return type, and another one for the validity. Otherwise you won't be able to pass the input field to a submit button. (MaybeInputField TexxtInputField) for example fits into that scheme. With that in place, you may be able to write something like the following ... ... do mH <- if cond then do h <- textInputField ... return (MaybeInputField (Just h)) else return (MaybeInputField Nothing) -Matthias > On Feb 24, 2005 08:42 AM, John Goerzen <jgoerzen at complete.org> >> > > -- > >. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > > -- | http://www.haskell.org/pipermail/haskell-cafe/2005-February/009262.html | CC-MAIN-2014-42 | refinedweb | 234 | 73.27 |
C++/CLI specifies several keywords as extensions to ISO C++. The way they are handled
falls into five major categories, where only the first impacts the meaning of existing
ISO C++ programs.
1. Outright reserved words
As of this writing (November 22, 2003, the day after we released the candidate base
document), C++/CLI is down to only three reserved words:
gcnew generic nullptr
An existing program that uses these words as identifiers and wants to use C++/CLI
would have to rename the identifiers. I'll return to these three again at the end.
All the other keywords, below, are contextual keywords that do not conflict with identifiers.
Any legal ISO C++ program that already uses the names below as identifiers will continue
to work as before; these keywords are not reserved words.
2. Spaced keywords
One implementation technique we are using is to specify some keywords that include
embedded whitespace. These are safe: They can't possibly conflict with any user identifiers
because no C++ program can create an identifier that contains whitespace characters.
[I'll omit the obligatory reference to Bjarne's classic April Fool's joke article
on the whitespace operator. 🙂 But what I'm saying here is true, not a joke.]
Currently these are:
for each
enum class/struct
interface class/struct
ref class/struct
value class/struct
For example, "ref class" is a single token in the lexer, and programs
that have a type or variable or namespace named ref are entirely
unaffected. (Somewhat amazingly, even most macros named ref are
unaffected and don't affect C++/CLI, unless coincidentally the next token in the macro's
definition line happens to be class or struct; more
on this near the end.)
3. Contextual keywords that can never appear where an identifier could appear
Another technique we used was to define some keywords that can only appear in positions
in the language grammar where today nothing may appear. These too are safe: They can't
conflict with any user identifiers because no identifiers could appear where the keyword
appears, and vice versa. Currently these are:
abstract finally in
override sealed where
For example, abstract as a C++/CLI keyword can only appear in a class
definition after the class name and before the base class list, where nothing can
appear today:
ref class X abstract : B1, B2 { // ok, can only be the keyword
int abstract; //
ok, just another identifier
};
class abstract { }; //
ok, just another identifier
namespace abstract { /*...*/ } // ok, just another identifier
4. Contextual keywords that can appear where an identifier could appear
Some keywords can appear in a grammar position where an identifier could also appear,
and this is the case that needs some extra attention. There are currently five keywords
in this category:
delegate event initonly
literal property
In such grammar positions, when the compiler encounters a token that is spelled the
same as one of these keywords, the compiler can't know whether the token means the
keyword or whether it means an identifier until it first does some further lookahead
to consider later tokens. For example, consider the following inside a class scope:
property int x; // ok, here property is the contextual
keyword
property x; // ok, if property
is the name of a type
Now imagine you're a compiler: What do you do when you hit the token property as
the first token of the next class member declaration? There's not enough information
to decide for sure whether it's an identifier or a keyword without looking further
ahead, and C++/CLI has to specify the decision procedure -- the rules for deciding
whether it's a keyword or an identifier. As long as the user doesn't make a mistake
(i.e., as long as it's a legal program with or without C++/CLI) the answer is clear,
because there's no ambiguity.
But now the "quality of diagnostics" issue rears its head, in this category of
contextual keywords and this category only: What if the user makes a mistake?
For example:
property x; // error, if no
type "property" exists
Let's say that we set up a disambiguation rule with the following general structure
(I'll get specific in just a moment):
1. Assume one case and try to parse what comes next that way.
2. If that fails, then assume the other case and try again.
3. If that fails, then issue a diagnostic.
In the case of property x; when there's no type in scope named property,
both #1 and #2 will fail and the question is: When we get to the diagnostic in case
#3, what error message is the user likely to see? The answer almost certainly is,
a message that applies to the second "other" case. Why? Because the compiler already
tried the first case, failed, backed up and tried the second "other" case -- and it's
still in that latter mode with all that context when it finally realizes that didn't
work either and now it has to issue the diagnostic. So by default, absent some (often
prodigious) amount of extra work inside the compiler, the diagnostic that you'll get
is the one that's easiest to give, namely the one for the case the compiler was most
recently pursuing, namely the "other" case mentioned in #2 -- because the compiler
already gave up on the first case, and went down the other path instead.
So let's get specific. Let's say that the rule we picked was:
1. Assume that it's an identifier and try to parse it that way
(i.e., by default assume no use of the keyword extension).
2. If that fails, then assume that it's the keyword and try again.
3. If that fails, then issue a diagnostic.
Under that rule, what's the diagnostic the user gets on an illegal declaration of property
x;? One that's in the context of #2 (keyword), something like "illegal property
declaration," perhaps with a "the type 'x' was not defined" or a
"you forgot to specify the type for property 'x'" in there somewhere.
On the other hand, let's say that the rule we picked was:
1. Assume that it's the keyword and try to parse it that way.
2. If that fails, then assume that it's an identifier and try again.
3. If that fails, then issue a diagnostic.
Under this rule, the diagnostic that's easy to give is something like "the type 'property'
was not defined."
Which is better?
This illustrates why it's very important to consider common mistakes and whether the
diagnostic the user will get really applies to what he was probably trying to do.
In this case, it's probably better to emit something like "no type named 'property'
exists" than "you forgot to specify a type for your property named 'x'"
-- the former is more likely to address what the user was trying to do, and it also
happens to preserve the diagnostics for ISO C++ programs.
More broadly, of course, there are other rules you can use than the two "try one way
then try the other" variants shown above. But I hope this helps to give the flavor
for the 'quality of diagnostics' problem.
- Aside: There's usually no ambiguity in the case of property (or
the other keywords in this category); the only case I know of where you could write
legal C++/CLI code where one of these five keywords could be legally interpreted
both ways, both as the keyword and as an identifier, is when the type has a global
qualification. Here's an example courtesy of Mark Hall:
initonly :: T t;
Is this a declaration of an initonly member t of
type ::T (i.e, initonly ::T t;), or
a declaration of a member t of type initonly::T (i.e, initonly::T
t; where if initonly is the name of a namespace or class then this is legal
ISO C++). Our current thinking is to adopt the rule "if it can be an identifier, it
is," and so this case would mean the latter, either always (even if there's no such
type) or perhaps only if there is such a type.
I feel compelled to add that the collaboration and input over the past year-plus from Bjarne
Stroustrup and the folks at EDG (Steve Adamczyk,
John Spicer, and Daveed Vandevoorde) has been wonderful and invaluable in this regard
specifically. It has really helped to have input from other experienced compiler writers,
including in Bjarne's case the creator of the first C++ compiler and in EDG's case
the folks who have one of the world's strongest current C++ compilers. On several
occasions all of their input has helped get rid of inadvertent assumptions about "what's
implementable" and "what's diagnosable" based on just VC++'s own compiler implementation
and its source base. What's easy for one compiler implementation is not necessarily
so for another, and it's been extremely useful to draw on the experience of comparing
notes from two current popular ones to make sure that features can be implemented
readily on various compiler architectures and source bases (not just VC++'s) and with
quality user diagnostics.
5. Not keywords, but in a namespace scope
Finally, there are a few "namespaced" keywords. These make the most sense for pseudo-library
features (ones that look and feel like library types/functions but really are special
names known to the compiler because the compiler does special things when handling
them). They appear in the stdcli namespace and are:
array interior_ptr pin_ptr
safe_cast
That's it.
Now, for a moment let's go back to case #1, reserved words. Right now we're down to
three reserved words. What would it take to get down to zero? Consider the cases:
- nullptr: This has been proposed in WG21/J16 for C++0x, and at the
last meeting three weeks ago the evolution working group (EWG) was favorable to it
but wanted a few changes. The proposal
paper was written by me and Bjarne, and we will revise the paper for the
next meeting to reflect the EWG direction. If C++0x does adopt the proposal and chooses
to take the keyword nullptr then the list of C++/CLI reserved words
goes down to two and C++/CLI would just directly follow the C++0x design for nullptr,
including any changes C++0x makes to it.
- gcnew: One obvious way to avoid taking this as a reserved word would
be to put it into bucket #1 as a spaced keyword, "gc new".
- generic: Similarly, a spaced keyword (possibly "generic template")
would avoid taking this reserved word. Unfortunately, spelling it "<anything> template"
is not only ugly, but seriously misleading because a generic really is not at all
a template.
Is it worth it to push all the way down to zero reserved words in C++/CLI? There are
pros and cons to doing so, but I've certainly always been sympathetic to the goal
of zero reserved words; Brandon and
others will surely tell you of my stubborn campaigning to kill off reserved words
(I think I've killed off over a half dozen already since I took the reins of this
effort in January, but I haven't kept an exact body count).
I think the right time to decide whether to push for zero reserved words is probably
near the end of the C++/CLI standards process (summer-ish 2004). At that point, when
all other changes and refinements have been made and everything else is in its final
form, we will have a complete (and I hope still very short) list of places where C++/CLI
could change the meaning of an existing C++ program, and that will be the best time
to consider them as a package and to make a decision whether to eliminate some or
all of them in a drive-it-to-zero cleanup push. I am looking forward to seeing what
the other participants in all C++ standards arenas, and the broader community, think
is the right thing to do as we get there.
Putting it all together, what's the impact on a legal ISO C++ program? Only:
- The (zero to three) reserved words, which we may get down to zero.
- Macros with the same name as a contextual keyword, which ought to be rare because
macros with all-lowercase names, never mind names that are common words, are already
considered bad form and liable to break way more code than just C++/CLI. (For example,
if a macro named event existed it would already be breaking most
attempts to use Standard C++ iostreams, because the iostreams library has an enum
named event.)
Let me illustrate the macro cases with two main examples that affect the spaced keywords:
// Example 1: this has a different meaning in ISO C++ and C++/CLI
#define interface struct
In ISO C++, this means change every instance of interface to struct.
In C++/CLI, because "interface struct" is a single token, the macro
means instead to change every instance of "interface struct" to nothing.
Here's the simplest workaround:
// Workaround 1: this has the same meaning in both
#define interface interface__
#define interface__ struct
Here's another example of a macro that can change the meaning of a program in ISO
C++ and C++/CLI:
// Example 2: this has a different meaning in ISO C++ and C++/CLI
#define ref const
ref class C { } c;
In ISO C++, ref goes to const and the last line
defines a class C and simultaneously declares a const object of that
type named c. This is legal code, albeit uncommon. In C++/CLI, the
macro has no effect on the class declaration because "ref class"
is a single token (whereas the macro is looking for the token ref alone,
not "ref class") and so the last line defines a ref class C and
simultaneously declares a (non-const) object of that type named c.
Here's the simplest workaround:
// Workaround 2: this has the same meaning in both
#define REF const
REF class C { } c;
But hey, macro names are supposed to be uppercase anyway. 🙂
I hope these cases are somewhere between obscure and pathological. At any rate, macros
with short and common names are generally unusual in the wild because they just break
so much stuff. I would rate example 1 above as fairly obscure (although windows.h
has exactly that line in it, alas) and example 2 as probably outright pathological
(as I would rate all macros with short and common names).
Whew. That's all for tonight.
I was all set to say, "#define interface struct is not at all obscure – it’s in the VC++ header files." So, I guess the next version of the compiler will have new header files defining interface IDispatch, etc?
Why don’t you guys do something useful like add C99 support or GCC extensions instead of this stuff? Or hell, even numeric literals with the 0b/0B prefix would be a decent start.
PingBack from | https://blogs.msdn.microsoft.com/hsutter/2003/11/23/ccli-keywords-under-the-hood/ | CC-MAIN-2017-43 | refinedweb | 2,531 | 53.75 |
Frequently asked question about ucampas
Can I call ucampas from Windows XP?
Ucampas does not yet run under Windows, but you can use PuTTY to call ucampas on a Linux server. You can configure Windows Explorer to make this very convenient:
First, if you haven't done so yet, install PuTTY using the Windows installer (we need plink and pageant). Then set up public-key based login to slogin-serv1.
Then, in the Windows XP (or Server 2003) Explorer, pick the menu Tools | Folder Options ... | File Types. Select there registered file type HTML and click on the “Advanced” button. Add an action with the “New ...” button. Fill in the following settings:
- Action:
- ucampas
- Application used to perform action:
- "C:\Program Files\PuTTY\plink.exe" -t -l %USERNAME% slogin-serv1.cl.cam.ac.uk "/anfs/www/tools/bin/ucampas -r \"`/anfs/www/tools/bin/filerpath '%1'`\" && sleep 3 || read -p 'Press return to continue ...'"
- Use DDE:
- (not ticked)
If your local username differs from your departmental username, then replace %USERNAME% with the latter.
When you now right-click in Explorer on any HTML file located on the Computer Laboratory filer, you will see an action “ucampas” that runs ucampas over that file. You can do this on either the *-b.html or the resulting *.html file, ucampas will figure out what file to process in either case. If you click on a folder, all the files in the uconfig.txt-defined navigation tree within that folder will be processed.
Note: Microsoft removed in Windows Vista and Server 2008 the advanced file-types configuration menu used above and appears to offer nothing equivalent. (The assoc and ftype commands do not appear to support the association of more than one command line with a file type. Graham Titmus suggested the ExtMan utility by S.D. Gerling.)
Can I use ucampas with "make"?
The dependency relationship between ucampas input files is far more complex than what a simple Makefile can express. Therefore, "make" is not able to identify the minimal set of pages that needs to be recompiled after a source file was edited. A web page may have to be recompiled by ucampas if any of the following files have changed:
- the *-b.html source file of the web page,
- any uconfig.txt file in the same or any higher-up directory,
- any other *-b.html source file whose title is listed in the breadcrumbs, navigation bar, sitemap, or similar automatically generated navigation content on the page, if its title has changed.
What we, therefore, do currently on the main website is to call ucampas immediately on any *-b.html file after that has changed, to make changes to the body of that web page immediately visible. We then start a "ucampas -r" background process that rebuilds the entire site from its root directory. This may not be quite as elegant as what "make" users are accustomed to. However, automatically extracting titles from source files substantially reduces duplication of information and the risk of inconsistency compared to many alternatives.
Some people found the following Makefile useful if they do not have ucampas in their path, however it does not automatically ensure that all web pages are updated only when necessary:
%.html: %-b.html /anfs/www/tools/bin/ucampas $* all: /anfs/www/tools/bin/ucampas -r clean: /anfs/www/tools/bin/ucampas-clean
Call "make file.html" after you have edited file-b.html, and call "make" whenever you have edited a uconfig.txt file or a page title. (Do not routinely call "make clean", because ucampas decides whether to use relative or absolute URLs in navigation links depending on whether the target exists in your working directory.)
A more elegant solution is on the long-term wishlist.
Note: There are in fact another ways of using ucampas that simplify the dependency relationship. If you specified all page titles in uconfig.txt files, using the title or navtitle attribute, then the third point above no longer applies. Each output file would depend only on its own *-b.html input file and any uconfig.txt file along the path to the root. The title attribute allows you to leave the title element in each *-b.html file empty. The dependency relationship can further be simplified by defining the entire navigation tree in a single root uconfig.txt file. Any other uconfig.txt file can then remain empty. (Other uconfig.txt files might still be needed to maintain the path to the root, unless you flatten the file tree compared to the navigation tree using nosub.)
Can I use ucampas with HTTPS?
Ucampas tries to convert the URL of any hyperlink that it inserts into a page into a relative URL. Relative URLs help to make a web site easily accessible via different protocols, such as “http://”, “https://”, and “file://”.
Make sure you have told Ucampas via the url parameter the http:// URL of your web page, otherwise Ucampas will be unable to convert a number of house-style related URLs into relative ones.
Example: Let's assume you use Ucampas to format a personal home page such as = ~/public_html/index.html
Then make sure that Ucampas knows where that page lives in the http namespace by adding to your ~/public_html/uconfig.txt a line like:
url="",
(You don't have to repeat this in subdirectories, Ucampas is smart enough to inherit the “url” attribute correctly extended across sub pages.)
Also, make sure you have not set the file_access parameter or used option -i, which prevent a number of house-style related URLs to be converted into relative ones, and therefore can break a page when accessed via HTTPS. | http://www.cl.cam.ac.uk/local/web/ucampas/faq.html | CC-MAIN-2014-42 | refinedweb | 943 | 56.35 |
Currently, I have a 3D Python list in jagged array format.
A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
A + 4
[[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]]
B = numpy.array(A)
B + 4
TypeError: can only concatenate list (not "float") to list
The answers by @SonderingNarcissit and @MadPhysicist are already quite nice.
Here is a quick way of adding a number to each element in your list and keeping the structure. You can replace the function
return_number by anything you like, if you want to not only add a number but do something else with it:
def return_number(my_number): return my_number + 4 def add_number(my_list): if isinstance(my_list, (int, float)): return return_number(my_list) else: return [add_number(xi) for xi in my_list] A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
Then
print add_number(A)
gives you the desired output:
[[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]]
So what it does is that it look recursively through your list of list and everytime it finds a number it adds the value 4; this should work for arbitrarily deep nested lists. That currently only works for numbers and lists; if you also have e.g. also dictionaries in your lists then you would have to add another if-clause. | https://codedump.io/share/E2l9WjGpGtps/1/converting-a-3d-list-to-a-3d-numpy-array | CC-MAIN-2018-09 | refinedweb | 227 | 65.39 |
We’re quite eager to get to applications of algebraic topology to things like machine learning (in particular, persistent homology). Even though there’s a massive amount of theory behind it (and we do plan to cover some of the theory), a lot of the actual computations boil down to working with matrices. Of course, this means we’re in the land of linear algebra; for a refresher on the terminology, see our primers on linear algebra.
In addition to applications of algebraic topology, our work with matrices in this post will allow us to solve important optimization problems, including linear programming. We will investigate these along the way.
Matrices Make the World Go Round
Fix two vector spaces $V, W$ of finite dimensions $n, m$ (resp.), and fix bases for both spaces. Recall that an $m \times n$ matrix $A$ uniquely represents a linear map $f: V \to W$, and moreover all such linear maps arise this way. Our goal is to write these linear maps in as simple a way as possible, so that we can glean useful information from them.

The data we want from a linear map $f$ are: a basis for the kernel of $f$, a basis for the image of $f$, the dimensions of both, and the eigenvalues and eigenvectors of $f$ (which are only defined if in addition $V = W$). As we have already seen, just computing the latter gives us a wealth of information about things like the internet and image similarity. In future posts, we will see how the former allows us to infer the shape (topology) of a large data set.
For this article, we will assume our vector spaces lie over the field $\mathbb{Q}$ of rational numbers, but later we will relax this assumption (indeed, they won't even be vector spaces anymore!). Luckily for us, the particular choice of bases for $V$ and $W$ is irrelevant. The function $f$ is fixed, and the only thing which changes is the matrix representation of $f$ with respect to our choice of bases. If we pick the right bases, then the pieces of information above are quite easy to determine. Rigorously, we say two $m \times n$ matrices $A, B$ are row equivalent if there exists an invertible matrix $P$ with $B = PA$. The reader should determine what the appropriate dimensions of this matrix are (hint: it is square). Moreover, we have the following proposition (stated without proof) which characterizes row equivalent matrices:

Proposition: Two matrices are row equivalent if and only if they represent the same linear map $f$ up to a choice of basis for $W$.
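As a quick sanity check of the definition, we can cook up a small invertible $P$ and watch $B = PA$ come out row equivalent to $A$. This throwaway snippet (my illustration, not code from the post) multiplies matrices represented as lists of rows:

```python
def mat_mul(P, A):
    """Multiply two matrices, each given as a list of rows."""
    return [[sum(P[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))]
            for i in range(len(P))]

# P is invertible (its determinant is 2), so B = PA is row equivalent to A.
P = [[2, 0], [1, 1]]
A = [[1, 2], [3, 4]]
B = mat_mul(P, A)
print(B)  # [[2, 4], [4, 6]]
```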
The “row” part of row equivalence is rather suggestive. Indeed, it turns out that our desired form $B$ can be achieved by a manipulation of the rows of $A$, which is equivalently a left multiplication by a special matrix $P$.
Reduced Row Echelon Form, and Elementary Matrices
Before we go on, we should describe the convenient form we want to find. The historical name for it is reduced row echelon form, and it imposes a certain form on the rows of a matrix $A$:
- its nonzero rows are above all of its zero rows,
- the leftmost entry in each nonzero row is 1,
- the leftmost entry in a row is strictly to the left of the leftmost entry in the next row, and
- the leftmost entry in each nonzero row is the only nonzero entry in its column.
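The four conditions translate directly into a short predicate. Here is a sketch of my own (not code from the post), for matrices given as lists of rows:

```python
def is_rref(matrix):
    """Check the four reduced row echelon form conditions."""
    last_pivot_col = -1
    seen_zero_row = False
    for i, row in enumerate(matrix):
        nonzero_cols = [j for j, x in enumerate(row) if x != 0]
        if not nonzero_cols:
            seen_zero_row = True
            continue
        if seen_zero_row:        # a nonzero row below a zero row
            return False
        j = nonzero_cols[0]
        if row[j] != 1:          # leftmost entry of a nonzero row must be 1
            return False
        if j <= last_pivot_col:  # leftmost entries move strictly to the right
            return False
        # the pivot must be the only nonzero entry in its column
        if any(matrix[k][j] != 0 for k in range(len(matrix)) if k != i):
            return False
        last_pivot_col = j
    return True

print(is_rref([[1, 2, 0], [0, 0, 1], [0, 0, 0]]))  # True
print(is_rref([[1, 0], [0, 2]]))                   # False: the 2 is not a 1
```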
The reduced row echelon form is canonical, meaning it is uniquely determined for any matrix (this is not obvious, but for brevity we refer the reader to an external proof). For example, the following matrix $A$ has the given reduced row echelon form:
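The example matrices were rendered as images in the original post and have not survived in this copy; here is an illustrative pair of my own in the same spirit (not the original figure):

```latex
A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 0 \\ 1 & 2 & 0 \end{pmatrix}
\qquad \longrightarrow \qquad
\begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}
```

The right-hand matrix satisfies all four conditions, with pivots in its first and third columns.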
The leading 1 entries, which by the conditions above are the only nonzero entries in their columns, are called pivots, and this name comes from their use in the algorithm below.
Calling this a “form” of the original matrix is again suggestive: a matrix in reduced row echelon form is just a representation with respect to a different basis (in fact, just a different basis for the codomain $W$). To prove this, we will show a matrix is row equivalent to its reduced row echelon form. But before we do that, we should verify that the reduced row echelon form actually gives us the information we want.

For the rightmost matrix above, and assuming we know the correct choice of basis for $W$ is $v_1, \dots, v_m$, we can determine a basis for the image quite easily. Indeed, if the $j$-th column contains a single 1 in the $i$-th row, then the vector $v_i$ is in the image of $f$. Moreover, if we do this for each nonzero row (and because each nonzero row has a pivot) we obtain a basis for the whole image of $f$ as a subset of the $v_i$. Indeed, they are linearly independent as they form a subset of a basis of $W$, and they span the image of $f$ because each basis vector $v_i$ which was not found in the above way corresponds to a row of all zeros. In other words, it is clear from the entries of the reduced row echelon form matrix that no vectors in the image of $f$ expand as a linear combination of the unchosen $v_i$.
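The recipe in this paragraph is mechanical enough to write down. Assuming the basis vectors $v_1, \dots, v_m$ are indexed by rows, the chosen ones are exactly those indexing nonzero rows; here is a sketch of mine (not code from the post):

```python
def image_basis_rows(R):
    """Given a matrix R in reduced row echelon form, return the row
    indices i whose basis vector v_i is chosen for the image's basis."""
    return [i for i, row in enumerate(R) if any(x != 0 for x in row)]

# For the reduced matrix [[1, 2, 0], [0, 0, 1], [0, 0, 0]], the image
# has basis {v_1, v_2} (zero-based indices 0 and 1 here).
print(image_basis_rows([[1, 2, 0], [0, 0, 1], [0, 0, 0]]))  # [0, 1]
```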
To put a matrix into reduced row echelon form, we allow ourselves three elementary row operations, and give an algorithm describing in what order to perform them. The operations are:
- swap the positions of any two rows,
- multiply any row by a nonzero constant, and
- add a nonzero multiple of one row to another row.
Indeed, we should verify that these operations behave well. We can represent each one by left multiplication by an appropriate matrix. Swapping the $i$-th and $j$-th rows corresponds to the identity matrix with the same rows swapped. Multiplying the $i$-th row by a constant $c$ corresponds to the identity matrix with $c$ in the $i,i$ entry. And adding $c$ times row $i$ to row $j$ corresponds to the identity matrix with a $c$ in the $j,i$ entry. We call these matrices elementary matrices, and any sequence of elementary row operations corresponds to left multiplication by a product of elementary matrices. As we will see by our algorithm below, these row operations are enough to put any matrix (with entries in a field) into reduced row echelon form, hence proving that all matrices are row equivalent to some matrix in reduced row echelon form.
Before we describe this algorithm, we should make one important construction which will be useful in the future. Fixing the dimension $n$ of our elementary matrices, we note three things: the identity matrix is an elementary matrix, every elementary matrix is invertible, and the inverse of an elementary matrix is again an elementary matrix. In particular, every product of elementary matrices is invertible (the product of the inverses in reverse order), and so we can describe the group generated by all elementary matrices. We call this group the general linear group, denoted $\mathrm{GL}_n$, and note that it has a very important place in the theory of Lie groups, which is (very roughly) the study of continuous symmetry. It has its name because a matrix is in $\mathrm{GL}_n$ if and only if its columns are linearly independent (equivalently, it is invertible). This happens precisely when it is row equivalent to the identity matrix. In other words, for any such matrix $A$ there exists a product of elementary matrices $P$ such that $PA = I$, and hence $A^{-1} = P$. So we can phrase the question of whether a matrix is invertible as whether it is in $\mathrm{GL}_n$, and answer it by finding its reduced row echelon form. So without further ado:
The Row Reduction Algorithm
Now that we’ve equipped ourselves with the right tools, let’s describe the algorithm which will transform any matrix into reduced row echelon form. We will do it straight in Python, explaining the steps along the way.
def rref(matrix):
   numRows = len(matrix)
   numCols = len(matrix[0])

   i, j = 0, 0
The first part is straightforward: get the dimensions of the matrix and initialize i and j to 0. These represent the current row and column under inspection, respectively. We then start a loop:
while True:
   if i >= numRows or j >= numCols:
      break

   if matrix[i][j] == 0:
      nonzeroRow = i
      while nonzeroRow < numRows and matrix[nonzeroRow][j] == 0:
         nonzeroRow += 1

      if nonzeroRow == numRows:
         j += 1
         continue
Here we check the base cases: if our indices have exceeded the bounds of our matrix, then we are done. Next, we need to find a pivot and put it in the right place; essentially we work by induction on the columns. Since we are working over a field, any nonzero element can be a pivot, as we may just divide the entire row by the value of the leftmost entry to get a 1 in the right place. We just need to find a row with a nonzero value, and we prefer the row which has the leftmost nonzero value; if there are many rows with that property, we pick the one with the smallest index. In other words, we fix the leftmost column, try to find a pivot there by scanning downwards, and if we find none, we increment the column index and begin our search again. Once we find it, we may swap the two rows and save the pivot:
temp = matrix[i]
matrix[i] = matrix[nonzeroRow]
matrix[nonzeroRow] = temp

pivot = matrix[i][j]
matrix[i] = [x / pivot for x in matrix[i]]
Once we have found a pivot, we simply need to eliminate the remaining entries in the column. We know this won’t affect any previously inspected columns, because by the inductive hypothesis any entries which are to the left of our pivot are zero.
for otherRow in range(0, numRows):
   if otherRow == i:
      continue
   if matrix[otherRow][j] != 0:
      matrix[otherRow] = [y - matrix[otherRow][j]*x for (x, y) in zip(matrix[i], matrix[otherRow])]

i += 1; j += 1

return matrix
After zeroing out the entries in the j-th column, we may start to look for the next pivot, and since it can't be in the same column or row, we may restrict our search to the sub-matrix starting at the (i+1, j+1) entry. Once the while loop has terminated, we have processed all pivots, and we are done.
We encourage the reader to work out a few examples of this on small matrices, and modify our program to print out the matrix at each step of modification to verify the result. As usual, the reader may find the entire program on this blog’s Github page.
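For reference, here is one way to assemble the snippets above into a single runnable function. This transcription is ours; consult the blog's Github page for the author's exact version:

```python
def rref(matrix):
    numRows = len(matrix)
    numCols = len(matrix[0])

    i, j = 0, 0
    while True:
        if i >= numRows or j >= numCols:
            break

        if matrix[i][j] == 0:
            nonzeroRow = i
            while nonzeroRow < numRows and matrix[nonzeroRow][j] == 0:
                nonzeroRow += 1

            if nonzeroRow == numRows:
                j += 1
                continue

            # Swap the current row with the row containing a usable pivot.
            matrix[i], matrix[nonzeroRow] = matrix[nonzeroRow], matrix[i]

        # Normalize the pivot row so the pivot entry becomes 1.
        pivot = matrix[i][j]
        matrix[i] = [x / pivot for x in matrix[i]]

        # Eliminate every other entry in the pivot column.
        for otherRow in range(numRows):
            if otherRow == i:
                continue
            if matrix[otherRow][j] != 0:
                matrix[otherRow] = [y - matrix[otherRow][j] * x
                                    for (x, y) in zip(matrix[i], matrix[otherRow])]

        i += 1
        j += 1

    return matrix

# An invertible matrix reduces to the identity...
print(rref([[2.0, 4.0], [1.0, 3.0]]))  # [[1.0, 0.0], [0.0, 1.0]]
# ...while a rank-deficient one does not.
print(rref([[1.0, 2.0], [2.0, 4.0]]))  # [[1.0, 2.0], [0.0, 0.0]]
```

The two calls at the bottom illustrate the invertibility test from the discussion of GL_n: a square matrix is invertible exactly when its reduced row echelon form is the identity.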
From here, determining the information we want is just a matter of reading the entries of the matrix and presenting it in the desired way. To determine the change of basis necessary to write the matrix as desired, one may modify the above algorithm to accept a second square matrix whose rows contain the starting basis (usually the identity matrix, for the standard basis vectors), and apply the same elementary row operations to this second matrix as we do to the input. The reader should try to prove that this does what it should, and we leave any further notes to a discussion in the comments.
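As a concrete illustration of why this works (again our sketch, not the post's code): row-reducing the augmented matrix [A | I] applies the same product of elementary matrices to both blocks, so when the left block becomes the identity, the right block is the inverse of A.

```python
# A = [[2, 1], [1, 1]]; augment with the identity to form [A | I].
aug = [[2.0, 1.0, 1.0, 0.0],
       [1.0, 1.0, 0.0, 1.0]]

p = aug[0][0]
aug[0] = [x / p for x in aug[0]]                        # R1 <- R1 / 2
f = aug[1][0]
aug[1] = [y - f * x for (x, y) in zip(aug[0], aug[1])]  # R2 <- R2 - 1*R1
p = aug[1][1]
aug[1] = [x / p for x in aug[1]]                        # R2 <- R2 / (1/2)
f = aug[0][1]
aug[0] = [y - f * x for (x, y) in zip(aug[1], aug[0])]  # R1 <- R1 - (1/2)*R2

# The left block is now the identity; the right block is the inverse of A.
inverse = [row[2:] for row in aug]
print(inverse)  # [[1.0, -1.0], [-1.0, 2.0]]
```

Multiplying the result back against A recovers the identity, which is a quick way to check the claim by hand.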
Next time, we’ll relax the assumption that we’re working over a field. This will involve some discussion of rings and
-modules, but in the end we will work over very familiar number rings, like the integers
and the integers modulo
, and our work will be similar to the linear algebra we all know and love.
Until then! | http://jeremykun.com/2011/12/30/row-reduction-over-a-field/ | CC-MAIN-2015-14 | refinedweb | 1,870 | 53.44 |
Fast conversion to text format with character strings.
#include <char_strings.hpp>
Fast conversion to text format with character strings.
Provides a fast conversion to text using character strings.
Unlike some of the numeric converters, this uses an internal character buffer, as the outputs are of known length (either 'true' or 'false'). Ensure your converter does not go out of scope while you are using the character strings, otherwise they will end up pointing at rubbish.
Definition at line 819 of file char_strings.hpp.
Constructor that initialises with an internal buffer.
Definition at line 824 of file char_strings.hpp.
Reimplemented in ecl::Converter< char *, void >.
Definition at line 826 of file char_strings.hpp.
Converts a bool to a char string held in the converter's buffer.
Definition at line 833 of file char_strings.hpp. | http://docs.ros.org/en/jade/api/ecl_converters/html/classecl_1_1Converter_3_01char_01_5_00_01bool_01_4.html | CC-MAIN-2022-27 | refinedweb | 136 | 60.51 |
How to create Visual Studio extensions using Visual Studio 2017.
Introduction
This article is regarding creation of Visual Studio extensions, using Visual Studio 2017, so that we can start making custom extensions.
Let us start creating the extension with the menu command, which will launch one simple Application.
Prerequisite
There are a number of extensions available for Visual Studio; now let us create our own extension and make use of it. We need to have Visual Studio 2017 installed to create the extensions.
To create a custom extension, we need to install the Extensibility extension installed to Visual Studio 2017.
Once that the Extensibility extension is installed, we can start creating any new extension.
Creation of custom extension
To start creating the Extension, follow the steps given below.
Open Visual Studio 2017 and click File->New->Project.
The New Project window will pop up.
Under Installed-> Templates-> Extensibility, select VSIX Project.
Name the project as the SampleProject and click OK button.
Once the project is created, add a new item to the project by clicking Add -> New Item.
In the pop-up window, under Extensibility, select Custom Command and name it NotePad.cs.
Add the namespace, using System.Diagnostics.
A new file will be added to the project. Find the NotePad constructor and update it with the code given below.
Create a method called OpenNotePad and add the code given below.
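The code snippet the article refers to is not reproduced in this copy. Given the System.Diagnostics namespace added above, a minimal OpenNotePad that launches Notepad might look like the following; the method signature is our guess based on the standard VSIX command template, not the article's exact code:

```csharp
// Hypothetical reconstruction of the missing snippet.
private void OpenNotePad(object sender, EventArgs e)
{
    // Launch an external process; Process comes from System.Diagnostics.
    Process.Start("notepad.exe");
}
```

Presumably the constructor change wires this handler up as the menu command's callback in place of the template's default Execute method.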
Go to the NotePadPackage.vsct file and update the ButtonText to "Invoke NotePad".
Once this is added, just build the solution and run it.
A new instance of Visual Studio 2017 will open in Experimental Instance mode.
Under the Tools menu, we can find Invoke NotePad; click it to see that Notepad is invoked.
If you want to add your own custom application to the Tools menu, add the code given below in the OpenNotePad method.
How We Built the Learn IDE in Browser
Learn about the engineering process and tech stack behind Learn.co’s IDE in Browser.
This post originally appeared on the Flatiron School blog.
This week, Flatiron School’s Engineering team rolled out the Learn IDE (Integrated Development Environment) in browser — complete with file tree, text editor, and terminal window. Now, when you do a lab on Flatiron’s Learn.co platform, you need only click on the “Open IDE” button to launch a functional development environment right in your browser!
We’re particularly excited about this feature because it allows students new to coding to get a taste of programming with real tools that developers use on the job. Unlike a REPL (a read-eval-print-loop) that executes a few lines of simple code, the IDE in browser allows students to experience the more complex interaction between editing different files and executing them from a command line. We noticed that setting up a local development environment, or even downloading our Learn IDE, was a common barrier to entry for budding developers. In building the IDE in browser, we sought to provide a simpler entry point for people to get started on their programming journeys.
Not on Learn yet? Sign up for an account.
How It Works.
How We Built It
The Back-End
For our back-end IDE servers, we chose to use the Phoenix framework, written in Elixir. Elixir, which leverages the Erlang VM, can handle millions of connections. It’s also scalable and concurrent, fault-tolerant, functional, and elegant. Those sound like a lot of buzzwords; boiling this down, that means that Phoenix gives us an app that is fast, self-healing (restartable after crashes), and fairly easy to make sense of and maintain. If you like Ruby, like most of us on the engineering team, you’ll probably also like Elixir. The Phoenix framework, it has been said, is like Rails — the good parts.
In broad strokes, the IDE backend is responsible for spinning up new Docker containers for each student who starts doing a lab on Learn. We use Phoenix sockets to communicate from the browser client to the server, and we pipe commands from the student’s terminal into his or her corresponding Docker container for a particular lab and also pipe the container output back to the user. Meanwhile, we also have a process watching for file system changes and sending those down to the client so we can construct a representation of the file tree in what we call the file tree pane. What’s nice about our backend implementation is that it is flexible enough to support both the existing Atom client as well as the new browser client.
The Front-End
As far as front-end tools are concerned, we opted to use React and Redux as part of our continuing migration from legacy front-end libraries Backbone, Marionette, and Alt.js. For the file tree, we built our own React component that recurses over a JSON payload containing the names of the files and directories in the student’s container on the remote server. Although several packages implementing file trees exist, we found that making our data match the format the packages’ APIs accepted would be just as much work, if not more. By deciding to build our own component, we were also able to more easily incorporate our CSS library and add custom menu behavior.
For the editor and terminal, we are using open-source libraries Ace and Xterm within React components.
When a student starts working on a lab on Learn, we initialize a Phoenix socket connection to our backend server for them. They join a Phoenix channel specific to their lab, and from there we start pushing messages into the channel to which the server responds. For most interactions that accept client input, whether related to the file system or the terminal, we first funnel that message through Redux, and then use custom middleware to push messages into the channel.
For example, this is a simplified example of the middleware function we wrote:
const middleware = () => {
  return store => {
    return next => action => {
      switch (action.type) {
        case TERMINAL_INPUT:
          this.channel.push('terminal_input', { data: encode(action.payload) })
          break
        case REQUEST_CREATE_FILE:
          this.channel.push('file_system_event', { data: action })
          break
      }
      next(action)
    }
  }
}
Here, you’ll see that when an action with a payload type of “TERMINAL_INPUT” passes through our reducers, we’ll also push a message over the websocket into the Phoenix channel. The same is true for the “REQUEST_CREATE_FILE” event, which is dispatched when a user creates a file from the file tree pane and modal.
Likewise, when we receive messages back from the server through the channel via the socket connection, we dispatch custom Redux actions to change the state of our application. This strategy allows us to replay history for easier debugging and also lets us isolate side effects of impure functions.
Technical Challenges and Wins
Legacy Code
One of the largest difficulties with implementing the IDE in browser was not creating React components themselves, but making the React-Redux ecosystem communicate with existing code written in Backbone and Marionette. In order to facilitate communication between a legacy Radio event bus that dispatches and listens for certain global events, we built a Radio-Redux proxy in which we set up listeners for Radio events, and then dispatch Redux actions in response to them. This allows us to create a bridge between these divergent systems. Of course, because certain features of our website also need to read from the new state stored in Redux, we also enabled a query interface so that certain sub-apps within Learn can change their behavior based on the Redux state.
For example, our “Open IDE” button is rendered by a Backbone-Marionette app. In order to make the IDE open, we trigger a radio event.
onOpenIdeInBrowserClick() {
  radio.trigger('open:ideInBrowser')
}
And then to respond to that event, we have written a proxy that listens for that event and then dispatches an action. You’ll notice that this proxy accepts an argument of “store” and is actually set up when we create the Redux store.
import Radio from 'backbone.radio'
import { openIdeInBrowser } from 'lib/redux/actions'

export default (store) => {
  let radio = Radio.channel('global')
  radio.on('open:ideInBrowser', () => {
    store.dispatch(openIdeInBrowser())
  })
}
Another challenge was leveling up the team on Elixir and Phoenix, as well as React and Redux. Although we have been using React in our codebase for a little over a year, at that time, we were using a different state management library called Alt.js, which is an implementation of the Flux architecture pattern. Although Alt.js served our purposes at the time, Redux has become the preferred state management tool because of its simplicity and clarity. Thus, we are slowly migrating over legacy code to use Redux as it becomes necessary. To remedy much of these challenges, we have been pairing extensively to encourage knowledge sharing as well as develop more robust, maintainable solutions.
Communication and Spikes
While building a complex feature with rotating teams, we began keeping a developers’ log to facilitate communication between parallel teams that have different people rolling on and off each day during development. By writing down what we achieved and what we were working on, we were able to enrich the context of the user stories and tasks listed in our project management tools and thus streamline some of the development process. We also did several mini spikes to get a handle on the complexity of the project and to figure out which tools would best suit our purposes during actual development.
Ultimately, this is a very exciting new tool for our students and was a delight to build. We can’t wait to iterate on it some more!
Resources
Learn More About the IDE in Browser here.
Thanks for reading! Want to work on a mission-driven team that loves ice cream and iterative development? We're hiring!
| https://medium.com/flatiron-labs/how-we-built-the-learn-ide-in-browser-d6db3ff39083?source=---------9------------------ | CC-MAIN-2019-35 | refinedweb | 1,330 | 50.97 |
GatsbyJS - Use and Style React Components
Information drawn from
To build out the basic page structure for your blog site, you’ll need to know about React components and how Gatsby uses them.
By the end of this part of the Tutorial, you will be able to:
- Create page components to add new pages to your site.
- Import and use a pre-built component from another package.
- Create your own reusable “building block” component.
- Use component props to change the way a component renders.
- Use the children prop to create a wrapper component.
What is React?
React is the JavaScript library that Gatsby uses under the hood to create user interfaces (UIs). With React, you can break down your UI into smaller, reusable pieces called components.
For example, imagine the UI for an online store’s Products page:
To build this page in React, you might have a <Navbar> component for the navigation menu, a <Sidebar> component for extra information displayed to the side of the main content, and a <ProductGrid> component to display all of the products for sale.

You can also create components from other components. For example, you might decide to break down the <ProductGrid> component into a list of multiple <ProductCard> components, which each display the details about a single product. This pattern is called composition, since your larger <ProductGrid> component is composed of smaller <ProductCard> components.
What is a React component?
Under the hood, a React component is a function that returns a React element. A React element is an object that React uses to render DOM elements.
A component is a function that outputs a React element, written in JSX.
The simplest way to write React elements is with JSX. JSX is a JavaScript syntax extension that describes the DOM structure for your component. It looks a bit like having HTML in your JavaScript files:
const hello = <h1>Hello world!</h1>
So a simple React component might look something like this:
const Greeting = () => {
  return (
    <h1>Hello world!</h1>
  )
}
Create a page component
There are two main types of components in a Gatsby site. The first type you’ll create are page components. A page component contains all the UI elements for a specific page of your site.
In this section, you’ll create two new page components: one for the Home page and one for an About page.
The Home page content is in src/pages/index.js.
Task: Create a new page component for an About page
Now that you’ve updated the existing Home page, try creating a new page from scratch. Make an About page, so that you can tell people a little about yourself.
Create a new file: src/pages/about.js. Use the code below as a starting point for your About page.
src/pages/about.js
// Step 1: Import React
import * as React from 'react'

// Step 2: Define your component
const AboutPage = () => {
  return (
    <main>
      <title>About Me</title>
      <h1>About Me</h1>
      <p>Hi there! I'm the proud creator of this site, which I built with Gatsby.</p>
    </main>
  )
}

// Step 3: Export your component
export default AboutPage
Use the <Link> component
So far, your blog site has two separate pages (Home and About), but the only way to get from one page to the other is to update the URL manually. It would be nice to add links to make it easier to switch between pages on your site.
The Link component is an example of a pre-built component that you can use in your site. In other words, the Link component is defined and maintained by another package (in this case, the Gatsby package). That means you can import it and use it in your own components without knowing too much about how it works under the hood.
The Link component lets you add a link to another page in your Gatsby site. It's similar to an HTML <a> tag, but with some extra performance benefits. The Link component takes a prop called to, which is similar to the <a> tag's href attribute. The value should be the URL path to the page on your site you want to link to.
– Key Gatsby Concept – 💡
The Gatsby Link component provides a performance feature called preloading. This means that the resources for the linked page are requested when the link scrolls into view or when the mouse hovers on it. That way, when the user actually clicks on the link, the new page can load super quickly.
Use the Link component for linking between pages within your site. For external links to pages not created by your Gatsby site, use the regular HTML <a> tag.
–
Follow the steps below to add Link components to your Home and About pages.
On the Home page, import the Link component from the Gatsby package and add a link to your About page.
src/pages/index.js
import * as React from 'react'
import { Link } from 'gatsby'

const IndexPage = () => {
  return (
    <main>
      <title>Home Page</title>
      <h1>Welcome to my Gatsby site!</h1>
      <Link to="/about">About</Link>
      <p>I'm making this by following the Gatsby Tutorial.</p>
    </main>
  )
}

export default IndexPage
On the About page, import the Link component from the Gatsby package and add a link to your Home page.
src/pages/about.js
import * as React from 'react'
import { Link } from 'gatsby'

const AboutPage = () => {
  return (
    <main>
      <title>About Me</title>
      <h1>About Me</h1>
      <Link to="/">Back to Home</Link>
      <p>Hi there! I'm the proud creator of this site, which I built with Gatsby.</p>
    </main>
  )
}

export default AboutPage
Create a reusable layout component
If you take another look at the finished example blog, you might notice that there are some repeated parts of the UI across each page, like the site title and the navigation menu.
You could copy those elements into each page of your site separately. But imagine your site had dozens (or even thousands) of pages. If you wanted to make a change to the structure of your navigation menu, you’d have to go and update every one of those files separately. Yuck.
Instead, it would be better to create one common Layout component that groups all the shared elements to reuse across multiple pages. That way, when you need to make updates to the layout, you can make the change in one place and it will automatically be applied to all the pages using that component.
In this section, you’ll create your first custom building-block component: Layout. To do that, you’ll need to use a special React prop called children.
Follow the steps below to create a Layout component and add it to your Home and About pages.
Create a new file called src/components/layout.js. Insert the following code to define your Layout component. This component will render a dynamic page title and heading (from the pageTitle prop), a list of navigation links, and the contents passed in with the children prop. To improve accessibility, there's also a <main> element wrapping the page-specific elements (the <h1> heading and the contents from children).
src/components/layout.js
import * as React from 'react'
import { Link } from 'gatsby'

const Layout = ({ pageTitle, children }) => {
  return (
    <div>
      <title>{pageTitle}</title>
      <nav>
        <ul>
          <li><Link to="/">Home</Link></li>
          <li><Link to="/about">About</Link></li>
        </ul>
      </nav>
      <main>
        <h1>{pageTitle}</h1>
        {children}
      </main>
    </div>
  )
}

export default Layout
Syntax Hint: You might have noticed that the Layout component uses a slightly different syntax for its props.
Now instead of looking like this:
const Layout = (props) => { ... }
…it looks like this:
const Layout = ({ pageTitle, children }) => { ... }
This is a JavaScript technique called destructuring.
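Destructuring is plain JavaScript and works outside React too. Here is a tiny standalone example of what that parameter list is doing:

```javascript
// A props object like the one React passes to the Layout component.
const props = { pageTitle: 'Home Page', children: 'some content' }

// Without destructuring, you read each property off the object:
const titleTheLongWay = props.pageTitle

// With destructuring, you unpack both properties in one step,
// which is exactly what ({ pageTitle, children }) does in the parameter list:
const { pageTitle, children } = props

console.log(pageTitle)                      // Home Page
console.log(children)                       // some content
console.log(titleTheLongWay === pageTitle)  // true
```

Either form works; destructuring just saves you from repeating props. everywhere inside the component body.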
Update your Home page component to use the Layout component instead of the hard-coded Link component you added in the previous section.
src/pages/index.js
import * as React from 'react'
import Layout from '../components/layout'

const IndexPage = () => {
  return (
    <Layout pageTitle="Home Page">
      <p>I'm making this by following the Gatsby Tutorial.</p>
    </Layout>
  )
}

export default IndexPage
Update your About page component to use the Layout component as well.

src/pages/about.js

import * as React from 'react'
import Layout from '../components/layout'

const AboutPage = () => {
  return (
    <Layout pageTitle="About Me">
      <p>Hi there! I'm the proud creator of this site, which I built with Gatsby.</p>
    </Layout>
  )
}

export default AboutPage
Style components with CSS Modules
Now that you’ve got your page structure set up, it’s time to add some style and make it cute!
Gatsby isn’t strict about what styling approach you use. You can pick whatever system you’re most comfortable with.
In this Tutorial, you’ll use CSS Modules to style your components. This means that styles will be scoped to components, which helps avoid class naming collisions between components. Gatsby is automatically configured to handle CSS Modules - no extra setup necessary!
Follow the steps below to style your Layout component using CSS Modules.
Create a new file: src/components/layout.module.css. (The .module.css part at the end is important! That’s what tells Gatsby that these styles are using CSS Modules.)
Start by adding a single .container class:
src/components/layout.module.css
.container {
  margin: auto;
  max-width: 500px;
  font-family: sans-serif;
}
Then import that class into your Layout component .js file, and use the className prop to assign it to the top-level <div> element:
src/components/layout.js
import * as React from 'react'
import { Link } from 'gatsby'
import { container } from './layout.module.css'

const Layout = ({ pageTitle, children }) => {
  return (
    <div className={container}>
      <title>{pageTitle}</title>
      <nav>
        <ul>
          <li><Link to="/">Home</Link></li>
          <li><Link to="/about">About</Link></li>
        </ul>
      </nav>
      <main>
        <h1>{pageTitle}</h1>
        {children}
      </main>
    </div>
  )
}

export default Layout
Syntax Hint: To apply classes to React components, use the className prop. (This is another example of a built-in prop that React automatically knows how to handle.)
This might be confusing if you’re used to using the class attribute on HTML elements. Do your best to not mix them up!
Now that you've seen how to style a single element for your component, add some more styles to apply to the other elements in your Layout component.

src/components/layout.module.css

.container {
  margin: auto;
  max-width: 500px;
  font-family: sans-serif;
}

.heading {
  color: rebeccapurple;
}

.nav-links {
  display: flex;
  list-style: none;
  padding-left: 0;
}

.nav-link-item {
  padding-right: 2rem;
}

.nav-link-text {
  color: black;
}
Import the new classes into your Layout component, and apply each class to the corresponding element.

src/components/layout.js

import * as React from 'react'
import { Link } from 'gatsby'
import {
  container,
  heading,
  navLinks,
  navLinkItem,
  navLinkText
} from './layout.module.css'

const Layout = ({ pageTitle, children }) => {
  return (
    <div className={container}>
      <title>{pageTitle}</title>
      <nav>
        <ul className={navLinks}>
          <li className={navLinkItem}>
            <Link to="/" className={navLinkText}>
              Home
            </Link>
          </li>
          <li className={navLinkItem}>
            <Link to="/about" className={navLinkText}>
              About
            </Link>
          </li>
        </ul>
      </nav>
      <main>
        <h1 className={heading}>{pageTitle}</h1>
        {children}
      </main>
    </div>
  )
}

export default Layout
Syntax Hint: In CSS, the convention is to name classes using kebab case (like .nav-links). But in JavaScript, the convention is to name variables using camel case (like navLinks).
Luckily, when you use CSS Modules with Gatsby, you can have both! Your kebab-case class names in your .module.css files will automatically be converted to camel-case variables that you can import in your .js files.
Key takeaways
- React is a library that helps you break down your UI into smaller pieces called components. A component is a function that returns a React element. React elements can be written in JSX.
- Page components contain all the UI elements for a specific page of your site. Gatsby automatically creates pages for components that are the default exports of files in the src/pages directory. The name of the file will be used as the route for the page.
- Building-block components are smaller reusable parts of your UI. They can be imported into page components or other building block components.
- You can **import pre-built components (like Link)** from other packages, or you can write your own custom components from scratch (like Layout).
- You can use props to change how a component renders. You can define your own props when you build a component. React also has some built-in props, like children and className.
- Gatsby isn’t opinionated about what styling approach you want to use, but it works with CSS Modules by default.
------------------------------------------------------------------------
Last update on 03 Nov 2021
--- | https://codersnack.com/gatsbyjs-use-react-components/ | CC-MAIN-2022-33 | refinedweb | 2,085 | 64.1 |
Sclasner is a classpath scanner written in Scala.
It is intended as a replacement of Annovention and mainly used for standalone JVM applications. If you want a more complex solution, please see Reflections.
With Sclasner, you can:
- Scan all .class files (including those inside .jar files in classpath), then use Javassist or ASM to extract annotations
- Load all .po files
- etc.
Scan
For example, if you want to load all .txt files:
import java.io.File
import sclasner.{FileEntry, Scanner}

// We define a callback to process each FileEntry:
// - The 1st argument is an accumulator to gather process results for each entry.
// - The 2nd argument is each entry.
// - The result of this callback will be passed as the accumulator (the
//   1st argument) to the next call.
// - When all entries have been visited, the accumulator will be returned.
def entryProcessor(acc: Seq[(String, String)], entry: FileEntry): Seq[(String, String)] = {
  if (entry.relPath.endsWith(".txt")) {
    val fileName = entry.relPath.split(File.pathSeparator).last
    val body = new String(entry.bytes)
    acc :+ (fileName, body)
  } else {
    acc
  }
}

// We actually do the scan:
// - The 1st argument is the initial value of the accumulator.
// - The 2nd argument is the callback above.
val acc = Scanner.foldLeft(Seq.empty, entryProcessor)
Things in FileEntry:

- container: File, may be a directory or a JAR file in classpath. You may call container.isDirectory or container.isFile. Inside each container, there may be multiple items, represented by the two fields below.
- relPath: String, path to the file you want to check, relative to the container above.
- bytes: Array[Byte], body of the file the relPath above points to. This is a lazy val; accessing it the first time will actually read the file from disk. Because reading from disk is slow, you should avoid accessing bytes if you don't have to.

Signature of Scanner.foldLeft:

foldLeft[T](acc: T, entryProcessor: (T, FileEntry) => T): T
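For example (a sketch against the signature above, not taken from the project's own docs), counting all .class entries in the classpath:

```scala
// Accumulator is a plain Int; we add 1 for every .class entry we see.
val classFileCount: Int = Scanner.foldLeft(0, (acc: Int, entry: FileEntry) =>
  if (entry.relPath.endsWith(".class")) acc + 1 else acc
)
```

Note that this never touches entry.bytes, so no file bodies are read from disk during the count.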
Cache
One scan may take 10-15 seconds, depending on the things in your classpath, your computer spec, etc. Fortunately, because things in classpath do not change frequently, you may cache the result to a file and load it later.

You provide the cache file name to foldLeft:
// You can use File instead of file name
val acc = Scanner.foldLeft("sclasner.cache", Seq.empty, entryProcessor)
If sclasner.cache exists, entryProcessor will not be run. Otherwise, entryProcessor will be run and the result will be serialized to the file. If you want to force entryProcessor to run, just delete the cache file.

If the cache file cannot be successfully deserialized (for example, the serialized classes are older than the current version of the classes), it will be automatically deleted and updated (entryProcessor will be run).

For the result of entryProcessor to be written to the file, it must be serializable.
Cache in development mode
Suppose you are using SBT, Maven, or Gradle.
While developing, you normally do not want to cache the result of processing the directory
target (SBT, Maven) or
build (Gradle) in the current working directory.
Sclasner's behavior:
- If
containeris a subdirectory of
targetor
build, the result of processing that
containerwill not be cached.
- When loading the cache file, if a
containeris a subdirectory of
targetor
build,
entryProcessorwill be run for that
container.
Use with SBTUse with SBT
Supported Scala versions: 2.12, 2.11, 2.10
libraryDependencies += "tv.cntt" %% "sclasner" % "1.7.0"
Sclasner is used in Xitrum. | https://index.scala-lang.org/xitrum-framework/sclasner/sclasner/1.7.0?target=_2.12 | CC-MAIN-2019-43 | refinedweb | 557 | 59.8 |
What Is LINQ?
LINQ is an acronym for Language Integrated Query, which describes where it’s used and what it does. The Language Integrated part means that LINQ is part of programming language syntax. In particular, both C# and VB are languages that ship with .NET and have LINQ capabilities.
How Do I Use LINQ in My C# Code?
To use LINQ the first thing you need to do is add the LINQ using statement.
using System.Linq;
In your code, you need a datasource. For this example, I am going to use a simple array, but it can be anything, like SQL, XML, etc.
int[] data = new int[10] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Next, you need a LINQ query. (Note: I know the Q in LINQ means query, so I have just written "query query". If you are one of those people who hates seeing the phrase "PIN number," then you might not like this blog post.) A LINQ query is very similar to a T-SQL query, so if, like me, you are good with databases, this should make sense to you.
In T-SQL you can have:
SELECT num FROM data WHERE num = 1
In LINQ, this becomes:
var query = from num in data where num == 1 select num;
Finally, you need to do something with the query you have written. I am just going to print the results of my query to the console.
foreach (var num in query) { Console.Write(num); }
What Other SQL-Like Syntax Can I Use?
In T-SQL, you can control ordering using ORDER BY. LINQ has a similar syntax: orderby.
orderby num descending
In T-SQL, you can use GROUP BY. To do something similar with LINQ (grouping our numbers by parity here, since plain ints have no property to group on), use:

  var query = from num in data
              group num by num % 2 into grp
              select grp;
This now requires a change to your foreach loop so you can list the key you are grouping by and the items in that group:

  foreach (var grp in query)
  {
      Console.Write(grp.Key);
      foreach (var num in grp)
      {
          Console.Write(num);
      }
  }
JOINs
So you thought joining tables was a SQL Server only thing? Think again — you can do this in LINQ:
  var joinquery = from cust in customers
                  join prod in products on cust.Id equals prod.CustomerId
                  select new { ProductName = prod.Name, CustomerName = cust.CompanyName };
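For that join to compile, customers and products need to be collections of types shaped roughly like this. These are hypothetical classes for illustration, since the original post doesn't show them:

```csharp
// Requires: using System.Collections.Generic;
class Customer { public int Id; public string CompanyName; }
class Product { public int CustomerId; public string Name; }

var customers = new List<Customer> { new Customer { Id = 1, CompanyName = "Acme" } };
var products = new List<Product> { new Product { CustomerId = 1, Name = "Anvil" } };
```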
Conclusion
There are loads more LINQ features that you can use. While writing this blog, I found this site, which has loads of examples of different queries that you can write with LINQ.
This has inspired me to use LINQ more in my code and learn more about the different queries that could be written.
Bug Description
Binary package hint: gnome-app-install
The dialog that comes up when trying to play an unsupported file is untranslated, and the translation is not currently available in Rosetta.
I'd hope this can be fixed in Feisty Updates too, by introducing the necessary strings for the translators, and letting language pack updates take care of this in the future.
Looking at the /usr/bin/
if not askConfirmation
So I wonder why they are not available for translation at the moment.
What a pity. We missed adding the gnome-app-install script to POTFILES
fixed in my local branch. waiting for mvo to merge and upload
Accepted into feisty-proposed.
I don't see updated gnome-app-install yet in Feisty repositories or https:/
To be more exact, I only see it feisty-proposed, not in feisty-updates, which means it's not "released" for Feisty. Also translation template in Launchpad has not been updated.
but it was released to gutsy.
Yeah, gutsy, but also "Feisty" row was marked as being Fix Released. Putting back to Fix Committed now, as long as it's in proposed only.
You are right. I made a mistake.
Verification for this should probably be a simple installation because neither the pot file nor the POTFILES.in makes it into the actual binary package.
I installed the source package from gnome-app-install version 0.3.31 from feisty proposed and confirm that the changes have been made.
Copied to feisty-updates.
Hi. I'd still like to reopen this for Feisty, since for some reason the translation template for gnome-app-install hasn't been updated for Feisty so that it would be possible to translate the new strings, even though the new version has now been uploaded.
Carlos: Are strings from feisty-updates added to the feisty Rosetta branch automatically? Does it require some manual intervention?
We accept any translation coming from Release, Security, Updates and Proposed pockets for 'main' component. So the answer is yes.
However, I saw that you did a 'copy' to feisty-updates, does such copy require a package rebuild? Without a source package rebuild, Launchpad translations doesn't get any .pot or .po updates.
Hi Carlos,
Carlos Perelló Marín [2007-06-27 10:11 -0000]:
> However, I saw that you did a 'copy' to feisty-updates, does such copy
> require a package rebuild? Without a source package rebuild, Launchpad
> translations doesn't get any .pot o .po updates.
No, it puts the very same source and binaries into the -updates
Packages.
Martin
--
Martin Pitt http://
Ubuntu Developer http://
Debian Developer http://.
Hi Carlos,
Carlos Perelló Marín [2007-06-27 11:25 -0000]:
>.
We only copy packages between -proposed and -updates now, we do not do
direct uploads to -updates. But shouldn't the -proposed build have
provided the updated tarball?
Martin
Yes, we accept -proposed uploads. From this bug report, I understood that you copied it from Gutsy to Feisty. Is that correct? If it's not, there is a bug...
Hi Carlos,
Carlos Perelló Marín [2007-06-27 14:11 -0000]:
> Yes, we accept -proposed uploads. From this bug report, I understood
> that you copied it from Gutsy to Feisty. Is that correct? If it's not,
> there is a bug...
No, of course we don't do crazy things like that :). We copy
feisty-proposed to feisty-updates once it has been verified.
Closing the g-a-i task, since the updated version is in feisty-updates. If there's still a problem, this is rather a Rosetta import bug.
Indeed this is a Rosetta problem, marking this bug there.
Carlos, while you're fixing this in Rosetta, it'd be nice if you could do the manual import for Feisty's gnome-app-install so that there would be possibility to translate gnome-app-install completely for the next Feisty's language pack updates.
Sure, I already asked for that file to do a manual upload (Right now only Launchpad Translation admins are able to do that). I didn't get it yet so I was waiting to have some spare time to get it myself. I hope I will have time to do it later this week.
Cheers.
I just did a manual upload for this file. It will take a while to have it available to translators due the number of entries being imported right now.
Btw, make gnome-app-
Hmm, still not seeing (https:/
Btw, I've filed a bug on the pot/make problem with gnome-app-install in bug 125046.
Hmm, seems like we have a performance problem with the import queue, the file is not yet imported.
I will take a look next week when I'm back at home.
The import has been done now. Though, after talking with Michael Vogt, the fact that it was not updated the first time would be just that the package build was not refreshing the .pot file.
If Feisty and Gutsy lacks the new strings, I will need a new version of both .pot files from someone that could confirm that contain such strings and will do another manual upload (this time should be faster) or a new package version needs to be rebuild.
Hi. Sorry to bother again, but the Feisty template does not seem to be updated in a way that it would include the codec strings available in gnome-app-install 0.3.31. I put this on IRC before Carlos's irc client quit:
Rosetta still doesn't have new strings. Did you fix the make update-po manually before uploading the pot, so that a new pot was generated?
If I get 0.3.31, make update-po in po/ doesn't work unless I add a line top_srcdir=../ in the po/Makefile. After that I get a POT and PO files are updated with the codec strings. The 0.3.31 is in feisty-updates.
This is the POT file I get in Feisty for gnome-app-install 0.3.31 when running make update-po after adding top_srcdir=../ in the Makefile. It includes the codec installation strings currently not in the Feisty translations.
I just upload it to Launchpad. Thank you.
It should be imported in next 10-20 minutes
Thanks! It is now marked as "Needs Review" [1]. msgfmt -c does not give any errors except for the unfilled header information which is natural for a POT file.
[1] https:/
It's now imported and should be available to translate in Launchpad
Cool, this is finally solved for Feisty! Thanks a lot!
Reopening this, unfortunately it seems that this is not working in gutsy. The translations for the dialog are included in eg. /usr/share/
msgunfmt gnome-app-
msgid "Search for suitable codec?"
msgstr "Etsi sopivaa koodekkia?"
It seems the gettext domain is not set correctly in this case, unfortunately I didn't test the fixes properly apparently before, only that the strings are translatable and in language packs. Adding:
import gettext
import gtk
import gtk.glade
(...)
#setup gettext
app="gnome-
gettext.
gettext.
gtk.glade.
gtk.glade.
to activation.py, copying from AppInstall.py, seems to fix the problem. Any chance of fixing this and doing a package for gutsy-updates at some point?
Not sure if the same problem might be elsewhere, too.
The gettext domain is still not getting properly set as of gnome-app-install 0.5.1-0ubuntu1, ie. the dialog is not translated in hardy, either.
That's what I get when I allow people to violate SRU policy and fix in stables first. Michael, please apply this fix to hardy ASAP. Thank you!
Actually this was not completely fixed in feisty, either. But anyway, the fix for the issue is pending. The translations are there now, but the remaining issue seems to be that of the unset gettext domain in some situation(s).
Thanks for the followup and sorry for the bug. I fixed it in bzr for hardy now.
This bug was fixed in the package gnome-app-install - 0.5.2-0ubuntu1
---------------
gnome-app-install (0.5.2-0ubuntu1) hardy; urgency=low
[ Michael Vogt ]
* AppInstall/
- wording fixes (thanks to Matthew Paul Thomas)
* remove some dead code
* README: fix some outdated information (closes: #437415)
* fix gettext init (LP: #106756)
* be more robust if the description is empty for some
reason
* add special handling for the canonical partner repository
in the app view (own text + icon)
* add support for https URLs in the app view
[ Sebastian Heinlein ]
* support emblems without description and remove
unclear descriptions
* move search entry so that it's more clear that it searches
only in the current category
[ Julian Andres Klode ]
* man/: Add manpages for all commands
* debian/control:
- Update to Policy 3.7.3
- Remove dependency on software-
- Use Vcs-Bzr instead of XS-Vcs-Bzr, fix url
* data/gnome-
- Remove the Preferences Button
* setup.py:
- Install gnome-app-
* AppInstall/
- Use more efficient format to store cache (pickle protocol 2)
-- Michael Vogt <email address hidden> Tue, 05 Feb 2008 13:48:32 +0100
Hi, thanks! Works in both gutsy and hardy (though in hardy there's another problem with app-install crashing after clicking the Search button - bug #189490)
Could the gutsy version be moved from gutsy-proposed to gutsy-updates now?
I guess Gutsy can also be marked as "Fix Committed". I just don't find how to proceed in moving the package from gutsy-proposed to gutsy-updates, after verification-done has also been added and over a month has passed by. https:/
The 18 month support period for Gutsy Gibbon 7.10 has reached its end of life -
http://
Gutsy task.
Hmm, also the "Restricted Software" dialog is untranslated.
MING-HSIEN CHEN11,297 Points
def squared(num) not identified as good although it seems to work in my test why?
Hello,
here are my lines of code
def squared(num):
    try:
        num1 = int(num)
    except ValueError:
        print(num * len(num))
    else:
        print(num1 * num1)
When I put that into a Python file and call squared() with the examples given, they all return the right answer. I guess, though, that it is not the best solution, as the text of the challenge said I might not have to use an "else"...
thanks for any help
# EXAMPLES
# squared(5) would return 25
# squared("2") would return 4
# squared("tim") would return "timtimtim"
def squared(num):
    try:
        num1 = int(num)
    except ValueError:
        print(num * len(num))
    else:
        print(num1 * num1)
1 Answer
billy mercier6,259 Points
You need to return it.
def squared(num):
    return int(num) ** 2
For your function to give out its value, you need to return something, and if you don't store that into a variable, you lose it.
To store the value put it inside a variable.
a = squared(5)
print (a) will give 25
MING-HSIEN CHEN11,297 Points
Hi again. Sorry, I should have checked the other questions first. Well, I've seen how others have done it, and I can actually directly try to return the int so that I do not use the else... thanks
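Putting the two pieces together (the try/except from the original attempt and the return from the accepted answer), one version that handles all three examples from the challenge comments might look like:

```python
def squared(num):
    # Return the square if num can be treated as an int;
    # otherwise repeat the string len(num) times.
    try:
        return int(num) ** 2
    except ValueError:
        return num * len(num)

print(squared(5))      # 25
print(squared("2"))    # 4
print(squared("tim"))  # timtimtim
```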
Opened 10 months ago
Closed 8 months ago
Last modified 8 months ago
#33043 closed Bug (fixed)
method_decorator() should preserve wrapper assignments
Description
The function that is passed to the decorator is a partial object and does not have any of the attributes expected from a function, i.e. __name__, __module__, etc.
Consider the following case:

from functools import wraps
import logging

from django.utils.decorators import method_decorator

log = logging.getLogger(__name__)  # named so it doesn't shadow the decorator below

def logger(func):
    @wraps(func)
    def inner(*args, **kwargs):
        try:
            result = func(*args, **kwargs)
        except Exception as e:
            result = str(e)
        finally:
            log.debug(f"{func.__name__} called with args: {args} and kwargs: {kwargs} resulting: {result}")
        return result
    return inner

class Test:
    @method_decorator(logger)
    def hello_world(self):
        return "hello"

Test().hello_world()
This results in the following exception
AttributeError: 'functools.partial' object has no attribute '__name__'
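The root cause can be reproduced with functools alone, outside Django: a partial object simply doesn't carry a wrapped function's metadata.

```python
import functools

def hello_world():
    return "hello"

p = functools.partial(hello_world)

print(hasattr(p, "__name__"))   # False: this is what @wraps trips over
print(p.func.__name__)          # hello_world: the original is still reachable via .func
```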
Change History (6)
comment:1 Changed 10 months ago by
This behavior has been changed in f434f5b84f7fcea9a76a551621ecce70786e2899. Chris, can you take a look?
comment:2 Changed 10 months ago by
PR:
Chris, can you take a look?
OK.
comment:3 Changed 10 months ago by
OK, I'm going to accept this for review. The regression test in the PR shows a change of behaviour at f434f5b84f7fcea9a76a551621ecce70786e2899 as Mariusz said.
Since the change was in Django 2.2 it would no longer qualify for a backport.
paymentservice_event_get_error_id()
Retrieves the error ID from an event.
Synopsis:
#include <bps/paymentservice.h>
BPS_API int paymentservice_event_get_error_id(bps_event_t *event)
Arguments:
- event
- The event to retrieve the error ID from.
Library: libbps
Description:
The paymentservice_event_get_error_id() function retrieves the error ID from the specified event. The response code of the event must be FAILURE_RESPONSE.
Returns:
The error ID. These are the possible values for the error ID:
- 1
- User canceled. This error occurs when a user cancels the payment.
- 2
- Payment system is busy. This error occurs when a user attempts to purchase more than one item at a time.
- 3
- General payment error. There are a wide variety of errors returned from the payment system. In this case, the user should be prompted and shown the specific error message.
- 4
- Digital good not found. This occurs when the digital good matching the ID or SKU cannot be found.
- 5
- Digital good already purchased. This error can occur when a user attempts to purchase a non-consumable or subscription digital good more than once. | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.bps.lib_ref/com.qnx.doc.bps.lib_ref/topic/paymentservice_event_get_error_id.html | CC-MAIN-2020-34 | refinedweb | 171 | 61.53 |
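As a non-runnable sketch (it depends on the BlackBerry BPS headers and on an event loop set up elsewhere; the response-code check uses paymentservice_event_get_response_code(), documented separately in this reference):

```c
#include <bps/bps.h>
#include <bps/paymentservice.h>

/* Sketch: react to a Payment Service failure event.
   The error-ID meanings follow the table above. */
static void handle_payment_event(bps_event_t *event)
{
    if (paymentservice_event_get_response_code(event) == FAILURE_RESPONSE) {
        int error_id = paymentservice_event_get_error_id(event);
        switch (error_id) {
        case 1: /* user canceled: usually no prompt needed */       break;
        case 2: /* payment system busy: retry after current sale */ break;
        case 3: /* general error: show the payment system's text */ break;
        default: break;
        }
    }
}
```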
  assignToSite(emp, site)       // procedural

  // OOP
  emp.assignToSite(site)        // OO variation 1
  // Or
  site.assignEmp(emp)           // OO variation 2
  // Or
  site.employeeList.add(emp)    // OO variation 3

  assignments.insert(emp, site) // relational

{The last line is peculiar. What makes it "relational"? "Relational" isn't any programming paradigm that I'm aware of, and the syntax suggests it should be "OO variation 4". It appears that an object reference called 'assignments' has a method called 'insert' which is being invoked with arguments 'emp' and 'site'.}
  locationPlan.assign(emp, site) // OO variation 3

Which is, in many ways, very little different from your procedural form at the top. I would consider that procedural in OO clothing unless there was more to the locationPlan class. If not, then it is anti-YagNi IMO.

There's no conflict between YouArentGonnaNeedIt and ResponsibilityDrivenDesign. If you think otherwise, perhaps you'd like to fill in ResponsibilityDrivenDesignConflictsWithYagni.

If you have x.y when there is only a "y" at the moment, then it does conflict with YouArentGonnaNeedIt. (I would note that I don't always agree with YagNi either, but this is not one of them unless better justification is given.)

You are not properly taking time into account. Typically, the responsibilities that a system has to meet are exposed over time, but even supposing that they were all known at the beginning of development, they can't all be implemented at once. So, one way or another, the implementation of responsibilities is always partially serialized.

I am not sure what you mean by "serialized". It usually refers to conversion to text for transfer in this context, in my experience.

Serialize - make serial, i.e. make come one after the other.

In what jargon? Java's? I've seen that called "marshalling" in many other places.
  class locationPlan {
      method assign(...) {   // only method for class right now
          stuff...
      }
  }

  function assignToSite(...) {
      stuff...
  }

The second one looks like it better fits YagNi to me.

So, what are the possibilities in the case presented? Well, it might be that we've already seen fit to create Site and Employee, or one of them, or neither of them, before we get round to knowing that employees can be assigned to sites. We might already have created SitePlan?, because of some other responsibility that we have already seen, although that seems less likely. Anyway, the point is that we might very well not "have x.y" already; that is, when we come to consider assignment of Employees to Sites, perhaps we've not yet written Employee::assignToSite(Site), nor Site::assignEmp(Employee), and perhaps we need to make a decision about which to write. And perhaps the answer is neither. And even if we do already have x.y, we might very well see fit to move y to some other class than X, as we discover more about the problem (and the solution). To introduce a new class to handle the newly discovered relationship between sites and employees would be an example of reification, a very useful design technique, one not widely enough known, understood, or applied. And one that in no way conflicts with the notion of only addressing the currently needed requirement. -- KeithBraithwaite

It appears to be yet another case of OO forcing you to pick a taxonomy up front, and then all the rework associated with undoing your decision when the real world poops on it. Rather than reduce the impact of change, the OO crowd has embraced code rework and attached fancy labels to it to make it sound acceptable. "Refactoring Engineer". Translation: OO taxonomania mess cleaner-upper. Don't put taxonomies in code. They don't belong there. The fewer classifications and groupings of things you do in code, the more change-friendly it will generally be.
OO designs often have one pulling their hair out trying to decide whether to put the operation in the likes of:

  site.assignEmp(emp)

otherwise, if we want to query an Employee about the site to which he is assigned, we may have to use

  emp.assignToSite(site)

If both queries are necessary, then probably doing both will be useful. Normally, OO design decisions cannot be made by focusing on a single relation but also the effects that they have on the objects. -- VhIndukumar

  emp.assignToSite(site)
  site.assignEmp(emp)

  BinaryRelation? employeeSiteRelation = new BinaryRelation?();

and to "manage" it, I'd do something like:
  employeeSiteRelation.add(new Object[] {emp, site});
  employeeSiteRelation.remove(new Object[] {emp, site});
  employeeSiteRelation.remove(new Predicate() { /* predicate expression as inner class goes here */ });
  employeeSiteRelation.select(new Predicate() { /* predicate expression as inner class goes here */ });

Ouch! Which part of the requirement as stated makes you think you need all this stuff? Especially all that Predicate nonsense? Oh, wait a minute though, this is interesting. Perhaps the reason that the RelationalWeenies think that the ObjectOrientedWeenies? are busy "reinventing the database in the application" all the time is that the RW's can't imagine a way to implement a trivial feature like the one described without pulling in all this over-generalized junk right away, and exposing it, which indeed would be reinventing etc. And maybe they think, therefore, that the OOW's can't think of any other way to do it, either. Perhaps here is an aspect of the ObjectRelationalPsychologicalMismatch laid bare?

Especially all that Predicate nonsense? How about the nonsense in your comments and invalid inferences? I don't need "all that stuff", I just wrote it once. If you think you have a better solution, then by all means show it. Be my guest to ManyToManyChallenge. Actually you're not enough of an OO weenie to realize that the above code sketch is the paradigmatic OO way of doing things. You put common abstractions in classes and you don't rewrite the same or analogous code over and over again. One such abstraction is "relation", but since you have a psychotic allergy to anything related (pun intended) to relational you refuse to acknowledge it.

Objects don't have relations, they have associations and collections. The difference is fundamental.

And if you think that duplicating that abstraction all over the place in classes like UserToSiteRelationship? is a good OO design, then maybe it's time to go back and study your OO better.

Deary me.
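For concreteness, here is a minimal generic sketch of the BinaryRelation idea being argued about. The class and method names are illustrative (nothing like this ships in the JDK), and the pair-of-Object arrays above are replaced with typed pairs:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// One reusable class for EmployeeToSite, EmployeeToGroup, and so on.
class BinaryRelation<A, B> {
    private final Set<Map.Entry<A, B>> pairs = new HashSet<>();

    void add(A a, B b) { pairs.add(Map.entry(a, b)); }
    void remove(A a, B b) { pairs.remove(Map.entry(a, b)); }
    boolean contains(A a, B b) { return pairs.contains(Map.entry(a, b)); }

    // All Bs related to a given A, e.g. all sites an employee is assigned to.
    Set<B> rightOf(A a) {
        Set<B> out = new HashSet<>();
        for (Map.Entry<A, B> p : pairs) {
            if (p.getKey().equals(a)) {
                out.add(p.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BinaryRelation<String, String> r = new BinaryRelation<>();
        r.add("emp1", "siteA");
        r.add("emp1", "siteB");
        System.out.println(r.rightOf("emp1").size()); // 2
    }
}
```

Predicate-based remove/select could be layered on top with java.util.function.Predicate, but even this much shows the "write the relation once, reuse it everywhere" point being made above.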
The example says "Suppose we need an operation to assign an employee to an office site." Ok, then, I'd provide an interface perhaps like this:
  interface SitePlan? {
      void assign(Site s, Employee e);
  }

That interface can completely support the stated requirement. Now, the question is, how to implement this interface? Some other parts of the requirement will guide us. It might be that the only query that's needed is a list of the employees assigned to a site. In which case we might have SitePlanImpl? delegate to Site, which would require perhaps only an array of Employees. Maybe some other queries are required. If so, and if they are sufficiently complicated, then the interface will grow, and the implementation might end up looking like what you suggest. The implementation (not interface) that you suggest is a perfectly reasonable point to expect to reach - if the application ends up doing lots of clever things with Sites, Employees, and who's at which. But to jump straight into that stuff, and to mix classes like BinaryRelation? with classes like Employee, on an equal footing in the same area of code, that way lies madness. YouArentGonnaNeedIt.

I don't even need a specific interface to support the stated requirement. I just need an object which is an instance of a very generic class (it is not at all mixed with Employee as you claim). In order to set it up, I need just a one-liner: specificRelation = new BinaryRelation?(). While you will need more, a lot more typically: think of the support needed for basic operations like at least removing users from sites, removing sites, removing users. Plus I will use the one-liner consistently for lots of binary relations that you can see all over the place in object models. So it is you who is in breach of YAGNI, because you are never going to need to customize the same relationship-management code for lots of particular relations. I can use it for EmployeeTOSites, EmployeeToGroups?, DepartmentHierarchy?, EmployeeTODepartment, EmployeeToProjects?, DepartmentTOProject and so on and so forth.
While you will rewrite the same boring code to add and subtract pointers from various arrays and hashmaps. I guess the only problem you could see is that BinaryRelation? is not readily available in your favorite collection framework. If you had it ready-made it would've been a no-brainer to use it. If you don't, all you can do is whine that it would be, quote, "over generalized junk". Well, I guess it was procedural weenies that complained about OO abstraction that was "over generalized junk". Now you've just taken over their position, thinking that you fight against the relational evil empire or something :)

"fight against [the] relational evil empire" Don't flatter yourself :)

First of all, I envy your ability to predict the future. From that single phrase "Suppose we need an operation to assign an employee to an office site." you are confident in inferring all those other requirements! I wish I could do that, accurately. But I can't. And I don't believe that you can either.

But of course that's a RedHerring, what you're objecting to. First, you don't need to be able to predict the future in order to use a proper relation object, it's enough to predict the past. We started from a contrived example, mostly unspecified, to which I commented that typically I'd use an object rather than a class to manage a relation. And you objected to that, saying something to the effect that I didn't know what I was saying. I proved to you I did know what I was saying, and again you launched into an uncalled-for diatribe, waving at me some XP rhetoric and some quite insulting nonsense about YAGNI. Even when your alternative solution would be clearly in breach of YAGNI under the reasonable premise that BinaryRelation? is already written, and that's because it is more complex, it requires more lines of code and it is more fragile.
As for predicting the future you just have to look in the past code you've written and see that for the a quasi majority of cases relationships needs to be created as well as destroyed, business objects need to be created as well as deleted. Guess why all collection classes have add as well as remove? Only these requirements applied to your average of more than 5 relations per object model makes it more than reasonable to write your BinaryRelation class even if you didn't have it. Secondly, a long time before I learned about YagNi and the rest, I learned another, but similar, principle which has served me well - always UseTheLeastPowerfulConstruct? that will do the job. As such, I'd never as a first choice use so powerful a class as your BinaryRelation. (By the way, where does it say in the requirement that Employee and Site are equal partners in the relationship? Is your BinaryRelation bidirectional? If so which part of the requirement makes you think that flexibility is worth paying for in this case?). Well, if you think that BinaryRelation is "such a powerful class", I should direct you to read the standard ADT libraries in Lisp, Haskell, SML, Ocaml, Clean to give just a few examples. They have much more powerful constructs (with list comprehensions, higher order functions and all enchilada) for free and they use those as regularly as your average Java dude uses the array construct. So much for your stupid objection of "overgeneralized junk". By the way, I do think that directionality of relationships is OO non-sense, and unless you convince me that we're in some kind of embedded system that makes an extra pointer too high of a price to pay (an unlikely case here), I pay nothing for having my relations proper relations, and programmers who care about directionality end up paying more. Just look above, I've written one line of code, so where the hell is the cost? Now pick your favorite algebra book and show me where they speak about "directionality" of relations. 
Relations do not have directionality, this is a useless concept in >90% of the cases. Thirdly, if a slew of extra requirements such as you suggest came along, I would not be "rewriting the same boring code", I'd be pulling up commonality into shared base classes and getting reuse. And I might, at some time, when sufficient of these requirements have been found, switch to a library-provided relation class such as you suggest. This is all a matter of taking small steps, and making suitable engineering tradeoffs at each step. Notice that this sequence of events could take a little time as a few minutes. Sure. I have nothing against your style, but don't make it my style, will you? Talk is easy, how about taking the ManyToManyChallenge? No thanks. You've already trolled that one into the ground. So Costin is a troll also for suggesting that relational theory may simplify this issue? RelationalWeenie == Troll in your mind, it does seem.
static void B() {...}} That's a plain-jane function. No beating around the bush. In practice I never use them, but they are available if you want them. You still have the "dot problem" of A.B() unless you redefine it locally for each using class (possibly a OnceAndOnlyOnce violation of sorts). YagNi dictates calling B() instead of A.B() unless A.B() solves some immediate problem. And, it is still more syntax to set up than a function. Notice the 2 levels of nesting instead of the one that would be needed for a function. There is no dot problem. YAGNI doesn't say "write procedural code". YAGNI says don't add features you don't need. Sheesh. What do you mean? You don't need the stuff before the dot. It is not needed. It serves no purpose at the time of writing the code. It is superfluous to identified needs. How else can it be interpreted? A function is less code both on the defining side and caller side (if outside the original class). YAGNI says don't add features you don't need. The stuff before the dot is not a feature and not addressed by YAGNI. I see no reason to exclude class wrappers from YagNi. Class "wrappers" are features? Did a customer ask for them? No. That is yet another reason to exclude them. Then we agree. Class "wrappers" are not features and as such are not subject to YAGNI. YAGNI is about features. For the sake of argument and your characterization of YagNi is correct, then is there not a coding equivalent of YagNi that perhaps goes by a different name? Not that I know of. My rule for code is that it should be easily understood for an average programmer. I am just as verbose as I need to be to meet that requirement and no more. I quit trying to write "less code" when I upgraded my 32x16 character display and 64K RAM. Now I try to write just enough code so that any programmer familiar with the language and idioms will be able to easily understand what I'm doing. Perhaps a case of QwertySyndrome. I like lean code. 
If something before the dot does not contribute, then I would rather see it cut off. Sometimes with bloated code I have to get a highlighter pen and highlight the stuff that really matters so that I can read it. If you have the ability to read bulky code fast without a highlighter pen, I applaud you. I guess I am too much like PaulGraham......without the millions. [Ha ha ha ha! You, my friend, are nothing like PaulGraham.] {I generally meant "in that regard". He believes that unnecessarily long code is a smell that your coding abstractions are not sufficient. BTW, your ha's are as repetitious as your code.} Aside: why do some people who are ignorant of a technology, methodology, or way of thinking insist on accusative, confrontational language, instead of a humble, inquisitive approach? You end up in situations like this one, anyone who's done ANY OO programming knows about static methods. Yet all this effort was expended due to a confrontational approach from someone who had no clue about OO and made an accusation that you can't do functional programming in OO languages. Reminds me of UseNet. Like it says at the top of this page, it's trolling. Accusation and confrontation are the goal, not wisdom sharing. {I simply set out to try to explore something about OOP that bothers me. Why is that "trolling" or "confrontational"? I honestly feel that the "other guy started it". Maybe I am delusional and need deep therapy and it really is all my evil fault, but I did not *consciously* set out to start fires, but to convey my view of things as clearly as I could in order for others to study them with me. Maybe when emotions cool off a bit, we can go back and soften any harshness found (without changing the meaning).} Fix my language if you don't like it. It is a round-a-bout function in OO clothing. The bloat builds up over time.
a = foo.bar(b) + foo.bar(c) + foo.bar(d) + foo.bar(e) Verses: a = bar(b) + bar(c) + bar(d) + bar(e)It makes code harder to read IMO. If you disagree and still find it just as easy to read, that is fine. Everybody is different. But why do you insist in calling me a troll because I claim it makes code harder for me to read???? It is objectively longer. Look at it! vs In a method of someClass:
int a = foo.barAndAdd(b,c,d,e);then in foo's class:
int barAndAdd(int b, int c, int d, int e) { return bar(b)+bar(c)+bar(d)+bar(e); }Now the dependency between someClass and foo's class is smaller. someClass sends foo one message instead of 4. It may be more characters than the procedural line, but that isn't enough reason to go back to a procedural language. OO languages add enough value to justify the added verbosity. As I said before, I don't call you a troll. I call what you do trolling. It is still a vague word and rude. If I call your content "biased" instead of your person, that is slightly nicer, but still rude. The word is well defined. It fits in this situation. Is there a less rude term we can use? So you are admitting to being rude? That is a start. I saw no consensus under TrollDefinition, and some definitions are about person A's guess as to person B's evil motivations rather than the nature of content. Every word/phrase/idiom falls someplace along the rudeness-politeness spectrum. I see a consensus on TrollDefinition. Your posts are a good example of the term. Is there a less rude term we can use? I don't see any goddam consensus. You are the "troll". A function is objectively less code than option #3, not an opinion. It is less ASCII characters that a 4-year-old can count. You are wrong. Swallow the living fact and shut up now. I am offended at being called a troll, and shall call you a blind stubborn OO shithead in response. You have been cranking out bloat for so long that you accept it as a way of life. The addition of an object or class name to the front of a method provides a common idiom for communicating subject-verb or direct object-verb relationships between OO programmers. Longer does not (in this case) mean harder. IDEs make it easy to type verbose identifiers once they have been created. Reading verbose identifiers can be much easier than reading terse identifiers. There are rare occasions when a method truly belongs to no class. I still see this one-class thing as arbitrary. 
There is no force in nature that I can identify that naturally selects one and only one entity for every action. Perhaps it models your own brain patterns well, or is a UsefulLie of some sort, but I see nothing universal or external about it. So what if it is arbitrary? So what if you can identify a force? It works well for me and many other programmers. So lets declare EverythingIsRelative, delete wiki and go home. I haven't had one of those for years. If I have one tomorrow I'll create a class for it to live in and get on with the job. I'm not ashamed to write a line of procedural code. Every one of my methods contains purely procedural code. Most of the methods they call are in the same class. My first OO language (C++) began as a preprocessor for a procedural language. OO is, for the most part, just a better way to organize procedural code. {So I've heard, but not seen.} [You sound convinced that you should write in a procedural language. Sorry, this is a tired old debate that's been repeated countless times on this wiki and through the annals of OO history, I'm not sure many people care to argue this again and again, especially in a confrontational manner.] Confrontational? You guys called me a "troll" first! Hypocrits! Arrrrg! [I never called you a troll] Well, then my frustration is directed at the being who did. Are you saying I am wrong? I am not wrong. x.y is anti-YagNi when y is appropriate. Are you saying it is too minor to matter? Or are you just so used to doing that way that you write it off as the cost of doing OOP? Is this wiki only for "major" issues? It could add up to roughly say 20 percent more code bulk. That is hardly "minor". IMO you are calling me a troll to avoid facing the issue. Prove my hunch wrong. Troll is such stupid vague word anyhow. [No I'm not saying you're wrong. You're kind of like the guy who comes along and tells me C++ is much better than Java (enumerate reasons). 
However that "debate" is so old and tired and subject to personal taste that I would rather stare at the wall for 15 minutes than spend 15 minutes debating the issue with you. I also think that the brain evolved to grep colors and icons and spatial features rather than just monochrome text, and therefore I like working in GUI applications better than Vi. But you won't find me arguing that one very often either]. We write it off as a cost. The occurrence of a classless method is very rare. It is possible that you habitually force the activity to go with one or the other noun in a somewhat arbitrary decision. I cannot very your code because it is InternalEvidence, but a possibility to consider. You're ignoring my next sentence: My methods tend to grow from my classes through refactoring. Think about that for a second. The only time I add a method is when one class's method needs to ask an object or class to do something. There's nothing arbitrary about which class the method belongs to, or if it belongs to a class at all. I would use a procedural language, or use an OO language in a more procedural way if I wanted to. YAGNI is not a dictate to do the dumbest or hardest to maintain thing that could possibly work. YAGNI is about features. Don't add features that aren't needed. Don't day dream in code. -- EricHodges Well, IMO there should be a YagniForCodeRule? also. Lean code makes it easier for me to read and understand (all else being equal). I find unnecessary dot-danglers and other doo-dads distracting. That is the bottom line. You cannot take that away from me, for I know my mind better than you. I have no intention of taking anything away from you. After working with large teams on large code bases I find that Java provides a very useful mix of verbosity and simplicity. I'd much rather maintain millions of lines of well refactored Java than C. We write for the average Java programmer, not any individual brain. 
-- EricHodges I suppose you could argue that somebody not good at reading "well decorated" code should not be in the business. But us lean-coders would probably say that "decorators" should not be in the business. Say whatever you like. I've written PDP-11 C code that was limited to 6 character variable names. If that's the sort of thing you consider "lean code", I hope I never see another line of "lean code" as long as I live. There are many more like me who have come to the conclusion that code is first and foremost a document through which programmers communicate with each other, and that languages like Java are well suited to that task. -- EricHodges I don't consider cryptic variables part of the "lean" philosophy, because good naming you *do* need. I would like to see more evidence that Java is well suited for programmer communication. JavaAndTeams? perhaps. Talk to Java programmers for whom Java is not their first language (like me). -- EricHodges Just because the first language you used stunk does not mean Java is the ultimate solution. C? I don't think highly of C either. You can talk to me if you want more evidence for why I use Java, or you can ignore me. I didn't claim Java is the "ultimate solution". -- EricHodges HowImportantIsLeanCode
x= foo.bar(a) + foo.bar(b) + foo.bar(c) + foo.bar(d)to
x= foo.barAndAdd(a,b,c,d) // ... where foo now contains the splendidly written method int barAndAdd(int b, int c, int d, int e) { return bar(b)+bar(c)+bar(d)+bar(e); }Wow !!! What a feat. And the person who wrote this was calling the other person a troll. That might be, but if the above was not a troll, well it was a sheer display of confusion and/or incompetence. Therein lies the epitome of ObjectOrientedPsychologicalMismatch?: people are getting so confused with all these classes, objects, refactorings, and now YAGNI + XP + agile and stuff that they forgot to apply common sense to writing programs. The psychological mismatch happens especially among that part of the OO population that devoured DesignPatterns, Refactoring, and all that jazz, before learning the basics of programming, therefore they ended up more confused than enlightened about what software is all about. And when people can't discuss software without having a common ground on the basics of it a TrollFest? is sure to happen. Boys and gals, we have news for you: you can throw out the window all those fancy-shmancy pattern books of yours and replace them with only one: StructureAndInterpretationOfComputerPrograms. It's also available on-line for free. You need to learn first how to program at least half-way decently, it ain 't that difficult unless you keep distracted of those fancy patterns and refactorings. Before you can engage in TrollFest? on how ObjectOriented subsumes procedural programming through the use of static methods you need to learn first what procedural programming is. Try to start with StructureAndInterpretationOfComputerPrograms, it will do you much good, you'll even learn more about objects and abstractions than you ever had a chance to learn from your favorite OO book. 
If you feel some pain reading SICP you might want to try to ease yourself into the subject with the wonderfully written HowToDesignPrograms () which has such fine guidelines for when to use auxiliary functions, how to introduce variables, and all the basics. I don't get it. What's wrong with refactoring all those calls to foo into foo's class? From all the available evidence, that's a better place for them. -- EricHodges It was an example that showed how many x.y() calls in a row can add up to visually more complicated code. It could by different classes and methods being called, not just the same ones. Beyond that, we would probably need more realistic examples to examine.
barAndAdd(int,int,int,int)for the sole purpose of allowing a client to simplify a trivial expression like Foo.bar(a) + Foo.bar(b) +Foo.bar(c) + Foo.bar(d). Now we tied the Foo to the fact that client has precisely four values that it will pass to bar, and that the results will be combined in a precise way. Anything at all changes in the needs of the client and either the barAndAdd becomes useless or it has to be changed as well, you can't have coupling worse than that. The only way in which Foo.barAndAdd can be justified as a piece of code is if barAndAdd reflected an operation of particular significance in the context of Foo so that it will always have 4 values and the addition bar(x1)+bar(x2)+bar(x3)+bar(x4) represented something really important. But that wasn't how that discussion evolved, and in any case then the operation should be called Foo.doSomeImportantComputation(int x1, int x2, int x3, int x4), while the name Foo.barAndAdd clearly reflects the intention of chaining the bar function and the addition operator. Well, it should never be the responsibility of particular classes to do such stuff. To go to the root of that part of the troll, the claim that: Foo.bar(a) + Foo.bar + ... is worse than bar(a) + bar(b) + ... had not any merit either. To begin with, it is a non-issue in most OO languages which allow the second form as well. It is one of JavaDesignFlaws : it doesn't allow free standing functions. In other words I have to use class names + static methods volens-nolens, for example: Math.cos(x1)*Math.cos(x2)*Math.cos(x3) [nobody thought of Math.cosAndMultiply(x1,x2,x3)??]. However this is a Java (and SmallTalk) specific design problem and even Java is gonna provide a workaround next time (in 1.5). Not Smalltalk. The Smalltalk expression is "theta sin" or "1.5 tan". It doesn't need free standing functions. 
The Java 1.5 workaround just saves keystrokes - it doesn't fix the problem that OO in Java is an afterthought bolted on to yet another C-like language. (Anyone who disagrees has the job of explaining wrapper classes.) Second, when you see Foo.bar(a) + Foo.bar(b) you normally think that Foo is used as a namespace to disambiguate the name bar which potentially can be used by several modules. And in non-trivial software you'll always have some name clashes, and you will need namespaces. So Foo.bar will enhance program clarity. Even if you program in a procedural language you'll often times want to say Foo.bar() instead of just bar(), and as a matter of fact it is a common pattern in large PascalLanguage programs where Foo is an unit and the function bar can be named Foo.Bar. Even databases (tablizers' favorite pet troll) use namespaces in order to organize their names better. So you'll see for example SELECT * FROM Accounting.Journal instead of SELECT * FROM Journal, and organizing names is so unrelated to what was discussed in the page that the whole thing should be wiped out. All in all, MuchAdoAboutNothing? as any serious troll fest, however some accents were really fun. -- CostinCozianu I don't like those in SQL either. If such references are not needed, I don't use them. Yes, it may be a personal preferences, but many things in software engineering are. Foo's class can still provide bar(), so adding barAndAdd doesn't limit how other classes use foo's class. A List wasn't used in the example, so I didn't introduce one for the refactoring. The point of the refactoring was that most of my method calls are within the same class and the objection to repeating the object or class name is baseless. Don't read too much into it. -- EricHodges No, it doesn't limit how other classes uses Foo, but it clutters Foo with useless code. 
If you were designing Java library, would you put Math.cosAndAdd(x1, x2, x3, x4 ) in the Math class because I don't like the aspect of Math.cos(x1) + Math.cos(x2) + Math.cos(x3) +Math.cos(x4)? I didn't think so. The solution is to fix the language design so that I can write cos(x1) + cos(x2) + cos(x3) + cos(x4). So if you provided examples in Java you had to eat your cake and acknowledge the limitation. Saying that you work around it by creating barAndAdd is nonsense. The ugliness of Math.cos(x1) + Math.cos(x2) + Math.cos(x3) + Math.cos(x4) is a language design problem that has nothing to do with the current troll fest (which was procedural vs OO I think?). But even in Java 1.5, when I'll be able to write cos(x) instead of Math.cos(x), I still won't be able to write
List.map( myList, cos );I'll have to write
List.map(myList, new Function() { public Object calculate(Object argument) { return cos((Double)argument)).doubleValue(); } };And once I have to compose more than one function in there it will be totally butt-ugly bloated code. Therefore your claim that you can do functional programming with static methods is kind of non-sense. Of course, tablizer's reply was non-sense as well. I'm not writing a library. I'm explaining why the cost of object.method or class.method is not high enough to ignore Java as a language. It doesn't matter if you're library writer or simple programmer, Java as a language is not adequate for functional programming, while your claimed was that you could do functional programming by using static methods. That's factually false. I never made that claim. You're confusing me with someone else. -- EricHodges I apologize for the confusion. The claim was made and implied several times on this page. The similarity of the operations in the example was taken too literally, I am afraid. More likely the pattern would resemble:
result = cos(x) + sin(y) + tan(z) * arctan(x) + exp(x, z) ....I find that much easier on the eyes and "grok lobes" than:
result = math.cos(x) + math.sin(y) + math.tan(z) * math.arctan(x) + math.exp(x, z) ....I find it slightly easier. Java 1.5 will add support for static imports. We'll still have classes to prevent name collisions, but we won't have to type them everywhere. -- EricHodges Java's math function library disguised as a static class is one of the several pathologies that distinguish Java from an object oriented language. What it should look like is:
result = x.cos + y.sin + z.tan * x.arctan + x.exp(z) ...-- MarcThibault I guess I am a visual thinker. I like to create a code image that is close to the IdealMentalImage, and all the class markers interfere with that, putting large distracting space between the parts that are conceptually relevant. Maybe my head needs more RAM or something. Perhaps. Read this:. So I am autistic? AspergersSyndrome perhaps, but not autistic. Many folks categorize AspergersSyndrome as a mild form of autism. It doesn't matter what you call it. Temple Grandin thinks in pictures and has written about it. It may be useful to you. Personally I don't think visual thinking has anything to do with syntax, but who am I to judge? I think in sounds. Perhaps most autistic or Asperger people are "visual thinkers", but that does not necessarily mean that most visual thinkers are also autistic and/or Asperger. What sounds do the class-dot thingies make? So
x = a + b * c x = glob.a + glob.b * glob.cThese are nearly equally grokkable to you? I personally find the first greatly superior. Dots make the "dot" sound. "glob.a" sounds like "glob dot a". Neither of your examples is clear. Are a, b and c in the same namespace as x? If so, there's no need for glob dot. If not, the first statement is meaningless. Mathematicians have kept their variables short for hundreds of years for good reason. (They generally define and describe the longer versions elsewhere.) The "path bath" approach is going against centuries of tradition. Also, repeating "glob" over and over is a violation of OnceAndOnlyOnce. I like to factor out the bloat and repetition that distracts one from the purpose of the task. The closer the code is to pseudo-code the better in my opinion. | http://c2.com/cgi-bin/wiki?ResponsibilityDrivenDesignConflictsWithYagni | CC-MAIN-2016-40 | refinedweb | 6,231 | 66.23 |
Defining member functions outside the class definition
All of the classes that we have written so far have been simple enough that we have been able to implement the member functions directly inside the class definition itself. For example, here’s our ubiquitous Date class:
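The original listing appears to have been lost during extraction. Here is a minimal sketch of what such a Date class might look like with every member function defined inside the class definition (the member names m_year/m_month/m_day are assumptions, following common conventions):

```cpp
#include <iostream>

class Date
{
private:
    int m_year{};
    int m_month{};
    int m_day{};

public:
    // constructor defined inside the class definition
    Date(int year, int month, int day)
    {
        setDate(year, month, day);
    }

    // all member functions are implemented right here in the class
    void setDate(int year, int month, int day)
    {
        m_year = year;
        m_month = month;
        m_day = day;
    }

    int getYear() const { return m_year; }
    int getMonth() const { return m_month; }
    int getDay() const { return m_day; }

    void print() const
    {
        std::cout << m_year << '/' << m_month << '/' << m_day << '\n';
    }
};
```

With a class this small, keeping everything inline is perfectly readable; the problem only appears as the class grows.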
However, as classes get longer and more complicated, having all the member function definitions inside the class can make the class harder to manage and work with. Using an already-written class only requires understanding its public interface (the public member functions), not how the class works underneath the hood. The member function implementation details just get in the way.
Fortunately, C++ provides a way to separate the “declaration” portion of the class from its “implementation”: member functions can be defined outside the class definition, with each function name qualified by the class name and the scope resolution operator (::). For example, a class whose constructor (including its member initialization list) is defined inside the class definition:
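The “before” listing seems to have been lost; a sketch of a constructor defined inside the class, using a member initialization list (member names assumed):

```cpp
class Date
{
private:
    int m_year{};
    int m_month{};
    int m_day{};

public:
    // constructor defined inside the class, with a member initialization list
    Date(int year, int month, int day)
        : m_year{ year }, m_month{ month }, m_day{ day }
    {
    }

    int getYear() const { return m_year; }
};
```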
becomes:
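The transformed listing is missing here; a sketch of the same class with the constructor declared inside the class but defined outside it, qualified with the scope resolution operator (member names assumed):

```cpp
class Date
{
private:
    int m_year{};
    int m_month{};
    int m_day{};

public:
    Date(int year, int month, int day); // declaration only inside the class

    int getYear() const { return m_year; } // trivial accessor stays inline
};

// definition outside the class: the name is qualified with Date::
Date::Date(int year, int month, int day)
    : m_year{ year }, m_month{ month }, m_day{ day }
{
}
```

The class definition now reads like an interface summary, while the constructor body lives elsewhere.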
Putting class definitions in a header file
In the lesson on header files, you learned that you can put function declarations inside header files in order to use those functions in other files. Classes are no different: a class definition can be placed in a header file of the same name and #included into any file that needs the type. For example, a file using the Date class would start with:
#include "Date.h"
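The surrounding header/source listings appear to have been lost; here is a sketch of the split, with the two file boundaries marked as comments so the example stays in one compilable unit (member names assumed):

```cpp
// ----- Date.h -----
#ifndef DATE_H
#define DATE_H

class Date
{
private:
    int m_year{};
    int m_month{};
    int m_day{};

public:
    Date(int year, int month, int day);
    void setDate(int year, int month, int day);

    // trivial access functions are fine to leave inside the class
    int getYear() const { return m_year; }
    int getMonth() const { return m_month; }
    int getDay() const { return m_day; }
};

#endif

// ----- Date.cpp -----
// #include "Date.h"   (shown as a comment only because both files are combined here)

Date::Date(int year, int month, int day)
{
    setDate(year, month, day);
}

void Date::setDate(int year, int month, int day)
{
    m_year = year;
    m_month = month;
    m_day = day;
}
```

Any other file that wants a Date simply writes `#include "Date.h"`; Date.cpp is compiled once and linked in.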
Doesn’t defining a class in a header file violate the one-definition rule?
It shouldn’t. If your header file has proper header guards, it shouldn’t be possible to include the class definition more than once into the same file.
Types (which include classes) are exempt from the part of the one-definition rule that says you can only have one definition per program. Therefore, there isn’t an issue #including class definitions into multiple code files (if there were, classes wouldn’t be of much use).
Doesn’t defining member functions in the header violate the one-definition rule?
It depends. Member functions defined inside the class definition are considered implicitly inline. Inline functions are exempt from the one definition per program part of the one-definition rule. This means there is no problem defining trivial member functions (such as access functions) inside the class definition itself.
Member functions defined outside the class definition are treated like normal functions, and are subject to the one definition per program part of the one-definition rule. Therefore, those functions should be defined in a code file, not inside the header. The one exception for this is for template functions, which we’ll cover in a future chapter.
So what should I define in the header file vs the cpp file, and what inside the class definition vs outside?
You might be tempted to put all of your member function definitions into the header file, inside the class. While this will compile, there are a couple of downsides to doing so. First, as mentioned above, this clutters up your class definition. Second, if you change anything about the code in the header, then you’ll need to recompile every file that includes that header. This can have a ripple effect, where one minor change causes the entire program to need to recompile (which can be slow). If you change the code in a .cpp file, only that .cpp file needs to be recompiled!
Therefore, we recommend the following: classes used in only one file that aren’t generally reusable can be defined directly in the single .cpp file they’re used in. Classes meant to be used in multiple files, or intended for general reuse, should be defined in a .h file with the same name as the class, with their non-trivial member functions defined in a .cpp file of the same name. Trivial member functions (such as access functions, or constructors and destructors with empty or one-line bodies) can be left inside the class definition. Separating the declaration and the implementation takes a little more work up front, and you should get used to doing so.
Default parameters
Default parameters for member functions should be declared in the class definition (in the header file), where they can be seen by whoever #includes the header.
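A small sketch of that rule (names assumed): the default values appear only on the declaration in the header, and are not repeated on the out-of-class definition.

```cpp
class Date
{
private:
    int m_year{};
    int m_month{};
    int m_day{};

public:
    // defaults go on the declaration, visible to anyone who #includes the header
    void setDate(int year, int month = 1, int day = 1);

    int getMonth() const { return m_month; }
    int getDay() const { return m_day; }
};

// in Date.cpp: the defaults are NOT repeated on the definition
void Date::setDate(int year, int month, int day)
{
    m_year = year;
    m_month = month;
    m_day = day;
}
```

A caller can then write `d.setDate(2024);` and get month and day filled in as 1.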
Libraries
Separating the class definition and class implementation is very common for libraries that you can use to extend your program. Throughout your programs, you’ve #included headers that belong to the standard library, such as iostream, string, vector, array, and others. Notice that you haven’t needed to add iostream.cpp, string.cpp, vector.cpp, or array.cpp into your projects. Your program needs the declarations from the header files in order for the compiler to validate that you’re writing programs that are syntactically correct. However, the implementations for the classes that belong to the C++ standard library are contained in a precompiled file that is linked in at the link stage. You never see the code.
Outside of some open source software (where both .h and .cpp files are provided), most 3rd party libraries provide only header files, along with a precompiled library file. There are several reasons for this: 1) It’s faster to link a precompiled library than to recompile it every time you need it, 2) a single copy of a precompiled library can be shared by many applications, whereas compiled code gets compiled into every executable that uses it (inflating file sizes), and 3) intellectual property reasons (you don’t want people stealing your code).
Having your own files separated into declaration (header) and implementation (code file) is not only good form, it also makes creating your own custom libraries easier. Creating your own libraries is beyond the scope of these tutorials, but separating your declaration and implementation is a prerequisite to doing so.
Hi!
I'm confused. Is implementation the same as definition?
For functions? Yes.
Just wondering if we can instead put a forward declaration for the class in the header file, like we used to in our earlier lessons?
Date.h
Date.cpp
A forward declaration of a class only allows you to use it as a pointer/reference; you can't access any of its members. Classes need to be fully defined to be usable.
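A small illustration of that limitation (names assumed): a forward-declared class is an incomplete type, which is enough for pointers and references but not for creating objects or accessing members.

```cpp
class Date; // forward declaration: Date is an incomplete type from here on

// OK: pointers and references to an incomplete type are allowed
void schedule(const Date& d);
Date* g_today{ nullptr };

// Not OK (won't compile) before the full definition is visible:
// Date d;        // error: incomplete type
// d.getYear();   // error: members are unknown

class Date // the full definition makes the type complete
{
private:
    int m_year{ 2024 };

public:
    int getYear() const { return m_year; }
};

// now that Date is complete, we can define functions that use its members
void schedule(const Date& d)
{
    (void)d.getYear();
}
```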
thanks very much.
The C++ Standard Library: A Tutorial and Reference, 2nd Edition
Is this book still relevant for the STL? It was published in 2012.
There's probably a lot of useful information in there, but take it with many grains of salt. There have been additions and changes to the language since C++11 and the things that are described in the book might no longer be true.
For something more current there's Mastering the C++17 STL by Arthur O'Dwyer.
There is also a video course by Kate Gregory called Beautiful C++: STL Algorithms but it was last updated in 2016 and does not go into as much depth as far as I know.
"Declaration" and "definition" in regard to classes were used interchangeably in this section.
Can you explain the difference between the two? If whatever we code in the .cpp file is the "implementation", then what is the code in the .h file called?
Inside the class, normal rules for declaration and definition apply
Most often, classes are defined in a header and their member functions are only declared in the class definition. The member functions are then defined in a source file.
Hi nascar (once again....)
At this point, I am confused again.
Perhaps it's not an 'update' error like in the previous chapter. But why are we once again calling the function inside the constructor? I don't see the difference between doing so, and just using a member initializer list. I've read in a previous chapter that this method can be used eg. when we want to reset our members, but other than that, I see no clear usage of calling our member functions in the constructor.
Could you please help me understand that? Thank you a lot.
If `setDate` will always remain the same as it is, the constructor should initialize the members in the member initializer list.
But there are reasons why `setDate` might change, eg. to overflow into month when `day > 31`. By calling `setDate` from the constructor, that behavior would also apply to the constructor, which is desirable.
hey,I have a few questions here :I really don't know what the difference is between .cpp and .
Let's say I define my class and put member functions delcaration there but instead of creating a .cpp file for definitions I create another .h file and do it there.Can't this be done ? if yes why shouldn't we do this ?
2:"the implementations for the classes that belong to the C++ standard library are contained in a precompiled file that is linked in at the link stage. You never see the code." how is this done ? can we do the same with our own classes?.and one thing else,when I want to create an object of the class I have declared and defined in separate .h and .cpp file,ilcuding the .cpp file in another file such as main.cpp is enough or I have to include both of them ?
thanks in advance!
Headers files get copied and compiled as a part of every source file that includes them. If you include the header in 20 source files, it will get compiled 20 times. That's slow. On top of that, your header will probably need extra includes to hold the definitions. Those includes get copied and compiled multiple times too and might cause collisions with names from other files.
Do as much as possible in cpp files, and as few as possible in headers.
You can compile your project as a library. This produces a .lib, .a, .so, .dylib or so file depending on your platform. To use the library in another project, you only need that .lib etc. file and the headers. Users of the library don't see the definitions if you placed them in source files.
Never include cpp files. You include header files. The compiler is happy with declarations alone. It compiles main.cpp (Which included class.h) and class.cpp separately (These files don't know about each other). The linker later connects the compiled versions of these 2 source files together.
hello again, by the way I tried to include .cpp file into main and I couldn't.(#include "something.cpp"),I couldn't do it in this way.
Thanks for your answer.
Hey there, I've been very confused in trying to figure out the difference between
Class declaration VS Class definition , care to comment please?
P.S. 'Class definition' shouldn't make sense because isn't it just a blueprint for user-defined type and it doesn't allocate any memory not until we define an object of our class type thus shouldn't there be only the term 'Class declaration' ?
The memory allocation aspect of "definition" is only valid for variables, not for types.
Inside the class, there can again be declaration and definitions. These follow the same rules as if they were outside of the class
Name (required)
Website
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/class-code-and-header-files/ | CC-MAIN-2021-17 | refinedweb | 1,717 | 65.12 |
removing commas with sbt-nocomma
August, 2016
During the SIP-27 trailing commas discussion, one of the thoughts that came to my mind was unifiying some of the commas with semicolons, and take advantage of the semicolon inference.
This doesn't actually work. @Ichoran kindly pointed out an example:
Seq( a b c )
This is interpreted to be
Seq(a.b(c)) in Scala today.
January, 2018
Recently @swachter opened a thread called Comma inference that reminded me of this topic:
Scala has a well known mechanism called “semicolon inference”. I wonder if a similar mechanism may be useful for parameter and argument lists which could then be called “comma inference”.
Here's my response:
I don’t think Scala (the spec as well as us users) can handle more than one punctuation inference, but there might be some tricks you could try.
You have to get past the parser, so you need a legal “shape” of Scala. For example,
scala> List({ 1 2 3 }) res1: List[Int] = List(3)
The above is still legal Scala. The curly brace gets parsed into
Blockdatatype in the compiler. It might be possible to define a macro that takes vararg
Int*as argument, and when
Blockis passed, expands each statements as an argument.
In other words, instead of pursuing a language change, I'm suggesting that we can first experiment by rewriting trees. By using blocks
{ ... } we can get around the infix problem pointed out by Rex.
scala> :paste // Entering paste mode (ctrl-D to finish) class A { def b(c: Int) = c + 1 } lazy val a = new A lazy val b = 2 lazy val c = 3 // Exiting paste mode, now interpreting. defined class A a: A = <lazy> b: Int = <lazy> c: Int = <lazy> scala> Seq( a b c ) res0: Seq[Int] = List(4) scala> Seq({ a b c }) res1: Seq[Int] = List(3)
The first is interpretted to be
a.b(c) whereas the second is
a; b; c.
removing commas in general
Let's implement the macro that would then transform
{ ... } into a
Vector. Here's a generic version:
package example import scala.language.experimental.macros import scala.reflect.macros.blackbox.Context object NoComma { def nocomma[A](a: A): Vector[A] = macro nocommaImpl[A] def nocommaImpl[A: c.WeakTypeTag](c: Context)(a: c.Expr[A]) : c.Expr[Vector[A]] = { import c.universe._ val items: List[Tree] = a.tree match { case Block(stats, x) => stats ::: List(x) case x => List(x) } c.Expr[Vector[A]]( Apply(Select(reify(Vector).tree, TermName("apply")), items)) } }
Here's how you can use it:
scala> import example.NoComma.nocomma import example.NoComma.nocomma scala> :paste // Entering paste mode (ctrl-D to finish) lazy val a = 1 lazy val b = 2 lazy val c = 3 // Exiting paste mode, now interpreting. a: Int = <lazy> b: Int = <lazy> c: Int = <lazy> scala> nocomma { a b c } res0: Vector[Int] = Vector(1, 2, 3)
Using type inferencing, it will automatically pick the last item
c's type, which is
Int. This may or may not be sufficient depending on your use case.
removing commas from build.sbt
One thing I miss about bare build.sbt notation like
name := "something" version := "0.1.0"
is its lack of commas at the end of each line.
We can hardcode
nocomma macro specifically to
Setting[_] as follows:
package sbtnocomma import sbt._ import scala.language.experimental.macros import scala.reflect.macros.blackbox.Context object NoComma { def nocomma(a: Setting[_]): Vector[Setting[_]] = macro nocommaImpl def nocommaImpl(c: Context)(a: c.Expr[Setting[_]]) : c.Expr[Vector[Setting[_]]] = { import c.universe._ val items: List[Tree] = a.tree match { case Block(stats, x) => stats ::: List(x) case x => List(x) } c.Expr[Vector[Setting[_]]]( Apply(Select(reify(Vector).tree, TermName("apply")), items)) } }
Published as sbt-nocomma, we can use this macro as follows:
import Dependencies._ ThisBuild / organization := "com.example" ThisBuild / scalaVersion := "2.12.7" ThisBuild / version := "0.1.0-SNAPSHOT" lazy val root = (project in file(".")) .settings(nocomma { name := "Hello" // comment works libraryDependencies += scalaTest % Test scalacOptions ++= List( "-encoding", "utf8", "-deprecation", "-unchecked", "-Xlint" ) Compile / scalacOptions += "-Xfatal-warnings" Compile / console / scalacOptions --= Seq("-deprecation", "-Xfatal-warnings", "-Xlint") })
Because we hardcoded the type to
Setting[_], it will catch things at loading time if you put
println(...) or something:
/Users/xxx/hello/build.sbt:14: error: type mismatch; found : Unit required: sbt.Setting[?] (which expands to) sbt.Def.Setting[?] println("hello") ^ [error] sbt.compiler.EvalException: Type error in expression [error] sbt.compiler.EvalException: Type error in expression [error] Use 'last' for the full log. Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?
setup
To try this yourself, add the following to
project/plugins.sbt using sbt 1.x:
addSbtPlugin("com.eed3si9n" % "sbt-nocomma" % "0.1.0") | http://eed3si9n.com/removing-commas-with-sbt-nocomma | CC-MAIN-2018-47 | refinedweb | 798 | 60.11 |
I am wondering if my answer to this most recent assignment is correct. Just want see if anyone has any input before I submit. Any comments would be appreciated! Thanks!
Here is the assignment:The card game poker is a game of chance. If you aren't familiar with the game, you can read about it at the following links (among other places). The deck is described at; poker rules are described at Rules of Card Games: Poker. The names of hands, e.g., flush, straight, and so forth, and what they mean, can be found at Poker Hands l Official Poker Hand Ranking from Best to Worst. This assignment ignores betting completely; it is only concerned with the relative frequency of various kinds of poker hands.
The code provided in the three classes below calculates the frequency of obtaining a straight (five cards running consecutively, e.g. 3-4-5-6-7); and a flush (all five cards of the same suit). The calculation for a straight does not exclude the possibility of the straight also being a flush, e.g., 3-4-5-6-7 all of the same suit. As you will surely notice, the code for determining a straight is considerably more complicated than the flush code.
Using the given algorithms as a model, your job for this assignment is to
1) determine the chances (the relative frequency) of obtaining a full house;
2) determine the chances (the relative frequency) of getting a flush that is not a straight; and
3) determine the chances (the relative frequency) of obtaining nothing: five cards with distinct values, not of the same suit, and not a straight;
A word on how the given code works. We run one million experiments. Each time, we a) assemble an unshuffled deck (the cards appear in a specific order, e.g., all the hearts, then all the diamonds, etc.); b) we shuffle the deck, and then look at the first five cards - this is the hand we're dealt; and c) we then examine this hand to see if it's a straight or a flush.
Below we've given you 3 classes, a Card class, a Deck class, which makes use of the Card class, and a driver class, HandTester. Study these classes carefully. You need to copy each class to your machine and of course you need to compile them. Thye provide the framework for developing the three methods that make up the assignment.
Code java:
public class Card { private int value; private char suit; // H D C S, for hearts, diamonds, clubs, spaces public Card(int i, char c) { value = i; suit = c; } public int getValue() { return value; } public char getSuit() { return suit; } public String toString() { String ans = ""; switch (value) { case 1: ans = "A"; break; case 11: ans = "J"; break; case 12: ans = "Q"; break; case 13: ans = "K"; break; default: ans = ("" + value); } return (ans + '-' + suit); } }
Code java:
public class HandTester { public static void main(String[] args) { int runs = 1000000; int straightCt = 0; int flushCt = 0; Deck d = new Deck(); for (int j = 0; j < runs; j++) { d.freshDeck(); d.shuffle(); if (d.straight()) straightCt++; if (d.flush()) flushCt++; } System.out.print("straight frequency "); System.out.println(straightCt / (double) runs); System.out.print("flush frequency "); System.out.println(flushCt / (double) runs); } }
Code java:
public class Deck { // models a deck of cards private Card[] cards; // a random object from java.util private java.util.Random r = new java.util.Random(); public Deck() { cards = freshDeck(); } public Card[] getDeck() { return cards; } public Card[] freshDeck() { // makes a "mint" deck that runs through hearts, diamonds, clubs, spades Card[] deck = new Card[52]; int loc = 0; for (int j = 1; j <= 13; j++) { deck[loc] = new Card(j, 'H'); loc++; } for (int j = 1; j <= 13; j++) { deck[loc] = new Card(j, 'D'); loc++; } for (int j = 1; j <= 13; j++) { deck[loc] = new Card(j, 'C'); loc++; } for (int j = 1; j <= 13; j++) { deck[loc] = new Card(j, 'S'); loc++; } return deck; } public void shuffle() { /* * method randomly shuffles cards - shuffle details in ch 9 first five * cards make up a random poker hand */ int swapPos; Card temp; for (int i = cards.length - 1; i > 0; i--) { swapPos = r.nextInt(i + 1); // pick pos from 0 -> i (i is possible) temp = cards[swapPos]; // swap vals at i, swapPos cards[swapPos] = cards[i]; cards[i] = temp; } } public boolean flush() { /* * applied to array cards, after cards have been shuffled straight-flush * not excluded (so: true even if hand is both straight AND flush). * method works like this: gets suit of 0th card in deck -keySuit- and * checks it against next four */ char keySuit = cards[0].getSuit(); char suit; for (int j = 1; j <= 4; j++) { suit = cards[j].getSuit(); if (keySuit != suit) { return (false); } } return true; // case reached ONLY when all suits match keySuit } private int[] makeTally() { /* * looks at first 5 cards in (shuffled) deck - these make up the random * hand - and tallies their values on a scoreboard. A 2 in cell 7 means * that there are two 7's in the current hand. 
Method returns scoreboard */ int[] scoreboard = new int[14];// Ace count in cell 1,2 ct in cell 2 for (int j = 0; j <= 4; j++) { int k = cards[j].getValue(); scoreboard[k]++; } return scoreboard; } public boolean straight() { int[] tally = makeTally(); // scoreboard of values in the hand. // looking for a consecutive sequence of all 1's starting at // firstPos - where the first 1 falls on left int f = firstPos(tally); // f: first "1" entry in tally if (f == 0) return false; // means: none - so: no straight else // tricky case: single Ace, so check 2-5, 10-K (10-13) if (f == 1) return (legit(2, 5, tally) || legit(10, 13, tally)); if (f > 9) return false; // if first 1 is 10 or higher there's // no Ace - so a straight isn't possible else return (legit(f + 1, f + 4, tally)); // what about the 4 cards after // f } private int firstPos(int[] t) { /* * serves the straight method. Returns first tally position that holds a * singleton - a 1. If none do, 0 is returned which means that there are * no singletons - so a full house */ for (int p = 1; p <= 13; p++) if (t[p] == 1) return p; return 0; // happens when no '1' entry is found } private boolean legit(int first, int last, int[] t) { /* * serves the straight method. Returns true exactly when the tally - * what will be tied to the formal parameter t -- shows a string of 1's * and nothing else from first to last */ for (int i = first; i <= last; i++) if (t[i] != 1) return false; return true; // all 1's from first to last } }
I just have to submit the three methods ( fullHouse(), flushNotStraight(), and nothing() ) , so I do not have to worry about altering the original classes and driver that was given. Here are my three methods:
Code java:
public boolean fullHouse(){ if(firstPos()==0) return true; else return false; } public boolean flushNotStraight(){ if((flush()== true)&&(straight()== false)) return true; else return false; } public boolean nothing(){ if((flush()== false)&&(straight()==false)&&(fullHouse()==false)) return true; else return false; }
Is this correct? | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/26924-poker-hand-program-right-printingthethread.html | CC-MAIN-2014-15 | refinedweb | 1,193 | 65.56 |
Details
- Type:
Bug
- Status: Closed
- Priority:
P1: Critical
- Resolution: Done
- Affects Version/s: Qt Creator 4.0.0-beta1
- Fix Version/s: Qt Creator 4.0.0-beta1
-
-
- Environment:Windows 7 64 bit
MSVC2013 32 bit
cdb 6.12
Description
- Have a simple program:
#include <QCoreApplication> #include <QDebug> void func(){ int s = 23; qDebug() << s; } int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); func(); qDebug() << "Program ran forward."; return a.exec(); }
- Place a breakpoint at the call to func().
- Run this in the debugger.
- When the debugger stopped at the breakpoint, click the "Operate by Instruction" button.
The editor won't change.
Instead, the disassembler view should be shown. With Creator 3.6 or even with my local build of Creator 4.0, this works correctly. With Creator installed from tonight's package, I can reproduce the problem.
Attachments
Issue Links
- resulted from
QTCREATORBUG-15858 [REG 3.6 -> 4.0] Debugger can't "step into"
- Closed | https://bugreports.qt.io/browse/QTCREATORBUG-15859?attachmentSortBy=dateTime | CC-MAIN-2022-40 | refinedweb | 158 | 62.24 |
Created on 2007-04-02 15:27 by nathanlmiles, last changed 2018-03-15 00:32 by terry.reedy. This issue is now closed.
When I try to use r"\w+(?u)" to find words in a unicode Devanagari text bad things happen. Words get chopped into small pieces. I think this is likely because vowel signs such as 093e are not considered to match \w.
I think that if you wish \w to be useful for Indic
scipts \w will need to be exanded to unclude unicode character categories Mc, Mn, Me.
I am using Python 2.4.4 on Windows XP SP2.
I ran the following script to see the characters which I think ought to match \w but don't
import re
import unicodedata
text = ""
for i in range(0x901,0x939): text += unichr(i)
for i in range(0x93c,0x93d): text += unichr(i)
for i in range(0x93e,0x94d): text += unichr(i)
for i in range(0x950,0x954): text += unichr(i)
for i in range(0x958,0x963): text += unichr(i)
parts = re.findall("\W(?u)", text)
for ch in parts:
print "%04x" % ord(ch), unicodedata.category(ch)
The odd character here is 0904. Its categorization seems to imply that you are using the uncode 3.0 database but perhaps later versions of Python are using the current 5.0 database.
Python 2.4 is using Unicode 3.2. Python 2.5 ships with Unicode 4.1.
We're likely to ship Unicode 5.x with Python 2.6 or a later release.
Regarding the char classes: I don't think Mc, Mn and Me should be considered parts of a word. Those are marks which usually separate words.
Vowel 'marks' are condensed vowel characters and are very much part of
words and do not separate words. Python3 properly includes Mn and Mc as
identifier characters.
For instance, the word 'hindi' has 3 consonants 'h', 'n', 'd', 2 vowels
'i' and 'ii' (long i) following 'h' and 'd', and a null vowel (virama)
after 'n'. [The null vowel is needed because no vowel mark indicates the
default vowel short a. So without it, the word would be hinadii.]
The difference between the devanagari vowel characters, used at the
beginning of words, and the vowel marks, used thereafter, is purely
graphical and not phonological. In short, in the sanskrit family,
word = syllable+
syllable = vowel | consonant + vowel mark
From a clp post asking why re does not see hindi as a word:
हिन्दी
ह DEVANAGARI LETTER HA (Lo)
ि DEVANAGARI VOWEL SIGN I (Mc)
न DEVANAGARI LETTER NA (Lo)
् DEVANAGARI SIGN VIRAMA (Mn)
द DEVANAGARI LETTER DA (Lo)
ी DEVANAGARI VOWEL SIGN II (Mc)
.isapha and possibly other unicode methods need fixing also
>>> 'हिन्दी'.isalpha()#2.x and 3.0
False
Unicode TR#18 defines \w as a shorthand for
\p{alpha}
\p{gc=Mark}
\p{digit}
\p{gc=Connector_Punctuation}
which would include all marks. We should recursively check whether we
follow the recommendation (e.g. \p{alpha} refers to all character having
the Alphabetic derived core property, which is Lu+Ll+Lt+Lm+Lo+Nl +
Other_Alphabetic, where Other_Alphabetic is a selected list of
additional character - all from Mn/Mc)
In issue #2636 I'm using the following:
Alpha is Ll, Lo, Lt, Lu.
Digit is Nd.
Word is Ll, Lo, Lt, Lu, Mc, Me, Mn, Nd, Nl, No, Pc.
These are what are specified at
Am I correct in saying that this must stay open as it targets the re module but as given in msg81221 is fixed in the new regex module?
I had to check what re does in Python 3.3:
>>> print(len(re.match(r'\w+', 'हिन्दी').group()))
1
Regex does this:
>>> print(len(regex.match(r'\w+', 'हिन्दी').group()))
6
Matthew, I think that is considered a single word in Sanscrit or Thai so Python 3.x is correct. In this case you've written the Sanscrit word for Hindi.
I'm not sure what you're saying.
The re module in Python 3.3 matches only the first codepoint, treating the second codepoint as not part of a word, whereas the regex module matches all 6 codepoints, treating them all as part of a single word.
Maybe you could show us the byte-for-byte hex of the string you're testing so we can examine if it's really a code point intending word boundary or just a code point for the sake of beginning a new character.
You could've obtained it from msg76556 or msg190100:
>>> print(ascii('हिन्दी'))
'\u0939\u093f\u0928\u094d\u0926\u0940'
>>> import re, regex
>>> print(ascii(re.match(r"\w+", '\u0939\u093f\u0928\u094d\u0926\u0940').group()))
'\u0939'
>>> print(ascii(regex.match(r"\w+", '\u0939\u093f\u0928\u094d\u0926\u0940').group()))
'\u0939\u093f\u0928\u094d\u0926\u0940'
Thanks Matthew and sorry to put you through more work; I just wanted to verify exactly which unicode (UTF-16 I take it) were being used to verify if the UNICODE standard expected them to be treated as unique words or single letters within a word. Sanskrit is an alphabet, not an ideograph so each symbol is considered a letter. So I believe your implementation is correct and yes, you are right, re is at fault. There are just accenting characters and letters in that sequence so they should be interpreted as a single word of 6 letters, as you determine, and not one of the first letter. Mind you, I misinterpreted msg190100 in that I thought you were using findall in which case the answer should be 1, but as far as length of extraction, yes, 6, I totally agree. Sorry for the misunderstanding. contains the code chart for Hindi.
UTF-16 has nothing to do with it, that's just an encoding (a pair of them actually, UTF-16LE and UTF-16BE).
And I don't know why you thought I was using findall in msg190100 when the examples were using match! :-)
Let see Modules/_sre.c:
#define SRE_UNI_IS_ALNUM(ch) Py_UNICODE_ISALNUM(ch)
#define SRE_UNI_IS_WORD(ch) (SRE_UNI_IS_ALNUM(ch) || (ch) == '_')
>>> [ch.isalpha() for ch in '\u0939\u093f\u0928\u094d\u0926\u0940']
[True, False, True, False, True, False]
>>> import unicodedata
>>> [unicodedata.category(ch) for ch in '\u0939\u093f\u0928\u094d\u0926\u0940']
['Lo', 'Mc', 'Lo', 'Mn', 'Lo', 'Mc']
So the matching ends at U+093f because its category is a "spacing combining" (Mc), which is part of the Mark category, where the re module expects an alphanumeric character.
msg76557:
"""
Unicode TR#18 defines \w as a shorthand for
\p{alpha}
\p{gc=Mark}
\p{digit}
\p{gc=Connector_Punctuation}
"""
So if we want to respect this standard, the re module needs to be modified to accept other Unicode categories.
Whatever I may have said before, I favor supporting the Unicode standard for \w, which is related to the standard for identifiers.
This is one of 2 issues about \w being defined too narrowly. I am somewhat arbitrarily closing this as a duplicate of #12731 (fewer digits ;-).
There are 3 issues about tokenize.tokenize failing on valid identifiers, defined as \w sequences whose first char is an identifier itself (and therefore a start char). In msg313814 of #32987, Serhiy indicates which start and continue identifier characters are matched by \W for re and regex. I am leaving #24194 open as the tokenizer name issue. | https://bugs.python.org/issue1693050 | CC-MAIN-2019-30 | refinedweb | 1,230 | 64 |
Arduino on Algorand Blockchain
Overview
The Internet of Things is growing and getting more and more attention over time.
With a community of millions of users, Arduino is the most popular development board on the market.
In this solution, you will learn how to connect any Arduino to the Algorand Blockchain.
We will see how to build a simple IoT device that sends temperature measures to the Algorand Blockchain.
Requirements
Any Arduino
Any kind of Arduino will do the job. However the Arduino UNO is recommended to follow along.
A HC-05 Bluetooth Module
The HC-05 Bluetooth Module is a Serial Bluetooth Module for Arduino and other MCUs.
A Computer
It will be used to develop our IoT ecosystem and will act as the IoT Node.
An Algorand TestNet Wallet
It will be our entry point to perform the transactions on the Algorand Blockchain.
You can create an account online using My Algo or locally by following the Create an Account on TestNet using JavaScript tutorial.
A PureStake API key
This will allow us to connect and interact with the Algorand Blockchain.
To get a PureStake API key, you can follow steps 1 and 2 of the Getting Started with the PureStake API Service tutorial.
How it works
- The temperature value is measured by the sensor. Here we will not use any temperature sensor. For simplicity, the temperature value will be hard-coded within the Arduino firmware.
- The measure is sent via bluetooth to the IoT Node.
- The IoT Node receives the data and send it to the Algorand Blockchain.
Note
- Steps 1 - 2 are performed by the IoT Device (Arduino).
- Steps 3 by the IoT Node (Computer).
IoT Device
The IoT Device firmware is pretty straightforward.
To start off, get a copy of the firmware with this command:
git clone
Then open up the
iot-device.ino file within the Arduino IDE.
#include <SoftwareSerial.h> SoftwareSerial BTserial(10, 11); void setup() { BTserial.begin(9600); char data[] = "70"; for (int i = 0; i < sizeof(data) - 1; i++) { BTserial.print(data[i] & 0xFF, DEC); BTserial.print(" "); //separator } BTserial.println(""); } void loop() { }
Let’s have a look at the code now:
- First we establish the Bluetooth serial communication.
- We define the plain data that we want to send (here we want to send a 70°C value)
- Finally we send the data to the IoT Node.
Note
To add security, we could encrypt the data before sending it.
IoT Node
The IoT Node is responsible for collecting the data from the IoT Device.
To start off, get a copy of the algorand-iot-node with this command:
git clone
To set everything up follow these commands:
cd algorand-iot-node npm i
Now that we are all set, we can configure the IoT Node. To do so, open up the
.env file.
Here we have to set up:
- The Bluetooth serial port and serial speed for connecting the IoT Node to the IoT Device.
You can find out the port on Mac or Linux with the following command:
ls /dev/cu.*
Here the port corresponding to the HC-05 Bluetooth Module is:
/dev/cu.DSDTECHHC-05-DevB
2. Your PureStake testnet node and API key.
3. Your Algorand mnemonic phrase is used to recover your account to sign the transactions used in the example.
You should have a configuration close to that:
SERIAL_DEVICE=/dev/cu.DSDTECHHC-05-DevB SERIAL_SPEED=9600 NODE="" APIKEY="af4Dyq6Pxb8c7I0ddWtJ********************" MN="huge damp drum waste eager sail symptom census source ****************************************************"
Now let’s see what our code does.
Move to the
app.js file.
const SerialPort = require('serialport'); const Readline = require('@serialport/parser-readline'); const algosdk = require('algosdk'); require('dotenv').config(); const mcu = process.env.SERIAL_DEVICE; const serialPort = new SerialPort(mcu, { baudRate: JSON.parse(process.env.SERIAL_SPEED) }); const parser = new Readline(); serialPort.pipe(parser); parser.on('data', temperature => { temperature = temperature.split(' ').map(Number); temperature = String.fromCharCode(temperature[0], temperature[1]); sendToAlgorandBlockchain(temperature); }); sendToAlgorandBlockchain = (value) => { const baseServer = process.env.NODE; const port = ""; const token = { 'X-API-Key': process.env.APIKEY } const algodclient = new algosdk.Algod(token, baseServer, port); var mnemonic = process.env.MN; var account = algosdk.mnemonicToSecretKey(mnemonic); let note = algosdk.encodeObj(value); (async () => { let params = await algodclient.getTransactionParams(); let endRound = params.lastRound + parseInt(1000); let txn = { "from": account.addr, "to": account.addr, "fee": 10, "amount": 0, "firstRound": params.lastRound, "lastRound": endRound, "genesisID": params.genesisID, "genesisHash": params.genesishashb64, "note": note, }; const txHeaders = { 'Content-Type': 'application/x-binary' } let signedTxn = algosdk.signTransaction(txn, account.sk); let tx = (await algodclient.sendRawTransaction(signedTxn.blob, txHeaders)); console.log("Transaction : " + tx.txId); })().catch(e => { console.log(e); }); }
At top, we initialize the environment:
The bluetooth serial communication gets set up. Within the parser object, we listen to any bluetooth data event. The method gets triggered whenever new data is available. Once data is available, it is then converted from Ascii Codes to Characters. Finally, the plain temperature text measure is sent to the Algorand Blockchain via the sendToAlgorandBlockchain function.
The beautiful thing with the Algorand Blockchain is that you do not have to write any Smart Contract to store data to the Blockchain.
To do so, we will use the transaction note field. It has a capacity of up to 1kb in length.
All we need to do is to create a new transaction. Here, we set the receiver address to the sender address inside the txn object to keep it simple. As the purpose of this application is not to send money, we can leave the transaction amount to 0.
Alright, we are all good now! We can move on to the Testing section.
Testing
IoT Device side
First make sure that the Bluetooth module is connected as follow:
Flash the firmware to the Arduino.
IoT Node side
Setting up the bluetooth:
- Enable the bluetooth communication on your computer.
- Pair your computer to the bluetooth HC-05 module (the pairing code should be
0000or
1234depending on the module).
Now head over to the IoT Node code:
- Run:
node app.js
- Click on the reset button of the Arduino to initiate a transaction.
- Copy the outputted transaction id:
Now we are going to visualize the transaction into a block explorer. To do so, head over to and paste the transaction id into the search field and select TESTNET on the right side before searching. Scroll down and click on the Message Pack tab.
Here we see that the transaction note with a value of
70 has been stored to the Algorand Blockchain.
You can also fetch the transaction note and display its content in a more elegant manner:
Here I will present how to visualize the transaction note into a web app.
To begin, get a copy of this repository:
git clone
- Open up the
index.jsfile.
- Set your PureStake API key and your account address (generated from your mnemonic).
- Open up the
index.htmlfile and paste the transaction id to the input field.
- Click on the Start button.
Video
This video shows how to set up the IoT Node.
Info
Please ignore the AES Encryption Algorithm part on the video. | https://developer.algorand.org/solutions/arduino-algorand-blockchain/ | CC-MAIN-2021-31 | refinedweb | 1,177 | 59.6 |
How to refresh Dropbox access token with the (older) Dropbox library included with Pythonista?
Hi all ---
Dropbox has started enforcing short-lived OAuth access tokens so it seems that any Pythonista scripts that want to use it will need to know how to either create or refresh access tokens if they've expired. It appears that current versions of the Dropbox Python library include convenience functions for doing this (e.g. get_and_refresh_access_token()) but those are not available in the version of the library that ships with Pythonista.
Has anyone been able to figure out how to refresh these tokens some other way? Or some other workaround to enable updated tokens to be used in Pythonista scripts?
Thanks in advance!
For the record, I've also been trying to get an updated version of the official Dropbox library installed via Stash, but no luck. I'm running into the error described here:
I have created a modified version of dropboxlogin.py that generates short-lived access tokens. So far, it seems to work with dropbox version 6.4.0 that is provided with Pythonista.
First run will authorize your application and store a long-lived refresh token in your device keychain.
# simple test app
import dropboxlogin

dropbox_client = dropboxlogin.get_client()
print('Getting account info...')
account_info = dropbox_client.users_get_current_account()
print('linked account:', account_info)
Useful links:
Migrating App access tokens
Dropbox OAuth Guide
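The modified dropboxlogin.py itself isn't pasted in the thread, but based on those docs the refresh exchange such a module has to perform can be sketched roughly like this. This is an illustrative sketch, not the actual forum module: `APP_KEY`, the function names, and where the refresh token comes from are placeholder assumptions, and depending on how your app was authorized Dropbox may also require a `client_secret` field.

```python
# Sketch of the token-refresh flow for use with the old dropbox 6.4.0 SDK.
# APP_KEY and function names are illustrative assumptions.
TOKEN_URL = 'https://api.dropbox.com/oauth2/token'
APP_KEY = 'your_app_key'  # assumption: set this to your Dropbox app key

def build_refresh_payload(refresh_token, app_key=APP_KEY):
    # Form fields for exchanging a long-lived refresh token
    # for a new short-lived access token.
    return {
        'grant_type': 'refresh_token',
        'refresh_token': refresh_token,
        'client_id': app_key,
    }

def refresh_access_token(refresh_token):
    # One HTTPS POST to Dropbox's token endpoint; returns the
    # short-lived bearer token the old SDK can use directly.
    import requests
    r = requests.post(TOKEN_URL, data=build_refresh_payload(refresh_token))
    r.raise_for_status()
    return r.json()['access_token']

def get_client(refresh_token):
    # dropbox 6.4.0 only understands plain access tokens, so
    # refresh first, then construct the client with the fresh token.
    import dropbox
    return dropbox.Dropbox(refresh_access_token(refresh_token))
```

On first run you would still have to do the one-time authorization code exchange to obtain the long-lived refresh token; after that, `get_client()` can mint a fresh short-lived token on every launch.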
I was able to update dropbox to the latest version with this workaround script, which downloads the dropbox wheel and unzips the contents into site-packages. I had to use Stash pip to install requests, six and stone. I'm still testing to see whether it is better to stick with the original Pythonista dropbox version or to use the upgraded versions of dropbox, requests and six.
# workaround to install latest dropbox in pythonista # install requirements using stash pip # pip install requests # pip install six # pip install stone import sys, os import requests from zipfile import ZipFile sitepath = os.path.expanduser('~/Documents/site-packages-3') os.chdir(sitepath) from zipfile import ZipFile url = '' url += '/2ef3c03ac039f9e5f29c0f557dc3f276241ef3c1627903a5f6a8c289cf24/' url += 'dropbox-11.7.0-py3-none-any.whl' dest_path = './dropbox.zip' r = requests.get(url) with open(dest_path, 'wb') as f: f.write(r.content) print('download complete...') with ZipFile(dest_path, 'r') as zip: # extracting all the files print('Extracting all the files now...') zip.extractall() print('Done!') os.remove(dest_path)
Thanks @bosco. I was following a similar path with one of the
stashdevelopers at. I've now installed
dropboxvia the wheel package, and dependencies via
pip. However I've not been able to get to work reliably, at least when called from within a Pythonista script triggered from a share sheet: it crashes immediately on my iPhone (never gets to the stack trace) and on my iPad the code runs but then hangs and hard-crashes at the end (reboots the iPad).
Curious to know whether you've had better luck? I've done all my work by hand, rather than your more elegant solution to code it automatically. If yours worked I might try a fresh install of Pythonista and your approach.
UPDATE: just confirmed that while I can get your approach to work from Pythonista itself, I'm unable to get it to work from a share sheet. It seems to crash / exit the share sheet during the authentication process.
Are you able to successfully call your little test script (above) from a share sheet?
@felciano There appear to be 2 issues.
- Dropbox version 11.7.0 crashes when using the appex module.
- The keychain module does not work with the appex module.
You will need to use Dropbox 6.4.0 and you will need to store your refresh_token somewhere else besides keychain.
You can modify dropboxlogin like this:
#refresh_token = keychain.get_password('dropbox', 'refresh_token') refresh_token = "your_refresh_token"
You can then remove the dropbox folders from site-packages-3 or try this script which removes site-packages-3 from sys.path when running from a share sheet and temporarily reverts back to dropbox 6.4.0.
import sys import appex import console print('file', appex.get_file_path()) def test(): print(sys.path[1]) if appex.is_running_extension(): # if share sheet sys.path.remove(sys.path[1]) # remove site-packages-3 from sys.path import dropbox print('dropbox version', str(dropbox.__version__)) import dropboxlogin dropbox_client = dropboxlogin.get_client() print('Getting account info...') account_info = dropbox_client.users_get_current_account() print('linked account:', account_info) test()
If you are just trying to upload a file from a share sheet, this script does not use the dropbox module:
import sys, os, json, shutil import appex import console import keychain import requests from requests.auth import HTTPBasicAuth app_key = "your_app_key" app_secret = "your_app_secret" refresh_token = "your_refresh_token" def dropbox_token(refresh_token): data = {'refresh_token': refresh_token, 'grant_type': 'refresh_token'} try: r = requests.post('', data=data, auth = HTTPBasicAuth(app_key, app_secret)) result = r.json() except: print(str(sys.exc_info())) return { 'refresh_token': None, 'access_token': None, 'error': {'.tag': str(sys.exc_info())} } #print('dbx:', result) return result def upload_file(source_filename, path): with open(source_filename, 'rb') as f: data = f.read() parms = {} parms['path'] = path parms['mode'] = 'overwrite' print (json.dumps(parms)) headers = {'Authorization': 'Bearer %s' % (access_token,),'Dropbox-API-Arg': json.dumps(parms),'Content-Type': 'application/octet-stream' } url = '' try: r = requests.post(url, stream=True, headers=headers, data=data) except: print("Exception requests: %s" % str(sys.exc_info())) result = r.json() return result local_path = os.path.expanduser('~/Documents/tmp') access_token = dropbox_token(refresh_token)['access_token'] files = appex.get_file_paths() if files != None and files != []: for i in range(len(files)): print('Input path: %s' % files[i]) basename = os.path.basename(files[i]) filename=os.path.join(local_path, basename) shutil.copy(files[i], filename) upload_file(basename, '/remote/dropbox/path/' + basename) | https://forum.omz-software.com/topic/7014/how-to-refresh-dropbox-access-token-with-the-older-dropbox-library-included-with-pythonista | CC-MAIN-2021-43 | refinedweb | 938 | 51.34 |
git-describe − Describe a commit using the most recent tag reachable from it
git describe [−−all] [−−tags] [−−contains] [−−abbrev=<n>] [<commit−).
<commit−ish>...
Commit−ish object names to describe. Defaults to HEAD if omitted.
−−dirty[=<mark>]
Describe the working tree. It means describe HEAD and appends <mark> (−dirty by default) if the working tree is dirty.
−−all
Instead of using only the annotated tags, use any ref found in refs/ namespace. This option enables matching any known branch, remote−tracking branch, or lightweight tag.
Instead of using only the annotated tags, use any tag found in refs/tags namespace.>
Instead of considering only the 10 most recent tags as candidates to describe the input commit−ish consider up to <n> candidates. Increasing <n> above 10 will take slightly longer but may produce a more accurate result. An <n> of 0 will cause only exact matches to be output. glob(7) pattern, excluding the "refs/tags/" prefix. This can be used to avoid leaking private tags from the repository.
−−always
Show uniquely abbreviated commit object as fallback.
−−first−parent
Follow only the first parent commit upon seeing a merge commit. This is useful when you wish to not match tags on branches merged in the history of the target commit.975b" suffix alone may not be sufficient to disambiguate these commits.
For each commit−−ish’s SHA−1. If −−first−parent was specified then the walk will only consider the first parent of each commit.
If multiple tags were found during the walk then the tag which has the fewest commits different from the input commit−ish will be selected and output. Here fewest commits different is defined as the number of commits which would be shown by git log tag..input will be the smallest number of commits possible.
Part of the git(1) suite | http://man.sourcentral.org/debian-stretch/1+git-describe | CC-MAIN-2018-43 | refinedweb | 305 | 67.35 |
Glib::GenPod - POD generation utilities for Glib-based modules
use Glib::GenPod; # use the defaults: xsdoc2pod ($xsdocparse_output_file, $destination_dir); # or take matters into your own hands require $xsdocparse_output_file; foreach my $package (sort keys %$data) { print "=head1 NAME\n\n$package\n\n"; print "=head1 METHODS\n\n" . podify_methods ($package) . "\n\n"; }
This module includes several utilities for creating pod for xs-based Perl modules which build on the Glib module's foundations. The most important bits are the logic to convert the data structures created by xsdocparse.pl to describe xsubs and pods into method docs, with call signatures and argument descriptions, and converting C type names into Perl type names. The rest of the module is mostly boiler-plate code to format and pretty-print information that may be queried from the Glib type system.
To make life easy for module maintainers, we also include a do-it-all function, xsdoc2pod(), which does pretty much everything for you. All of the pieces it uses are publically usable, so you can do whatever you like if you don't like the default output.
All of the information used as input to the methods included here comes from the XS files of your project, and is extracted by Glib::ParseXSDoc's
xsdocparse. This function creates an file containing Perl code that may be eval'd or require'd to recreate the parsed data structures, which are a list of pods from the verbatim C portion of the XS file (the xs api docs), and a hash of the remaining data, keyed by package name, and including the pods and xsubs read from the rest of each XS file following the first MODULE line.
Several custom POD directives are recognized in the XSubs section. Note that each one is sought as a paragraph starter, and must follow a
=cut directive.
All xsubs and pod from here until the next object directive or MODULE line will be placed under the key 'Package::Name' in xsdocparse's data structure. Everything from this line to the next
=cut is included as a description POD.
Generate POD in Package::Name but for the package Other::Package::Name. This is useful if you want POD to appear in a different namespace but still want the automatically generated hierarchy, signal and property listing, etc. from the original namespace. For example:
=for object Gnome2::PanelApplet::main (Gnome2::PanelApplet) =cut
This will create Gnome2/PanelApplet/main.pod containing the automatically generated documentation for Gnome2::PanelApplet (hierarchy, signals, etc.) plus the method listing from the current XS file.
This causes xsdoc2pod to call
podify_values on Package::Name when writing the pod for the current package (as set by an object directive or MODULE line). Any text in this paragraph, to the next
=cut, is included in that section.
Used to add a deprecation warning, indicating Package::Name as an alternative way to achieve the same functionality. There may be any number these in each package..
Paragraphs of this type document xsubs, and are associated with the xsubs by xsdocparse.pl. If the full symbol name is not included, the paragraph must be attached to the xsub declaration (no blank lines between
=cut and the xsub).
Within the apidoc PODs, we recognize a few special directives (the "for\s+" is optional on these):
Override the generated call signature with the ... text. If you include multiple signature directives, they will all be used. This is handy when you want to change the return type or list different ways to invoke an overloaded method, like this:
=for apidoc =signature bool Class->foo =signature ($thing, @other) = $object->foo ($it, $something) Text in here is included in the generated documentation. You can actually include signature and arg directives at any point in this pod -- they are stripped after. In fact, any pod is valid in here, until the =cut. =cut void foo (...) PPCODE: /* crazy code follows */:
Do not document this xsub. This is handy in certain situations, e.g., for private functions. DESTROY always has this turned on, for example.
This function or method can generate a Glib::Error exception.
Generate a function-style signature for this xsub. The default is to generate method-style signatures.
This function or method is deprecated and should not be used in newly written code.
(These are actually handled by Glib::ParseXSDoc, but we list them here because, well, they're an important part of how you document the XS files.)
Given a $datafile containing the output of xsdocparse.pl, create in $outdir a pod file for each package, containing everything we can think of for that module. Output is controlled by the
=for object directives and such in the source code.
If you don't want each package to create a separate pod file, then use this function's code as a starting point for your own pretty-printer.
Parse the given @filenames for entries to add to the
%basic_types used for C type name to Perl package name mappings of types that are not registered with the Glib type system. The file format is dead simple: blank lines are ignored; /#.*$/ is stripped from each line as comments; the first token on each line is considered to be a C type name, and the remaining tokens are the description of that type. For example, a valid file may look like this:
# a couple of special types FooBar Foo::Bar Frob localized frobnicator
C type decorations such as "const" and "*" are implied (do not include them), and the _ornull variant is handled for you.
Pretty-print the object properties owned by the Glib::Object derivative $packagename and return the text as a string. Returns undef if there are no properties or $package is not a Glib::Object..
List and pretty-print the values of the GEnum or GFlags type $packagename, and return the text as a string. Returns undef if $packagename isn't an enum or flags type.
Query, list, and pretty-print the signals associated with $packagename. Returns the text as a string, or undef if there are no signals or $packagename is not a Glib::Object derivative.
Creates a deprecation warning for $packagename, suggesting using the items inside @deprecated_by instead. sepcific postion information will be returned. pods is a reference to an array of pod hashes..
Pretty-print the list of GInterfaces that $packagename implements. Returns the text as a string, or undef if the type implements no interfaces.
Call
xsub_to_pod on all the xsubs under the key $packagename in the data extracted by xsdocparse.pl.
Returns the new text as a string, or undef if there are no xsubs in $packagename.
Creates a list of links to be placed in the SEE ALSO section of the page. Returns undef if nothing is in the input list.
Returns a string that will/should be placed on each page. You can control the text of this string by calling the class method set_copyright.
If no text has been set, we will attempt to create one for you, using what has been passed to set_year, set_authors, and set_main_mod. The year defaults to the current year, the authors default to 'The Gtk2-Perl Team', and the main mod is empty by default. You want the main mod to be set to the main module of your extension for the SEE ALSO section, and on the assumption that a decent license notice can be found in that module's doc, we point the reader there.
So, in general, you will want to specify at least one of these, so that you don't credit your work to us under the LGPL.
To set them do something similar to the following in the first part of your postamble section in Makefile.PL. All occurences of <br> in the copyright are replaced with newlines, to make it easier to put in a multi-line string.
POD_SET=Glib::GenPod::set_copyright(qq{Copyright 1999 team-foobar<br>LGPL});
Glib::MakeHelper::postamble_docs_full() does this sort of thing for you..
Convert an xsub hash into a string of pod describing it. Includes the call signature, argument listing, and description, honoring special switches in the description pod (arg and signature overrides).
Given an xsub hash, return a string with the call signature for that xsub.
Prepend a $ to anything that's not the literal ellipsis string '...'.
Mangle default parameter values from C to Perl values. Mostly, this does NULL => undef.
C type to Perl type conversion for argument types.
C type to Perl type conversion suitable for return types.
muppet bashed out the xsub signature generation in a few hours on a wednesday night when band practice was cancelled at the last minute; he and ross mcfarland hacked this module together via irc and email over the next few days.
This library is free software; you can redistribute it and/or modify it under the terms of the Lesser General Public License (LGPL). For more information, see | http://search.cpan.org/~xaoc/Glib-1.303/lib/Glib/GenPod.pm | CC-MAIN-2014-35 | refinedweb | 1,487 | 62.68 |
QEventDispatcherWin32 does not handle WM_INPUT messages properly - BUG?
Hi,
I need to integrate "3D Connexion Space Navigator ": into my Qt Win32 application. 3D Connexion recommends that developers implement the 3D mouse integration with using Raw Input API. To use the Raw input API, you first need to register the input device and assign a window handle of the window that should receive WM_INPUT messages. Because I would like to treat 3D mouse events in the same way like QMouseEvent I need to assign the internal window of the QEventDispatcherWin32. This internal message window receives all Win32 messages and forwards the messages to the event dispatcher object where an application can install an event filter to handle the messages. Because Qt does not provide any way to get the handle of the internal core window, I wrote my own function to get this handle:
@HWND QSpaceNavigator::get)
{
// check if window application instance is our Qt application;
}
@
The function getNativeAppWindow() works fine. I checked this by printing the name of the window and shows QEventDispatcherWin32_Internal_Widget.... Now I used this window handle to register it with my raw input device:
@bool QSpaceNavigator::initializeRawInput()
{
static RAWINPUTDEVICE SpaceNavigatorDevice =
{ 0x01, 0x08, RIDEV_INPUTSINK, 0x00 };
const uint32_t NumberOfDevices = 1;
const uint32_t DeviceStructSize = sizeof(SpaceNavigatorDevice);
SpaceNavigatorDevice.hwndTarget = getNativeAppWindow();
std::cout << "EventDispatcher HWND: " << SpaceNavigatorDevice.hwndTarget;
if(RegisterRawInputDevices(&SpaceNavigatorDevice, NumberOfDevices,
DeviceStructSize))
{
std::cout << "Raw input device registered successfully!" << std::endl;
return true;
}
return false;
}@
Now I used the function:
@QAbstractEventDispatcher::setEventFilter ( EventFilter filter )@
to install my own event filter that receives and handles WM_INPUT messages:
@bool spaceNavigatorEventFilter(void* msg, long* result)
{
std::cout << "Message received " << static_cast<PMSG>(message)->message << std::endl;
std::cout << "Window: " << static_cast<PMSG>(message)->hwnd << std::endl;
return QSpaceNavigator::instance()->processSpaceNavigatorEvent(static_cast<PMSG>(msg));
}@
So normally this should work properly and spaceNavigatorEventFilter() function should receive all events and then handles WM_INPUT events of space navigator device.
I created a small test application based on QMainWindow to check if this filter works. In the main window I placed two QSpinBoxes. When I start the application and move my mouse over the main window, I can see a lot of mouse messages on my console printed by std::cout lines in my event filter. That shows that the event filter works. When I move my 3D space navigator nothing happens - no messages arrive in my event filter. Now, if I move my normal mouse again, a burst of messages from 3D space navigator arrive in the message filter - that means only if I move my mouse, the WM_INPUT messages that are sent from the 3D space navigator previously arrive in my event filter.
If I click into one of the two QSpinBoxes, that means one spin box gets the input focus, then I can move the space navigator and I can see that WM_INPUT messages from the 3D device arrive in my event filter. If I print the hwnd field of the WM_INPUT message, I can see that the window handle is the window handle of the QEventDispatcherWin32 internal window. That shows, that the raw input device is properly registered with the internal event dispatcher window.
My question that may only get answered from a troll that knows the Qt event dispatching in detail: Why do normal mouse messages arrive in my event filter all the time I move my mouse and why do WM_INPUT messages of the windows raw API arrive only in my event filter if a widget has the input focus in the active window? This is quite strange and makes it impossible to use RAW input API with Qt because it would force the user to first click into a widget that can receive input focus before he can use the 3D space navigator.
Sorry for the long post but I spent two days now trying to integrate 3D space navigator in my Qt application and did not find a working solution yet. Is there any best practice how to integrate Win32 raw input API devices into Qt?
As far as I know there is an 'spnav' library that handles this (my project searches for spnav.h) and the open source app suite 'koffice' uses this to handle the navigator so its known to work.
See for example code.
Thank you for this hint. Unfortunately the "": driver is no option for us for two reasons:
It is not in production quality.
It requires "": which is also in beta state and requires .NET Framework 2.0. We would like to avoid any dependency to .NET Framework.
I will try to work around the issue by creating a second message pump in a different native Win32 thread that has its own message window and its own windows procedure. But this still does not answer the question why QEventDispatcherWin32 does not handle WM_INPUT messages properly.
I have been having the same problem trying to get the 3D Connexion mouse working and I think I have discovered where the problem is in QEventDispatcherWin32.
I am using Qt 4.6.2 with VisualStudio 2005 on Windows XP. The problem I was having was that the WM_INPUT messages were not being received by window/event filter the until some other event such as a mouse or timer event occurred. I could move the 3D mouse and nothing would happen then move the normal mouse and all the 3D mouse events would arrive.
I traced the problem to
@MsgWaitForMultipleObjectsEX(nCount, pHandles, INFINITE, QS_ALLINPUT, MWMO_ALERTABLE);@
In the processEvents function of QEventDispatcherWin32. This was blocking and not waking on WM_INPUT events.
The windows documentation for this notes that QS_ALLINPUT, which includes QS_INPUT, "does not include QS_RAWINPUT in Windows2000." Therefore I think the problem is that the binary Qt installer is built to work on all windows platforms, so during the compile _WIN32_WINNT is defined to something less that WinXP (0x501) so that raw-input messages are not handled.
I re-compiled Qt passing -D _WIN32_WINNT=0x501 to configure and now the WM_INPUT messages are being passed to the event filter when the 3D mouse is moved.
So the options seem to be
- Recompile Qt with _WIN32_WINNT=0x501 so that XP (and later) functions are enabled
or
- Use a timer to unblock the event loop at regular intervals so that the messages get processed. I am not sure yet what effect this will have on the smoothness of the mouse operation.
I also found that just using the winId() function of the main window for the HWND to use for registering the device also worked fine.
Hi David,
hey thank you very much for this solution. We use a self-compiled Qt installation here so it should be no problem to recompile it with _WIN32_WINNT=0×501.
We worked around the problem by creating a native Win32 DLL (without Qt) that creates an internal win32 message window in its own thread that receives the messages from Space Navigator device. Then we wrote a Qt wrapper for this DLL that translates the messages into Qt events an created a QTDx device.
But your solution makes it much easier to use Space Navigator from Qt - thank you :O)
Uwe
Hi,
can anybody of you post an example code which shows how to implement the spacenavigator in qt?
Thank you very much
I wrote an slightly updated version of my comments above and added some sample code of a Qt class to get the data from the mouse on Windows at:
"":
With Qt 4.7.1 it seems to work without having to re-complile the Qt.
@David Dibben: Thank you very much for the sample code. I tried to compile it with the qt eclipse integration but it stops during compiling the file "Mouse3DInput.cpp" in line around 345 "pri = NEXTRAWINPUTBLOCK(pri);" complaining that "NEXTRAWINPUTBLOCK was not declared in this scope".
I added the line in 3DMouse.pro
@LIBS += -lUser32 -LC:\programs\Microsoft SDKs\Windows\v6.0A\Lib@
added user32.lib and the paths to the lib und include directories of Microsofts SDKs.
Is there anything missing?
[EDIT: code formatting, Volker]
NEXTRAWINPUTBLOCK is a windows macro, defined in winuser.h which should be included with windows.h
See -
Which compiler are you using? I am guessing that it is mingw. I don't have experience using g++ on Windows - I compiled the sample using MS VC++ 2008 Express.
The windows headers should be available with mingw as far as I know, but the windows headers are full of conditional statements so it is possible that something has to be defined for the macros to show up correctly.
Yes I use mingw and did the following changes in your sourcecode:
I added this in Mouse3DInput.cpp
@#define RAWINPUT_ALIGN(x) (((x) + sizeof(DWORD) - 1) & ~(sizeof(DWORD) - 1))
#define NEXTRAWINPUTBLOCK(ptr) ((PRAWINPUT)RAWINPUT_ALIGN((ULONG_PTR)((PBYTE)(ptr) + (ptr)->header.dwSize)))@
and this
@#include "C:\Qt\2010.05\mingw\include\windows.h"
#include "C:\Qt\2010.05\mingw\include\winuser.h"@
Then I found a difference between Microsoft's WinUser.h and mingw's winuser.h (see following):
@
typedef struct tagRAWHID {
DWORD dwSizeHid;
DWORD dwCount;
BYTE bRawData;
} RAWHID,*PRAWHID,*LPRAWHID;
@
I changed it to @bRawData[1]@ like in Microsoft's WinUser.h. I don't know why it works with the range of one, because in the sourcecode you'll find the command
@pRawInput->data.hid.bRawData[1] == 0x01@
so it must be at least 2, right?
Additionally I added
@
CONFIG += console
DEFINES += _WIN32_WINNT="0x0501"
DEFINES += _WIN32_WINDOWS="0x0501"
INCLUDEPATH += "C:\Qt\2010.05\mingw\include" \
LIBS += -luser32 -L"C:\Qt\2010.05\mingw\lib"
@
to 3DMouse.pro, but I am not shure if I need all of that.
After that I was able to compile it. At the moment I am not sure if the translation works correctly, because the data appears too fast in the textboxes, so I will change the program that it saves the data in a textfile.
Hi,
I recorded some data which my program (original written by David Dibben) received from the spacenavigator. I made translational movements at first - followed by rotations. There is only noise on the x-axis and no reaction on y- and z-axis. But you can see the translational movements in the last three plots (plots of the rotations). Does anybody has any idea what's going wrong?
!!
The horizontal axis shows the number of received events - it's not the time!
Hi,
I switched to Visual Studio and your code worked out of the box.
Thank you for the sample code, David Dibben! | https://forum.qt.io/topic/495/qeventdispatcherwin32-does-not-handle-wm_input-messages-properly-bug/12 | CC-MAIN-2019-09 | refinedweb | 1,741 | 61.97 |
Subject: Re: [boost] gil::io "non-review" (was: [gil] Can not open test.jpg)
From: Phil Endecott (spam_from_boost_dev_at_[hidden])
Date: 2010-03-22 19:35:19
Hi Christian,
I've just had to remind myself what this was all about. I may have
forgotten or mis-remembered something.
Christian Henning wrote:
>>> - DCT type: You can specify the type when reading an image.
>>
>> That's good. ?And writing?
>
> Just added it. Good idea!
>>> - Scaled Decoding: Todo - Never figured out how to do that. Couldn't
>>> find anything on the net.
>>> - Partial Image Decoding - Same as Scaled Decoding
>>
>> Well, I think both of those things are described in the libjpeg
>> documentation. ?If you can't find it and are interested in implementing it,
>> please let me know.
>
> I would be interested in a pointer to details of Partial Image
> Decoding. Depending on the amount of effort I'll either add it now or
> after the review. My time frame is kinda small right now.
I think the situation was that you were decoding and discarding the
stuff after the area of interest, which was obviously wasteful and
fixable. The more difficult issue was whether it is possible to reduce
the work done prior to the area of interest. I don't have a good
enough understanding of libjpeg to know the answer to that. Presumably
there are similar issues with the other formats.
>>> - Rotation: Same state as Scaled Decoding.
>>
>> I agree that this is not well described anywhere, but the program jpegtran
>> can do it.
>
> I consider this a "nice to have". This extension is about IO but again
> if it's easy to integrate I'll do it.
Unfortunately it's not always possible to respect partitioning like
"this bit is IO" when that breaks the performance. But again, I still
don't know how to do this rotation. I have a feeling that there really
aren't many people around who actually understand libjpeg properly...
>>> - In-memory jpeg: This is done. Please see unit tests.
>>
>> The last time that I looked at your code - about a year ago - my problem was
>> to take a very large TIFF and to chop it up into 256x256 PNG tiles. ? ?I
>> wanted to do this without having to keep all of the image data in memory
>> simultaneously. ?This seemed to be beyond what you could offer. ?I'm unsure
>> whether or not it's fundamentally possible with gil or not; I had hoped that
>> its idea of "views" would make it possible if the views could lazily obtain
>> the data from the source image. ?Even if gil can't do this, it would be good
>> if your libtiff/png/jpeg wrappers were usable for general C++-ing of those
>> libraries.
>
> You can read subimages from a tiff image. At least in theory, I just
> noticed a bug when trying to do that. This bug is only with tiff
> images. All other formats work fine.
>
> Once that's done you have a gil image which you make into a png
So for my problem of chopping up the TIFF into lots of tiles, I think I
have two choices:
(1) Decode the whole TIFF into one gil image. For each tile, create a
gil view and create and save a PNG.
(2) For each tile, partially decode the TIFF into a gil image, and
create and save a PNG.
The first choice uses lots of RAM. The second choice uses lots of time
because the partial decoding has to decode and discard all of the
content that precedes the area of interest. Is that true?
>> I also recall some concerns about how you were handling errors from one of
>> the libraries. ?I think you had a setjmp() that was bound to fail, or
>> something like that.
>
> I'm using setjmp in the jpeg and png specific code.
In jpeg/read.hpp you have a namespace-scope static jmp_buf. I
immediately worry about thread-safety. Even in a single-threaded app I
can't see how this works if you have more than one jpeg file open.
Then you setjmp() in the constructor. The jmp_buf is invalid as soon
as that scope returns, so if libjpeg calls your error_exit() from after
the ctor has finished (e.g. in jpeg_read_scanlines()) it will crash.
(Have I grossly misunderstood how this works?)
I haven't looked at your other uses.
Regards, Phil.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2010/03/163636.php | CC-MAIN-2019-43 | refinedweb | 755 | 75.3 |
On Fri, Oct 24, 2008 at 12:59 AM, Assaf Arkin <arkin@intalio.com> wrote:
>> Buildr (rake) interpreted "compile.from('sources')" as a dependency to
>> the new 'sources' task instead of the 'sources' subdirectory.
>
> Short term, rename the task to something that's clearly not a
> file/directory name, say artifact:sources, or download:sources.
Done.
> Long term, this has been bugging me for a while. All tasks share the
> same name, including file tasks and occasionally we get collisions.
> There's a work around, using full paths as often as possible, which
> works most of the time, but brings its own annoyance: console output
> runs a mile long.
I agree reading absolute paths is annoying. What I'd like is for
buildr to display paths relative to the project root dir whenever
possible.
Instead of
Compiling foo:bar:mail into
/home/lacton/projects/mycompany/foo/bar/mail/target/classes
I'd rather have
Compiling foo:bar:mail into bar/mail/target/classes
For files in the project root dir, we could prefix the path with './'.
That way, 'sources' would be the task and './sources' would be the
file task.
> I'd like at some point to truncate these too-long paths, maybe the
> solution would be to use namespaces for most other tasks?
Do you mean like typing 'buildr buildr:clean buildr:build' instead of
'buildr clean build'?
Lacton
> Assaf
>
>>
>> Any better idea?
>>
>> Lacton
>>
>>> Modified: incubator/buildr/trunk/spec/core/compile_spec.rb
>>> URL:
>>> ==============================================================================
>>> --- incubator/buildr/trunk/spec/core/compile_spec.rb (original)
>>> +++ incubator/buildr/trunk/spec/core/compile_spec.rb Thu Oct 23 14:17:25 2008
>>> @@ -379,7 +379,7 @@
>>> it 'should complain if source directories and no compiler selected' do
>>> mkpath 'sources'
>>> define 'bar' do
>>> - lambda { compile.from('sources').invoke }.should raise_error(RuntimeError,
/no compiler selected/i)
>>> + lambda { compile.from(_('sources')).invoke }.should raise_error(RuntimeError,
/no compiler selected/i)
>>> end
>>> end
>>> end
>>
>
Changelog
1.3.1 (Pending)

1.2.10 (2007-11-26)
* Changed: Resources sets permission on copied files to make them read/write-able (Shane Witbeck).
* Changed: Artifact download no longer generates destination directory if not downloaded (Antoine).
* Fixed: EOL in MANIFEST.MF.
* Fixed: Bunch of typos, courtesy of Merlyn Albery-Speyer and Soemirno Kartosoewito.

1.2.9 (2007-11-08)
* Changed: Upgraded to RJB 1.0.11.
* Fixed: Backward compatibility in Java.rjb/wrapper.

1.2.8 (2007-11-01)
* Added: Resolving Maven snapshots from remote repository (Rhett Sutphin).
* Changed: Scala options.target now takes a number, e.g. "1.5" instead of "jvm-1.5" (Nathan Hamblen).
* Changed: Eclipse task uses updated Scala plugin nature and builder (Alex Boisvert).
* Fixed: Bringing Buildr back to 1.0.9, XMLBeans fix.

1.2.7 (2007-10-29)
* Added: You can create an artifact from a given file using artifact(<spec>).from(<path>). You can then install it into the local repository or upload it to the release server using install(<artifacts>) and upload(<artifacts>). (Idea: Shane Witbeck and Tommy Mason)
* Added: ANTLR support.
* Changed: Speed boost to ZIP packaging.
* Changed: RjbWrapper is now JavaWrapper, revised to nicely support JRuby, with a few other minor tweaks to make JRuby support possible in the future (Travis Tilley).
* Changed: JUnit now runs tests with clonevm false by default; you can change this with test.using :clonevm=>true (Karel).
* Changed: JUnit now switches over to the project's base directory.
* Changed: package(:war).with(:libs, :classes) uses only the specified libs and class directories, replacing any previous value.
* Fixed: Jetty task no longer sets the "log4j.configuration" system property.
* Fixed: release task didn't work.

1.2.6 (2007-09-26)
* Added: Option for setting environment name (-e) and attribute accessor (Buildr.environment). Default taken from BUILDR_ENV environment variable.
* Added: AAR packaging for Axis2 service archives (Alex Boisvert).
* Added: Environment variable for JUnit tests (test.using :environment=>).
* Added: tar method similar to zip method.
* Added: Experimental transitive method. Looks like artifacts, quacks like artifacts, but returns artifacts by the boat load. (Credit: Daniel Roop)
* Changed: Now accepting JAVA_OPTS in addition to JAVA_OPTIONS.
* Changed: TarTask is now based on ArchiveTask, same API as ZipTask.
* Changed: Javadoc array arguments now passed as multiple command line options (e.g. :link=>['foo', 'bar'] becomes --link foo --link bar) (Daniel Roop).
* Changed: Jetty task now uses SLF4J instead of commons-logging + log4j for better hot-swap capability and plugability (Alex Boisvert).
* Removed: Turns out the --verbose command line option is useless. Removed.
* Fixed: Jetty task now uses WebAppContextClassLoader to support hot-swapping webapps (Alex Boisvert).
* Fixed: "release" task now works with SVN URLs ending with /branches/*/ (Alex Boisvert).
* Fixed: Resources not included in JAR/WAR unless there's a src/main/java directory (Olexandr Zakordonskyy).
* Fixed: Files starting with dot (e.g. .config) not copied over as resource files, and not included in ZIP (Olexandr Zakordonskyy).
* Fixed: Empty directories not copied over as resources (Olexandr Zakordonskyy).
* Fixed: JAVA_OPTS and test.options[:java_args] not passed to JUnit task (Staube).
* Fixed: archive.exclude doesn't work when including a directory using the :from/:as option.
* Fixed: JUnit/TestNG no longer run inner classes as test classes (Mark Feeney).

1.2.5 (2007-08-13)
* Fixed: Buildr not finding buildfile in parent directory, or switching to parent directory.
* Fixed: checks.rb:103: warning: multiple values for a block parameter (2 for 1).
* Fixed: ZIPs include empty META-INF directory.

1.2.4 (2007-08-03)
* Added: Forking option for JUnit test framework: :once to fork for each project, :each to fork for each test case, and false to not fork (Tammo van Lessen).
* Added: Path traversal in Zip, so zip.path("foo/bar").path("..") returns zip.path("foo").
* Fixed: JUnit test framework output shows errors in console, more readable when forking is on (Tammo van Lessen).
* Fixed: Cobertura reports not working (Anatol Pomozov).
* Fixed: Zip creates funky directory name when using :as (Tommy Mason).
* Fixed: package_as_tar incorrectly calling with(options) (Tommy Mason).
* Fixed: Loading of everything, which should get rid of the "already initialized constant VERSION" warning.
* Fixed: --requires option now works properly when using buildr.
* Fixed: MANIFEST.MF lines must not be longer than 72 characters (Tommy Mason).
* Fixed: Creating manifest from array does not place Name first.
* Fixed: Complain if no remote repositories defined, add at least one repository when creating from POM; POM reader fails if dependencyManagement missing (Jean-Baptiste Quenot).
* Fixed: Not looking for buildfile in parent directory.
* Fixed: Project's compile/test task looking for options in local task of same name.
* Fixed: ZIP/JAR/WAR include directory entries in some cases and not others.
* Fixed: Computation of relative paths in Eclipse project generation (Cameron Pope).

1.2.3 (2007-07-26)
* Added: Get your buildfile created from an existing POM: just run buildr on an existing Maven project (Anatol Pomozov).
* Added: package(:tar), package(:tgz), TarballTask and TarTask (Tommy Knowlton).
* Changed: The ArchiveTask needs no introduction: it's a base task that provides common functionality for ZipTask, TarTask and friends.
* Fixed: Release runs buildr instead of buildr.cmd on Windows (Chris Power).
* Fixed: Cobertura reports broken (Anatol Pomozov).

1.2.2 (2007-07-18)
* Added: resources.using and filter.using now accept a format as the first argument, default being :maven, but you can also use :ant, :ruby or pass a regular expression.
* Fixed: Zip.path.contains fails on paths with more than one directory.
* Fixed: Speed of sorting entries when creating new Zip file.
* Fixed: Uploading using SFTP creates directory for uploaded file.

1.2.1 (2007-07-12)
* Fixed: IntelliJ Idea project files generation for projects more than two degrees deep.

1.2.0 (2007-06-06)
* Added: Artifact.list returns specs for all registered artifacts (those created with artifact or package).
* Added: Buildr.option.java_args are used when creating the RJB JVM, when running a Java process (unless you override directly), and when running JUnit tests (again, unless overridden).
* Added: TestNG support (test.using :testng).
* Added: You can run multiple tests from the command line, e.g. rake test:foo,bar.
* Added: If you want to distribute source code and JavaDoc alongside your JARs (helpful when using IDE/debugging), you can now do so by calling package_with_sources and package_with_javadoc on the project (or the parent project to affect all its sub-projects).
* Added: junit:report task generates XML and HTML reports in the reports/junit directory.
* Added: test=all option runs all test cases ignoring failure.
* Added: Project generation for IntelliJ Idea. Imports dependencies properly from your local repository (the M2_REPO path variable must be defined), supports tests and resources.
* Added: A check task for each project that runs after packaging and can be used to check the build itself, using RSpec matchers.
* Added: The help task can be used to get basic information about your build. Right now it returns a list of described tasks, but you can extend it using the help method. Try it out: rake help.
* Added: Integration tests that run after packaging (unless tests are disabled). There's only one integration tests task (duh) that you can access from anywhere. You can tell a project to run its tests during the integration phase with test.using :integration.
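The :maven, :ant and :ruby filter formats in the 1.2.2 entry above refer to three token styles for resource filtering. A simplified re-implementation for illustration only (not Buildr's own code):

```ruby
# Illustration of the three token styles the filter formats refer to.
# This is a simplified sketch, not Buildr's implementation.
FORMATS = {
  maven: /\$\{(.*?)\}/,   # ${version}
  ant:   /@(.*?)@/,       # @version@
  ruby:  /#\{(.*?)\}/     # #{version}
}

def apply_filter(text, mapping, format)
  # Replace each token with its mapped value; leave unknown tokens intact.
  text.gsub(FORMATS[format]) { mapping[$1] || $& }
end

puts apply_filter('version=${version}', { 'version' => '1.2.2' }, :maven)
# => version=1.2.2
```

Passing a custom regular expression instead of a format symbol would follow the same pattern, with the first capture group naming the key.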
* Added: package :sources and package :javadoc, used by package_with_sources and package_with_javadoc.
* Added: Unzip paths now return root/target (Nathan).
* Added: buildr command line, replacing rake. Differs from rake in two ways: uses buildfile by default (but Rakefile also works) and offers to create buildfile if you don't already have one.
* Added: options.proxy.http now set from the environment variable HTTP_PROXY (Anatol Pomozov).
* Added: options.java_args now set from environment variable JAVA_OPTIONS.
* Changed: Filter now complains if source directory or target directory not set, or if source directory does not exist.
* Changed: Filter.run returns true if filter run, false otherwise, and can be run multiple times.
* Changed: repositories.proxy returns a URI or nil; you can still set a proxy using a hash.
* Changed: Transports went the way of the Dodo; instead we now use read/write/download/upload methods implemented on URI itself.
* Changed: We now have a way to configure multiple proxies through the options.proxy method; use that instead of repositories.proxies.
* Changed: Upgraded to Ant 1.7.0, JUnit 4.3, JMock 1.2.
* Changed: TestTask now provides list of test classes and failed classes through test_classes and failed_tests attributes.
* Changed: The jetty method is now available everywhere, so you can change the URL using jetty.url = at the top of the Rakefile. Also upgraded to 6.1.3.
* Changed: Test classes are now identified as either starting with Test* or ending with *Test, before attempting any include/exclude patterns. Anything ending with *TestCase or *Suite is ignored for now (but if you explain why, we can add it back).
* Changed: What used to be the projects task is now the help:projects task, anticipating more help: tasks to come.
* Changed: We now have 3(!) JDepend tasks: jdepend:swing (with windows!), jdepend:text (console) and jdepend:xml (enterprisy).
* Changed: Good news for packagers: package_as_ yield no longer required, just make sure to create the task once and return it each time.
* Changed: JUnit tests now run using Ant, which makes them faster to run, and gives you text/XML reports (check out the reports/junit directory).
* Changed: Cobertura now writes reports to reports/cobertura; in fact, if you're looking for a report of any kind, the reports directory is the place to find it.
* Changed: Upgraded to AntWrap 0.6. Note that with AntWrap 0.6 we yield to the block instead of doing instance_eval, so any call to the ant project must be prefixed with an AntProject object. Code that relies on the old functionality (and that's pretty much any code with element-containing tasks) will break.
* Changed: artifacts now accepts a struct.
* Changed: The repositories.download method folded into Artifact; the repositories.deploy method renamed upload and folded into ActsAsArtifact.
* Changed: The deploy task is now called upload, and repositories.deploy_to is now repositories.release_to.
* Removed: The check task, which previously was a way to find some circular dependencies (multitask) but not others (dynamically defined).
* Removed: JUnitTask, test.junit and Java.junit methods all deprecated; anything you need to affect the unit tests is right there in TestTask.
* Removed: The package(:jar) and package(:war) options, such as :manifest, :include, :libs, are all deprecated. Instead, use the package method to define the package, and the with method to enhance it, e.g. package(:war).with(:libs=>...) instead of package(:war, :libs=>...).
* Removed: The []= method on ZipTask and anything derived from it is deprecated in favor of using attribute accessors.
* Removed: Ant.executable and Ant.declarative are deprecated. Use Buildr.ant instead of Ant.executable. Use AntWrap directly if you need the Ant.declarative functionality.
* Fixed: Filter now properly handles multiple keys on the same line.
* Fixed: Tests teardown now properly executing.
* Fixed: Cobertura tasks now run tests, even if test=no.
* Fixed: XMLBeans compile task not detecting change to XSD file.
* Fixed: URI.download and download task do not create directory path for downloaded file (Anders Bengtsson).
* Fixed: Gets JVM version number from system property java.version instead of calling java -version.
* Fixed: Artifact downloads POM first, such that you can download/create/fake it yourself.

1.1.3 (2007-06-12)
* Added: Long awaited Idea project files generation. Very early code; the iml seems to be generated okay but needs testing. The ipr is still missing but will come in due time (and it's not always necessary anyway).
* Fixed: Doc bug: unzip doesn't have an into(dir) method.
* Fixed: File names don't always have a dot.
* Fixed: For Jetty servers, is not

1.1.2 (2007-05-29)
* Added: Allow passing :java_args option to the junit task.
* Added: Hibernate XDoclet and SchemaExport tasks. (Requires buildr/hibernate)
* Added: JDepend UI for seeing dependencies across all projects. (Requires buildr/jdepend)
* Added: Cobertura test coverage tasks, reporting both html and xml. (Requires buildr/cobertura)
* Changed: tools_jar now returns empty array on OS/X, part of the ongoing Write Once/Test Everywhere effort. (Credit Paul Brown)
* Fixed: Work around keep_alive bug in Net::HTTP.

1.1.1 (2007-05-16)
* Changed: Test case class names must end with Test, TestCase, Suite or TestSuite.
* Changed: You can now run rake test:{foo,bar} to match against either foo or bar (requires \{..\} on UNIX).
* Changed: JAVA_HOME now required on all platforms, along with more OS/X fixes. (Credit Paul Brown)
* Fixed: You can now run rake test:<name> from any directory, and it will find just the right test cases.

1.1.0 (2007-05-13)
* Added: Proxy setting for downloading from remote repositories (use repositories.proxy = ...).
* Added: projects task to list all the projects you can build.
* Added: Project attribute target to specify the target directory.
* Changed: The project and projects methods now accept relative names when called on a project. For example, project("foo").project("bar") finds the sub-project "bar" in "foo".
* Changed: The project method now returns self if called on a method with no name.
* Changed: The -warning flag (javac) is now set to true only when verbose.
* Changed: OpenJPA mapping now using Ant task instead of spawning another Java instance.
* Changed: The test:name pattern translates to *name* so you can run tests by package name, but only if you don't use * in the pattern.
* Changed: All projects are now evaluated when referenced (i.e. calling project/projects) or before running any task. Project tasks do not exist until a project is evaluated.
* Removed: The projects method no longer accepts the :in argument; call projects on a project instead.
* Fixed: Local directory tasks now work from any directory in the project.
* Fixed: Artifacts no longer created with timestamp from server.
* Fixed: Buildr no longer fails when run without tools.jar or JAVA_HOME (OS/X). (Credit Lyle Johnson)
* Fixed: Manifest gets EOL to keep EOF company. (Credit Tommy Knowlton)
* Fixed: Compile tasks clean after themselves when target directory changed. (Credit Lyle Johnson)

1.0.0 (2007-05-04)
* Added: buildr:freeze and buildr:unfreeze tasks. These set the Rakefile to use a particular version of Buildr: freezing sets it to the current version of Buildr, unfreezing to use the latest Gem.
* Added: Buildr.options, with three options to start with: test, debug and parallel.
* Added: Buildr.option.debug or environment variable DEBUG to control the compiler debug option. Defaults to yes, except when doing a release.
* Changed: Buildr now fails nicely if JAVA_HOME not set.
* Changed: Migrated test cases to RSpec 0.9.
* Changed: Extended circular dependency check to multitask.
* Changed: JavaCC using RJB.
* Changed: OpenJPA 0.9.7 no longer snapshotted.
* Fixed: For Windows users: user's home directory, fu_check_options is now rake_check_options, java command works around funky system bug.

0.22 (2007-04-26)
* Added: Calling projects(:in=>foo) returns only the sub-projects defined in foo.
* Added: _() as shortcut for path_to().
* Added: You can pass properties to java by setting the :properties option.
* Added: JUnit task has a way of setting options (options accessor and using method), which for now supports passing properties to java.
* Added: You can now use the struct method to create a Struct for structuring your multiple artifacts.
* Changed: Use rake artifacts to download all artifacts not already in the local repository, and also download modified artifacts (*cough*snapshots*cough*).
* Changed: Transport.download now uses timestamp on the destination file and If-Modified-Since header to skip downloads of unmodified files.
* Changed: Downloading artifact sets the time stamp from the repository.
* Changed: Use buildr.rake in the project's directory and your home directory, instead of buildr.rb.
* Changed: filter method accepts one argument, the source directory. Use filter(src).into(target).
* Changed: Running Javac/Apt/Javadoc in process.
* Changed: Using Ant for OpenJPA enhancer and XMLBeans schema compiler.
* Changed: Jetty, JavaCC, OpenJPA and XMLBeans are no longer included by default. You need to require them explicitly, e.g. require "buildr/jetty".
* Removed: Tasks no longer use a base directory; always map paths directly using file, path_to or _().
* Fixed: The artifacts task no longer downloads POMs for artifacts created by the Rakefile.

0.21 (2007-04-20)
* Added: Methods to read and write a file (shortcut for File.read/File.open.write).
* Changed: Filter task now takes a source directory and target directory, and copies all included (sans excluded) files between the two.
* Changed: Artifact type is now symbol instead of string (so :jar instead of "jar"). You can still specify a string, but the return value from #to_spec or #type is a symbol.
* Changed: Eclipse task now adds "src/main/resources", "src/test/java", "src/test/resources" to build path, and excludes ".svn" and "CVS" directories from being copied into target directories.
* Changed: The test task will now run JUnit test cases from classes ending with Test or Suite. And the inclusion pattern is always set.
* Fixed: Project property not inherited if false.

0.20 (2007-04-18)
* Added: JavadocTask to generate Javadoc documentation for the project, javadoc method on the project itself to return its javadoc task, and Java.javadoc to do all the heavy lifting.
* Changed: Release code is now implemented as module instead of class. SVN copy made from working copy instead of double commit.
* Removed: package :file_name options. Does not work with deployed artifacts or POMs.
* Fixed: Packages not deployed in the right path (but POMs are).
* Fixed: JARs and WARs include redundant META-INF directory.
* Fixed: The local package task is now a dependency for install/deploy, and build is a dependency for package.

0.19 (2007-04-13)
* Fixed: Eclipse task correctly handles FileTasks.
* Fixed: Eclipse task output directory is "target/classes" (Project.compile.target) instead of "/target".
* Added: Set specific file permissions when uploading with SFTP transport with :permission option.
* Fixed: Correctly use JAVA_HOME environment variable, if available, for determining java version.
* Added: ConcatTask and concat: a file task that creates or updates the target file by concatenating all the file prerequisites.
* Added: Ant module (requires antwrap and rjb Gems), so also added RJB setup module.
* Added: When zipping you can include the contents of a directory using :as=>".".
* Added: Convenience apt method returns a file task that generates sources using APT.
* Added: Convenience open_jpa_enhance method to enhance compiled files.
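The test-class naming rules from the 1.2.0 entries above can be sketched as a simple predicate (an illustration of the stated rules, not Buildr's actual implementation):

```ruby
# Simplified predicate for the 1.2.0 naming rules: classes starting with
# Test or ending with Test are candidates, while names ending with
# TestCase or Suite are ignored for now. Not Buildr's own code.
def test_class?(name)
  simple = name.split('.').last
  return false if simple =~ /(TestCase|Suite)$/
  simple.start_with?('Test') || simple.end_with?('Test')
end

puts test_class?('com.example.FooTest')      # => true
puts test_class?('com.example.FooTestCase')  # => false
```

Note that TestSuite is excluded even though it starts with Test, because the exclusion rule is applied first.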
* Added: Convenience compile_xml_beans method sets up the compiler to include XSD-generated XML Beans.
* Added: Convenience javacc/jjtree methods return file tasks that generate source files.
* Added: build is now the default task.
* Added: jetty:start and jetty:stop tasks to start/stop the server from the console.
* Added: jetty:use to start Jetty inside the build or hook to an existing server.
* Added: jetty:setup and jetty:teardown to perform tasks around jetty:use.
* Added: The local build task will now execute the local test task. So building a project (or sub-project) will run the test cases on that project (or sub-project) but not any of its dependencies.
* Added: ZipTask accepts nested path (i.e. calling path inside a path).
* Added: package(:war) by default picks libraries from the compiler classpath. You can always override by passing the :libs option.
* Changed: Eclipse task now generates library path with M2_REPO variable or project-relative paths where appropriate.
* Changed: compile.target (CompileTask) and resources.target (Filter) are now file tasks, not strings. So passing the target to someone else will hopefully convince them to invoke or enhance it.
* Changed: Java related tasks like OpenJPA, XMLBeans, JavaCC all moved to the Buildr::Java module.
* Changed: Handling of package_as arguments to support JBI packaging.
* Changed: meta_inf project property is an array accepting filenames (strings) and file tasks.
* Changed: meta_inf by default only includes the LICENSE file from the top-level project.
* Changed: The WarTask :classes argument is now a directory name, and will include all files in this directory.
* Changed: WarTask and JarTask accept meta_inf argument.
* Changed: Behavior of needed? and prerequisites in base Rake::Task. This will probably not affect you, but don't be surprised if it disappears (see lib/core/rake_ext.rb for details).
* Changed: Where previously the test task would link to test.run, it now executes the entire test lifecycle, and is the major point for extending the test lifecycle.
* Changed: test.run is now test.junit.
* Changed: Ant.define is now Ant.declarative, Ant.execute is now Ant.executable.
* Changed: The filter method now returns a Filter class that can be used to set a filter, but is not itself a task. Instead, it creates a task when setting its target.
* Changed: Project.resources now returns a ResourceTask that includes, but is not itself, a filter, accessed using the accessor filter.
* Changed: UnzipTask eliminated and replaced with Unzip, which you now have to run directly by calling extract. However, the unzip method creates a file task and returns an Unzip object that can be used as a reference to that file task.
* Changed: Attributes is now InheritedAttributes.
* Changed: The first call to package configures the package task from the options; the second call only returns the package task.
* Removed: :cp argument, always use :classpath.
* Removed: src_dir, java_src_dir, target_dir, webapp_src_dir and all other premature configuration attributes.
* Removed: Project tests method deprecated in favor of a single test method; it now accepts an enhancement block, not an instance_eval block.
* Removed: FilterTask is dead.
* Removed: sub_projects method. Is anyone using this?
* Fixed: Local buildr.rb not loaded when running from inside a sub-project directory.
* Fixed: Eclipse task now executed whenever a change is made in the Rakefile, or any file it requires, including buildr.rb and task files.
* Fixed: Circular dependency in release task.

0.18 (2007-03-26)
* Added: manifest attribute on project, used by default when packaging JAR/WAR.
* Added: default manifest includes build-by, build-jdk and implementation-title.
* Added: compile.from(sources) in the same vein as compile.with(classpath).
* Added: load all *.rake files from the tasks directory (if it exists) for use in the main Rakefile.
* Added: Java.tools returns a reference to tools.jar on JDKs that include it.
* Added: brought back experimental test tasks.
* Added: artifacts task to download all artifacts referenced by project (using either artifact or artifacts method).
* Changed: back to old behavior, compile task only executes if there are any files to compile, and compile? method removed.
* Changed: repositories.remote is now an array instead of a hash, and repositories are searched in the order in which they appear.
* Changed: release task is now a regular task, using the Release object instead of being a ReleaseTask.
* Changed: eclipse task executes artifacts task.
* Fixed: inherited attributes now cache default value, useful when working with arrays/hashes.
* Fixed: manifest file generated even if manifest attribute is false.
* Fixed: compile task now properly detects when not all files compiled.
* Fixed: bug that caused project file tasks to execute twice.

0.17 (2007-03-14)
* Added: project.task acts like Rake's task but can also fetch a task from a project using the project's namespace.
* Added: project.file acts like Rake's file but resolves relative paths based on the project base directory.
* Added: Rake tasks execute in the directory in which they were defined.
* Added: enhanced Rake with circular dependency check; you can find all circular dependencies by running rake check.
* Added: enhanced Rake in_namespace: if the namespace starts with a colon, creates a namespace relative to the root instead of the current namespace.
* Changed: a project definition is now a task definition.
* Changed: use enhance to extend the project definition instead of after_define.
* Changed: LocalDirectoryTask replaced with Project.local_task.
* Changed: projects method accepts multiple names, returning only these project definitions; returns all of them with no arguments.
* Changed: package only defines the essentials once, so you can call package on a project to retrieve a specific package by type/id.
* Changed: zip task (and jar/war) no longer resolves artifacts for you; must call artifacts directly.
* Changed: cannot access a project before it's defined, but can do that with sub-projects to establish dependencies.

0.16 (2007-03-07)
* Added: zip.include :as=> to include file under specified name.
* Added: zip.merge to include the (expanded) contents of one zip file in another.
* Added: experimental test task using JUnit and JMock.
* Changed: project.to_s returns name, projects returns sorted by name.
* Changed: project definition now executed using project's base directory as the current directory.
* Fixed: artifact test cases and minor code cleanup.
* Fixed: attempts to download artifact even if created by task.
* Fixed: release task now deletes old tagged copy and reports SVN usage.
* Fixed: OpenJPA not including target directory in classpath.

0.15 (2007-02-28)
* Fixed: tasks fail unless deployment server specified.
* Changed: deploy method executes deployment, instead of returning a task.

0.14 (2007-02-28)
* Added: check task that looks for obvious errors in the Rakefile.
* Added: deploy task for managing deployment.
* Added: release task that updates version numbers, commits and tags SVN.
* Changed: the project name is now the fully qualified name, e.g. ode:axis2.
* Changed: you can now look up a project before it's defined; you still can only define a project once.
* Changed: you can look up projects by fully qualified name.
* Changed: release_to changed to deploy_to, which is now a getter/setter.
* Fixed: removed Java.home which conflicted with JRuby.
* Fixed: install task did not re-install modified files.
* Fixed: deploying only uploads one artifact.
* Fixed: timing issues.
* Fixed: Maven classifier now used properly.

0.13 (2007-02-26)
* Added: global java method.
* Added: project build method.
* Added: OpenJPA mapping_tool method.
* Added: Rakefile to generate Gem.
* Changed: you can now look up a sub-project from the top project method.
* Changed: the projects methods return all sub-projects.
* Fixed: bug in JarTask that resolved artifacts too early.
* Fixed: global tasks (clean, build, etc) now complain if executed from a directory that does not map to any project.
* Fixed: to work with Rake 0.7.2.

0.12 (2007-02-24)
* Added: call prepare with list of tasks to add them as prerequisites.
* Added: project.id returns the compound name, e.g. foo, foo-bar, foo-bar-baz.
* Added: JavaCC, XMLBeans schema compiler, OpenJPA enhancer, APT tasks.
* Changed: the default package ID is taken from the project ID instead of its name.
* Changed: renamed buildr and moved here.
* Changed: moved all code into Buildr module.
* Fixed: download breaking when POM not found.
* Fixed: compile task fails if classpath is empty.
* Fixed: zip task fails if target directory does not exist.
* Fixed: packaging task does not require build.
* Fixed: compiler not showing command when trace is on.
* Fixed: zip dependencies were all fucked up.
* Fixed: package should not depend on build.

0.11 (2007-02-16)
* Added: test cases for unzip task.
* Added: prepare method to access prepare task.
* Added: prepare, compile and resources accept a block you can use to enhance the task.
* Changed: ZipTask executes all included files as prerequisites, and now includes directories correctly.
* Changed: Jar/WarTask are now extended using with(options) method.
* Changed: JarTask now accepts array of sections (each being a hash) for the manifest, and a proc/method to generate it.
* Changed: added HighLine to hide password entry on the command line.
* Changed: unzip now using UnzipTask with its own shorthand syntax.
0.8 (2007-02-05) * Added: release task and release_to configuration for repositories * Added: SFTP uploader for releases * Added: convenience method group() for specifying multiple artifacts in same group and version number * Added: install target copies package to local repository and adds a POM, uninstall package removes package (and POM) from local repository * Changed: project lookup now happens through project() method * Changed: locating file in the local repository now happens through Repositories * Changed: downloading file into the local repository now happens through Repositories * Changed: notation for specifying multiple artifacts in a string is now foo,bar,baz * Changed: artifact identifier is now specified with the key :id * Changed: download POM alongside artifact and install in local repository * Changed: no more scoping artifacts collection in project, use compile.with instead * Changed: moved HTTP download logic to transports.rb * Removed: deprecated grouping with multiple artifacts under id key 0.6 (2007-02-01) * Added: Artifact resolution introduces the notion of a spec, which can be supported using ActsAsArtifact * Added: You can now use a project as an artifact, resulting in all its packages being added, or use a task as artifact * Changed: project.sub_projects renamed project.projects * Changed: what used to be called dependencies is now called artifacts * Changed: all artifacts are now created as tasks that know how to download themselves unless some other behavior is specified * Changed: local and remote repositories are now defined on the Rakefile instead of individual projects * Changed: attributes now stored directly as instance variables * Changed: ANSI colors and progress bar now using Ruby Facets 0.5 (2007-01-24) * Added: Build number for each top-level project, build_number method for accessing it and build:increment task for updating the build number file. * Added: to_path method on project to resolve paths relative to base_dir. 
* Added: recursive_task method on project to create task in project/sub-project. * Added: compiler property for passing any options to Javac. * Changed: remove task renamed uninstall. * Changed: and to confuse more remove task (RemoveTask) renamed delete. * Changed: consolidated before_create/after_create to on_create. * Changed: version, group, artifact added as accessors to project. * Changed: project definition block takes project as argument. * Changed: project enhanced only if new settings or block. * Changed: local_repository is now separate attribute from repositories. * Changed: Directory structure, now split into rbs, rbs-java and tasks. * Removed: project.options. Using a different attributes mechanism. 0.4 (2007-01-23) * Added: CopyTask now deals with files and directories, can copy multiple files, and applies filter to all of them. Filter can be a hash or a proc. * Added: Project gets resources_filter attribute that can be used to set the filter on all copied resources. * Added: HTTP module for getting and downloading files, and a download task. * Changed: Dependencies now check signatures for every file, if available, and show download progress. 0.3 (2007-01-22) * Added: Dependencies loaded from Maven repositories if not existing or built by project. Use rake dependencies to force update, or let compilation take care of it. * Added: Copy task for copying one file to another, and filtering support. 0.2 (2007-01-21) * Added: remove task to get rid of packages added to the local repository. * Changed: recompile project if any of its dependencies is newer than the source code. Will cause recompile if any of the dependencies was compiled and packaged again. * Changed: compile task depends on javac task and resource copy tasks. This might change when adding filtering later on. 
0.1 (2007-01-19) * Added: build and clean tasks * Added: resources are now copied over during compilation * Added: POM file generated in local repository (keep Maven happy) * Added: compile scope for use by javac * Added: WAR packaging. * Changed: Root project operates on the current directory, sub-projects on sub directories. See Rakefile for example. | http://incubator.apache.org/buildr/changelog.html | crawl-001 | refinedweb | 5,345 | 59.09 |
Python is a powerful programming language with a robust library of functions and other resources for faster development. Its inbuilt functions are a great support to any Python Programmer. Sometimes it is easier to write your own functions. If in-built functions are not sufficient to implement your logic you have to create your own functions. For such cases you can create User Defined Functions. This post is about how to create and call Python User Defined Functions.
Python User Defined Functions-Syntax
def Function-Name(Argument list):
statements to define function body
[return statement]
To define a function you will always use the keyword def. It is followed by the name of the python user defined function and the list of arguments in parenthesis. The argument list is followed by colon “:”.
You will recall that the blocks in Python are defined with the help of indentation. To define the function body statements press enter (to go to net line of code) and tab key (to define indentation of the statement)
Return statement is the last statement of function . It returns value evaluated within the function.
Example – A python user defined function to add two numbers passed as parameters.
def addnums(x,y): # name of function and two arguments in parenthesis. c=x+y # expression to sum two numbers print(c) # print the added value stored in variable c
OR
def addnums(x,y): # name of function and two arguments in parenthesis. c=x+y # expression to sum two numbers return (c) # returns the added value stored in variable c
Calling Python User Defined Functions
The created function can be called by writing the Function name and passing the required parameters. This function created can be called after the function definition in the saved Python file. Other method is to call it from Python Idle command prompt. If the program is compiled with no errors it will display the result.
Syntax
Function-name(actual parameter list)
Example – calling the function addnums
addnums(40,50)
Note: A function created with def keyword on the Idle Command Prompt will be available. If you close Idle or any tool to run Python commands, the function defined in this session will not be available in the next session.
So, it is always advisable to save Python files. This way you can use the python user defined functions again by importing the compiled Python files in other programs.
Be First to Comment | https://csveda.com/python-user-defined-functions/ | CC-MAIN-2022-33 | refinedweb | 407 | 63.39 |
G-API backends available in this OpenCV version. More...
G-API backends available in this OpenCV version.
G-API backends play a corner stone role in G-API execution stack. Every backend is hardware-oriented and thus can run its kernels efficiently on the target platform.
Backends are usually "black boxes" for G-API users – on the API side, all backends are represented as different objects of the same class cv::gapi::GBackend. User can manipulate with backends by specifying which kernels to use.
#include <opencv2/gapi/fluid/gfluidkernel.hpp>
Get a reference to Fluid backend.
#include <opencv2/gapi/ocl/goclkernel.hpp>
Get a reference to OCL backend.
At the moment, the OCL backend is built atop of OpenCV "Transparent API" (T-API), see cv::UMat for details.
#include <opencv2/gapi/cpu/gcpukernel.hpp>
Get a reference to CPU (OpenCV) backend.
This is the default backend in G-API at the moment, providing broader functional coverage but losing some graph model advantages. Provided mostly for reference and prototyping purposes. | https://docs.opencv.org/4.5.0/dc/d1c/group__gapi__std__backends.html | CC-MAIN-2022-33 | refinedweb | 169 | 50.63 |
On Tue, 3 Dec 2002 05:13, Noel J. Bergman wrote:
> I saw Pete's comments. I, personally, don't agree with him and I didn't
> see him veto the approach, hence my summary.
If need be I will ;)
> By parsable, I mean in accordance with the RFC into the urn:NID:NSS parts.
> The NID gets you to the namespace, which should be useful for scalable
> container context and with component registration, the NSS is handled by
> the namespace and can certainly be a simple key. In other words, the
> contract for the NSS is provided by the namespace definition, and can be as
> restrictive as desired.
We have spent a lot of time removing the need for parsed lookup keys for
ServiceManager/ComponentManager. We have moved to a set of recomended
conventions (postfix with "/" then a discriminator) but they are just that -
recomendations. They are not enforced or required in many containers.
Separating out namespace from local name using a ":" is already accepted as a
convention I believe. Prefixing everything with "urn:" is unecessary noise.
Theres no problem if people choose to not follow the convention but we will
support them better if they do.
--
Cheers,
Peter Donald
----------------------------------------
Why does everyone always overgeneralize?
----------------------------------------
--
To unsubscribe, e-mail: <mailto:avalon-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:avalon-dev-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/avalon-dev/200212.mbox/%3C200212030954.30489.peter@realityforge.org%3E | CC-MAIN-2018-05 | refinedweb | 229 | 57.37 |
Getting to Grips with Blocks II
In the last article we had a whistle stop tour of blocks. How they are defined, what purposes they serve and the differences between various Ruby versions. It was all very academic. But practically what benefit do we actually get from blocks. What makes them such a powerful tool for Ruby developers.
Procs and Lambdas
Before we get into some useful examples of blocks, we have to discuss
Proc and
lambda. These two allow you to define a block for later use using the
call method.
These “callable objects” are intended for use down the line in our applications execution. To drive the point home we need an example. Anything to do with money springs to mind, exchange rates, rates for different clients, discounts and so on.
def discount( rate ) Proc.new { |n| n - ( n * rate ) } end ten_percent_off = discount(0.1) twenty_percent_off = discount(0.2) ten_percent_off.call(100) # => 90.0 twenty_percent_off.call(60) # => 48.0
In the above example, we define a method
discount that accepts a rate and returns a
Proc object. No code is executed (well nothing is returned) all we have done is set up a little environment that takes a percentage off). We setup up two discounts and call them on the values 100 and 60 using
call. Just so you know now the
discount method would work just as well written using a
lambda:
def discount( rate ) lambda { |n| n - ( n * rate ) } end
Why Two Ways to Define a Proc?
Proc and
lambda have subtle differences. From the previous example we see the block accepts a parameter
n, the
lambda is pretty strict about these parameters, if we called it incorrectly like so:
ten_percent_off.call(10, "This should not be here") # => ArgumentError: wrong number of arguments (2 for 1)
The
Proc doesnt mind this sort of thing, it simply ignores the additional parameters. Similarly the
lambda will complain if we do not pass enough parameters, the
Proc again doesn’t mind and just assigns the excess parameters to
nil. It sounds like
Proc is a nice and more flexible way of doing things, however I rarely use
Proc.new it’s almost always
lambda, why? Well, 90% of the time I like airity (number of parameters) to be enforced and a horrible error thrown if I do something wrong. If that reason seems weak, then we can consider how
Proc and
lambda handle
return.
def hello_me hiya = Proc.new { return "Hiya" } hiya.call puts "Woe is me, I will never fulfill my purpose" end hello_me #=> Hiya # Using a lambda def hello_me hiya = lambda { return "Hiya" } hiya.call puts "Finally I fell like I have contributed" end hello_me Finally I fell like I have contributed
As seen the
Proc returns from the scope in which it was called, and the
lambda returns from the block, allowing our final message line to fulfill its purpose. I am not saying “pick one, and never use the other”, far from it. Knowing the implications of both simply allows us to use the correct definition when appropriate. Still I do use
lambda a heck of a lot more.
Practical Blocks
So now we want to look at how we can use these blocks in a practical way that is more than just iteration that we are used to.
Simplifying interfaces
The first thing that springs to mind is a technique I use on factory methods. A factory being a piece of code that returns an object based on it’s inputs. A good example would be producing text for a label print.
class AddressLabel def initialize @lines = [] end def self.generate &block label = AddressLabel.new label.instance_eval(&block) label.print end def print @lines.each do |line| puts line end end def address(lines) lines.each do |line| @lines << line end end end AddressLabel.generate do address ["Lets Be Avenue", "Glasgow"] end # Another way of creating the same label would be ... label = AddressLabel.new label.address ["Lets Be Avenue", "Glasgow"] label.print
Here the
instance_eval evaluates the block within an instance of
AddressLabel. Both methods of producing the label do the same thing only using the block format is way sexier as well as making it more natural to read. No doubt you will have seen this type of interface before in many of the gems available.
Setup and Tear Down
Another common use for blocks is pre and post processing. The most common example on the web is probably Ruby’s standard library for
File.open
File.open('testfile', 'w') do |f| f.write 'Some text' end file = File.open('testfile2', 'w') file.write 'More text' file.close
The second example is probably what you will be used to with PHP, but we can see the nice block implementation takes care of closing the file behind us. You can apply this kind of implementation anywhere tear down actions are involved. I remember writing some php code that created PDF files. The code was something like
try { $pdf = new PDFlib(); if ($pdf->begin_document("file_name", "") == 0) { die("Error: " . $pdf->get_errmsg()); } $pdf->show(“hello world”); $pdf->end_page_ext(“”); $pdf->end_document(“”); } catch (Exception $e) { exit(); }
This simply opens a new file writes some text and closes it again, there is some very basic exception handling to exit the script if things go awry. It was never meant to be the most elegant piece of code. However as we have such elegant examples of interfaces in the Ruby standard library, writing such code would feel just plain old “dirty” in Ruby.
It could be cleared up a lot in PHP (even more so with PHP anonymous functions) but the tools at our disposal pre PHP5.3 were just not up to the job of extracting all this open document / close document chores without wrapping up the PDFlib library.
So what about setup? Everything we just discussed was post processing so what benefit do blocks provide on setup. When creating methods that start to use a lot of conditional logic to determine the outcome I look at passing it blocks.
def more_than_ten_rows? rows if rows.length > 10 yield rows end end more_than_ten_rows? (1..10).to_a do |row| puts row end
So with this simple piece of code
more_than_ten_rows? acts as a gate, allowing the block to run when we have more than the required number of rows. The alternatives would be to move the conditional logic to the presentation, or the presentation to the logic, using the block we have achieved some basic separation.
Wrapping up
I think it’s fair to say unless you get a good grip on blocks when learning Ruby you will never really fulfil your potential. It’s common to hear “blocks are everywhere” and it’s true, so you will be missing the party if you don’t get some quality time with them.
They are there to help us developers trivially fulfill code in a succinct clean way, using them will become more natural over time and once you get to that stage you will wonder how you got by without them.
It’s definitely worth using the PHP equivalent, I never used them (even though I had 5.3 on the production environment) until I came to writing this article, the syntax is more verbose than Ruby, but, hey, its a pretty decent trade off and I fully believe that understanding this relatively new addition in PHP will help you grasp a core fundamental in Ruby.
- Panayotis Matsinopoulos | http://www.sitepoint.com/getting-to-grips-with-blocks-pt-2/ | CC-MAIN-2014-15 | refinedweb | 1,246 | 63.7 |
itertools.combinations() solution in Clear category for The Rows of Cakes by ddavidse
import itertools
""" For each 3 points, we use the first 2 points to find the a,b that satisfy
the line equation y = a*x + b for both points. Then, we check whether the
3rd point also lies on the line.
Because such lines cannot be vertical (this would require a == infinity),
we first have some additional logic to deal with constant x values. """
def checkio(cakes):
count = 0
x_stored = []
lines = []
for triplet in itertools.combinations(cakes, 3):
s1 = set([x[0] for x in triplet])
s2 = set([x[1] for x in triplet])
xval = s1.pop()
if not s1:
if xval not in x_stored:
count += 1
x_stored.append(xval)
elif len(s1) == 1 or len(s2) == 2:
continue
else:
x1 = triplet[0][0]
x2 = triplet[1][0]
x3 = triplet[2][0]
y1 = triplet[0][1]
y2 = triplet[1][1]
y3 = triplet[2][1]
a = (y1 - y2) / (x1 - x2)
b = y1 - x1 * a
if abs(a * x3 + b - y3) < 1e-5 and [a,b] not in lines:
lines.append([a,b])
count += 1
return count
Dec. 21, 2020
Forum
Price
Global Activity
ClassRoom Manager
Leaderboard
Coding games
Python programming for beginners | https://py.checkio.org/mission/cakes-rows/publications/ddavidse/python-3/itertoolscombinations/share/3f9d7ed05ab05131c922a30c105c8b9b/ | CC-MAIN-2022-05 | refinedweb | 204 | 69.11 |
Lazy Class Infrastructure
Do you ever feel you should implement equals(), hashCode() and toString, but just can't be bothered to do it for every class? Well, if you aren't bothered by speed, you can use Jakarta Commons Lang to do it for you. Just add this to your class:
import org.apache.commons.lang.builder.ToStringBuilder; import org.apache.commons.lang.builder.EqualsBuilder; import org.apache.commons.lang.builder.HashCodeBuilder; class Foo { public int hashCode() { return HashCodeBuilder.reflectionHashCode(this); } public boolean equals(Object other) { return EqualsBuilder.reflectionEquals(this,other); } public String toString() { return ToStringBuilder.reflectionToString(this); } }
And that's it. Your class will just do the right thing. As you can probably guess from the function names, it uses reflection, so may be suboptimal. If you need performance, you can use tell it to use particular members, but I think I'll leave that up to a future article. I also recommend you don't use this technique if you are using something like Hibernate, which does things behind the scenes on member access; you may find it does undesirable things. :) | http://www.davidpashley.com/blog/2007/Jan/28 | crawl-001 | refinedweb | 183 | 59.4 |
Everything.
Somewhere, somebody is going to hate me for saying this, but if I were to try to explain monads to a Java programmer unfamiliar with functional programming, I would say: "Monad is a design pattern that is useful in purely functional languages such as Haskell. Although the concept of a monad has been taken from mathematics, this design pattern can be applied in Haskell to implement a wide range of features."
Philip Wadler says, "Monads provide a convenient framework for simulating effects found in other languages, such as global state, exception handling, output, or non-determinism."
The Java programmer could argue fairly, "Hey, Java has all those things, and monads aren't even mentioned in Design Patterns. What's the deal?"
I'd respond: "Design patterns fluctuate by language. Just as each language has its own set of native features, it also has its own set of both idioms and design patterns that make sense within that language; naturally, there's a lot of crossover between similar languages. So although I don't expect monads will be the next great thing in Java, they're a necessary part of understanding Haskell." [see Footnote 1]
The Java programmer might continue, "But why would I care about Haskell, especially if I have to learn about this whole monad thing, simply to do the things I already know how to do in Java?"
I'd slyly say:
Have you been paying attention to the International Conference on Functional Programming (ICFP) Programming Contest over the last several years? It's one of the largest programming contests of the year, and it's often mentioned on Slashdot. Users of any language may enter, and a wide variety of languages are routinely used by the contestants.
During the last several years, the winners of the ICFP Programming Contest have mostly been programmers using Haskell, OCaml and Dylan, all of which are functional programming languages.
Learning new things can occasionally be painful. Someone once said, "If a programming language doesn't teach you a new way to think about problems, it's not worth learning." I've found that learning about monads and Haskell in general has been both painful and rewarding.
Both Wikipedia and "Monads for Functional Programming" offer introductions to monads from a category-theoretic perspective, but they aren't for the faint of heart. Despite having a degree in math, I found that it took a while for me to grasp what was going on. Perhaps I'm rusty, but I think a simpler explanation is possible.
"Monad" is an interface. For any type "<InnerValueType>" (notice the template-ish syntax), a class "MonadExample" that implements the "Monad" interface has the following two methods:
It has a constructor with a parameter named "value" of type "InnerValueType".
It has a method named "pass" that takes a callback. The callback must accept a parameter of type "InnerValueType", also named "value". It must return an instance of "MonadExample". [see Footnote 2] The "pass" method invokes the callback, passes "value" and immediately returns the result.
Whacky interface, eh?
It may help if you start by thinking of monads as a special type of container. Just as "List <Integer>" makes sense, so should "MonadExample <InnerValueType>". Some classes that implement the Monad interface always contain exactly one value. Others might contain zero or more values, as List <Integer> does.
Continuing, let's imagine a simple function "f()" that accepts a value and returns another. Now, instead of simply passing around values, let's wrap the values in monads and pass around the monads. To use the function f() on that value, you must call the monad's "pass" method. Doing so, you suddenly get two benefits. First, you can put a lot of other stuff in the monad besides the value. Second, you can do whatever you want when someone calls the pass method. For example, you can keep track of how many times it was called.
What's even more interesting is you can pass different objects to f(); each implements the Monad interface. Per polymorphism, they can do totally different things, and f() doesn't have to know or care.
In Java, this behavior is not novel--any method may have side effects. But, in a language such as Haskell, side effects are strictly forbidden; functions should take values and return values. Use of the monad, in such a language, enables you to make use of side effects where otherwise impossible.
Philip Wadler has written several papers detailing how monads can be used to implement a variety of language features in a purely functional programming language, such as Haskell. This article barely scratches at the surface. However, I'd like to offer one example that is somewhat interesting.
The "Maybe" class implements the Monad interface. The Maybe class has two states: it either can contain a real value or it can contain nothing. It's a fairly common cause of bugs in a lot of languages to end up with NULL in a place you weren't expecting it. Sometimes NULL is a valid value in some parts of the code, and sometimes that NULL escapes into code not expecting NULL. The Maybe monad "captures" this behavior in a way that is useful, easy to understand and verifiable at compile time in Haskell. This may be of interest to anyone who's stumbled across such a bug. [see Footnote 3]
At this point, the reader may be begging for some code. One of the hardest things about learning Haskell is you can't understand Haskell without understanding monads, and you can't understand most of the monad examples without understanding Haskell. Hence, here's the Maybe example in Python:

class Maybe:
    def __init__(self, value):
        self.value = value

    def pass_to(self, callback):
        # Nothing in, nothing out.
        if self.value is None:
            return Maybe(None)
        return callback(self.value)

    def __repr__(self):
        return "Maybe(%s)" % self.value

def add(left_monad, right_monad):
    monad_class = left_monad.__class__
    # This strange indentation is perhaps more readable.
    return left_monad.pass_to(lambda left_value: (
           right_monad.pass_to(lambda right_value: (
               monad_class(left_value + right_value)))))

n1 = Maybe(5)
n2 = Maybe(6)
n3 = Maybe(None)
print n1, "+", n2, "=", add(n1, n2)
# => Maybe(5) + Maybe(6) = Maybe(11)
print n2, "+", n3, "=", add(n2, n3)
# => Maybe(6) + Maybe(None) = Maybe(None)
Here's the same example in JavaScript:
function Maybe(value) {
    this.value = value;
}
Maybe.prototype.constructor = Maybe;
Maybe.prototype.pass = function(maybe, callback) {
    if (maybe.value == null) {
        return new Maybe(null);
    }
    return callback(maybe.value);
}
Maybe.prototype.toString = function() {
    return "Maybe(" + this.value + ")";
}

function add(leftMonad, rightMonad) {
    // Pass leftMonad and function taking leftValue.
    return leftMonad.pass(leftMonad, function(leftValue) {
        // Pass rightMonad and function taking rightValue.
        return rightMonad.pass(rightMonad, function(rightValue) {
            // Return the resulting monad.  leftValue and
            // rightValue are accessible via a closure.
            return new leftMonad.constructor(
                leftValue + rightValue);
        })
    });
}

n1 = new Maybe(5);
n2 = new Maybe(6);
n3 = new Maybe(null);
document.write(n1 + " + " + n2 + " = " + add(n1, n2) + "<br>");
// => Maybe(5) + Maybe(6) = Maybe(11)
document.write(n2 + " + " + n3 + " = " + add(n2, n3) + "<br>");
// => Maybe(6) + Maybe(null) = Maybe(null)
A few things deserve note:
To implement this, your language must support closures.
The "add" function works with any monad. That's part of the beauty of monads. All the knowledge about Maybe was encapsulated in the Maybe class (Python) or prototype (JavaScript).
The successively nested anonymous functions used in the add function are perhaps a bit difficult to understand, but this is a literal translation of how it would work in Haskell. Fortunately, Haskell has syntactic sugar that makes this style of programming very easy to read. It's fair to say that the translation into either Python or JavaScript is a bit awkward in comparison.
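The second note above deserves a quick demonstration: the same polymorphic add pattern can be reused, unchanged, with a completely different monad. The Trace class below is my own contrivance for illustration; it records every value that passes through it:

```python
class Trace:
    """A monad that records every value passed through it."""

    def __init__(self, value, log=()):
        self.value = value
        self.log = tuple(log)

    def pass_to(self, callback):
        result = callback(self.value)
        # Accumulate this value into the log carried along.
        return Trace(result.value, self.log + (self.value,) + result.log)

    def __repr__(self):
        return "Trace(%r, log=%r)" % (self.value, self.log)


def add(left_monad, right_monad):
    # Written only against the monad interface; it neither
    # knows nor cares which monad it was handed.
    monad_class = left_monad.__class__
    return left_monad.pass_to(lambda left_value: (
           right_monad.pass_to(lambda right_value: (
               monad_class(left_value + right_value)))))


r = add(Trace(5), Trace(6))
print(r)  # Trace(11, log=(5, 6))
```

The exact same add now sums values and quietly logs its operands, purely by virtue of the monad it was given.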
If you read either Wikipedia's or Philip Wadler's introduction to monads for functional programming, you may wonder, "what's up with all the math?" Why is it that computer scientists feel compelled to justify things mathematically before they feel comfortable using them as techniques in computer science? Sure, early computer scientists were mostly mathematicians, but is the connection really so crucial? I can think of three reasons:
Mathematicians are trained to base their work on previous work. Euclid taught us how to create chaos out of definitions, postulates and common notions.
If you base your work on an existing body of work that already has been scrutinized via proof, you reasonably can assume that things probably will turn out better.
If you can prove that your work meets certain criteria, you can let the math do a lot of the work for you. For instance, if you can prove that your code implements a true monad, everything else you know about monads will "just work".
At the risk of stating the obvious, mathematical models aren't perfect. If you add two cups of one substance to two cups of another substance, you might end up with four cups because 2+2=4--or you might end up with an explosion. Obviously, the model must fit the facts, not the other way around. Only then can you use the model to try to ascertain additional facts.
The idea of a function in computer science comes from the Lambda Calculus. What happens if you start ignoring the math? C already fails to comply with Lambda Calculus because of the side effects, but let's take it further. Imagine a version of C called C' that passes all arguments by reference instead of by value and has no return statement. Hence, when you call a function, you can pass a bunch of arguments (note, some of those arguments might contain only junk values). The function itself uses some of those parameters to do something. Then it may, as a side effect, change some of the parameters. Because the arguments are passed by reference, the function is really changing the caller's variables:
/* Increment x. */
void inc(int *x)
{
    (*x)++;
}

/* ... */

int x = 5;
inc(&x);
We suddenly have a language with no return values and only side effects. Nonetheless, it's easy to see that any program in C easily can be translated into C'. Furthermore, it'd be easy to write a pure Lambda Calculus programming language in C'. C' still would be a useful language if you had nothing else.
Next, let's consider the associative nature of integers:
(a + b) + c = a + (b + c)
In languages that permit operator overloading, such as C++, Python and Haskell, you can implement the "+" operator for objects. There's nothing in the compiler that forces you to make + associative. Of course, it's usually foolish to implement + in a way that isn't associative, but there's nothing stopping you.
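To see just how little the compiler enforces, here is a deliberately bad + in Python, purely illustrative and of my own making; please don't ship anything like it:

```python
class Fuzzy:
    """A number whose + throws away part of the right
    operand, making addition non-associative."""

    def __init__(self, n):
        self.n = n

    def __add__(self, other):
        # Deliberately broken: integer-halve the right operand.
        return Fuzzy(self.n + other.n // 2)

    def __repr__(self):
        return "Fuzzy(%d)" % self.n


a, b, c = Fuzzy(8), Fuzzy(8), Fuzzy(8)
left_first = (a + b) + c
right_first = a + (b + c)
print(left_first)   # Fuzzy(16)
print(right_first)  # Fuzzy(14)
```

The interpreter happily accepts it, and (a + b) + c no longer equals a + (b + c).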
Similarly, in Haskell, you can declare that a type is a member of the type class "monad"--that is, it implements the interface--even if your type doesn't mathematically qualify as a monad. I'm sure that you can expect bad behavior, in some manner or another. But as intelligent as Haskell compilers are, they won't stop you (which is mentioned here).
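The same goes for duck-typed Python: the sketch below (again my own contrivance) has the right pass_to shape but breaks the spirit of the interface, and generic code written against the interface quietly misbehaves:

```python
class Stubborn:
    """Has the monad interface's shape but not its meaning:
    pass_to ignores the callback entirely."""

    def __init__(self, value):
        self.value = value

    def pass_to(self, callback):
        # A lawful monad would invoke callback(self.value).
        return self


def increment(monad):
    # Generic code that trusts the interface...
    return monad.pass_to(lambda n: monad.__class__(n + 1))


print(increment(Stubborn(5)).value)  # 5, not the expected 6
```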
To belabor the point in the interest of humor, consider the identity principle: a thing must equal itself. My buddy, Leon Atkinson, used to jokingly argue with me that "Nothing makes sense if you violate the identity principle". Well, observe:
$ python
>>> class A:
...     def __eq__(self, other):
...         return False
...
>>> a = A()
>>> a == a
False
Notice that the universe didn't cease to exist. I didn't get a core dump or even an exception. I bet if I embedded this naughty class in some dark corner of my Python program, and never made use of it, my program would run.
Now consider that in SQL, "NULL" can best be translated as "nothing"; it's not FALSE, 0 or "". Observe the identity principle in SQL:
mysql> SELECT FALSE = FALSE;
+---------------+
| FALSE = FALSE |
+---------------+
|             1 |
+---------------+
1 row in set (0.06 sec)

mysql> SELECT NULL = NULL;
+-------------+
| NULL = NULL |
+-------------+
|        NULL |
+-------------+
1 row in set (0.00 sec)
Notice that NULL does not equal NULL. This leads to my point: In computer science, nothing [still] makes sense [even] if you violate the identity principle.
Okay, so what? Should we forsake our math background? Is purely functional programming a lame holdover from a bunch of bitter mathematicians? If it works, it works. Right? No. Sometimes less is more. In math, additional constraints on a set of elements may lead to more theorems that you can prove about that set of elements. Sometimes being more restrictive about what code is allowed to do allows you to make assumptions that lead to optimizations. Sometimes, entire new features can result.
Consider the bad old days of assembly programming. As programs got larger, and as more pieces relied on one another, assembly programmers eventually discovered that you have to be pretty stringent about who gets to modify what. These days, we've learned to be rather controlling about which subroutines are allowed to modify which registers. You can push and pop all you want, but by the time you leave a subroutine, the registers that aren't supposed to be changed better still have their original values intact.
Similarly, as we moved away from structs in C to objects in C++, we started being a lot more strict about who could modify or even know about the internal structure of an object. As soon as you break encapsulation and start poking around the inside of an object in code using that object, your own code becomes tied to the implementation of that object. What seemed like freedom is really slavery.
Now, consider the order of evaluation of C function parameters. Expert C programmers will know where I'm going with this. What does the following program print?
#include <stdio.h>

int *inc(int *x)
{
    printf("%d\n", *x);
    (*x)++;
    return x;
}

void printboth(int x, int y)
{
    printf("x: %d, y: %d\n", x, y);
}

int main(int argc, char *argv[])
{
    int x = 0;
    printboth(*inc(&x) + 10, *inc(&x) + 100);
    return 0;
}
The question comes down to which inc(&x) is evaluated first. The ANSI standard actually leaves the order of evaluation of function arguments unspecified, but many C compilers, including the one used here, evaluate them right to left:
$ ./test
0
1
x: 12, y: 101
Notice that y ends in 1, whereas x ends in 2. Indeed, the y argument was evaluated first. Can you guess what would get printed if you translated this code into Java? [see Footnote 4]
It makes me wonder, if a function's side effects can't influence anything beyond the monad in a purely functional programming language such as Haskell, and if the programmer is completely in control of the monad, does that make the program easier to understand? If you don't have to worry about the side effects of some random function that you didn't even write, do you get fewer bugs?
As a further point, consider what "Joel on Software" reminds us of when explaining MapReduce, the algorithm that makes Google so massively scalable [see Footnote 5]: if a list of functions [isn't] allowed to have any side effects, you can safely call them all in parallel. Considering how hard it is to make something thread safe, suddenly parallel programming in a functional programming language such as Haskell, Erlang or Alice seems a lot nicer!
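Joel's observation is easy to demonstrate even in Python. Because the function below has no side effects, its calls can safely run in any order, or all at once, with no locking; the sketch uses the standard library's thread pool, whereas Haskell or Erlang can make the same guarantee at the language level:

```python
from concurrent.futures import ThreadPoolExecutor


def square(n):
    # Pure: the result depends only on the argument, so the
    # calls are free to execute concurrently.
    return n * n


numbers = list(range(10))
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, numbers))

print(parallel)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The parallel result always matches the sequential one, precisely because square has nothing to race on.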
Purely functional programming languages such as Haskell are very strict in their approach to side effects. Monads can be used to compensate for this strictness in order to implement a lot of the features we have come to take for granted in non-strict languages. They can do this without compromising the strictness of the language as a whole. Although staying true to an underlying mathematical model isn't fundamental to the continued existence of the universe, sometimes doing so can have substantial benefits. Sometimes the ideas that are hard to understand and that seem overly restrictive can open up whole new worlds in the universe of computer science.
I'd like to thank Brandon L. Golm and Brennan Evans for their substantial feedback on this article.
Footnote 1: I've occasionally heard Lisp programmers such as Paul Graham bash the concept of design patterns. To such readers I'd like to suggest that the concept of designing a domain-specific language to solve a problem and then solving that problem in that domain-specific language is itself a design pattern that makes a lot of sense in languages such as Lisp. Just because design patterns that make sense in Java don't often make sense in Lisp doesn't detract from the utility of giving certain patterns names and documenting them for the benefit of all the less experienced programmers.
Footnote 2: Strictly speaking, "pass" may return a value of "MonadExample" that wraps a value of "SomeOtherInnerValueType". This added bit of flexibility is difficult to describe without bewildering the reader.
Footnote 3: Cyclone is an interesting variant of C that has special syntax for this concept of "a pointer that might be NULL vs. a pointer that can never be NULL". It too could catch this class of errors at compile time.
Footnote 4: It prints x: 11, y: 102. If you attempt this exercise, remember that instead of using a pointer to an int, you should use an object that wraps an int.
Footnote 5: The same idea is also central to Erlang, a functional programming language used by Ericsson.
Shannon Behrens is a self-professed language lawyer who works for IronPort Systems in San Bruno, California. His eventual goal is to implement a Python-like systems language and then develop a practice kernel in that language.
Null
The null comparison has some analogies to the liar paradox, because if we negate null it is still null. If I know the knowledge of flight is unknown to me, but I also negate that the knowledge of flight is unknown to me, then I better leave the windows alone.
Re: footnote 1
"... bash the concept of design patterns... domain-specific language to solve a problem ... is itself a design pattern... design patterns that make sense in Java don't often make sense in Lisp doesn't detract from the utility of giving certain patterns names..."
I absolutely *agree* with the characterization above. However the BIG PROBLEM is that these so-called patterns are being touted as _universal_ ones, e.g. they apply no matter which language you use, which as you also say above is most definitely untrue.
I do find myself using pattern names nowadays, even though I felt the idea was rather bogus from the beginning. The problem is that "patterns" are being presented as a silver bullet/panacea when they are actually nothing more than an occasional mere _convenience_ to facilitate communication.
The urge to hype is partly due to the fact that it wouldn't have taken off without overselling it, but even if I do find pattern vocabulary *occasionally* useful, I find that hype most disingenuous.
Identity
I don't get your notion that using an equals sign to represent an irreflexive relation is 'breaking the identity principle'. Redefining the meaning of the equals sign does not change the meaning of equality, of being the same thing. Null is still equal to null, even if the relation you're using for = doesn't say so.
You seem to have circumscribed mathematics rather narrowly. Do you consider mathematics done with paraconsistent logic not to be mathematics, for instance?
More Monads - Am I getting it?
Okay, so a monad is a design pattern that can be used to encapsulate some functionality that will generally work on any type (otherwise generalizing the functionality wouldn't be helpful, since you would have to still deal with all the different types).
Am I the only one who thinks of blocks and yield in Ruby when they hear that? The block is the part of the functionality that you are encapsulating in the monad (potentially w/ side effects) and the function that takes the monad calls "pass" (or yield) to get that functionality executed on the values passed to it. Assuming you could encapsulate the block (say in a proc object) and its state (think closure) then you could reuse that block just like how you could reuse a monad.
I have only been reading about Haskell and monads for about two hours though, so I might be completely missing the point here. Can somebody straighten me out if I am off base please?
Awesome article
I really enjoyed reading this. I've been struggling through Haskell 98 Report, A Gentle Introduction to Haskell (Yeah right ;) and Yet Another Haskell Tutorial for a few months now. It was nice to see the examples in Python and JavaScript. Keep up the good work.
peace,
core
other Haskell tutorials
I'm really enjoying:
Haskell for C Programmers
A Tour of Haskell's Syntax
about C and ANSI
It seems a matter of CPU arch...
on i386, you get:
$ ./a.out
0
1
x: 12, y: 101
---
on powerpc:
0
1
x: 11, y: 102
---
both are gcc 4.0.3
sequence points.. arg evaluation order unspecified in C99
I am not completely sure now, but look at Appendix C of the ANSI standard. Read through the list, and read item #10 of section 6.5.2.2, which states "The order of evaluation of the function designator, the actual arguments, and subexpressions within the actual arguments is unspecified, but there is a sequence point before the actual call." A function call like f(x,y) must have x and y evaluated fully before f is called, but x and y can be evaluated in any order. [Note that the comma between x and y is not the comma operator as defined by the standard but is just syntax inside the parentheses of a function call. Also, note that f can be evaluated before or after x and y, but it isn't *called* until all of f (obviously), x, and y have been evaluated fully. Ex, (p+x)(x++,x--) is basically illegal because even without looking at how the params are used inside the function, the actual function that is to be called can be any of 3 possible values depending on when p+x is evaluated in relation to x++ and x-- (first or last; after x++ but before x--; or after x-- but before x++).]
Thus it is unspecified which *inc(&x) is computed first. For the program to follow the standard strictly, basically you are supposed to get the same results either way (not the case in the current example) or else condition the statement with #if #else (perhaps to execute one way or the other depending on the compiler used).
The natural disclaimers apply (IANAExpert, etc.)
Evaluation order
Yep. The function arguments will usually be evaluated right-to-left on a register-starved architecture like x86 which pushes the parameters on the stack in that order ... but the compiler is free to evaluate them in another order (or simultaneously, in interleaved instruction streams) if it feels like it. On a ppc (and several other architectures), the first few parameters are passed in registers, so there's no evaluation order that's more natural than another.
Your increments example is, AFAIK, perfectly legal C, but the standard says that any of the three behaviors is valid and you can't count on any particular one happening. So, it's legal, but not all that useful.
(Disclaimer: I am not a language lawyer.)
Re: Why the Universe Won't Come Crashing Down If You Violate the
It is not the universe that will come crashing down, but your own ability to perceive the universe. The universe, of course, will continue to exist and behave according to its rules.
On a different topic, I might ask you, "why have you slighted PHP by not including an example?" I might quickly follow that question with, "How would I implement a monad in PHP 5?"
To that, you would probably say, "I have left this as an excercise for the (perverse) reader."
I would then be forced to exclaim, "Touche! The student becomes the teacher!" and then regret using a silly French word, possibly misspelling it and definitely mispronouncing it.
Teasing aside, I do hope you continue to write articles like this for LinuxJournal. As with all your work, they are top quality.
Monads in PHP
> On a different topic, I might ask you, "why have you slighted PHP by not including an example?"
I could claim that it's because PHP doesn't have closures that I know of, but the real reason is because a pack of wild Haskell programmers would probably hunt me down and chain me in a room with nothing more than a Cobol compiler ;)
> Teasing aside, I do hope you continue to write articles like this for LinuxJournal. As with all your work, they are top quality.
Thanks for the encouragement!
Hey! What's with the slams on COBOL?
The earliest databases were interfaces for COBOL - IDMS and CICS for instance.
And *those* were certainly more capable than that toy MySQL.
Vive l' Postgres!!
so a monad is a design pattern
got it
Monads
It may be time to drop the word "monad" from Computer Science. Since it first started to appear a few years back, it has appeared in numerous places and usually without a definition, so that many are now running hither and thither trying to come up with a definition for these beasties. One hears, for example, that Microsoft's new Longhorn OS will be based on monads. A simple web search will quickly produce references to things like HTML monads, XML monads, C# monads, with little explanation of what they might be in each case.
Trying to guess what circuits exist in a black box from input and output patterns can produce many different results. It is usually better to be provided with a schematic to understand what is happening. So, it is desirable that the guy who first coined this "monad" thing would come out and tell us what he meant. Then, once we have a definition, it would be quite easy to see why a particular piece of code is a monad or not. But, giving examples claiming that "this is a monad" and "this is one too" only leads to conjectures about what was meant and to uncomfortable moments where one may at first think that one understands, which is quickly followed by a next moment when one thinks that perhaps one does not.
Does anyone out there have a definitive definition of monad? If not, I would suggest that the term be dropped. It is notable that "design pattern" is now another beastie that has been effectively dropped because it is used in so many different ways that no one can be sure anymore about what it might be.
Monad reply
A monad is simply a monoid object in the category of endofunctors. See Saunders Mac Lane's Category Theory for the Working Mathematician.
The reason it seems to you that monad might not have a precise definition is that:
1. It doesn't map very clearly to languages that don't have type systems based on System F or other typed lambda calculi (e.g. ML or Haskell), as it then becomes difficult to define the concept of functor and natural transformation in these languages.
2. To define monad properly, you should use the following concepts:
Monoid
Category (and object and morphism)
Functor
Functor composition
Natural transformation
both ways of composing natural transformations
Perhaps initial objects and products too.
It might also be useful to define the Kleisli category, as that's where monads find their use in Haskell (they tend to be used to define 'functions with extra stuff signified by the type of their return type') and is where Haskell's >>= operator comes from.
Fortunately, it is not necessary to understand all that to actually use monads, any more than it is necessary to understand relational calculus to use SQL or to understand boolean rings to use OR and AND.
Incidentally, the Longhorn thing was not to do with the category theoretic monads. I think the original version of PowerShell was named monad after the use of the term in Leibniz's philosophy.
Monads are originally from
Monads are originally from category theory (the same term is used in predicative logic to indicate the unary operators, which is only ~).
More recently (say, 10-15 years) it has been adopted in functional languages as a method of expressing containment and processing on the contained items. This has definite links to the category theory meaning. The monads of the functional language haskell allow elegant IO within a perfectly pure language.
C#'s 'monads' are basically a limited adoption of haskell's - apparently this connection is not ephemeral - the implementors know haskell.
I'm unsure about HTML/XML Monads. I have a feeling you are confused on that matter.
Monads
The definition is actually quite concrete both in math and CS. For *precise* details, please see:
I'm not sure how monads would work in, say HTML, since HTML is a markup language and doesn't have functions. :-/
Definitive definition of monad
The word monad came to computer science from mathematics, more precisely from category theory (this was done by Eugenio Moggi). In category theory, monad is a structure with a standard and well known definition.
I do not know how people misuse the word in computer science, but given its historical origin, it should only be used in connection with those aspects of computer science which are mathematically modeled by (mathematical) monads. You do not really have the freedom to "redefine" monads here.
Neither is it true that monads in computer science have no definition. In good computer science they have a precise definition, such as e.g. in Haskell, and this definition is based on the mathematical one. Just because a bunch of ignorant people use the word as a buzzword is not a reason to drop the word from all of computer science. It is the ignorant people who should be dropped.
Avoiding lambdas in python code
Nice article. However, I had quite a hard time decoding the lambdas in the Python example, and I rewrote them as named functions. Usually, it is a bad idea to use lambdas in Python, since a named function can always do the same thing.

def add(left_monad, right_monad, monad_class):
    def left_pass(left_value):
        def right_pass(right_value):
            return monad_class(left_value + right_value)
        return right_monad.pass_to(right_pass)
    return left_monad.pass_to(left_pass)
n1 = Maybe(5)
n2 = Maybe(6)
n3 = Maybe(None)
print n1, "+", n2, "=", add(n1, n2)
# => Maybe(5) + Maybe(6) = Maybe(11)
print n2, "+", n3, "=", add(n2, n3)
# => Maybe(6) + Maybe(None) = Maybe(None)
PS: looks like it is impossible to get formatting correct... you can download this at my homepage.
Avoid lambdas?
I feel a bit silly replying to a year and a half old thread, but:
"Usually, it is a bad idea to use lambdas in python, since named function can always do just the same thing."
That's like saying we shouldn't use multiplication because it can just as well be done with a loop and addition.
I don't get why people are so afraid of such simple concepts. And Shannon, I quite frankly don't get the point either of your section "What's with all the math then?".
I'm not trolling, but imo the notion of wanting to drop or avoid things because they are hard is not the correct mindset to make progress for future generations to enjoy.
lambdas versus named functions
I know where you're coming from. In fact, I wrote the same code as you, but I really disliked the way the code flowed:
def1
def2
def3
call3
call2
call1
I was worried readers would find this more unreadable than:
lambda1
lambda2
lambda3
I think it's fair to say that both ways are awkward. I think in the longterm, readers could learn to visually parse the lambda version more accurately since the code isn't "split". Like I said, I'm glad Haskell has special syntax for this.
Design patterns only for those with little experience?
Nice article but I agree with Joshua that I miss a good explanation of the "why" of all this.
I do take exception with your comment in footnote 1 where you suggest that Design Patterns are only for those programmers with little experience. ;-)
Design patterns only for those with little experience?
> Nice article but I agree with Joshua that I miss a good explanation of the "why" of all this.
Hmm, I thought I went to great lengths to explain that in my "Monads as a Design Pattern" section. Specifically:
You need monads to understand Haskell.
You need Haskell because it's a functional programming language.
You need functional programming languages because they're kicking butt right now in the programming contests.
> I do take exception with your comment in footnote 1 where you suggest that Design Patterns are only for those programmers with little experience.
I apologize, but you have misread my statement. (More below.)
> ;-)
Absolutely. We're in 100% agreement.
argument evaluation
The article says:
"The question comes down to which inc(&x) comes first. I believe it's actually part of the ANSI standard, although I could be wrong, that arguments are evaluated right to left ...
Notice that y ends in 1, whereas x ends in 2. Indeed, the y argument was evaluated first. Can you guess what would get printed if you translated this code into Java?"
That is not correct at all. The order of evaluation for arguments is undefined, so a conforming program can not rely on it. In addition, multiple side effects on the same variable without a sequence point in between are not only unordered, but undefined. I believe that applies here, even though the change is inside of another function. So a conforming compiler could do something completely unexpected like deleting all your files. :)
Err... maybe that was your point. I didn't quite understand the point about experienced C programmers. If so, nevermind.
argument evaluation
> Err... maybe that was your point.
giggle. Yep ;)
> I didn't quite understand the point about experienced C programmers. If so, nevermind.
I stole this joke from "Expert C Programming: Deep C Secrets".
I have stared at your Maybe
I have stared at your Maybe python class for some time, squinting and peering.
I do not understand why the implementation is so verbose, for starters. left_monad? how about x. Secondly, the formatting is really terrible. Even in native haskell there's no reason to dodge a few bind expressions to break out the concepts.
Also, you don't ever explain the why of any of this. The behavior could be easily achieved by simply testing for none in the add function. Why create a new Maybe(None) istead of just returning self.value?
What does any of this achieve?
I have stared at your Maybe
> I have stared at your Maybe python class for some time, squinting and peering.
Thanks for reading ;)
> I do not understand why the implementation is so verbose, for starters. left_monad? how about x.
I was going overboard in the hope of making it more readable.
> Secondly, the formatting is really terrible.
The indentation was my own. The lack of blank lines was done by "Linux Journal". For that, I apologize.
> Even in native haskell there's no reason to dodge a few bind expressions to break out the concepts.
> Also, you don't ever explain the why of any of this.
Hmm, I thought I went to great lengths to explain that in my "Monads as a Design Pattern" section. Specifically:
You need monads to understand Haskell.
You need Haskell because it's been kicking butt lately in the programming contests.
> The behavior could be easily achieved by simply testing for none in the add function. Why create a new Maybe(None) istead of just returning self.value?
> What does any of this achieve?
The idea is that the add function can be used with *any* monad. Other monads might work completely differently. The Maybe monad is very simple.
>.
Fair enough. I admit that I have not learned every functional programming language. Thanks for the paragraph above, I find it interesting. Haskell has my fancy right now, because I'm debating internally the question of function vs. purely functional. Obviously, I don't have all the answers.
Do you really need Haskell?
> You need Haskell because it's been kicking butt lately in the programming contests.
Hm. We, the Dylan Hackers team, came in second, but mainly because we made mistakes in choosing the meta-game strategy for the second round. We kicked Haskell's butt in round one. And we won the prize for the most readable and refactorizable code, which is not a light aspect in maintaining real-life systems.
For the life of me, I cannot wrap my head around monads, or read the type errors you get out of Haskell or ML compilers. Monads may be nice as a mathematical tool, but they're unwieldy as a programming abstraction.
Oh, and speaking of a Python-like system language: we already have that, and it's called Dylan. We share your dream of using that to write a new system from scratch.
Dylan, eh?
Andreas, yes, I have been keeping my eye on Dylan ;) You said, "Oh, and speaking of a Python-like system language: we already have that." Does that mean there's a compiler for Dylan? I thought there was only an interpreter. I shouldn't be surprised, since there are compilers for other derivatives of Lisp. I wonder if it has Scheme-style macros :-/
Ok, I will consider myself "encouraged" to learn Dylan ;)
Yep, Dylan
There's not only one, but two Dylan compilers in existence. One is Gwydion Dylan, which was written by the people at CMU who were also responsible for CMU CL. Project lead was Scott Fahlman, known as the inventor of the smiley.
The other compiler, now known as Open Dylan, was written at the now-defunct company Harlequin. A lot of the people who worked on the Lisp machine in better times long gone ended up at Harlequin, and worked on the LispWorks compiler there. These were also the people who worked on the Dylan compiler. Harlequin eventually went bankrupt, and was split apart. The Dylan compiler ended up in the hands of the authors, because commercial interest had faded, and they eventually decided to release everything under an open source license. This code is impressive; the garbage collector alone represents 30 person-years of work by experts in the field.
Dylan was designed to bridge the gap between the power of dynamic languages and the speed of static languages. It uses a type annotation system, much like CL does, and the compiler does what is called "optimistic type inferencing" to optimize away type checks and generic function dispatch. It is possible to get within the same order of magnitude in speed as if the program had been written in C, in a lot of cases even matching the speed.
Regarding macros: Dylan does have proper hygienic macros. It's missing procedural macros, but there's a proposal for an extension available, and Open Dylan (ex Functional Developer, ex ex Harlequin Dylan) even used procedural macros internally.
Thanks for your interest in Dylan. If you have any questions, feel free to drop by on #dylan on irc.freenode.net.
Dylan Hackers
Hey... I watched your talk at CCC and read some of the ICFP site a few months ago, but I haven't looked at Dylan yet since I was still trying to grok Haskell. Perhaps I should just stop wasting time and move straight to Dylan. Might stop by IRC sometime soon...
peace,
core
Nice article.
For people wishing to see an implementation of the monad abstraction in their own language, A collection of links to monad implementations in various languages is handy. It includes links to versions in C++, Clean, Haskell, Java, Joy, Miranda, OCaml, Perl, Prolog, Python, Ruby, Scala, Scheme, Tcl, XSLT, and maybe more at this point.
Nice article, a few issues
Nice article, good to see people trying to get these ideas out to the masses.
There are some issues with it that I'd like to mention.
The term "monad" is misused a couple of times, especially in the
examples in your article. The only thing which is a monad is the type
constructor -- the function on types, never the elements of any given
type. So Maybe is a monad, but (Just 5) isn't.
Another slightly odd thing which you might try to fix with your Python
implementation of Maybe is that Nothing is properly part of the Maybe
type, that is, it's a value of type (Maybe a) which isn't just return
applied to some value. Here, you're using the constructor for the
class as return, but you also get the equivalent of Nothing by
applying the constructor to the value None, which is a little odd.
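One way to sketch what the commenter suggests (my own hypothetical illustration, not the article's class) is a Python Maybe where Nothing is a distinct case, so that Maybe(None) genuinely means "Just None" rather than failure:

```python
class Maybe:
    # Hypothetical sketch: Nothing is tracked separately from the
    # wrapped value, so Maybe(None) is "Just None", not failure.
    def __init__(self, value):
        self.value = value
        self.is_nothing = False

    @classmethod
    def nothing(cls):
        m = cls(None)
        m.is_nothing = True
        return m

    def pass_to(self, function):
        # Nothing short-circuits; any Just value is passed along.
        if self.is_nothing:
            return Maybe.nothing()
        return function(self.value)

    def __str__(self):
        return "Nothing" if self.is_nothing else "Just %r" % (self.value,)
```

With this version, `Maybe(None)` prints as `Just None` while `Maybe.nothing()` prints as `Nothing`, removing the oddity the commenter points out.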
I'd also have some reservations with the comment that you need to
understand monads to understand Haskell and vice versa. That's sort of
true, but it really shouldn't be a difficulty, because you pass back
and forth between them in learning. You don't need very much Haskell
to understand some simple examples of monads in Haskell (Lists are
probably the best example), and you don't need to know all that much
about monads to understand most Haskell code.
By the way, did you read my article?
It takes a rather different perspective on monads than a lot of the
tutorials do, and it's one that I've found people new to monads have
an easier time with.
The point about monads being a design pattern I actually wouldn't take
too much exception to, except for the fact that I don't really like
the term "design pattern" in the first place. :) Too often, the things
people call design patterns are just ordinary functions or general
interfaces in a more expressive programming language.
In Haskell, you have the Monad class which unifies the use of monads
throughout the language, so that you don't end up with the same
structures of bind and return and liftM and join over and over on
different types. (Which is the point at which I'd think it more
reasonable to call it a design pattern.) It started out in programming
use as a design pattern, and now it's just a typeclass whose members
satisfy some laws.
I suppose one could say that the general use of the Monad class is a
design pattern, but I'd argue that it's no more a design pattern than
the use of the Num class (which is what Haskell uses to unify operations on numeric types). :)
- Cale
Nice article, a few issues
Thank you very much for your corrections as well as your helpful comment in general.
Part of the reason that
Part of the reason that nothing really bad happens if you have side effects in your program is that you can actually represent side effects in the lambda calculus without too much difficulty: each function takes and returns an extra argument, which is the state of the system. Things passed by reference are actually instructions for interacting with the state, not values. Of course, this is essentially the monad concept. You can look at a language like C (or C') as having everything in one huge monad, which the compiler handles for you, and the math works out just like using small explicit monads in Haskell. Of course, your language features aren't operating directly on the lambda calculus elements that they look like (i.e., "x" isn't a variable with an integer value in the lambda-calculus sense, it's a variable with an address value, and the state at the address holds the integer). The "everything is constant" functional aspects come through in that you can't write "&x = &y;" or "5++" in C.
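A tiny Python illustration of that explicit state-passing style (my own sketch, with hypothetical names): each "stateful" action takes the world state and returns a (result, new_state) pair, and a runner threads the state through, much like one big monad.

```python
def get(name):
    # Read a value from the state; the state is returned unchanged.
    def action(state):
        return state[name], state
    return action

def put(name, value):
    # "Write" by returning a fresh state dict; nothing is mutated.
    def action(state):
        new_state = dict(state)
        new_state[name] = value
        return None, new_state
    return action

def run(actions, state):
    # Thread the state through a sequence of actions.
    result = None
    for action in actions:
        result, state = action(state)
    return result, state

result, final = run([put("x", 1), get("x")], {})
print(result, final)
```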
The difference between C and Haskell
The difference between C and Haskell, regarding monad behavior, is that in C, every function is in the IO monad, whereas in Haskell, only some functions are, and there are other monads besides IO.
Part of the reason that
Brilliant point.
Many thanks
It warms the cockles of my heart to see somebody in industry talking about such subjects!
Monads in Ruby
MenTaLguY said:
Having thought about it a little bit, the definition of "pass" in
the article is a bit weird -- some monads can hold more than one
value at a time, for instance, so skipping the fact that "pass" is
sort of a "fanout-then-recombine" operation is kind of an
unfortunate omission.
On the plus side, one really important thing you picked up on is
that the data type the monad is built on needs to be a type
constructor -- i.e. a generic data type. That's something I've
been angsting about in my Monads-in-Ruby article, as I've had to
rather deliberately gloss over some of the important type issues
involved because of Ruby's laissez faire approach to typing.
I reply:
I agree. Sorry for the omission. The List Monad in Ruby tries the function with every item in the list. Its approach to computation is "try everything", and that approach is encapsulated in the Monad, right?
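A minimal Python rendering of that "try everything" List monad (my own sketch, reusing the article's pass_to naming):

```python
class ListMonad:
    # The "try everything" monad: pass_to applies the function to
    # every item and flattens the resulting lists together.
    def __init__(self, values):
        self.values = values

    def pass_to(self, function):
        result = []
        for value in self.values:
            result.extend(function(value).values)
        return ListMonad(result)
```

For example, `ListMonad([1, 2]).pass_to(lambda x: ListMonad([x, x * 10]))` holds `[1, 10, 2, 20]` — every item, tried with every outcome.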
Maybe Monad
I've never bothered to try understanding monads, but this is not the first time I read about them. Not the first time I see a Maybe monad too. Could you give another example why a Maybe class might be useful (or say how the one you have is useful)? Maybe(6) + Maybe(None) = Maybe(None) has no obvious explanations. f(any,None) = None would seem pretty easy to define in any language.
This reply might be a bit
This reply might be a bit less erudite than the sibling, but I thought it might be useful to write something a bit more to-the-point. A lot of imperative programs I've seen use a pattern where a routine returns a sentinel value (usually False) for failure. So you get code like this:
def do_something():
    (success, val) = do_somethingelse()
    if not success:
        return False
    (success, val2) = do_another()
    if not success:
        return False
    (success, val3) = do_more(val, val2)
    if not success:
        return False
    ...
    return final
Exceptions are a way out of this, but that is not ideal when it's not actually an error for the various routines to fail. In Haskell, you can have all the routines return a Maybe type and do:
do_something = do val1 <- do_somethingelse
                  val2 <- do_another
                  val3 <- do_more val1 val2
                  ...
                  return final
The Maybe monad will automatically collapse the whole computation to Nothing if one of the sub-processes returns Nothing; otherwise, the return value will be (Just final).
I apologize for the indentation; I can't seem to get proper pre-formatted text from this comment entry box.
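For readers following along in Python, here is a rough, hypothetical rendering of that do-block in the article's pass_to style; the three routines are stand-ins, and None plays the role of Nothing:

```python
class Maybe:
    # Minimal Maybe in the article's style: None signals failure.
    def __init__(self, value):
        self.value = value

    def pass_to(self, function):
        if self.value is None:
            return Maybe(None)
        return function(self.value)

# Hypothetical sub-processes; any of them could return Maybe(None).
def do_somethingelse():
    return Maybe(1)

def do_another():
    return Maybe(2)

def do_more(v1, v2):
    return Maybe(v1 + v2)

def do_something():
    # The nested lambdas play the role of Haskell's do-notation:
    # if any step yields Maybe(None), the whole chain collapses to it.
    return do_somethingelse().pass_to(
        lambda v1: do_another().pass_to(
            lambda v2: do_more(v1, v2)))
```

Swapping any stand-in to return Maybe(None) makes do_something() return Maybe(None) without touching the other steps.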
Avoidance of redundancy, separation of concerns, and factoring
Maybe(6) + Maybe(None) = Maybe(None) has no obvious explanations. f(any,None) = None would seem pretty easy to define in any language.
Yes. And g(any, None), and h(any, None), and foo(any, None), and bar(any, None), and frobnicate(any, None), and redundantize(any, None), and all the rest, are just as easy, of course. And if you ever need versions that don't have the None-handling logic, you can write even more versions of these functions!
The trick that the Maybe monad is doing is to encapsulate the logic that you'd have to implement over and over in those functions in just one place. You write them without having to worry about whether somebody might call them with None as an argument (and in Haskell, in fact, that means that unless the functions explicitly accept arguments of type Maybe, you can never call them with None, guaranteed at compile time). Then, if you need to use them in the context of a computation that only optionally returns a value, you build that computation using the Maybe monad. Or if you actually change your mind, and want the computation to fail with an error as early as possible, you just replace the Maybe monad with the Error monad. Or if you want it to be nondeterministic, you can use the List monad. And so on...
Avoidance of redundancy, separation of concerns, and factoring
Brilliant comment. Wow, I wish I could have said it that nicely ;) I tried, but I fear it didn't come through. | http://www.linuxjournal.com/article/8850?quicktabs_1=0 | CC-MAIN-2014-52 | refinedweb | 8,109 | 62.27 |
The array at position 2 (the a) has not a real char value. So you must either:
// stepthrough the sentence array, get the first character
// of each string @ each index, and store...
Take a look at :
String (Java Platform SE 6)
and
Double (Java 2 Platform SE v1.4.2)
Here's a small example
try{
The first problem is that you operate on only two ArrayLists. Both exist on the heap and not - like you may think - on the stack of the recursive calls. So after the first three calls the...
Take a look at this:
public static interface IFunction<T> {
public void callFunction(T t);
}
private static IFunction<String> functionPointer = new IFunction<String>() {
Of course!!! You are right. There I was a little bit blockheaded:)
The problem is that the type of the value of the map is the wildcarded type Collection<EventHandler<? extends Event>>. To this type you can cast. When you write your method, it returns a concrete...
The program would be more convenient with different threads, because the main thread stops every time you call server.accept() and stream.readUTF(). Both methods stop the thread in which they are...
If you think of an OS like - for instance - a Linux kernel which you can install on a hard drive: that's impossible. Java needs a VM and thus a basic OS. There is no way to evade this fact. But I...
You should use dynamic structures in this case:
public static void main(String[] args) {
ArrayList<Integer> inputVals = new ArrayList<>();
Integer temp = 0;
Scanner scanner = new...
I think so. I'm not a Java networking expert and I don't know the best practices. Anyway, here's a small example
Client impl:
package example.socket;
import java.io.DataOutputStream;
import...
I strongly recommend using NetBeans and the JavaFX Tools when you want to develop GUIs in JavaFX. It's the native IDE for the framework.
JavaFX : JavaFX Tools
Netbeans: JDK 7 with NetBeans
I think that there exists no better language for network programming than Java. There exist several frameworks and standard specifications at any scale.
One of the core and basic techniques are...
The two inputs result from the two constructor calls (you make the input operation in the constructor):
First:
public static void main(String[] args) {
calc_main c=new calc_main();
and...
The easiest way is to use Long.toBinaryString but I think that's not the purpose of the thread;)
If you have a JDK you can also check the source of the toBinaryString method; there you can find a...
The simplest way should be an extra break condition where you compare the parameters r and c to the row and column class var. But I'm not sure if you will go through the maze the right way then. ...
It happens at the exit of the maze. The exit condition doesn't work in this case because there is no wall.
char[][] test = new char[][] {
{ '*', '*', '*', '*', '*' },
{ '*', ' ', ' ',...
Regarding the interface view of multiple inheritance, you could use two or more interfaces; it will work.
Apart from that, you can simulate multiple inheritance by declaring and instantiating abstract...
Just a hint from a programmer: it's bad style to exit methods or return values from functions at more than one point. I didn't mean it in a bad way.
you make an assignment:
test = true means in Java that you set the var test to true. And in Java the assignment returns its value, so it is always true.
Give (test == true) a try.
P.S.
In pascal...
I'm not sure, because I cannot debug the code due to the missing Recipse class (I know it's easy to implement, but it's late ;)
but I think the start value of currentRecipeNumber is wrong. You...
If you have NetBeans and JavaFX in your environment, then give that a try:
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package ztest;
...
I don't know how to resolve this, but if you have a newer version of Eclipse then give the marketplace a try. Search there for Maven and you will find the plugin. If I remember right you can install it...
And be careful with encapsulating complex objects. Strings, Integers and so on are always newly initialized during assignments, but check this for instance:
public static class...
The reason is in the questionMissed method
for (int i = 0; i < questionWrong.length; i++) {
if (!userAnswers[i].equals(correctAnswers[i])) {
i =...
Check the functionality of postincrement and preincrement
return ++i;
will work
#include <LevelGodunov.H>
Actual constructor.
Inside the routine, we cast away const-ness on the data members for the assignment. The arguments passed in are maintained const (coding standards).
Take one timestep for this grid level.
Compute the time-centered values of the primitive variables on cell faces.
This API is used in cases where some operation over the whole level must be performed on the face-centered variables prior to the final difference update. Examples include incompressible flow and MHD, in which it is necessary to compute the projection of a face-centered vector field on its divergence-free part. To complete the differencing, it is necessary to call the member function computeUpdate().
Compute the increment in the conserved variables from face variables.
Compute dU = dt*dU/dt, the change in the conserved variables over the time step.
Some.
For each test case output whether the permutation is ambiguous or not.
Adhere to the format shown in the sample output.
Sample input:
4
1 4 3 2
5
2 3 4 5 1
1
1
0
Sample output:
ambiguous
not ambiguous
ambiguous
When I press the submit button it's giving
"the requested page could not be found"......
What's wrong???
The submit page is still not found.
anshuman, don't worry about that error. It doesn't actually affect your submission.
It seems like no one else had any problem with this problem... but I can't understand it.
Can someone please explain what these different permutations actually are?
The statement itself explains it pretty clearly. What line don't you understand exactly?
How are the ambiguous permutation and the inverse permutation similar?
I know I'll feel stupid after reading the reply.
You mean this line from above?
You create a list of numbers where the i-th number is the position of the integer i in the permutation. Let us call this second possibility an inverse permutation.
Can you give an example showing a permutation, its inverse permutation and an ambiguous permutation similar to that?
A different example from the one given above, please.
Someone?
Original permutation = 1 3 2 4
Its inverse permutation is 1 3 2 4, and so it is ambiguous.
If Original permutation was 1 4 2 3, its inverse would be 1 3 4 2 and it would be non ambiguous.
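To make that check concrete, here is a small sketch (mine, not the problem setter's; the function name is my own) that builds the inverse permutation in one pass and compares it element-wise with the original:

```python
def is_ambiguous(perm):
    # perm holds a permutation of 1..n; perm[i] is the (i+1)-th number
    n = len(perm)
    inv = [0] * n
    for i, v in enumerate(perm):
        inv[v - 1] = i + 1        # 1-based position of the integer v
    return inv == perm            # ambiguous iff inverse == original

print(is_ambiguous([1, 4, 3, 2]))     # True  -> ambiguous
print(is_ambiguous([2, 3, 4, 5, 1]))  # False -> not ambiguous
print(is_ambiguous([1]))              # True  -> ambiguous
```

This runs in O(n), which matters since n can be up to 100000.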
The sample input and output aren't clear... If possible, can someone help me out?
hello all,
Please help me. I'm getting the right answer on my computer, but when I submit it says wrong answer.
Here's my code:
#include<iostream>
using namespace std;

int main()
{
    int n, i = 0, flag = 0;
    cin >> n;
    if (n != 0) {
        int *A = new int(n+1);
        int *B = new int(n+1);
        for (i = 1; i <= n; i++) {
            cin >> A[i];
        }
        for (i = 1; i <= n; i++)
            B[A[i]] = i;
        for (i = 1; i <= n; i++)
            if (A[i] != B[i]) {
                flag = 1;
                break;
            }
        if (flag == 1)
            cout << "not ambiguous\n";
        else
            cout << "ambiguous\n";
    }
    return 0;
}
I have checked my code against the condition "There is exactly one space character between consecutive integers.", but I'm still getting wrong answer.
Please help.
My code is working properly on my computer, but I'm getting a runtime error here. Can anyone help me out please?
You are declaring far too much memory (200000 longs) on the stack. Use heap memory instead (ie declare them globally).
I have applied all sorts of algorithms for execution efficiency! Still it shows 'time limit exceeded'.
What's wrong with my code? [python]

def qsort(L):
    if L == []: return []
    return qsort([x for x in L[1:] if x < L[0]]) + L[0:1] + qsort([x for x in L[1:] if x >= L[0]])

n = input()
arr = []
for i in range(n):
    arr.append(input())
arr = qsort(arr)
for i in arr:
    print i
got it!!!!!!!
What is wrong with the sample input? 2 inputs and 3 outputs?
Nothing is wrong with the sample input. There are 3 inputs.
Hey dude, what is the flaw in this code?
Please let me know...
#include<iostream>
using namespace std;

int y;
int a[100001][100001];
int b[100001][100001];

void algo(int a[][100001], int y)
{
    int count = 0;
    for (int i = 1; i < a[y][0]; i++) {
        if (a[y][i] != b[y][i]) {
            count = -1;
            break;
        }
    }
    if (count == -1) {
        cout << "not ambiguous" << endl;
    } else {
        cout << "ambiguous" << endl;
    }
}

int main()
{
    int z;
    y = 0;
    while (cin >> z && z != 0) {
        a[y][0] = z;
        int t;
        for (int i = 1; i <= z; i++) {
            cin >> t;
            a[y][i] = t;
            b[y][t] = i;
        }
        y++;
    }
    for (int i = 0; i < y; i++) {
        algo(a, i);
    }
    // system("pause");
    return 0;
}
@prashant
You have declared arrays that are too big, and moreover int z cannot store 10^5.
I am getting a runtime error while the code runs fine on my system...
How can I find where the error is?
How do I declare large long int arrays that can hold the integers prescribed in the problem?
Please help!
I have declared array containing the permutation like this
long int *A=new long int(100005);
GLOBALLY.
and am appending a string with the appropriate strings after checking for the required condition.
what could be the problem?
That isn't an array at all. Did you mean to use square brackets, ie [100005]?
Also, you really shouldn't be using cin/cout or waiting until the very end to output results. See the FAQ.
I meant to say that this is how I declare the storage for the permutation. Is this okay? As this can be used as an array, I misnamed it as an array. I could not understand what you mean by "you really shouldn't be using cin/cout or waiting until the very end to output results". Can you please say where in the FAQ you wanted to direct me?
Just print out each result as you process it, rather than adding it to a string and waiting until the very end.
And no, that is not a correct way of declaring memory as I mentioned.
I have tried my program on my computer with a certain number of test cases and its working fine. But here it says wrong answer. Here is my code:
//PERMUT2
#include<stdio.h>
#include<stdlib.h>
#define SIZE 65536
int main()
{
unsigned int i, num[SIZE], t, test;
scanf("%d", &t);
for(i=0;i<t;i++)
{
scanf("%d", &num[i]);
}
if(num[i]==i+1)
test=1;
else test=0;
if(test)
printf("ambiguousn");
else printf("not ambiguousn");
return 0;
}
What made you choose the number 65536? Read the problem statement.
The code is working fine on my laptop but not here. Can anybody help me figure out the problem please?
#include<iostream>
using namespace std;

int main()
{
    int n;
    int cntr = 0;
    cin >> n;
    int a[n+1], b[n+1], c[n+1];
    for (int i = 1; i <= n; i++) {
        cin >> a[i];
        b[i] = i;
    }
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            if (a[j] == i) {
                c[i] = b[j];
            }
        }
    }
    for (int i = 1; i <= n; i++) {
        if (c[i] == a[i]) { cntr++; }
    }
    cout << endl;
    if (n != 0) {
        if (cntr == n) { cout << "ambiguous"; }
        else { cout << "not ambiguous"; }
    }
    return 0;
}
@sukhmeet
You consider only one input case at a time, but there are several input cases. You need to keep taking input until the value of n equals zero.
Read the input/output format given in the question again.
thanks devendra,
Really bad eye of me. But now it says time limit exceeded. I think rather than going for n^2 complexity I should try something faster. What say!!! Let's see... thanks again btw
Getting wrong answer for this code, which seems to work fine with my inputs. Please suggest a case where it doesn't.
#define no_of_tests 100
int main(){
long unsigned int a=0,b=0,c=0,index=1,num=0;
char arr[no_of_tests],str[10];
int i,count=0;
while(1)
gets(str);
if(str[0]==' ')
continue;
for(i=0;str[i]!=' ';i++)
c=10*c+(str[i]-'0');
if(c==0)
break;
for(i=1;i<=c;i++)
a=0;
while(fread(str,1,1,stdin))
if(str[0]=='1'||str[0]=='2'||str[0]=='3'||str[0]=='4'||str[0]=='5'||str[0]=='6'||str[0]=='7'||str[0]=='8'||str[0]=='9'||str[0]=='0')
a=10*a+(str[0]-'0');
else
break;
if(index>a)
b=b^index;
num=num^a;
else if(a>index)
b=b^a;
num=num^index;
index++;
if(b==0&&num==0)
arr[count]='a';
else
arr[count]='u';
b=0;
num=0 ;
index=1;
count++;
c=0;
for(i=0;i<count;i++)
if(arr[i]=='u')
printf("not ambiguousn");
printf("ambiguousn");
Dear Admin
My program is working properly. I satisfied all the limiting conditions as well. But still when I am submitting it, it is showing wrong. I couldn't find where the problem is; can you please help me out?
^ Did you test your program on the sample input (the complete sample input)? Is the output from your program 'exactly' like it's supposed to be?
Yes, I checked for the complete sample input; the output is exactly like it's supposed to be.
Are you sure ?? ... output for each test case is on a new line, your program does that ?
yes
@Admin - Same problem - Correct on sample input, but says wrong answer.. I've declared all variables as long int in C++.. All outputs are on new line; please let me know where I have gone wrong.. BTW my complexity is n and not n^2 ! :-) ..
Your code gives the wrong answer on the majority of inputs; it was lucky to pass the sample input. Giving you a test case would make things far too easy, but note that in order to be ambiguous the inverse permutation must have every element identical to the initial permutation.
Oh thanks Stephen, got it running..
@Stephen-my solution times out...
any hints..plz.. ?
n can be up to 100000. Two nested loops up to 100000 have no chance of running in time. I'm afraid you'll have to think of a completely new idea.
@Stephen-
yup..i got AC finally..the problem was exactly what you pointed out..i managed to put it in one loop..
thnx :)
#define MAX 100001
#include<ctype.h>
int str_len,str_len2;
char temp[MAX];
char arr[MAX];
int len = 1;
int var,var2;
int value;
int main(void)
temp[0]= ' ';
while( len > 0 && len <= 100000)
int i=1;
int flag=1;
scanf("%d",&len);
if(len == 0)
fflush(stdin);
fgets(arr,MAX,stdin);
str_len = strlen(arr);
for(var = 0; var<str_len-1; var++)
if(!(isspace(arr[var])))
temp[i]=arr[var];
i++;
str_len2 = strlen(temp);
for(var2 = 1; var2 < str_len2; var2++)
value = (temp[var2]-48);
if((temp[value]-48) != var2)
flag = 0;
if(flag)
Can someone please simplify this question? How do we get inverse permutations?
I am getting Runtime Error..Please Someone HELP!!
@Vidura Yashan: Maybe you can read this article
This program is providing the expected output according to the sample input and the question's logic, but I always get a wrong answer response. If it is wrong then please provide the specific error description.
#include<malloc.h>
unsigned long n,*in,*inv,i;
int flag;
do
flag=1;
scanf("%lu",&n);
in=(unsigned long *)malloc(sizeof(unsigned long)*n);
inv=(unsigned long *)malloc(sizeof(unsigned long)*n);
for(i=0;i<n;i++)
scanf("%lu",&in[i]);
if(in[i]>n)
i--;
inv[in[i]-1]=i+1;
if(in[i]!=inv[i])
{ flag=0;
if(flag==0)
printf("not ambiguous");
else if(flag==1&&n!=0)
printf("ambiguous");
free(in);
free(inv);
}while(n!=0);
return 0;
It doesn't even give the right answer for the sample input. Read the FAQ if you do not know how to test your code properly.
Is "out of memory" a compile error???
#include<iostream>
using namespace std;
{ unsigned int num[100000], per[100000];
float n;
while(1)
{ cin>>n;
if(n==0)
break;
for(float i=;i<n;++i)
cin>>num[i];
for(i=0;i<n;++i)
per[num[i]]=i;
int flag=0;
if(num[i]!=per[i])
cout<<"ambiguous"<<endl;
cout<<"not ambiguous"<<endl;
Will someone please point out the compile error in my code!!!
Can anyone please explain to me the meaning of
"internal error in the system"...
My code is running correctly on my PC...
What does this mean??? I am confused!!
Is there anything special that should be kept in mind for this problem? It seems pretty straightforward to me, but I can't seem to find where my code could fail.
Thx,
SB.
There's nothing special in the problem. Your method of reading an integer then trying to store it in a char seems pretty special to me though :)
@ Stephan: Can you figure out where I am going wrong? Here's the code...
import java.util.Scanner;
public class Main {
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
int n;
String currentLine = null;
while (in.hasNext()) {
n = in.nextInt();
if (n == 0) {
System.exit(0);
} else {
int[] a = new int[n+1];
for (int i=1; i<=n; i++){
a[i] = in.nextInt();
}
if (isAmbiguous(n, a))
System.out.println("ambiguous");
System.out.println("not ambiguous");
public static boolean isAmbiguous(int n, int[] a){
boolean answer = false;
if(n == 1){
if(a[1] != 1)
answer = true;
else if(n == 2){
if (! (a[1] == 1 && a[2] == 2))
if (! (a[1] == 2 && a[2] == 1))
else if(a[1] == 1 || a[n] == n)
else if (a[1] == n){
for(int l = 2; l <= n; l++){
if(a[l] != (l - 1)){
else if (a[n] == n-1){
for(int m=2; m<=n; m++){
if(a[m] != (m+1)){
return answer;
I'm a bit confused about the question. Do the original permutations have to be ordered (ie 2 has to have 1 and 3 on either side of it)? In other words, can there be input permutations like 1,5,2,4,3 or not?
The problem defines a permutation as any ordering of the numbers 1 to N. Not just a few of them.
I'm still confused. Can 1,5,2,4,3 be a potential permutation? From the examples it seems like it cannot but I just want to double check
Yes, it can. There is nothing in the examples that says it cannot or puts any restriction on what permutations are allowed.
long int a[100000],n,i;
scanf("%ld",&n);
if(n==0)
for(i=0;i<n;i++)
scanf("%ld",&a[i]);
if(i+1!=a[a[i]-1])
if(i==n)
printf("Ambiguousn");
printf("Not Ambiguousn");
If it were correct, it wouldn't say wrong answer. It says wrong answer, therefore it is incorrect. (In fact, it even fails on the sample input).
I got Error ..... :)
Very lame of me!
@admin ... I get the right answer on my PC but get a wrong answer on CodeChef. Please check.
Why are those not working?
@admin I couldn't get the problem.
"You create a list of numbers where the i-th number is the position of the integer i in the permutation." What does it mean and how did you create 5 1 2 3 4 from 2 3 4 5 1 ?
@ stephen
@admin I'm getting wrong answer, please help:

#include<stdio.h>

int a[100001];

int main()
{
    int t, i;
    int n, c = 0;
    scanf("%d", &t);
    while (t > 0) {
        scanf("%d", &n);
        if (n == 0)
            break;
        for (i = 1; i <= n; i++)
            scanf("%d", &a[i]);
        for (i = 1; i <= n; i++) {
            if (i == a[a[i]] || a[i] == i)
                c++;
        }
        if (c == n)
            printf("ambiguous");
        else
            printf("not ambiguous");
        t--;
        c = 0;
    }
    return 0;
}
@admin
See the following solution, which has been submitted successfully,
but mine, with almost the same logic, is wrong... (in C++)
main()
{
    int n;
    int flag;
    while (n != 0) {
        cin >> n;
        flag = 1;
        int a[n];
        int b[n];
        for (int i = 1; i <= n; i++) {
            cin >> a[i];
        }
        for (int i = 1; i <= n; i++)
            b[a[i]] = i;
        for (int i = 1; i <= n; i++) {
            if (a[i] != b[i]) {
                flag = 0;
                break;
            }
        }
        if (flag == 1) {
            cout << "ambiguous" << endl;
        } else {
            cout << "not ambiguous" << endl;
        }
    }
    return 0;
}
Will you please explain why?
What's wrong with the code?
Can anybody please explain in depth how the inverse permutation is calculated, with an example?
Hello, can somebody please look at my code and tell me the error, because I'm getting "wrong answer"... and for the test cases discussed here it's working correctly...
int n;
int a[100001],b[100001];
int i,f;
cin>>n;
f=0;
for(i=1;i<=n;i++)
cin>>a[i];
b[a[i]]=i;
if(a[i]!=b[i])
f=1;
if(f==1)
cout<<"not ambiguous";
cout<<"ambiguous";. | https://www.codechef.com/problems/PERMUT2/ | CC-MAIN-2015-48 | refinedweb | 2,691 | 75.91 |
Ok please help me with this, I have a final due tonight and I've tried to do this **** for like an hour now, but I have no idea how to do any of this **** with arrays and I've looked it up on the internet for like half an hour and nothing I've tried has worked.
We have to use this header: public static int threshold (double [] gains, double goal)
And this is the "psuedo-code" for part a:
i. Set earned to 0
ii. Set days to 0
iii. While earned < goal and there are still elements left in the array:
1. Add the next dayʼs increases to earned
2. Add 1 to days
3. Move to the next element in the array
iv. Decide whether to return days or -1
Then part b I have no idea about:
Complete your program by writing a main() method that asks the user to enter a profit goal. You
may define your own array of stock increase values. Your program should then calculate and print out the investment threshold for that profit goal, or state that the goal cannot be reached.
So here's what I have so far:
import java.util.*;
public class investment_thresholds
{
public static int threshold (double [] gains, double goal)
{
        double earned = 0;   // running profit total (double: gains are doubles)
        int days = 0;        // days counted so far
        int i = 0;           // index of the next element in gains
        while (earned < goal && i < gains.length)
        {
            earned = earned + gains[i];  // add the next day's increase
            days = days + 1;
            i = i + 1;                   // move to the next element
        }
        return (earned >= goal) ? days : -1;  // -1 when the goal is never reached
}
}
But pretty much the thing that isn't working and I have no ****ing clue about is the arrays!!!! Any help whatsoever even the slightest bit is greaty appreciated
Plleeaasseee im going to fail i'm freaking outttttt
Heeeeeeeeellllllllllllpppppppppppppppp!
Oh poor you wasting a whole hour on your homework. If you want help, my advice is don't start off being Mr Angry and insulting the people who can help you.
Arrays are not difficult to use once you understand the basics, and a quick google on how to use arrays will bring up millions of hits, such as this tutorial
Plleeaasseee im going to fail i'm freaking outttttt
Not my problem. There are some lessons to be learned here: Don't leave your homework until the last minute and screaming and shouting doesn't help. If you had been politer in your first post you would have got a lot more help.
Posting code? Use code tags like this: [code]...Your code here...[/code]
my data structure exam is a few hours away, LOL... you are not asking questions with the right attitude pal. Anyways good luck.
Originally Posted by 99999
Plleeaasseee im going to fail i'm freaking outttttt
actually this is good news, more job opportunities for me
actually this is good news, more job opportunities for me
Agreed, the last thing we need out there are more crappy developers.
There are many lessons to be learned in life. Some of us have to learn the hard way
#include <TIL_Raster.h>
Definition at line 29 of file TIL_Raster.h.
Copy over all the information and data from the src. This allocates a new raster (rather than doing a shallow copy).
Reimplemented from PXL_Raster.
copies only the raster properties (res, type, etc), not the data, nor does this method allocate memory for a raster.
Reimplemented from PXL_Raster.
Definition at line 47 of file TIL_Raster.h.
Definition at line 122 of file TIL_Raster.h.
Definition at line 136 of file TIL_Raster.h.
Scales the source raster window to the destination window size and inserts the result at (tox, toy) location of this raster. Incoming coordinates and sizes should be compatible with the respective rasters. You can't do format conversions here.
Definition at line 121 of file TIL_Raster.h.
A Ruby example of
String#[]
str = "Oh, this is a pen." p str[/this is a (\w+)\./, 1]
The result is "pen". Since
String#[] is just an alias of
String#slice, (*1)
p str.slice(/this is a (\w+)\./, 1)
The result is completely the same.
in Vim script
The Vim script version needs two separate steps: getting the list containing the first-matched string itself and the submatches, and then getting the specific item.
let str = "Oh, this is a pen." echo matchlist(str, 'this is a \(\w\+\)\.')[1]
(Added at Sun Jun 12 09:26:44 PDT 2011) thinca suggested that matchstr() with \zs and \ze is very handy, particularly because of the different behavior in the case when it didn't match.
let str = "Oh, this is a pen." echo matchstr(str, 'this is a \zs\w\+\ze\.')
in Haskell
The Haskell version needs three separate steps with Text.Regex.Posix.=~; it's almost the same as Vim, but the default =~ behaviour is to assume the regex object has the "global" option, so you have to pick which match.
import Text.Regex.Posix ((=~))

main = do
  let str = "Oh this is a pen."
  print $ head (str =~ "this is a ([a-zA-Z_]*)" :: [[String]]) !! 1
(Added at Sun Jun 12 12:54:01 PDT 2011) The following code is another example; it's runtime-safe and it also supports Vim's \zs and \ze.
import Text.Regex.PCRE ((=~))
import Data.String.Utils (replace)
import Safe (headMay, atMay)
import Data.Maybe (fromMaybe)

matchstr :: String -> String -> String
matchstr expr pat =
  let x = replace "\\zs" "(" $ replace "\\ze" ")" pat
  in fromMaybe "" $ headMay (expr =~ x :: [[String]]) >>= \y -> atMay y 1

main = print $ matchstr "this is a pen" " \\zs\\w+\\ze"
in Clojure
(Added at Thu Mar 22 17:49:40 PDT 2012)
((re-find #"this is a (\w+)\." "Oh, this is a pen.") 1) ; "pen"
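in Python

(An addition of mine, not in the original post, for comparison: Python's re.search returns a match object, and group 1 holds the capture.)

```python
import re

s = "Oh, this is a pen."
m = re.search(r"this is a (\w+)\.", s)
print(m.group(1) if m else "")  # pen
```

Unlike Ruby's String#[], a failed re.search yields None instead of a subscriptable value, so the guard is needed.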
- (*1) Precisely no. Try and check the differences between them without the second argument.
On Wed, 2005-06-08 at 10:58 -0800, luya jpopmail com wrote:
> > It looks like patch is not finding the correct files. Diagnosing this
> > would be helped by specifying where you think that file resides instead
> > of skipping over it. So on the "File to patch:" line try
> > blender/source[...]KX_HashedPtr.cpp
> > source/gameeng[...]KX_HashedPtr.cpp
> > etc
> > and see if you can figure out where patch is going wrong.
> >
> Yeah, both spec and patch are from FE cvs. I have tried lines above but it asked me
> for another file to patch. Thanks to Michael A. Peters, I used a condition for i386:
>
> %prep
> %setup -q -n %{name}
> %ifarch x86_64
> %patch0 -p1 -b .x86_64
> %endif
>
> until I have an odd problem:
>
> -----
> $ rpmbuild -ba rpmbuild/SPECS/blender.spec
> Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.26366
> + umask 022
> + cd /home/Luya/rpmbuild/BUILD
> + LANG=C
> + export LANG
> + unset DISPLAY
> + cd /home/Luya/rpmbuild/BUILD
> + rm -rf blender
> + /usr/bin/gzip -dc /home/Luya/rpmbuild/SOURCES/blender-2.37.tar.gz
> + tar -xf -
> + STATUS=0
> + '[' 0 -ne 0 ']'
> + cd blender
> ++ /usr/bin/id -u
> + '[' 500 = 0 ']'
> ++ /usr/bin/id -u
> + '[' 500 = 0 ']'
> + /bin/chmod -Rf a+rX,u+w,g-w,o-w .
> + exit 0
> Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.95915
> + umask 022
> + cd /home/Luya/rpmbuild/BUILD
> + cd blender
> + LANG=C
> + export LANG
> + unset DISPLAY
> + sed -i 's/use_openal =.*/use_openal = '\''true'\''/g;' SConstruct
> sed: can't read SConstruct: No such file or directory
> error: Bad exit status from /var/tmp/rpm-tmp.95915 (%build)
>
> Strange problem because this line only declares SConstruct, it was not
> present with previous spec.

Okay, I've figured out your problem. You aren't using the upstream tarball. The last Compton release used a blender that installed into the standard %{name}-%{version} directory structure. The current blender installs to %{name}.
From the tarball you listed in your previous post, it appears you created a new tarball with a %{name}-%{version} directory structure and left the upstream %{name} directory structure as well... except empty. This is why rpm isn't erroring out with a more informative message about directory not found.... instead it can't find the files it expects to operate on. What you're going to want to do is either: 1) Use the blender built in the buildsystem. They're in the fedora extras repository now for FC4 and FC3. 2) Redownload the pristine blender tarball and the spec file from cvs. Rebuild with that. -Toshio
Getting Started with EZ-BLE™ Module Code Differences
user_252, Jun 23, 2016 12:04 PM
When following this document, it references the complete main.c file in Appendix E. However, downloading the code from the AN96841 page, the main.c file is different and causes errors when compiling.
The PDF document only has
#include <project.h>
while the example project has the following:
#include <project.h>
#include "ias.h"
#include "common.h"
After trying different combinations I still get a build error of undefined reference to 'Advertising_LED_Write' so I'm not sure what I really should be following at this point.
Has someone gone through this before and can let me know the proper way to "Get started"? The page states that it was updated last on June 03, 2016 but maybe someone didn't test going through the document?
Thanks,
George
1. Re: Getting Started with EZ-BLE™ Module Code Differences
user_1377889, Jun 23, 2016 12:49 PM (in response to user_252)
Welcome to the forum, George!
I (personally) would never trust a code snippet or project in a .pdf.
The best source for getting started I could recommend is on Creator's "Start" page. Under "Kits" you will find all the kits you have installed with code examples that run.
Bob
2. Re: Getting Started with EZ-BLE™ Module Code Differences
user_252, Jun 23, 2016 1:48 PM (in response to user_1377889)
Thanks Bob. I agree, but then why have a 154-page document that steps through every aspect of getting set up, and then differs from the updated code?
The PDF even goes so far as to have the following statement:
After including the above code snippets in the correct order, go to Build > Clean and Build your project to compile the firmware. Appendix E details the complete main.c firmware
Hopefully Cypress is reading this and will make corrections where needed. The project code does compile but I'll have to hook up hardware and verify that it works properly.
Thanks again,
George
3. Re: Getting Started with EZ-BLE™ Module Code Differences
user_1377889, Jun 24, 2016 4:22 AM (in response to user_252)
This is a developer forum. The chance that Cypress reads this is there, but acting on it is not a "must". When you want Cypress to correct some flaws in the documentation, go to the top of this page, "Design Support -> Create a support case", and describe the bug you've found so far. Cypress is always thankful for reports that improve the documentation.
Bob
Destroy a thread immediately
#include <sys/neutrino.h>

int ThreadDestroy( int tid,
                   int priority,
                   void* status );

int ThreadDestroy_r( int tid,
                     int priority,
                     void* status );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
These kernel calls terminate the thread specified by tid. If tid is 0, the calling thread is assumed. If tid is -1, all of the threads in the process are destroyed. When multiple threads are destroyed, the destruction is scheduled one thread at a time at the priority specified by the priority argument. If priority is -1, then the priority of the calling thread is used.
The ThreadDestroy() and ThreadDestroy_r() functions are identical, except in the way they indicate errors. See the Returns section for details.
If a terminated thread isn't detached, it makes the value specified by the status argument available to any successful join on it. Until another thread retrieves this value, the thread ID tid isn't reused, and a small kernel resource (a thread object) is held in the system. If the thread is detached, then status is ignored, and all thread resources are immediately released.
When the last thread in a process is destroyed, the process terminates, and all thread resources are released, even if they're not detached and unjoined.
Blocking states
If these calls return, they don't block.
If the calling thread is destroyed, ThreadDestroy() and ThreadDestroy_r() don't return.
The only difference between these functions is the way they indicate errors.
For that I will publish multiple posts, explaining different functions of the embedding.
In the first post I show how to include iTunes in a C# program in general - therefore no external software like the iTunes SDK is needed; the "normal" iTunes installation already installs the needed COM object.
To add such a reference to your C# project, click Project - Add Reference. There switch to the tab COM and select the object iTunes x.xx Type Library. And that is it, iTunes is ready.
For simplicity we import the corresponding library with using:
using iTunesLib;
In the program, an instance of the iTunes player can then be created, for example, like this:
iTunesApp iTunesPlayer = new iTunesApp();
If this line is executed, iTunes opens and the player instance is connected to the C# object.
In the next post I describe how to traverse the music library / different playlists. | http://csharp-tricks-en.blogspot.com/2011/02/include-itunes.html | CC-MAIN-2017-51 | refinedweb | 149 | 61.16 |
Our infrastructure is on AWS. I want to get a daily report on how much spent on the previous day. What is the best way to do it?
AWS has just announced the general availability of the functionality to Monitor Estimated Charges Using Billing Alerts via Amazon CloudWatch (it apparently has been available to AWS premium accounts already since end of 2011, see Daniel Lopez' answer to Is there a way to set Amazon AWS billing limit?):
We regularly estimate the total monthly charge for each AWS service
that you use. When you enable monitoring for your account, we begin
storing the estimates as CloudWatch metrics, where they'll remain
available for the usual 14 day period. [...]
As outlined in the introductory blog post, You can start by using the billing alerts to let you know when your AWS bill will be higher than expected, see Monitor Your Estimated Charges Using Amazon CloudWatch for more details regarding this functionality.
This is already pretty useful for many basic needs, however, using the CloudWatch APIs to retrieve the stored metrics yourself (see GetMetricStatistics) actually allows you to drive arbitrary workflows and business logic based upon this data, and of course you could generate a daily report on how much spent on the previous day like so as well.
Regarding the latter, the scope of this offering is stressed as well though:
That is, the granularity of the reported metrics has yet to be analyzed (I see data points every 4 to 8 hours, but not necessarily updated values every time, as one would expect actually), so deriving a sufficiently precise daily report might require some statistical post processing.
Unfortunately this is less straight forward than one would think, especially given that the desired data can be inspected manually via your account. There are two monitoring options one would expect:
Neither AWS nor any other IaaS/PaaS/SaaS vendor I'm aware of does offer API access to their accounting data currently (maybe due to the potential financial/legal implications), making any form of 3rd party integration (which would be easy to do nowadays) cumbersome at best, insofar you need to resort to web scraping to retrieve the data in the first place.
Fortunately a new offering from Cloudability [link removed after discontinuation of free tier] has entered the stage recently to do just this for you in a professional and vendor agnostic way, we are using it with great success already for AWS specifically - you'll currently receive a daily (or less frequent) report of your monthly spending only though, i.e. not broken down to your daily spending yet. Adding the daily increase would be trivial of course, so I'd hope and expect they'll make more information like this available over time.
Their approach to pricing [link removed after discontinuation of free tier] is refreshing as well (despite being obvious) and simply tied to your own cloud spending, thus should pay for itself as soon as you realize respective saving potential (they don't charge anything at all if you spend less than $2.5k/mo).
Update 20121016: Unfortunately Cloudability has changed their pricing model to a more common one, which still includes a free tier (and is reasonable priced in general), but removes access to advanced features therein, which I considered a refreshingly fair and smart approach for users with small budgets, who might still be multipliers elsewhere or upgrade once growing into it.
Update 20150115: Unfortunately Cloudability has chosen the path of many freemium SaaS vendors and finally discontinued the free tier entirely: From February 1, we will no longer offer the Cloudability Free edition that you are using today.
The former caveat (kept for reference below) of requiring your main AWS credentials doesn't apply anymore - AWS recently introduced New IAM Features: Password Management and Access to Account Activity and Usage Reports Pages:
Cloudability has now integrated this as well, thus you don't need to hand them your main AWS credentials anymore or spent the extra effort to establish Consolidated Billing just to gain insight into your cloud spending, see How to Setup Amazon IAM (Identity Account Management) for details.
There is one caveat one should be aware of upfront though:
In order to access your data you'll need to hand them your main AWS credentials, because otherwise they can't scrape your account, obviously. For AWS in particular you can still avoid this by facilitating Consolidated Billing, where you consolidate payment for multiple Amazon AWS accounts [...] by designating a single paying account, which in turn has no access to your computing resources and data.
Using awscli tools, you can get your month-to-date total:
$ aws --region us-east-1 cloudwatch get-metric-statistics \
--namespace "AWS/Billing" \
--metric-name "EstimatedCharges" \
--dimension "Name=Currency,Value=USD" \
--start-time $(date +"%Y-%m-%dT%H:%M:00" --date="-12 hours") \
--end-time $(date +"%Y-%m-%dT%H:%M:00") \
--statistic Maximum \
--period 60 \
--output text | sort -r -k 3 | head -n 1 | cut -f 2
2494.47
Totals from two different days can be subtracted to get the daily delta. Or, an estimate can be obtained in one go by increasing the time window (end-time - start-time) to 24h and subtracting the earliest data point from the latest.
Notes: the snippet assumes GNU date (the --date="-12 hours" form); on BSD/macOS use date -v-12H instead.
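The post-processing itself needs no AWS SDK; the arithmetic on two samples of the running monthly total can be sketched in plain Python (the sample figures below are made up):

```python
from datetime import datetime

def daily_delta(earlier, later):
    """Daily spend from two (timestamp, month-to-date USD total) samples.

    At the start of a new billing month the running total resets, so a
    drop in the total is treated as "the later sample is the new month's
    spend so far" rather than a negative delta.
    """
    (t0, total0), (t1, total1) = sorted([earlier, later])
    if total1 >= total0:
        return total1 - total0
    return total1  # month rolled over between the samples

d = daily_delta(
    (datetime(2013, 3, 4, 12, 0), 2450.10),
    (datetime(2013, 3, 5, 12, 0), 2494.47),
)
print(round(d, 2))  # 44.37
```

The rollover guard matters because the EstimatedCharges metric is a month-to-date figure that resets to zero at the start of each billing month.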
Here is a simple script that demonstrates how to parse and analyze your detailed AWS billing CSV file:
It should be easy enough for you to build your own analysis from it!
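The heart of such an analysis fits in a few lines. A sketch with Python's standard csv module — the column names ("UsageStartDate", "Cost") follow the detailed billing report layout, but treat them as assumptions and check them against your own file, since the format has varied over time:

```python
import csv
from collections import defaultdict
from io import StringIO

def cost_per_day(csv_file):
    """Sum the Cost column per calendar day of UsageStartDate."""
    totals = defaultdict(float)
    for row in csv.DictReader(csv_file):
        start, cost = row.get("UsageStartDate"), row.get("Cost")
        if not start or not cost:   # summary/blank lines carry no usage date
            continue
        day = start.split(" ")[0]   # "2013-03-04 13:00:00" -> "2013-03-04"
        totals[day] += float(cost)
    return dict(totals)

# a tiny in-memory stand-in for the downloaded billing CSV
sample = StringIO(
    "UsageStartDate,Cost\n"
    "2013-03-04 00:00:00,1.25\n"
    "2013-03-04 01:00:00,0.75\n"
    "2013-03-05 00:00:00,2.00\n"
)
print(cost_per_day(sample))  # {'2013-03-04': 2.0, '2013-03-05': 2.0}
```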
There is a fairly new tool open-sourced by Netflix called Ice, which lets you visualize the billing details retrieved from the AWS reports generated into your S3 buckets.
Take a look at Xervmon. They provide day-to-day spend and usage, in addition to analytics on historical data. They are an upcoming service provider, with detailed integrations with Amazon AWS planned over the next 3 months.
Some screenshots from my current account are below.
It was built by a bunch of professionals and is quite neat.
Maybe this Python module on GitHub can help you get started:
pyec2costs (for reserved or on-demand instances).
I've seen companies build their own in-house tools for this - basically they scrape the AWS billing page and display the current cost on their own dashboard; in one example, they divide it by the number of days of the month that have passed, then multiply by the days in the month to get an estimated total monthly cost.
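That extrapolation is simple enough to sketch directly; Python's standard calendar module supplies the month length (the figures below are made up):

```python
import calendar

def estimate_month_cost(month_to_date, year, month, days_elapsed):
    """Naive linear extrapolation of the full month's bill."""
    days_in_month = calendar.monthrange(year, month)[1]
    return month_to_date / days_elapsed * days_in_month

# $100 spent in the first 10 days of March -> ~$310 projected for the month
print(round(estimate_month_cost(100.0, 2013, 3, 10), 2))  # 310.0
```

It is only a straight-line estimate, of course; it ignores reserved-instance up-front fees and any usage that ramps up or down during the month.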
AWS doesn't offer a billing API yet (I'm sure they will in the future), but there are a couple of external services that can help. One is CloudVertical (disclosure: I work here), where you can get your daily, monthly, and hourly cost, broken down by service, and for multiple accounts.
The real holy grail for a service like AWS, though, is not just to track daily spending, but to show insights on efficiency (cost + usage = efficiency) and also to highlight opportunities for savings (i.e. times to use reserved or spot instances).
If you really need a day-to-day cost report, you'll need to use the "Usage Report" tool in your AWS account. You can request a report for each service you use, over whatever time period you want, at granularities from hourly to monthly. It then downloads as a CSV.
You'll need to do some post-processing on that CSV (since it is expressed in usage rather than cost), but it will provide the data you need for a day-to-day cost figure.
Amazon provides your current month-to-date charges here:
Towards the top of the page it indicates how current the data is. I find it tends to lag by a few hours.
This is the most accurate and up-to-date record you can get from Amazon or anybody else at this time.
A server with Docker, part 5.1: Towards a simple web app container
The fifth part in a series about the process of creating a flexible, multi-purpose webserver with Docker on a Digital Ocean droplet. Unlike the previous articles, this will be a series of smaller, more focused posts. In this one, we will take a look around, make some arbitrary choices, and get started, working on a container setup able to harbour a very simple webapp.
In the previous post, we got to serve static content via Nginx and have a basic Gitolite installation take care of all basic code versioning needs in addition to the progress made before. Meanwhile a Docker daemon is already up and running on the server, with nothing to do! In this post, I would like to start with the long overdue topic of getting useful containerized applications to do some work. As a first goal, let’s start with a very simple dynamic web application written in Python and put it into a container. Simple, in the sense that it will produce dynamic responses, will not require any complex auxiliary services and be stateless. In addition, the app is assumed not to be mission critical, this way we can skip redundancy and availability considerations entirely for now. Overall, the closer it is to ‘The Twelve-Factor App’ methodology, the better. By the way, if you have not had the chance to read this particular set of guidelines, I can only highly recommend doing so. It is written by the glorious people behind Heroku, based on real experience and very practical.
Alright, But How?
In the case of a web app built, for example, with Flask, we will want a few components to run it properly. Important building blocks which come to mind are a way to serve static content and route traffic (Nginx), something to actually run the Python application in a sensible manner (uWSGI), and the ability to phone home via emails meant for internal consumption, for simple monitoring and debugging. Some null-mailer setup (only able to send out mails) will be absolutely sufficient for this.
The only question left is, how to accommodate this setup in terms of containers? There are two prevalent approaches to structuring containerized applications; both can be preferable, depending on the task at hand. They are at two different ends of the commonly acceptable spectrum, which ranges from single-process containers to single-service containers. In the second case, logical services are meant, which in turn can consist of multiple OS processes each. What is generally frowned upon is treating containers like pseudo-VMs, namely introducing manual changes after they have been started, thus making them unique and not disposable anymore, as well as any other kind of pet-like handling in the pets-vs-cattle analogy. We will try to stay clear of it.
Based on the above constraints and choices, it will be best to go with a single-service container to get started. This will make it easier to develop, operate and maintain the container setup. The main benefits of this approach lie in the facts that we keep the setup and environment simple, with a minimal operations overhead while still harvesting most benefits of running containerized applications. Unlike with larger applications, we can skip the consideration of issues such as orchestration, practical scaling details, availability, log forwarding, serious monitoring and the wiring of external auxiliary services.
The Phusion-Baseimage
With all of this said, we can get to the practical side of laying the foundations for the new container! The phusion-baseimage is a solid starting point for a single-container web app. Among others, the image includes a custom init process, the possibility to gracefully run and terminate (supervise) multiple processes through the lightweight runit and also to have cron and syslog running out of the box if they should be needed. By the way, it is also a pretty nice read. As an alternative, we could just as well pick a Linux distribution image and go from there, but this would involve a bit more boilerplate work and yield approximately the same result, just with different components.
There have been some in-depth discussions on many aspects and design choices of this baseimage. The gist of @jpetazzo was that it is a nice gateway to start using containers, but might eventually get in your way. The aspect of running an SSH daemon inside of a container caused the most controversy and was quickly followed by the emergence of nsenter, a tool to easily enter container namespaces. Recently, the Phusion guys published a blog post explicitly responding to the discussion, and there are regular revisions of the baseimage, which in part have addressed many of the critiqued points.
Although many of the fixed issues described in the baseimage salespage can either be solved in a different fashion (supervisor) or are not quite relevant anymore (the apt fix mentioned), I find the baseimage to be well crafted, great to get started and perfectly suited for what we want to achieve. The container might end up distantly resembling “a small VPS”, but in this case it is exactly what we would like to achieve. As with everything else, it is fine if used thoughtfully and with caution instead of always and for no particular reason :)
We will not use a SSH daemon running in the container, it is disabled by default anyway in the current baseimage version. We will go the extra step and nuke all traces, as for debug purposes, we can execute arbitrary commands in the same environment easily (even without nsenter, as the exec command is integrated into Docker by now). Syslog-ng, logrotate and cron might come in handy sometime and will be kept around, although they should not be strictly needed.
On To The Container
Let’s create our baseimage deriving it with small modifications! You can find the code on GitHub and the finished image on the Docker Hub Registry, ready to be pulled. The Dockerfile, looks the following way:
# based on
FROM phusion/baseimage:0.9.16
MAINTAINER th4t

ENV REFRESHED_AT 2015-01-25

# set correct environment variables
ENV HOME /root

# disable SSH, this is not really necessary in the newest baseimage anymore
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh

# clean up APT when done
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# the custom init process
ENTRYPOINT ["/sbin/my_init"]

# -- starts all runit services, and executes the next command
# -l makes the bash to be invoked as a login shell
CMD ["--", "bash", "-l"]
The REFRESHED_AT variable is a trick from The Docker Book by James Turnbull; the book has a few more very neat hints and is a great start into working with Docker. I hope that the comments leave no open questions.
In addition, I usually create a Makefile for all Docker-related projects, to build and run images in a simple manner:
build:
	docker build -t th4t/baseimage .

run:
	# you can detach and leave it running
	# in the background with ctrl-p ctrl-q
	# later you can attach with "docker attach container_id"
	docker run -i -t th4t/baseimage

.PHONY: build run
Try running the new container! Just by issuing 'make run' you will get into a bash shell and can take a look around. As soon as you exit the bash session, my_init will exit gracefully, bring all runit services down and give useful output.
That’s it for now. In the next post, we will start building the actual Dockerfile for the web app container. It will be based on this baseimage, add the application code, our additional services and their configuration files. By the end of this series, we will have a perfectly self-contained container, able to serve simple dynamic webapps on a single exposed port. If you are interested in Docker and similar posts, please sign up to the mailing list below! | https://vsupalov.com/a-server-with-docker-part-5-1-towards-a-simple-webapp-container/ | CC-MAIN-2017-39 | refinedweb | 1,333 | 56.49 |
SOLID principles in .NET revisited part 3: the Open-Closed principle
April 30, 2015 3 Comments
Introduction
In the previous post we looked at the letter ‘S’ in SOLID, i.e. the single responsibility principle. We identified the reasons for change in the OnDemandAgentService class and broke out 3 of them into separate classes. The OnDemandAgentService can now concentrate on starting a new on-demand agent and delegate other tasks to specialised services such as EmailService. Also, an additional benefit is that other classes in the assembly can re-use those specialised services so that e.g. logging can be centralised.
In this post we’ll look at the second constituent of the SOLID principles, the letter ‘O’ which stands for the Open-Closed Principle OCP.
Open and closed?
So what’s open and what’s closed according to the OCP? OCP states that classes should be closed for modification but open for extension. Closed for modification?! What?? Yes, that’s correct, a class that is put into production and called upon by several consumer classes in your application should not change its implementation with the exception of correcting bugs. That sounds harsh, right? At least that was my first reaction to this rule. The reason for this rule is that classes that rely on e.g. the logging service will be affected by an implementation change of that service. The clients will depend on the behaviour of the logging service and may be susceptible to modifications. Those clients in turn will be called by other clients which also can be affected by those changes.
So what do we do if we really need to change something in the implementation of the service? We can ensure that future clients will have the opportunity to extend its behaviour without breaking the original implementation. There are multiple ways to achieve this. In this post we’ll look at the inheritance-based solution that I actually wouldn’t recommend you to follow in real code. That might sound counterproductive but there are at least two reasons why I decided to do this:
- OCP has been often linked with inheritance, i.e. that classes should offer extensibility points through overridable methods. You may still encounter such solutions today and it’s good to be familiar with them so that you can propose a different solution.
- The more ideal solution, which is based on abstractions would lead us towards other parts of SOLID such as the letters ‘L’ and ‘D’. My goal, however, is to consider each component of SOLID in isolation as much as possible even though they are all interconnected.
We’ll come back to these points later on in the series and propose a better solution than the one we’ll see here.
Extensibility points in LoggingService
Logging is implemented in the FileBasedLoggingService class. It is hard coded to write the log messages to a file on the hard drive. However, there are lots of other ways to implement logging. First off, there are multiple libraries that you can use, e.g. log4net or NLog. Alternatively you can roll your own solution and write log messages to the database – which is not too wise in reality but that’s beside the point. The point is that FileBasedLoggingService offers one and only one way to save log messages. Currently if you want to modify the logging logic you’ll need to overwrite the implementation of the methods in FileBasedLoggingService – and most likely modify the name of the class as well to reflect the changes. However, all that will have an effect on the consuming classes of course. They might not be aware of the changes and the code owners will be looking for the log messages in vain when trying to fix a bug.
The way to indicate overridable methods in C# is through the ‘virtual’ keyword. We’ll also rename FileBasedLoggingService to LoggingService. Here’s the modified logging service class with virtual methods:
public class LoggingService
{
    private readonly string _logFile = @"c:\log\log.txt";

    public virtual void LogInfo(string info)
    {
        File.AppendAllText(_logFile, string.Concat("INFO: ", info));
    }

    public virtual void LogWarning(string warning)
    {
        File.AppendAllText(_logFile, string.Concat("WARNING: ", warning));
    }

    public virtual void LogError(string error)
    {
        File.AppendAllText(_logFile, string.Concat("ERROR: ", error));
    }
}
Then if you’d like to modify the behaviour of the class you can derive from it and override the methods as necessary. Here comes a skeleton example:
public class DatabaseLoggingService : LoggingService
{
    public override void LogInfo(string info)
    {
        //implementation omitted
    }

    public override void LogError(string error)
    {
        //implementation omitted
    }

    public override void LogWarning(string warning)
    {
        //implementation omitted
    }
}
OCP in OnDemandAgentService
The next step is to offer a way to modify the logging mechanism in OnDemandAgentService. It is currently hard coded to LoggingService everywhere. Again, we’ll take a sub-optimal solution but we’ll soon improve it. We’ll add a property in LoggingService which lets us define the implementation from the outside. The property will ensure that we can fall back onto the file based LoggingService in case the private _loggingService field is null:
public class OnDemandAgentService
{
    private LoggingService _loggingService;

    public LoggingService LoggingService
    {
        get
        {
            if (_loggingService == null)
            {
                _loggingService = new LoggingService();
            }
            return _loggingService;
        }
        set
        {
            if (value != null)
            {
                _loggingService = value;
            }
        }
    }

    // ...the rest of the class (e.g. the method that starts an on-demand
    // agent) now logs through the LoggingService property
}
We have now modified the original FileBasedLoggingService class to offer extensibility points for future modifications. Also, OnDemandAgentService provides a way to employ a logging mechanism different from the default implementation by a public property. You can imagine that the same technique can be applied to other dependencies like the EmailService and AuthorizationService.
A useful language feature in .NET is the “obsolete” keyword as seen in this post. It can happen that you simply have to modify an existing class which has a lot of clients. OCP states that you shouldn’t modify the implementation of that class because it can have unforeseeable effects on the existing code base. Instead of poking in the original class you can create a brand new one and declare the old one obsolete. Clients will then be advised to use the new, improved implementation. Gradually all usage of the old class can disappear and you’ll be able to delete it. However, this process can take months or even years.
In the next post we’ll take a look at the Liskov Substitution Principle LSP.
View the list of posts on Architecture and Patterns here.
Nice post. As always. Looking forward to the rest of the series. Keep up the good work!
Hej Anders, thanks for your kind words. //Andras
Microsoft Visual C++ Team & the Future of C++
Following my visit to Microsoft Egypt, I met with the Developer Evangelist folks. We discussed the Visual C++ team's participation in the upcoming MDC (Middle East Developers Conference) in Cairo (nothing is confirmed yet regarding the participation). Last year the VC++ Team delivered 3 talks. The conference is a really great one, with lots of attendees and interactions.
Currently I am catching up with lots of college friends, mainly from the AUC soccer team. It is amazing to see how everyone grew in his/her own way. Lots of surprises, good ones though :)
Thanks,
Ayman
The VC++ Team blog has all the details here.
Thanks, Ayman
Ayman Shoukry
The VC++ Team blog has all the details at
Thanks, Ayman Shoukry
If you want to know more about the VC++ IDE, make sure to watch Shankar's channel9 video at
Boris Jabes, a program manager on the VC++ team, talks about a number of tips and tricks C++ developers can take advantage of when using the VC++ 2005 IDE. Check out his Channel9 video at
Thanks, Ayman Shoukry, VC++ Team
In the slow chat, the VC++ team talked about the future of VC++ and plans to come.
Thanks, Ayman Shoukry, VC++ Team
During the week of June 19th, the Visual C++ team will be hosting a slow chat titled "Visual C++: Yesterday, Today and Tomorrow" on CodeGuru ().
Please come join us at:
The Visual C++ Team has started a team blog at
This is a great step for directly communicating with C++ developers in the community.
Thanks, Ayman Shoukry

'T::X' : dependent name is not a type
        prefix with 'typename' to indicate a type
sample.cpp(2) : error C2143: syntax error : missing ';' before '&'
sample.cpp(2) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
sample.cpp(2) : fatal error C1903: unable to recover from previous error(s); stopping compilation
Code after applying the fix:
template<class T>
const typename T::X& f(typename T::Z* p);
template<class T>
struct Blah : public Baz<typename T::Type, T::Value>
{
    typedef typename T::X Type;
    Type foo();
    typename T::X bar();
    operator typename T::Z();
};
Thanks, Ayman Shoukry
#include <iostream.h>
int main(int argc, char *argv[])
{
    cout << "Hello World\n";
}
The error VC2005 issues:
sample.cpp(1) : fatal error C1083: Cannot open include file: 'iostream.h': No such file or directory
#include <iostream>
using namespace std; //important to be able to use cout
Thanks, Ayman Shoukry
int main()
{
    const x = 0;
}
sample.cpp(3) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
int main()
{
    const int x = 0;
}
For more details, check out | http://blogs.msdn.com/aymans/ | crawl-002 | refinedweb | 442 | 63.59 |
to be able to do so, because such a method could rely on an external dependency (such as a database) that you don't really want to worry about - for example, to ensure that the database is up and running, has consistent data, etc.
Setting it up
Maven is used for the example setup. In the pom.xml of your project add the following dependency:
<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>2.27.0</version>
</dependency>
If you have troubles with Maven check how to build a Java project with Maven.
Creating an example class
To demonstrate how Mockito works we’ll need a non-final class, i.e. one that Mockito can extend in a way that we’ll instruct it.
In real life this class and its methods would be doing something complex such as getting results from a database. When the database is not available, an exception will be thrown.
For brevity, we’ll write just the exception part and assume that in our development environment, where the JUnit test will be run, there is no database.
public class MyReturner {
    public String returnSomething() throws Exception {
        // in real life, this would return the result of a database query
        throw new Exception("Can't find the database.");
    }
}
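As an aside on why the class must be non-final: conceptually, mock(MyReturner.class) hands back something very close to a hand-written subclass with canned answers. A plain-JDK sketch of that idea (no Mockito involved; the Stub* names are made up for illustration):

```java
public class StubDemo {

    // the real class, same shape as MyReturner above:
    // the dependency-touching method fails without a database
    static class MyReturner {
        public String returnSomething() throws Exception {
            throw new Exception("Can't find the database.");
        }
    }

    // a hand-rolled "mock": subclass and override the risky method
    // with a canned answer -- roughly what mock() + when()/thenReturn()
    // generate for you at runtime
    static class StubbedReturner extends MyReturner {
        @Override
        public String returnSomething() {
            return "some complex results"; // no database touched
        }
    }
}
```

If MyReturner were final, no such subclass could exist, which is exactly why Mockito needs a non-final class here.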
Creating the Unit test
To avoid getting the exception when there is no database, we’ll mock the method returnSomething() to return a specific result on which we can always rely and build our tests on top of it.
Furthermore, we’ll write two tests – one which passes because we have mocked the method and one which does not because it gets the exception.
import org.junit.Test;
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

public class MockitoTest {

    private static final String DB_RESULT = "some complex results";

    @Test
    public void testThatPasses() throws Exception {
        MyReturner mockedReturner = mock(MyReturner.class);
        when(mockedReturner.returnSomething()).thenReturn(DB_RESULT);
        assertEquals(mockedReturner.returnSomething(), DB_RESULT);
    }

    @Test
    public void testThatDoesNotPass() throws Exception {
        MyReturner realReturner = new MyReturner();
        assertEquals(realReturner.returnSomething(), DB_RESULT);
    }
}
In our passing test, we have created a mock of the MyReturner class along with a condition what to be returned when the method returnSomething is called. This is accomplished with when / thenReturn.
On the other hand, the unmocked test (testThatDoesNotPass()) fails because the assertTrue comparison fails.
Of course, the above tests are just for demonstration. In real life the tests are expected to do much more than just to assert the result from a method. On the contrary, this is just the fundament of the test.
This is how easy it is to write a simple Unit test with Mockito by mocking the behaviour of an object and eliminating external, unpredictable dependencies which are not directly related to the testing. | https://knowledgebasement.com/simple-mockito-unit-test/ | CC-MAIN-2021-43 | refinedweb | 459 | 53.61 |
/* Top level entry point of bison,
   Copyright (C) 1984, 1986, 1989,
   675 Mass Ave, Cambridge, MA 02139, USA.  */

#include <stdio.h>
#include "system.h"
#include "machine.h"	/* JF for MAXSHORT */

extern int lineno;
extern int verboseflag;

/* Nonzero means failure has been detected; don't write a parser file.  */
int failure;

/* The name this program was run with, for messages.  */
char *program_name;

extern void getargs(), openfiles(), reader(), reduce_grammar();
extern void set_derives(), set_nullable(), generate_states();
extern void lalr(), initialize_conflicts(), verbose(), terse();
extern void output(), done();

/* VMS complained about using `int'.  */
int
main(argc, argv)
     int argc;
     char *argv[];
{
  program_name = argv[0];
  failure = 0;
  lineno = 0;
  getargs(argc, argv);
  openfiles();

  /* read the input.  Copy some parts of it to fguard, faction, ftable
     and fattrs.  In file reader.c.  The other parts are recorded in
     the grammar; see gram.h.  */
  reader();

  /* find useless nonterminals and productions and reduce the grammar.
     In file reduce.c */
  reduce_grammar();

  /* record other info about the grammar.  In files derives and nullable.  */
  set_derives();
  set_nullable();

  /* convert to nondeterministic finite state machine.  In file LR0.
     See state.h for more info.  */
  generate_states();

  /* make it deterministic.  In file lalr.  */
  lalr();

  /* Find and record any conflicts: places where one token of lookahead
     is not enough to disambiguate the parsing.  In file conflicts.
     Also resolve s/r conflicts based on precedence declarations.  */
  initialize_conflicts();

  /* print information about results, if requested.  In file print.  */
  if (verboseflag)
    verbose();
  else
    terse();

  /* output the tables and the parser to ftable.  In file output.  */
  output();
  done(failure);
}


/* functions to report errors which prevent a parser from being generated */

void
fatal(s)
     char *s;
{
  extern char *infile;

  if (infile == 0)
    fprintf(stderr, "fatal error: %s\n", s);
  else
    fprintf(stderr, "\"%s\", line %d: %s\n", infile, lineno, s);
  done(1);
}


/* JF changed to accept/deal with variable args.
   DO NOT change this to use varargs.  It will appear to work
   but will break on systems that don't have the necessary library
   functions.  This is the ONLY safe way to write such a function.  */
/*VARARGS1*/
void
fatals(fmt, x1, x2, x3, x4, x5, x6, x7, x8)
     char *fmt;
{
  char buffer[200];

  sprintf(buffer, fmt, x1, x2, x3, x4, x5, x6, x7, x8);
  fatal(buffer);
}

void
toomany(s)
     char *s;
{
  char buffer[200];

  /* JF new msg */
  sprintf(buffer, "limit of %d exceeded, too many %s", MAXSHORT, s);
  fatal(buffer);
}

void
berror(s)
     char *s;
{
  fprintf(stderr, "internal error, %s\n", s);
  abort();
}
Hello, I need some help with my if statement.
I want to create a simple text base game.
The game begin with the character reaching the top floor of a castle and encounters the final boss. The character can has two choices, fight or kneel before the boss.
Right now if i type "fight" it display the following message, "You have decided to fight"
and if i type "knee" it will display the following "You have decided to join the Bloody Mistress"
How could i add an if statement into an if statement.
So if I type "fight" it display the following message, "You have decided to fight" followed by typing "axe" to display "You used your axe, your attack was avoided");
Can someone please help me!
Here are my codes:
import java.util.Scanner;
public class TxtG
{
public static String A = "fight";
public static String B = "axe";
public static String C = "kneel";
public static void main(String[] args)
{
Scanner x = new Scanner(System.in);
System.out.println("You gave reached the 100th Floor of the Castle of Blood");
System.out.println("The Bloody Mistress awaits you at the center of the room");
System.out.println("Will you fight her?");
System.out.println("Or will you kneel before her?");
System.out.println("Choice: ");
String i = x.nextLine();
if(i.equals(A))
{
System.out.println("You have decided to fight");
if(A.equals(B))
{
System.out.println("You used your axe, your attack was avoided");
}
}
else if(i.equals(C))
{
System.out.println("You have decided to join the Bloody Mistress");
System.out.println("You will help her kill the Priestess of Hope");
}
}
} | http://www.javaprogrammingforums.com/%20java-theory-questions/18994-need-some-help-my-if-statment-printingthethread.html | CC-MAIN-2016-18 | refinedweb | 272 | 66.23 |
Returning ArrayList from Method
James Dudley
Greenhorn
Joined: May 20, 2011
Posts: 17
posted
May 28, 2011 13:16:27
0
I have been trying to condense some duplicates of code and hit a barrier
I have a class that reads information from a database. I had 4 methods to return an array of 4 different items from database.
I got told about merging them into one method and create a
ArrayList
of Objects
Here is the new method
public ArrayList<ScoreButtonValues> getScoreValues(String pShootType) { shootType = pShootType; scoreButtonValuesArray = new ArrayList<ScoreButtonValues>(); try { Connection conn = DatabaseScoreDisplay.getConnection(); Statement st = conn.createStatement(); ResultSet rec = st.executeQuery( "SELECT * FROM SCORE_VALUES WHERE " + shootType + " = 'Y'"); while(rec.next()) { ScoreButtonValues buttonValues = new ScoreButtonValues( rec.getInt("ARROW_VALUE"), rec.getString("ARROW_DISPLAY_NAME"), rec.getString("ARROW_COLOUR_BACKGROUND"), rec.getString("ARROW_COLOUR_FOREGROUND") ); scoreButtonValuesArray.add(buttonValues); } } catch(Exception e) { System.out.println("*** Error : "+e.toString()); System.out.println("*** "); System.out.println("*** Error : "); e.printStackTrace(); System.out.println("################################################"); } return scoreButtonValuesArray; }
So I have returned the Array of Objects (I believe)
My issue now is reading that back
In the Class I am reading it to I have called the Class
DatabaseScoreDisplay db = new DatabaseScoreDisplay();
I have defined a new
ArrayList
<ScoreButtonValues> called scoreButtonValue
I have tried this
scoreButtonValue = db.getScoreValues(shootType)
and then do the normal
for (ScoreButtonValues output : scoreButtonValue) {}
but the ScoreButtonValues values cannot be accessed
Can someone point me in the right direction
Ralph Cook
Ranch Hand
Joined: May 29, 2005
Posts: 479
posted
May 28, 2011 15:00:06
0
Well, there's almost enough information here.
Do you expect multiple records back from the database query? Your loop is going to create one ScoreButtonValues object per record returned. But your text talks about getting back four values, making me wonder if you expect only one record back from the query.
Your text also makes me think you want to use an array list to return 4 values from your function; but your
ArrayList
is of ScoreButtonValues, and each object of that type evidently has 4 values in it.
If your method is supposed to return one object with 4 values in it, you don't need the
ArrayList
; just have the method return a ScoreButtonValues object, then access the values within it with "getter" methods, for example public
String
getArrowValue(), etc.
If you're trying to return multiple objects, each of which has 4 values, then this is closer to correct. And in fact, in that case, I don't know what your problem is because you didn't tell us. You say the "values cannot be accessed", but you don't tell us what you mean. Are you calling methods that return null? Do you not know how to define methods to return the values? Does the loop blow up before you get that far, and, if so, what kind of error do you get?
rc
James Dudley
Greenhorn
Joined: May 20, 2011
Posts: 17
posted
May 28, 2011 15:12:53
0
ok
The table will output 10 rows. I created 4 methods to bring back a single column for each of the 10 rows and add them to an
ArrayList
. I was then using each
ArrayList
in for loops to create buttons
I was told I could do this within a single method by adding each of the 4 columns into an object and then adding them into an
ArrayList
Here is the Object Class code
public class ScoreButtonValues { int scoreValue; String scoreValueDisplayName; String scoreValuesColourBackground;; } }
So as far as I understand the code in the first forum thread will create 10 objects of the above into an arraylist
What I trying to do then is while loop though the
ArrayList
in the main class and create 10 buttons with names as scoreValueDisplayName, background as scoreValuesColourBackground and forground as scoreValuesColourForeground
Does that help
Thanks
Ralph Cook
Ranch Hand
Joined: May 29, 2005
Posts: 479
posted
May 28, 2011 15:23:51
0
Ok. I'm going to guess that all this OO and Java stuff is very new to you. My apologies if this is too elementary, but something about the way you said "I've been told" leads me to believe this is what you're looking for.
The ScoreButtonValues object you gave us needs some additions. The traditional, better-software-engineering way to do it is something like this:
public class ScoreButtonValues { private int scoreValue; private String scoreValueDisplayName; private String scoreValuesColourBackground; private; } public int getScoreValue() { return scoreValue; } public String getScoreValueDisplayName() { return scoreValueDisplayName; } }
I have made your four variables "private", meaning they cannot be accessed by code outside the class, and I've written two of the "getter methods" that you can use to retrieve values. So if you have an object x of this type, you can use "x.getScoreValue()" to return the int score value, etc.
You could also declare the variables public, and then access each variable using the object reference (such as x) followed by a period followed by the variable name ("x.scoreValue"). I don't recommend this, because it gives you less flexibility than the getter method method.
One more idea for you to ponder. You are putting these vars in an object of type "ScoreButtonValues". You could consider shortening their names to things like "value", "displayName", etc. -- then you would access them with constructs like "x.getValue()" or "x.value". Since they are already in an object of a type that indicates what value it is, I invite you to consider whether the brevity makes it more or less clear.
I hope this answered your question.
rc
James Dudley
Greenhorn
Joined: May 20, 2011
Posts: 17
posted
May 28, 2011 16:19:03
0
Thanks for your time, yes I relative new to this
java
thing
Changed the class to make the values private but not 100% sure why this is better then having them public
also not sure what your last paragraph was about if you could explain further I would be grateful
I agree. Here's the link:
subject: Returning ArrayList from Method
Similar Threads
ArrayList Question
Problem accessing Database from JSP
Adding To An ArrayList
Arraylist with Objects
Global Array not working
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/539761/java/java/Returning-ArrayList-Method | CC-MAIN-2014-15 | refinedweb | 1,041 | 56.79 |
Ever had the problem that inheritance just wasn’t enough? For example in this case:
package core; public class Main { public static void main(String[] args) { sayHello(); } private static void sayHello() { System.out.println("Hello World!"); } }
Say you need to override
sayHello(). What are your options?
You could edit the source. But if you only have the bytecode?
You could decompile the bytecode. But what if you need the same change in several places?
Eventually, you’ll have to use copy more or less code around because Java just doesn’t offer enough ways to combine code. The solution? Object Teams. See this article for a solution of the problem above.
What is Object Teams?
In a nutshell, a team is a group of players which have roles. Usually, the roles are distributed by way of the strengths of your players. Slow and sturdy ones will be defense. Quick and agile players will attack. The calm ones will do tactics.
OT/J (the Java implementation of Object Teams) does just that: You define teams and roles and then apply those to existing Java code. Let’s take the Observer pattern (you know that design patterns are workarounds for flaws in the language’s design, yes?).
The problem with this pattern is that you can’t apply it retroactively. You must modify the involved classes to support the pattern even if they couldn’t care less. That leads to a lot of problems. Classes which don’t need it must still allocate resources for it. It clutters the API. You can’t add it to classes which don’t support it. If a class only allows you to observe some behavior, you can’t change that.
OT/J fixes all that. The OTExample Observer shows three classes which could make use of the Observer pattern but don’t. Instead, the pattern is retrofitted in all the places where it’s necessary. This allows for many optimizations. For example, you can create a single store for all your observers (instead of one per instance). The API isn’t cluttered. You can observe almost any change. | https://blog.pdark.de/2010/09/16/when-inheritance-is-not-enough/ | CC-MAIN-2021-43 | refinedweb | 353 | 77.84 |
Settings Types
Let’s say you’re writing a webserver. You want the server to take a port to listen on, and an application to run. So you create the following function:
run :: Int -> Application -> IO ()
But suddenly you realize that some people will want to customize their timeout durations. So you modify your API:
run :: Int -> Int -> Application -> IO ()
So, which
Int is the timeout, and which is the port? Well, you could create
some type aliases, or comment your code. But there’s another problem creeping
into our code: this
run function is getting unmanageable. Soon we’ll need to
take an extra parameter to indicate how exceptions should be handled, and then
another one to control which host to bind to, and so on.
A more extensible solution is to introduce a settings datatype:
data Settings = Settings { settingsPort :: Int , settingsHost :: String , settingsTimeout :: Int }
And this makes the calling code almost self-documenting:
run Settings { settingsPort = 8080 , settingsHost = "127.0.0.1" , settingsTimeout = 30 } myApp
Great, couldn’t be clearer, right? True, but what happens when you have 50 settings to your webserver. Do you really want to have to specify all of those each time? Of course not. So instead, the webserver should provide a set of defaults:
defaultSettings = Settings 3000 "127.0.0.1" 30
And now, instead of needing to write that long bit of code above, we can get away with:
run defaultSettings { settingsPort = 8080 } myApp -- (1)
This is great, except for one minor hitch. Let’s say we now decide to add an
extra record to
Settings. Any code out in the wild looking like this:
run (Settings 8080 "127.0.0.1" 30) myApp -- (2)
will be broken, since the
Settings constructor now takes 4 arguments. The
proper thing to do would be to bump the major version number so that dependent
packages don’t get broken. But having to change major versions for every minor
setting you add is a nuisance. The solution? Don’t export the
Settings
constructor:
module MyServer ( Settings , settingsPort , settingsHost , settingsTimeout , run , defaultSettings ) where
With this approach, no one can write code like (2), so you can freely add new records without any fear of code breaking.
The one downside of this approach is that it’s not immediately obvious from the Haddocks that you can actually change the settings via record syntax. That’s the point of this chapter: to clarify what’s going on in the libraries that use this technique.
I personally use this technique in a few places, feel free to have a look at the Haddocks to see what I mean.
Warp: Settings
http-conduit: Request and ManagerSettings
xml-conduit
Parsing: ParseSettings
Rendering: RenderSettings
As a tangential issue,
http-conduit and
xml-conduit actually create
instances of the
Default typeclass instead of declaring a brand new
identifier. This means you can just type
def instead of
defaultParserSettings. | http://www.yesodweb.com/book/settings-types | CC-MAIN-2015-35 | refinedweb | 485 | 61.26 |
"The views expressed in this article are the author's own and do not necessarily reflect the views of Oracle."
(For more resources on Oracle, see here.)
Introduction
This article is better understood by people who have some familiarity with Oracle database, SQL, PL/SQL, and of course Java (including JDBC). Beginners can also understand the article to some extent, because it does not contain many specifics/details. The article can be useful to software developers, designers and architects working with Java.
Oracle database provides a Java runtime in its database server process. Because of this, it is possible not only to store Java sources and Java classes in an Oracle database, but also to run the Java classes within the database server. Such Java classes will be 'executed' by the Java Virtual Machine embedded in the database server. The Java platform provided is J2SE-compliant, and in addition to the JVM, it includes all the Java system classes. So, conceptually, whatever Java code that can be run using the JREs (like Sun's JRE) on the operating system, can be run within the Oracle database too.
Java stored procedure
The key unit of the Java support inside the Oracle database is the 'Java Stored Procedure' (that may be referred to as JSP, as long as it is not confused with JavaServer Pages). A Java stored procedure is an executable unit stored inside the Oracle database, and whose implementation is in Java. It is similar to PL/SQL stored procedures and functions.
Creation
Let us see an example of how to create a simple Java stored procedure. We will create a Java stored procedure that adds two given numbers and returns the sum.
The first step is to create a Java class that looks like the following:
public class Math
{
public static int add(int x, int y)
{
return x + y;
}
}
This is a very simple Java class that just contains one static method that returns the sum of two given numbers. Let us put this code in a file called Math.java, and compile it (say, by doing 'javac Math.java') to get Math.class file.
The next step is to 'load' Math.class into the Oracle database. That is, we have to put the class file located in some directory into the database, so that the class file gets stored in the database. There are a few ways to do this, and one of them is to use the command-line tool called loadjava provided by Oracle, as follows:
loadjava -v -u scott/tiger Math.class
Generally, in Oracle database, things are always stored in some 'schema' (also known as 'user'). Java classes are no exception. So, while loading a Java class file into the database, we need to specify the schema where the Java class should be stored. Here, we have given 'scott' (along with the password). There are a lot of other things that can be done using loadjava, but we will not go into them here.
Next, we have to create a 'PL/SQL wrapper' as follows:
SQL> connect scott/tiger
Connected.
SQL>
SQL> create or replace function addition(a IN number, b IN number) return number
2 as language java name 'Math.add(int, int) return int';
3 /
Function created.
SQL>
We have created the PL/SQL wrapper called 'addition', for the Java method Math.add(). The syntax is same as the one used to create a PL/SQL function/procedure, but here we have specified that the implementation of the function is in the Java method Math.add(). And that's it. We've created a Java stored procedure! Basically, what we have done is, implemented our requirement in Java, and then exposed the Java implementation via PL/SQL.
Using Jdeveloper, an IDE from Oracle, all these steps (creating the Java source, compiling it, loading it into the database, and creating the PL/SQL wrapper) can be done easily from within the IDE.
One thing to remember is that, we can create Java stored procedures for Java static methods only, but not for instance methods. This is not a big disadvantage, and in fact makes sense, because even the main() method, which is the entry point for a Java program, is also 'static'. Here, since Math.add() is the entry point, it has to be 'static'. So, we can write as many static methods in our Java code as needed and make them entry points by creating the PL/SQL wrappers for them.
Invocation
We can call the Java stored procedure we have just created, just like any PL/SQL procedure/function is called, either from SQL or PL/SQL:
SQL> select addition(10, 20) from dual;
ADDITION(10,20)
---------------
30
SQL>
SQL> declare
2 s number;
3 begin
4 s := addition(10, 20);
5 dbms_output.put_line('SUM = ' || s);
6 end;
7 /
SUM = 30
PL/SQL procedure successfully completed.
SQL>
Here, the 'select' query, as well as the PL/SQL block, invoked the PL/SQL function addition(), which in turn invoked the underlying Java method Math.add().
A main feature of the Java stored procedure is that, the caller (like the 'select' query above) has no idea that the procedure is indeed implemented in Java. Thus, the stored procedures implemented in PL/SQL and Java can be called alike, without requiring to know the language in which the underlying implementation is. So, in general, whatever Java code we have, can be seamlessly integrated into the PL/SQL code via the PL/SQL wrappers. Putting in other words, we now have more than one language option to implement a stored procedure - PL/SQL and Java. If we have any project where stored procedures are to be implemented, then Java is a good option, because today it is relatively easier to find a Java programmer.
(For more resources on Oracle, see here.)
Interaction among SQL, PL/SQL and Java
So far, we have seen how to create a stored procedure implemented in Java, and how to invoke it from SQL or PL/SQL. The integration among these languages is much more possible because we can do the other way round also - invoke SQL statements or PL/SQL code from Java. This can be done using JDBC. We don't have enough scope to discuss this further here, but those who are familiar with the basics of JDBC programming would know how to invoke a SQL statement or call a PL/SQL procedure/function from Java. Moreover, Oracle provides a server-side internal JDBC driver, which can be used by the Java programs running inside the database server, for faster access to the data within that database. So, the picture is:
The Java stored procedure example we have seen is simple but good enough to illustrate conceptually how these languages (SQL, PL/SQL and Java) can work together.
What can we do with Java in Database?
There are several things we can do by having our Java code inside the Oracle database. We will discuss a few ideas here conceptually, without going into the particulars.
Reduce round-trips: In general, we can reduce the number of round-trips between the middle-tier and the database (in a typical 3-tier architecture) by putting all the relevant Java code in the database itself. With one request from the middle-tier, the Java code in the database can process the request, derive the required data from existing data in the database, and then send it back to the middle-tier, perhaps in some required format.
Triggers: Another useful way of using Java in the database is to have a Java stored procedure underneath a trigger. For instance, if we want to send some message, perhaps encrypted, to external clients, whenever a new row is inserted into a particular table, or whenever a row is deleted from a particular table, then we can do this message sending part from Java, which is relatively easier.
Database as client: Oracle database provides a pure Java JDBC driver in the server, also called 'server-side thin driver'. Since we have such a driver inside the database, we can turn our Oracle database into a 'client' to another Oracle database, in the sense, the Java code residing in one Oracle database can use the database-resident JDBC driver to connect to, and access, another Oracle database, or possibly multiple Oracle databases. The Java code in an Oracle database can even access a non-Oracle database. All we have to do for this is, to load a pure Java JDBC driver for such a non-Oracle database into the Oracle database, and make the Java code in the Oracle database use that driver to access the non-Oracle database.
Web services: With the Java support inside the Oracle database, we can make the database a provider of web services as well as a consumer of web services. To make the database a web service consumer, we can load the web service Java client proxy, along with the Java code on top of it, into the database. To help achieve this, there are tools like Oracle’s JPublisher that generate the necessary web service client side code in Java, from a given WSDL URL. It is also possible to consume the web service from PL/SQL by means of the PL/SQL wrappers written on top of the Java client. We can also consume the web service by loading the web service Java clients, that use Dynamic Invocation Interface, into the database. In the other way round, the Oracle database can be used as the back-end for a web service implementation. For instance, we can implement a web service in Java, load it into the database, write a PL/SQL wrapper, and invoke the PL/SQL wrapper via JDBC from the Application server.
Features of the Java support in Oracle database
There are several features of the Java platform provided by (and inside) an Oracle database. Let us see briefly some of them here:
J2SE compliance: Every release of Oracle database comes with the support for a particular version of J2SE. Java Compatibility Kit (JCK) tests are run for compatibility checking. For instance, Oracle database 11g supports J2SE 5.0, and the compliance is verified using JCK 5.
Efficient session-based model: Every database session has its own JVM, however, all the JVMs share the common read-only metadata (like method byte codes, and other internal memory structures representing a class). In this sense, n database sessions having their own JVMs are better than n JVMs running on the operating system. Also, it is possible for Java 'execution' in an RDBMS call to gain advantage in performance from the Java execution that happened in earlier RDBMS calls in the same session.
Enhanced performance: From Oracle database 11g, the JVM embedded inside the database server does JIT compiling of selected methods at run time, thereby making the Java 'execution' much faster.
Internal Java compiler: Just as we can store Java classes inside the database, we can store Java sources also. For instance, we can load .java files into the database by using the loadjava tool mentioned earlier. Such Java sources can be compiled explicitly, or they get compiled implicitly when needed. In either case, an internal Java compiler is used for compiling the Java sources.
JMX support: We can start a JMX agent inside the database server, and connect to it using a JMX client like jconsole, and get various kinds of information, like what classes are currently loaded, what threads are running, what is the status of Java heap, etc.
Default security manager: The Java runtime in Oracle database server always has a security manager installed, which is consulted before certain operations are carried out, like writing to a file, reading from a socket, creating a class loader, etc.
In addition to the above, many other things that can usually be done on JREs running on the operating system can also be done inside the Oracle database. For instance, we can set various Java system properties. The PL/SQL package DBMS_JAVA that comes with the database server provides several such functionalities.
What is not supported?
Since the Java runtime exists inside the Oracle database server process, there are certain things that are inherently not supported.
JNI libraries: We cannot load JNI libraries by System.loadLibrary(), for security reasons. From the security perspective, allowing Java code to run inside the database server is quite different from allowing native code, because in case of Java, the JVM itself provides some safety for the runtime. For instance, the JVM simply throws a NullPointerException if some Java code accesses a null reference, whereas the database server process could crash if some native code (that is part of a JNI library) accesses a null pointer.
AWT: There is no full support for AWT. Instead, what is supported is Headless AWT. It does not make sense anyway, for instance, to open up some window in the database server. However, any computations regarding the graphics manipulation are allowed.
Summary
In this article, we have familiarized ourselves with various things that Oracle database offers by means of an embedded Java runtime, how Java stored procedures can be used to extend the PL/SQL world to Java, and how SQL, PL/SQL and Java can work with each other. There are many details that are not covered here. This article just serves as a portal, and anyone who is interested more in a particular feature or usage can continue from here.
Further resources on this subject:
- Oracle JRockit: The Definitive Guide [Book]
- Fine Tune the View layer of your Fusion Web Application [Article]
- Debugging Java Programs using JDB [Article] | https://www.packtpub.com/books/content/java-oracle-database | CC-MAIN-2015-11 | refinedweb | 2,286 | 50.77 |
- Preventing Unnecessary Duplication
- Building a Panel
- Where Do We Go from Here?
Preventing Unnecessary Duplication
Part 9 of this series introduced test code to verify the contents of the Texas Hold ’Em title bar. It’s one simple line in HoldEmTest:
assertEquals("Hold ’Em", frame.getTitle());
It’s also one simple line in the production class, HoldEm:
frame.setTitle("Hold ’Em");
Those two lines each contain the same hard-coded string literal. Sometimes we create duplication in production code, sometimes we create it in the tests, and sometimes we create duplication across test and production code. Regardless, we’ll need to eliminate it before moving on.
We could introduce a constant, perhaps a static final field defined on HoldEm. We could also consider using the Java resource bundle, a construct designed to help us manage locale-specific resources. We might want to sell our application internationally; in that case, we’d have to provide internationalization support in the application. Part of that internationalization would be done by using resource bundles.
I said we might want to sell our application internationally. We’re really not sure yet. So, do we want to use resource bundles yet? From a pure Agile standpoint, they’re something we don’t need. Introducing them would seem to be premature.
What isn’t premature, however, is our need to eliminate duplication. We must stomp out all duplication; otherwise, our application will slowly but surely die. Given a number of options for eliminating duplication, we can choose any one of them, as long as the one we choose doesn’t introduce unnecessary complexity. Using resource bundles is a simple solution to this problem, and also one that fits into an established standard. The cost is roughly the same either way, so we choose the solution that results in a more flexible design.
We’ll want to create a utility method that extracts a string from the resource bundle. A test for this utility might write a sample properties file containing fixed key-value pairs, and then assert that the utility method extracted this information. The problem is, however, that we don’t want to overwrite the same properties file that the rest of our application normally uses.
One way we can solve this problem is by designing the bundle utility to allow the use of different property files. That sounds more like a related handful of methods than a single utility method. Let’s apply the single responsibility principle and put this common functionality into its own class. We’ll name it Bundle. The test and associated production code are shown in Listings 1 and 2.
Listing 1 BundleTest.
package util; import java.io.*; import junit.framework.*; public class BundleTest extends TestCase { private String existingBundleName; private static final String SUFFIX = "test"; private static final String TESTFILE = String.format("./%s/%s%s.properties", Bundle.PACKAGE, Bundle.getName(), SUFFIX); protected void setUp() { deleteTestBundle(); existingBundleName = Bundle.getName(); Bundle.use(existingBundleName + SUFFIX); } protected void tearDown() { Bundle.use(existingBundleName); deleteTestBundle(); } private void deleteTestBundle() { new File(TESTFILE).delete(); } public void testGet() throws IOException { BufferedWriter writer = new BufferedWriter(new FileWriter(TESTFILE)); writer.write("key=value"); writer.newLine(); writer.close(); assertEquals("value", Bundle.get("key")); } }
Listing 2 Bundle.
package util; import java.util.*; public class Bundle { static final String PACKAGE = "util"; private static String baseName = "holdem"; private static ResourceBundle bundle; public static String get(String key) { if (bundle == null) load(); return bundle.getString(key); } private static void load() { bundle = ResourceBundle.getBundle(PACKAGE + "." + getName()); } public static String getName() { return baseName; } public static void use(String name) { baseName = name; bundle = null; } }
I see a lot of systems in which each class contains code that loads the resource bundle. To me, this is unnecessary duplication. It also introduces strong dependencies of your system to Sun’s implementation specifics. We’ll instead encapsulate that detail in our Bundle class.
Once the Bundle class is in place, we can update our HoldEmTest code:
assertEquals(Bundle.get(HoldEm.TITLE), frame.getTitle());
and our HoldEm code:
static final String TITLE = "holdem.title"; ... frame.setTitle(Bundle.get(HoldEm.TITLE));
Of course, we need to create the properties file! Per the code, it should be named holdem.properties, and should appear in the util directory. Here are its contents:
holdem.title=Hold ’Em
Having the Bundle utility class in place will pay off as we add more text to the user interface. | http://www.informit.com/articles/article.aspx?p=461088&seqNum=2 | CC-MAIN-2019-43 | refinedweb | 728 | 50.12 |
Getting Started with FlexCel Studio for the .NET Framework
0. Before starting: Choosing how to install and reference FlexCel
When installing FlexCel, there are 2 options:
Download the exe setup. This is the preferred way to install FlexCel on Windows, since it will install the NuGet packages, the libraries, the examples, and the docs.
Download the NuGet packages. This includes only the NuGet packages (which are also included in the exe setup), but doesn't include the example code or docs, and won't register the NuGet source on your machine. This is the preferred way to install FlexCel on platforms other than Windows, and you can find more information on how to install it in the installation guide.
Once you have FlexCel installed, you need to decide how to reference it. There are 2 ways you can reference FlexCel:
Install via NuGet packages. This is a standard installation same as any other NuGet installation, but with the difference that FlexCel is not stored in nuget.org How to install via NuGet is detailed step by step in the installation guide
Install by manually referencing the assemblies. This is possible in all platforms except in .NET Core, where the only option is via NuGet.
How you decide to reference the assemblies is up to you: .NET is moving from a monolithic framework to a framework "on demand" via NuGet, and so we would recommend you to use NuGet too. But if you prefer to reference the assemblies directly, you can do that too.
Tip
If you are unsure, just install the exe setup and use the FlexCel NuGet packages.
1. Creating an Excel file with code
The simplest way to use FlexCel is to use the XlsFile class to manipulate files.
To get started, create an empty Console application and save it. Add the TMS.FlexCel NuGet package to your application or add a manual reference to FlexCel.dll
Then replace all the text in the file by the following:
using System; using FlexCel.Core; using FlexCel.XlsAdapter; namespace Samples { class MainClass { public static void Main(string[] args) { /File xls = new XlsFile(1, TExcelFileFormat.v2019, true); /, new TFormula("=Sum(A2:A3)")); //Saves the file to the "Documents" folder. xls.Save(System.IO.Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Personal), "test.xlsx")); } } }.
You can also download APIMate for your operating system from the following locations: C# or VB.NET.:
Create a new Console application, and write the following code:
using System; using FlexCel.Core; using FlexCel.XlsAdapter; namespace FileReader { class MainClass { public static void Main(string[] args) { XlsFile xls = new XlsFile(System.IO.Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Personal), "test.xlsx")); xls.ActiveSheetByName = "Sheet1"; //we'll read sheet1. We could loop over the existing sheets by using xls.SheetCount and xls.ActiveSheet for (int row = 1; row <= xls.RowCount; row++) { for (int colIndex = 1; colIndex <= xls.ColCountInRow(row); colIndex++) //Don't use xls.ColCount as it is slow: { int XF = -1; object cell = xls.GetCellValueIndexed(row, colIndex, ref XF); TCellAddress addr = new TCellAddress(row, xls.ColFromIndex(row, colIndex)); Console.Write("Cell " + addr.CellRef + " has "); if (cell is TRichString) Console.WriteLine("a rich string."); else if (cell is string) Console.WriteLine("a string."); else if (cell is Double) Console.WriteLine("a number."); else if (cell is bool) Console.WriteLine("a bool."); else if (cell is TFlxFormulaErrorValue) Console.WriteLine("an error."); else if (cell is TFormula) Console.WriteLine("a formula."); else Console.WriteLine("Error: Unknown cell type"); } } } } } Exce ExcelFile.DeleteRange to delete ranges of cells, full rows or full columns.
Use ExcelFile.MoveRange to move a range, full rows or full columns from one place to another.
Use ExcelFile.InsertAndCopySheets to insert a sheet, to copy a sheet, or to insert and copy a sheet in the same operation.
Use Exce OSX:
using System; using System.Collections.Generic; using FlexCel.Core; using FlexCel.Report; namespace Samples { class MainClass { public static void Main(string[] args) { var Customers = new List<Customer>(); Customers.Add(new Customer { Name = "Bill", Address = "555 demo line" }); Customers.Add(new Customer { Name = "Joe", Address = "556 demo line" }); using (FlexCelReport fr = new FlexCelReport(true)) { fr.AddTable("Customer", Customers); fr.Run( System.IO.Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Personal), "report.template.xlsx"), System.IO.Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Personal), "result.xlsx") ); } } } class Customer { public string Name { get; set; } public string Address { get; set; } } }
Note
If creating an OSX Console application, you will need to add a reference to System.Data, System.Xml and XamMac or MonoMac to the app. You might also need to copy XamMac.dll or MonoMac.dll to your output folder. And in a console application, you will need to initialize the Cocoa framework by calling MonoMac.AppKit.NSApplication.Init() For normal OSX applications you will not need to do anything: This applies only to Console OSX apps.:
public static void ExportToPdf(string inFile, string outFile) { XlsFile xls = new XlsFile(inFile); using (var pdf = new FlexCel.Render.FlexCelPdfExport(xls, true)) { pdf.Export(outFile); } }.
public static void ExportToHtml(string inFile, string outFile) { XlsFile xls = new XlsFile(inFile); using (var html = new FlexCel.Render.FlexCelHtmlExport(xls, true)) { html.Export(outFile, null); } }
8.Browsing through the Examples
FlexCel comes with more than 50 examples of how to do specific things. You can open each demo as a standalone project, but you can also use the included "Demo Browser" (this is MainDemo.cs. | http://www.tmssoftware.biz/flexcel/doc/net/guides/getting-started.html | CC-MAIN-2019-04 | refinedweb | 891 | 51.95 |
NAME
Wait for signals on multiple objects.
SYNOPSIS
#include <zircon/syscalls.h> zx_status_t zx_object_wait_many(zx_wait_item_t* items, size_t num_items, zx_time_t deadline);
DESCRIPTION
zx_object_wait_many() is a blocking syscall which causes the caller to wait
until either the deadline passes or at least one object referred to in
items has a specified signal asserted. If an object is already
asserting at least one of the specified signals, the wait ends immediately with
ZX_OK.
typedef struct { zx_handle_t handle; zx_signals_t waitfor; zx_signals_t pending; } zx_wait_item_t;
The caller must provide count
zx_wait_item_ts in the items array,
containing the handle and signals bitmask to wait for for each item.
Each item should contain a valid handle referring to an object to
wait for, and a bitmask waitfor indicating which signals should wake
the calling thread.
The deadline parameter specifies a deadline with respect to ZX_CLOCK_MONOTONIC and will be automatically adjusted according to the job's [timer slack] policy. ZX_TIME_INFINITE is a special value meaning wait forever.
Upon return, the pending field of items is filled with bitmaps indicating which signals are pending for each item.
The maximum number of items that may be waited upon is ZX_WAIT_MANY_MAX_ITEMS, which is 64. To wait on more objects at once use Ports.
RIGHTS
Every entry of items must have a handle field with ZX_RIGHT_WAIT.
RETURN VALUE
zx_object_wait_many() returns ZX_OK if any of waitfor signals were
active when the call was made, or observed on their respective object before
deadline passed.
In the event of ZX_ERR_TIMED_OUT, items may reflect state changes that occurred after the deadline passed, but before the syscall returned.
In the event of ZX_ERR_CANCELED, one or more of the items being waited upon have had their handles closed, and the pending field for those items will have the ZX_SIGNAL_HANDLE_CLOSED bit set.
For any other return value, the pending fields of items are undefined.
ERRORS
ZX_ERR_INVALID_ARGS items isn't a valid pointer.
ZX_ERR_OUT_OF_RANGE count is greater than ZX_WAIT_MANY_MAX_ITEMS.
ZX_ERR_BAD_HANDLE one of items contains an invalid handle.
ZX_ERR_ACCESS_DENIED One or more of the provided handles does not have ZX_RIGHT_WAIT and may not be waited upon.
ZX_ERR_CANCELED One or more of the provided handles was invalidated (e.g., closed) during the wait.
ZX_ERR_TIMED_OUT The specified deadline passed before any of the specified signals are observed on any of the specified handles.
ZX_ERR_NOT_SUPPORTED One of the items contains a handle that cannot be waited one (for example, a Port handle).
ZX_ERR_NO_MEMORY Failure due to lack of memory. There is no good way for userspace to handle this (unlikely) error. In a future build this error will no longer occur.
BUGS
pending more properly should be called observed.
NOTES
See signals for more information about signals and their terminology. | https://fuchsia.dev/fuchsia-src/reference/syscalls/object_wait_many | CC-MAIN-2020-29 | refinedweb | 445 | 56.15 |
I want to display google maps on a view that's then added to
self.view
self.view
GMSMapView
gmView
let mapView = GMSMapView.map(withFrame: CGRect.zero, camera: GMSCameraPosition.camera(withLatitude: 51.050657, longitude: 10.649514, zoom: 5.5))
gmView = mapView
mapView
self.view
self.view.addSubview(mapView)
self.view.insertSubview(mapView, at: 0)
If you want to add a
mapView after the loading of the
view, then you need to create an object of
GMSMapView. So break the outlets of your
mapView since it will be created dynamically.
import UIKit import GoogleMaps class MapViewController: UIViewController { //Take a Google Map Object. Don't make outlet from Storyboard, Break the outlet of GMSMapView if you made an outlet var mapView:GMSMapView? override func viewDidLoad() { super.viewDidLoad() mapView = GMSMapView.map(withFrame: CGRect(x: 100, y: 100, width: 200, height: 200), camera: GMSCameraPosition.camera(withLatitude: 51.050657, longitude: 10.649514, zoom: 5.5)) //so the mapView is of width 200, height 200 and its center is same as center of the self.view mapView?.center = self.view.center self.view.addSubview(mapView!) } }
Here is the output.
mapView is of width = 200 and height = 200 with center as same as
self.view | https://codedump.io/share/vebjnnAEIK10/1/google-maps-gmsmapview-on-custom-uiview | CC-MAIN-2017-39 | refinedweb | 198 | 62.44 |
The ps3game.c program looks pretty simple to exploit: it receives code from the network, executes it, and sends back the response. So… just send your shellcode and you’re done, right?
Not quite 🙂
In the program there is this comment:
/* GEOWITCH: this is the port protected by codeserv */ .sin_port = 0x2823,
(Note that the port number is actually 9000, since the port number is given in network byte order. 0x2328 == 9000.)
And there is a mysterious codeserv.ko file sitting in the programs directory. The file extension “.ko” indicates that this is a kernel module, and indeed “lsmod | grep codeserv” confirms that it is loaded.
No source for this kernel module, so time for some reversing. Unfortunately, the disassembler is not able to resolve the kernel symbols (or even the internal cross references), so the disassembly is a bit harder to follow than usual. There’s probably some way to get around this, but since there isn’t much time let’s try to figure it out as it is.
The function names show that RSA and TEA are probably used, with TEA (Tiny Encryption Algorithm) being somehow used to implement a hash function:
00000000000000b0 <codeserv_hash_tea>: ... 0000000000000180 <codeserv_verify_rsa>: ...
So this kernel module probably checks a signature on incoming packets and discards them if it is incorrect.
How does such a signing scheme work? Well, to start with it relies on having a public-key cryptography algorithm like RSA and a hash function. The data which should be signed is hashed, and the signer encrypts the hash with RSA using his private key. This encrypted hash is the signature. Now the receiver can verify the signature by hashing the data itself and verifying that the hash is equal to what you get when you decrypt the signature using the RSA public key.
There are a couple of things which can go wrong with such a signing scheme:
All of these problems make it possible for an attacker to potentially generate new signatures. And in fact, this implementation suffers from all of these problems! The weak RSA parameters are by far the easiest to exploit however, so we’ll continue with that one in a minute. First we’ll discuss the other problems a bit since it shows many interesting details about how this code signing works!
How would one exploit a bad hash function? If is possible to change the signed data in such a way that the hash of the changed version is equal to the hash of the original data then the original signature will also be valid for the changed version.
Now, how does the hash function in codeserv work? As mentioned before, it is a hash function constructed using TEA, which is an encryption algorithm. A standard way to construct a hash function from an encryption algorithm is to loop over the data to be hashed and use the chunks of data as encryption keys to repeatedly encrypt some fixed starting value. The final encrypted value is the hash. This is also how it is done in codeserv: it takes the first two 32-bit words of data as the plaintext and encrypts this repeatedly using successive chunks of data as the key.
A good encryption algorithm does not allow you to recover the key even when you know a plaintext and ciphertext (this is called resistence to known-plaintext attacks). When used as a hash function this means that you can’t find new data that hashes to the same value (collision resistence).
Well, the wikipedia page for TEA mentions that TEA should not be used as a hash function because each key is equivalent to three others. The linked reference has more details, and explains that flipping the most significant bit in the first and second words (or the third and fourth words) of the 4 * 32 bit key has no effect on the encryption. In our case, this means that we can flip some specific bits in the signed data without affecting the signature! The wikipedia page mentions that this was used to hack the XBOX, but it is probably not enough to hack this challenge…
To understand this, you need to know a little bit about how RSA works. RSA works using modular exponentiation. Exponentiation is something we all learn in high school, and "modular" simply means that after every mathematical operation you take the remainder after division by some number called the modulus. In most programming languages you can express this as:
result = pow(a,b) % m
(START SIDENOTE ON EXPONENTIATION)
The result of the exponentation gets large very quickly, so you really want to take the remainder many times during the exponentation to keep the size of the result down (taking the remainder after every multiplication gives the same result as just taking the remainder at the end). This means doing the exponentiation manually, for example using the repeated squaring algorithm (you can actually see this being used in the codeserv_verify_rsa disassembly!). We cheated by using python instead, since it has built-in modular exponentiation using the third parameter of the pow function 🙂
result = pow(a,b,m)
(END SIDENOTE ON EXPONENTIATION)
The trick with RSA is that it uses a modulus which is the product of two large prime numbers. Without getting into the details, this means that when you raise some number n to a well-chosen exponent a modulo m, there is another number b which can be used to recover the original n:
n == pow(pow(n, a, m), b, m)
In RSA, a and b are the public and private exponents, m is the modulus (which is public), and n is value to be encrypted. Which one is called the "public" exponent and which one is the "private" exponent is decided based on other things.
This means RSA encryption and decryption are really just modular exponentiations. Since pow(n1, a, m) * pow(n2, a, m) == pow(n1 * n2, a, m), you can actually create an encryption for n1 * n2 without knowing the key a if you know the encryptions of n1 and n2.
In our case this means that you can generate signatures for a lot of different hashes from a single valid signature. You can generate many such signatures and generate many different variants of some data you want to sign, and hope that one of the generated signatures corresponds to the hash of one of the generated pieces of code. This will still take a long time, but it is more efficient than just bruteforce generating useful code payloads until you find one which has a hash that you have captured a signature for. Of course, the next section describes a much more efficient exploit, so it’s not really relevant here 🙂
As mentioned before, the public and private exponent can be calculated from each other. Usually the public exponent is 0x10001 and the private exponent is calculated from that using the Extended Euclidean Algorithm. To perform this calculation you need to know the prime factors p and q which were multiplied to form the modulus.
If you are the one who generated the keys this is easy, but if you are an attacker you need to factorize the modulus. (The modulus is public, remember.) Factorization is a problem which gets more difficult very quickly when the size of the number to factorize gets larger.
Looking in the disassembly of the codeserv_verify_rsa function there are some interesting constant values:
1ad: 48 ba 31 fb 08 cf 03 movabs rdx,0x908a8003cf08fb31 1b4: 80 8a 90 1b7: 48 89 55 e8 mov QWORD PTR [rbp-0x18],rdx 1bb: 31 f6 xor esi,esi 1bd: bf 01 00 00 00 mov edi,0x1 1c2: b9 01 00 01 00 mov ecx,0x10001 1c7: 48 89 45 e0 mov QWORD PTR [rbp-0x20],rax 1cb: eb 03 jmp 1d0
As mentioned before, 0x10001 is the number which is usually used as a public exponent. And maybe 0x908a8003cf08fb31 is the modulus? If so we are in luck, because a 64 bit value is easy to factor! Let’s try it using the GNU factor utility:
reinhart@bossbox:~$ python -c 'print 0x908a8003cf08fb31' 10415277842094422833 reinhart@bossbox:~$ factor 10415277842094422833 10415277842094422833: 3003703709 3467478437
So this number is the product of two primes of rougly equal size. Looks like an RSA modulus to me! So now we can calculate the private exponent.
# this is based on the extended euclidean algorithm, see: # def invmod(i,n): (a, aa, b, bb) = (1, i, 0, n) while bb: d = aa / bb r = aa % bb (a, aa, b, bb) = (b, bb, a - b*d, r) return a % n # a final modulo operation just to fix up negative values # prime factors of the RSA modulus p = 3003703709 q = 3467478437 # calculate private exponent from the public exponent and the prime factors # (p-1)*(q-1) is the order of the multiplicative group N/modulus print invmod(0x10001, (p-1)*(q-1))
This gives the private exponent: 5680034867677956657.
The next step is to figure out the details of the implementation, in order to produce signed packets in the format codeserv.ko expects.
To begin with, we have some guesses about the code layout:
init_module: // install hook function for intercepting udp packets cleanup_module: // remove hook function for intercepting udp packets __udp_rcv: // this is the hook function. if port == 9000 and !codeserv_verify_rsa(packet): // toss packet else: // invoke original UDP packet handler codeserv_verify_rsa: // don't know where "data" and "signature" parts of the packet are for now hash1 = codeserv_hash_tea(packet_data) hash2 = rsa_decrypt(packet_signature) return hash1 == hash2 codeserv_hash_tea: // use TEA to calculate a hash over the packet data somehow...
As mentioned before, we have some good ideas about how the hash and RSA code probably work. So it’s time to capture some packets to test these assumptions on. Capturing on the system itself did not work for us, so we sniffed a bunch of packets to port 9000 on our VPN gateway.
First thing is to read in these packets using Scapy, which is an excellent python library (and commandline app) for all kinds of network fun.
reinhart@bossbox$ python Python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53) [GCC 4.5.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scapy.all import rdpcap WARNING: No route found for IPv6 destination :: (no default route?) >>> pkts = rdpcap("./data.pcap") >>> pkts[0].show() ###[ IP ]### version = 4L ihl = 5L tos = 0x0 len = 524 id = 1 flags = frag = 0L ttl = 63 proto = udp chksum = 0x62c8 src = 10.11.0.1 dst = 10.11.3.2 \options \ ###[ UDP ]### sport = 56801 dport = 9000 len = 488 chksum = 0x0 ###[ Raw ]### load = "\xd9\xec\xbbj\x8e\xe1~\xb9\x8e\xbb\xe1j\x8dL\xe2\xc2\x0f\xb6\xc0\xb0g\xb9~j\xbb\x8e\xb91\x8e~\xbb\xd9t$\xb4\x8bT$\xc0ff\x81\xd9\xd1\xc31\\\x825\xff\xc8y\xf6\xe6\xde~\xcbdrR\x01\xf4\xfb\xd9\xed~\x92
\x9f\xf0\x1a\xef\xbePg\xb9\xb8\xce\x8b=\x98\xcc\x04\xc7Bt\xf3\xfa\x12\xf2\xaapg\x8e,s\xd2\xf9\xee\x80\x9d\xd8\\\xfb\xec\xb1`W\xfb\xb0\xcb\xc2\xca*\x16Z\x18\x84\xcb\x13n\x9e\x15,\x89\xf3\xf3\xc4\x1bd\x96z\xb4\xf0\xbc1\x06\xa8\x88O\xfc\xa9\x7f\x8fEv\xee\xcf4\xbbb\xb3\xdc\xb0\xf2\x7f\x99l\x9a\xdcd\x1b\xe7\xc4\x84\xa1\xc3\xe6\t\x03\x86\x13\xddU\xb0Y\x8b_\x80|\xbfc:\x7f$p\x91)\xbb\xc0\x1dA\xff~v\x19`\x1bj\xa2\x96\xb5{j\x10\xa09\xd4\x1f\xfb" ###[ Padding ]###>> len('\xb2x\xefG0\xdb\xa7xN\xc2qK\xc9Q\xf7}') 16
Interesting! As you can see there is 16 bytes of data which is not part of the UDP payload, but comes right after it. This does not normally happen, so this is probably the signature.
However, this is a bit strange. TEA has a block size of 8 bytes, and the RSA key size is 64 bits (which is also 8 bytes), so why isn’t the signature 8 bytes as well?
Remember that RSA decryption is modular exponentiation. Since we are taking the remainder by the modulus, the result of a decryption can never be larger than the modulus. But what if the hash is larger than the modulus? It would not be decrypted correctly, and the signature check would fail! So, our first guess is that the codeserv author fixed this the easy way and encrypted the hash in two chunks.
The way this is implemented is a very bad idea from a cryptographic standpoint, but I’ve talked about crypto problems enough and want to get to some results now 🙂
First thing is to implement the hash function. Based on the knowlege from the "Hash function problems" section this is our first guess (without even looking at the disassembly, this is based on the TEA code on wikipedia):
def tea_hash(d): words = unpack('I' * (len(d) / 4), d) v0,v1 = 0,0]
As described before, it justs encrypts a fixed string repeatedly with TEA, using the next block of data to be hashed as the key for each iteration. Now we check this against the disassembly (nops removed for readability):
00000000000000b0
: b0: 55 push rbp b1: 48 89 e5 mov rbp,rsp b4: 41 55 push r13 b6: 49 89 f5 mov r13,rsi ; datalen b9: 41 54 push r12 bb: 53 push rbx bc: 48 83 ec 18 sub rsp,0x18 c0: 48 8b 07 mov rax,QWORD PTR [rdi] c3: 48 85 f6 test rsi,rsi c6: 48 89 45 d0 mov QWORD PTR [rbp-0x30],rax ; fetch first two words of data ca: 0f 84 a4 00 00 00 je 174 ; early exit for no data d0: 8b 55 d0 mov edx,DWORD PTR [rbp-0x30] ; use them as plaintext d3: 8b 4d d4 mov ecx,DWORD PTR [rbp-0x2c] d6: 45 31 e4 xor r12d,r12d d9: eb 05 jmp e0 .... outerloop: e0: 42 8b 1c 27 mov ebx,DWORD PTR [rdi+r12*1] ; fetch four words of data, use them as encryption key e4: 42 8b 74 27 04 mov esi,DWORD PTR [rdi+r12*1+0x4] e9: 45 31 c0 xor r8d,r8d ec: 46 8b 5c 27 08 mov r11d,DWORD PTR [rdi+r12*1+0x8] f1: 46 8b 54 27 0c mov r10d,DWORD PTR [rdi+r12*1+0xc] f6: eb 08 jmp 100 .... innerloop: ; this is the TEA encryption loop 100: 41 81 e8 47 86 c8 61 sub r8d,0x61c88647 ; tmp += 0x9e3779b9; // note 0x61c88647 == -0x9e3779b9 107: 41 89 c9 mov r9d,ecx ; v0 += (((v1<<4) + k0) ^ (v1 + tmp) ^ ((v1>>5) + k1)); 10a: 41 8d 04 08 lea eax,[r8+rcx*1] 10e: 41 c1 e1 04 shl r9d,0x4 112: 41 01 d9 add r9d,ebx 115: 44 31 c8 xor eax,r9d 118: 41 89 c9 mov r9d,ecx 11b: 41 c1 e9 05 shr r9d,0x5 11f: 41 01 f1 add r9d,esi 122: 44 31 c8 xor eax,r9d 125: 01 c2 add edx,eax 127: 89 d0 mov eax,edx ; v1 += (((v0<<4) + k2) ^ (v0 + tmp) ^ ((v0>>5) + k3)); 129: 41 89 d1 mov r9d,edx 12c: c1 e0 04 shl eax,0x4 12f: 41 c1 e9 05 shr r9d,0x5 133: 45 01 d1 add r9d,r10d 136: 44 01 d8 add eax,r11d 139: 44 31 c8 xor eax,r9d 13c: 46 8d 0c 02 lea r9d,[rdx+r8*1] 140: 44 31 c8 xor eax,r9d 143: 01 c1 add ecx,eax 145: 41 81 f8 20 37 ef c6 cmp r8d,0xc6ef3720 ; 32 iterations of TEA loop, since 0xc6ef3720 == 14c: 75 b2 jne innerloop ; (32 * 0x9e3779b9) & 0xffffffff 14e: 49 83 c4 10 add r12,0x10 ; increment data index by 16 152: 89 55 d0 mov DWORD PTR [rbp-0x30],edx 155: 89 4d d4 mov DWORD PTR [rbp-0x2c],ecx 158: 4d 39 e5 cmp r13,r12 ; loop until all data processed 15b: 75 83 
jne outerloop 15d: 48 83 c4 18 add rsp,0x18 161: 48 89 c8 mov rax,rcx 164: 89 d2 mov edx,edx 166: 5b pop rbx 167: 41 5c pop r12 169: 48 c1 e0 20 shl rax,0x20 16d: 48 09 d0 or rax,rdx ; return (v0 << 32) | v1; 170: 41 5d pop r13 172: c9 leave 173: c3 ret 174: 8b 55 d0 mov edx,DWORD PTR [rbp-0x30] 177: 8b 4d d4 mov ecx,DWORD PTR [rbp-0x2c] 17a: eb e1 jmp 15d 17c: eb 02 jmp 180
So our initial idea matches the disassembly almost perfectly, except that it uses the first two words of data as the initial plaintext. So now we have the correct hash function (we still return the result as two 32-bit values for convenience):
def tea_hash(d): words = unpack('I' * (len(d) / 4), d) v0,v1 = words[:2] # fixed!]
Now for an initial guess at the RSA code:
m = 0x908a8003cf08fb31 # RSA modulus e = 0x10001 # RSA public exponent def checksig(d,s): h1 = tea_hash(d) s1,s2 = unpack("QQ", s) h2 = [pow(s1, e, m), pow(s2, e, m)] return h1 == h2
This takes the data to be checked and the 16-byte signature as arguments. It first hashes the data, then it decrypts the two chunks of the hash from the signature, and checks that the two hashes are equal.
Now let’s try this on the captured packets:
from scapy.all import UDP, rdpcap for pkt in rdpcap("./data.pcap") udp = pkt.getlayer(UDP) if udp.dport != 9000: continue # signature is in the ip payload just after the udp packet sig = udp.getlayer(Padding).load code = udp.payload.load print checksig(code,sig)
This works! So now we know our guess for the signature format was correct. The only thing that remains is to write the entire exploit:
#!/usr/bin/env python import logging logging.getLogger("scapy.runtime").setLevel(logging.ERROR) from struct import unpack, pack from scapy.all import IP, UDP, Padding, rdpcap, send import socket import sys import hashlib # public rsa parameters: modulus and public exponent # these were taken from the codeserv.ko disassembly m = 10415277842094422833 e = 0x10001 # factored m using the GNU 'factor' utility p = 3003703709 q = 3467478437 assert(m == p * q) # calculates i**-1 mod n using the extended euclidean algorithm def invmod(i,n): (a, aa, b, bb) = (1, i, 0, n) while bb: d = aa / bb r = aa % bb (a, aa, b, bb) = (b, bb, a - b*d, r) return a % n # calculate private exponent priv = invmod(e, (p-1)*(q-1)) assert(pow(pow(42,e,m),priv,m) == 42) def dump_rsa(): print "RSA modulus: %d" % m print "RSA public exponent: %d" % e print "RSA prime factors: %d %d" % (p,q) print "RSA private exponent: %d" % priv # for debugging, check signatures on all packets and dump received code def dump_pcap(fn): for pkt in rdpcap(fn): udp = pkt.getlayer(UDP) if udp.dport != 9000: continue # signature is in the ip payload just after the udp packet sig = udp.getlayer(Padding).load code = udp.payload.load sigok = checksig(code,sig) sigtxt = "good" if sigok else "BAD" print "Received %d bytes of code from %s, signature %s" % (len(code), pkt.src, sigtxt) # dump code for packets with good sigs if sigok: fn = "code/%s" % (hashlib.md5(code).hexdigest()) with open(fn,"w+") as f: f.write(code) # returns list in chunks of n def chunks(data,n): return [data[i:i+n] for i in range(0, len(data), n)] # hash function based on the Tiny Encryption Algorithm. # this takes the first two words of data as plaintext and repeatedly encrypts them # using successive blocks of data as the encryption key. 
def tea_hash(d): words = unpack('I' * (len(d) / 4), d) v0,v1 = words[:2]] # verifies a signature using the public key def checksig(d,s): h1 = tea_hash(d) s1,s2 = unpack("QQ", s) h2 = [pow(s1, e, m), pow(s2, e, m)] return h1 == h2 # sign the given code using the private key # returns the 16-byte signature def signature(d): h = tea_hash(d) s1 = pow(h[0], priv, m) s2 = pow(h[1], priv, m) return pack("QQ", s1, s2) # builds a correctly signed packet with the given destination and payload def build_exploit_pkt(destip, code): # code size needs to be a multiple of 16 for the hash function, pad it with zero bytes code = code.ljust((len(code) + 0xf) & ~0xf, "\0") # build packet... <3 scapy pkt = IP(dst=destip)/UDP(sport=10000, dport=9000)/code # put the signature in as padding after the udp payload pkt.getlayer(UDP).add_payload(Padding(signature(code))) return pkt def exploit(destip, command, cbip, cbport): print "%s: %s > %s:%d" % (destip, command, cbip, cbport) # read shellcode template with open('shellcode','r') as f: code = f.read() # replace ip and port for connectback assert(code.count("\x7f\x00\x00\x01") == code.count("\x23\x29") == 1) code = code.replace("\x7f\x00\x00\x01", socket.inet_aton(cbip)) code = code.replace("\x23\x29", pack("!H", cbport)) # append shell command code += command code += "\0" # build packet pkt = build_exploit_pkt(destip,code) # send packet #print "Sending packet:", repr(pkt) send(pkt, verbose=False) #dump_rsa() #dump_pcap("data.pcap") if len(sys.argv) != 5: print "\n\nUsage: ./exploit ip shellcmd connectback_ip connectback_port\n\n" sys.exit(1) exploit(sys.argv[1], sys.argv[2], sys.argv[3], int(sys.argv[4]))
Awesome writeup, thx guys | https://eindbazen.net/2011/10/rwth-ctf-2011-ps3game/ | CC-MAIN-2018-26 | refinedweb | 3,561 | 63.32 |
I don't like the idea to be limited to one paradigm either. I use them as I see fit. However, to these kind of things, classes are best IMHO. All functionality for an object to be grouped inside its object and not broken out.
Why is it so difficult to learn C++ and not VB or Java (unsure about this one) or such languages, I wonder? Because everything is neatly organized into one place.
If a programmer has to rely on algorithms to get things done (and I mean simple tasks), or go browse through a namespace that's not named in the docs, things DO get harder.
Difficulty is what I try to avoid. No wonder C++ is difficult to learn.
And some don't seem to understand my point either...
Classes, like functions, can be extended. It's your job to make sure they can be. They should be. Otherwise you did something wrong.
One class should be one thing. Don't add everything into it. But a string class should have certain things, such as ToLower/ToUpper, which the standard library seem to lack.
Classes can grow just as free functions can grow. Now, the problem is to find a way to make it possible for the classes to grow similar to free functions.
I don't think I'd use static members, though. They would appear inside the class namespace. I'd rather make them free and move them into the Strings namespace... Neatly organized.
Perhaps another question is: how do I break out the implementation from the definition of a specialized template?
Instead of:
This:This:Code:template<> class StrConversion<wchar_t> { public: CStringExA ToANSI() { /*... */ } private: virtual const wchar_t* GetData() = 0; };
Code:template<> class StrConversion<wchar_t> { public: CStringExA ToANSI(); private: virtual const wchar_t* GetData() = 0; }; template<> CStringExA strConversion<wchar_t>::ToANSI() { /* ... */ } | http://cboard.cprogramming.com/cplusplus-programming/102944-though-implementation-problem-2.html | CC-MAIN-2015-14 | refinedweb | 305 | 68.26 |
NAME
Catalyst::Plugin::Snippets - Make sharing data with clients easy
SYNOPSIS
    package MyApp;

    # use this plugin, and any Cache plugin
    use Catalyst qw/
        Cache::FastMmap
        Snippets
    /;

    package MyApp::Controller::Foo;

    sub action : Local {
        my ( $self, $c ) = @_;

        # ...

        $c->snippet( $namespace, $key, $value );
    }

    sub foo : Local {
        my ( $self, $c ) = @_;

        $c->serve_snippet( $namespace, \%options );
        # namespace defaults to $c->action->name
    }

    sub other_action : Private {
        my ( $self, $c ) = @_;

        my $value = $c->snippet( $namespace, $key );
    }
DESCRIPTION
This plugin provides a means of setting data that can then be queried by a client in a different request.
This is useful for building things such as progress meters and statistics, amongst others.
This plugin provides an API for storing data, and a way to conveniently fetch it too.
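As a sketch of the progress-meter case (the `progress` namespace and the `do_work` helper here are illustrative, and `$c->sessionid` assumes the Session plugin is loaded):

```perl
package MyApp::Controller::Job;
use base 'Catalyst::Controller';

# A long-running action records how far along it is...
sub long_job : Local {
    my ( $self, $c ) = @_;
    for my $step ( 0 .. 100 ) {
        do_work($step);    # hypothetical unit of work
        $c->snippet( progress => $c->sessionid, $step );
    }
}

# ...and the client polls this action to read the value back.
sub progress : Local {
    my ( $self, $c ) = @_;
    $c->serve_snippet;    # namespace defaults to the action name, "progress"
}
```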
METHODS
- snippet $namespace, $key, [ $value ]
This is an accessor for the client exposed data.
If given a value it will set the value, and otherwise it will retrieve it.
- serve_snippet [ $namespace, ] [ %options ]
This method will serve data bits to the client based on a key. The namespace defaults to the action name.
The optional options hash takes the values described below. Defaults are taken first from $c->config->{"snippets:$namespace"} and then from $c->config->{snippets}.
See the "CONFIGURATION" section for detailed options.
- serialize_snippet $value, \%options
This method is automatically called by serve_snippet to serialize the value in question.
- send_snippet $value, \%options
This method is automatically called by serve_snippet to set the response body.
INTERNAL METHODS
- setup
Set up configuration defaults, etc.
CONFIGURATION
- format
This takes either json, plain (the default) or a code reference.

The json format specifies that all values will be serialized as a JSON expression suitable for consumption by javascript. This is recommended for deep structures.

You can also use a code reference to implement your own serializer. This code reference should return two values: the content type, and a value to set $c->response->body to.
- allow_refs
If this is disabled reference values will raise an error instead of being returned to the client.
This is true by default.
- use_session_id
This field allows you to automatically create a different "namespace" for each user, when used in conjunction with Catalyst::Plugin::Session.
This is false by default.
- content_type
When the formatter type is plain you may use this field to specify the content-type header to use. This option defaults to text/plain.
- json_content_type
Since no one seems to agree on what the "right" content type for JSON data is, we have this option too ;-).
This option defaults to application/javascript+json.
PRIVACY CONCERNS
Like session keys, if the values are private the key used by your code should be sufficiently hard to guess to protect the privacy of your users.
Please use the use_session_id option for the appropriate namespace unless you have a good reason not to.
RECIPES
Ajax Progress Meter
Suppose your app runs a long running process on the server.
    sub do_it {
        my ( $self, $c ) = @_;
        IPC::Run::run(\@cmd);
        # done
    }
The user might be upset that this takes a long while. If you can track progress, along these lines:
    my $progress = 0;
    IPC::Run::run(\@cmd, ">", sub {
        my $output = shift;
        $progress++ if ( $output =~ /made_progress/ );
    });
then you can make use of this data to report progress to the user:
$c->snippet( progress => $task_id => ++$progress ) if ( $output =~ /made_progress/ );
Meanwhile, javascript code with timers could periodically poll the server using an ajax request to update the progress level. To expose this data to the client create an action somewhere:
    sub progress : Local {
        my ( $self, $c ) = @_;
        $c->serve_snippet;
    }
and have the client query for "/controller/progress/$task_id".
As a newbie to PojoCache AND AOP (I did many things with TreeCache), I started to make use of PojoCache from an Eclipse RichClient app or plugin.

First of all, is it possible at all? I tried several things, but I failed.

I created an Eclipse plugin containing all the .jars which come with JBossCache-all-2.0.0.GA. I made my plugin dependent on this newly created "PojoCache" plugin. It compiled well, as did the Pojos which I annotated with
e.g.

    @org.jboss.cache.pojo.annotation.Replicable
    public class Address
    {
        // ......
    }
I used the example code PojoCache.Annotated50 from the examples. But then I got stuck when it comes to starting my EclipsePojoCache plugin - I think because of the Eclipse class loader and/or the -javaagent:D:\XXXX\workspace\EclipsePojoCache\javassist.jar definition. When I put -javaagent on the VM arguments while starting up the plugin, referenced classes could not be found. But I think it is needed for code injection.
Is there anyone out there who can help me! Seriously, Andy
-javaagent should point to jboss-aop-jdk15.jar not javassist. Can you give that a try? | https://developer.jboss.org/thread/84715 | CC-MAIN-2018-13 | refinedweb | 184 | 62.54 |
An Introduction
Course Companion
©2002 Ariadne Training Limited
An Introduction to Java and J2EE
Contents

The Purpose of this Book
THE HISTORY OF JAVA: Background to Java; Java Applets; Java Applications; Server Side Java; Summary
JAVA SOFTWARE DEVELOPMENT: Java Variants; The Java Development Kit; Evolution of the SDK; Java Help; The Key Java Features; Platform Independence; The Java Virtual Machine; Graphical User Interfaces; The "AWT"; Swing to the Rescue; A Swing GUI; Multi Threading; Java Multithreading; Multithreading Example; Running the Multiple Threads; The Output; Error Handling; Exception Handling Example; Object Orientation; Java Garbage Collection; Java Applets; Databases in Java; Summary
JAVA VS OTHER LANGUAGES: Language 1: Visual Basic; An Example VB Program; Language 2: C / C++; Example C++ Program; Language 3: C# (C-Sharp); VB vs Java; C++ vs Java; C# vs Java; Microsoft .NET; Summary
JAVA PERFORMANCE: Java Performance; Why Should Java be Slow?; Slow Java; Demo and Benchmarks; Demo; Benchmark; Summary; Native Code Compilers; Summary
INTRODUCING J2EE: What is J2EE?; Two-Tier Architecture; Two-Tier Architectures: Disadvantages; The Three Tier Model; Application Servers; The J2EE Standard; J2EE Components; Java Servlets; Java Server Pages (JSP); Enterprise Java Beans (EJBs); The J2EE Architecture; Concrete Example – A Warehouse; Summary
SERVLETS AND JSP: Servlets and JSP; The Static Web; Making the Web Dynamic; Solution: CGI; The Servlet Solution; An Example Servlet; A Serious Problem Emerges; Java Server Programming (JSP); JSP Architecture; Another Dynamic Webpage; Another Problem Emerges!; Solution: Servlets!; The MVC Architecture; Java in JSP's; Custom Tags; JSP Example; Using a Custom Tag; Summary
ENTERPRISE JAVA BEANS (EJB'S): Java Beans; The Purpose of Java Beans; Enterprise Java Beans; EJB's Are:; EJB's are Persistent; EJB's are Transaction Aware; Place Order Example; EJB's are Distributed; EJB's are Multithreaded; Types of EJB; Entity Beans; J2EE Design Patterns; Example Design Pattern; Design Problem; Design Solution; EJB Summary
THE JAVADOC TOOL: How the Tool Works; Running the Tool; Javadoc Demonstration; Summary
COURSE SUMMARY: Java Problems
BIBLIOGRAPHY
Introduction
The Purpose of this Book
This book accompanies the course “An Introduction to Java and J2EE”. Aimed at a general audience of both technical and non-technical people, the course describes the principles of the Java language – where it came from, why it is popular and what it can offer. The second half of the course describes the Java Enterprise Edition – a framework for building large scale Enterprise applications using the standard Java language. The course does not cover the technical details behind Java and J2EE, in particular how to write programs in Java. For these details, see the in-depth Programming in Java courses. To submit comments or questions, please email info@ariadnetraining.co.uk, or see our website at.
If you’re a project manager on a new Java project, you may find some of our suggestions to be useful as possible project standards. We’ll break out these suggestions in to boxes like this one. They’re only suggestions and may not work for your project, but they are guidelines that we have found to be valuable on other projects.
Project Standard
© 2002 Ariadne Training Limited
6
An Introduction to Java and J2EE
Chapter 1
The History of Java
Background to Java [1]
Back in the very early 1990's a team of engineers was set up at Sun Microsystems to pre-empt the "next big thing in computing". Led by legendary Sun programmer James Gosling, the team decided to concentrate their efforts on embedded electronic devices. Given the prevalence of micro chips in TV's, washing machines, remote controls and so on, the team attempted to build a handheld device (their prototype was called "*7") capable of a wide variety of household tasks.

Which programming language would they use to program these electronic devices? C++ could well have been the language of choice for Gosling's team. C++ is based on the decades old C language, and C has a proud tradition in the embedded systems arena. At the time (perhaps the same is still true today), C++ was the most popular programming language [2] and by adopting this language for their devices, they would be guaranteed a large and ready made community of engineers to program for their products.

However (as we will see later), C++ is a very difficult language to master. It is very easy to write C++ programs that may appear to work perfectly but which are in fact bug ridden and unreliable. C++ gives the programmer a lot of power and freedom, and while this power can be harnessed by experienced programmers, in the hands of less competent engineers, disasters are commonplace [3]. Gosling realised that a safer, more straightforward replacement for C++ was required – and if the replacement could look a bit like C++, then so much the better for persuading existing C++ coders to cross over to the new language.
[1] See for an excellent commentary on the inception of Java
[2] Cobol is probably in heavier use, but of the "modern" languages, C++ would have been the language of choice for many software engineers
[3] See reference [7] and read the details of the London Ambulance service fiasco – one source of the problem was a C++ memory leak
The new language was codenamed Oak, and was quickly put together by Gosling and his team (much of the language had already been created by Gosling, at home in his spare time!). The language was subsequently renamed Java when it was realised that another language called Oak already exists. Java isn’t an acronym by the way – it was named after the team’s favourite coffee! To cut a long story short, the “*7” venture was initially a failure. The take up of the technology was slow and the group failed to win any significant business. Meanwhile, the World Wide Web had been taking off dramatically and was causing much excitement in the industry (and beyond the industry too – at the time there was plenty of talk about the “Information Superhighway” in the media). Initially, the web was a fairly static place – lots of simple text documents linked together. Web visionaries craved interactivity – would it be possible one day to play games on the web, or to view a company’s products, and order them, via the web? The internet had never been considered relevant to Java during its design; but by accident all of the design goals of Java – the ones that Gosling had observed to make his language suitable for electronic devices - were directly applicable to the world wide web. Java programs are designed not to crash – essential for the web. Java programs are designed to run on any hardware – again, essential, given that users of many different types of hardware use the web. Gosling’s team changed tack, and in about 1995, they began to apply Java to the internet. The potential of the language was quickly recognised by the internet community and a mass of hype and excitement began to build up around Java. In 1995 and 1996, Java was definitely the hottest property in IT; the attention centred around the idea of running Java Applets inside web browsers…
Java Applets
A Java Applet is a small Java application (Application-let), and is essentially a graphical Java program that can run inside a web page.
Figure 1 - A Java calendar Applet

Applets have many interesting features and advantages:
• They will run on any browser (as long as the browser supports applets; the latest versions of IE and Netscape do). They will also run on any platform, regardless of whether the user is running on a PC, Mac, Sun Workstation, whatever.

• The biggest advantage of an applet is that they are dynamic. They can be embedded into the static text of a web page and they can do things; the user can interact with them. The example in Figure 1 is a fairly simple calendar, but technically an Applet can be as complex as the programmer's imagination allows.

• A serious concern in the early days of the web was safety. Given that an applet is essentially a piece of software, how do we know that an Applet isn't malicious? For example, the calendar Applet could be secretly deleting all the files on our hard disk, or it could be sending salacious emails to everyone in our email address list. Thankfully, Java addressed these concerns very well indeed. Users can be fairly sure that an Applet is not capable of doing anything nasty or unwelcome – we will look at Java security later on in the course.

All of the above is all very well, but Applets also suffer from some serious problems:

• Firstly, Applets are very slow to download. Well, they're not that bad – the files are relatively small, but an Applet that does anything vaguely interesting is probably going to be too big to download comfortably using a standard modem. Broadband internet wasn't available back in 1996!

• Secondly, Applets are inherently limited in their functionality [4]. We said above that Applets are not capable of deleting all of the files on a user's hard disk – in fact Applets are not capable of even accessing a user's hard disk! Nor can an Applet contact any web site other than the one it was downloaded from. These are serious restrictions that essentially prevent many interesting Applets from even being written.

• Finally, many Applets look fairly shabby. This isn't generally because the programmers of the Applet are incompetent; it is more because the graphical features supported by Applets are desperately limited.
In the early days of Java, there was a proliferation of cheap and nasty Java Applets all over the web – usually doing things like small animations. It is fair to say that many people still believe that Java is only capable of writing poor quality eye-candy on web pages…
Java Applications
… but Java has always been capable of building full blown Applications running outside web browsers. Certainly, the early releases of Java lacked some of the features required to build very serious applications, but by version 1.2 of Java (released in late 1998), Java is now considered to be a full blown "serious" language. Java still may not be suitable for writing real time safety critical systems (this is open to debate, we won't comment here), and certainly Java won't be suitable for writing low-level applications such as new Operating Systems. Other than these areas, Java should be able to compete with any other language. We will compare Java to other languages in a later chapter.

[4] These restrictions can be circumnavigated these days – but not when applets were becoming popular
Server Side Java
More recently, Java has also become recognised as a server-side language too. For example, the concept of a servlet enabled web pages to contact small Java applications running quietly on a server; the servlet could perhaps work with a database and return results back to the user viewing the webpage. We’ll look in more detail at servlets later. Server-side Java was evolving in many different directions until the whole concept was formalised by J2EE (Java Enterprise Edition), which we will explore in detail in the second half of this course.
Summary
• Java was originally designed as a language for writing small, embedded applications on electronic devices
• Java became the first "internet" language in 1995-96; Java Applets were hot technology
• Applets have largely failed to deliver their initial promise
• However, Java is a very well designed language and is now being exploited to build full blown applications
• Java reached maturity at version 1.2 (1998)
• Java is also being exploited as a Server Side language; we'll look at this in the second half of the course
Chapter 2
Java Software Development
In this session, we will look at the main features of Java Software Development, and what makes the language so well designed. We are not going to look at the “server side” Java just yet – but almost everything in this chapter is relevant when developing Java server applications as well.
Java Variants
Many newcomers to Java find the different variations of Java to be very confusing. Before we set out, we'll define the three major variants of Java. We'll explore two of them on this course:

• J2SE stands for "Java 2, Standard Edition" and is basically classic Java. Don't let the "Standard" phrase deceive you; J2SE is huge. It can be thought of as a combination of the programming language itself, all the tools required to build Java applications, and a set of libraries (we'll look at these shortly).

• J2ME stands for "Java 2 Micro Edition". This is a heavily cut down version of Java, designed to enable Java to run on very small embedded devices such as WAP telephones. It is exactly the same language as J2SE – it is just the libraries that have been restricted in their scope. We will not discuss J2ME further on this course, but you can check for more details.

• J2EE stands for "Java 2 Enterprise Edition", and provides the tools and libraries necessary to extend Java into the heavyweight server side technologies. Again, the language is exactly the same as J2SE and in fact J2EE is a strict superset of J2SE – there is no overlap. So if your engineers are using J2EE, they are actually using a combination of J2SE and J2EE. You cannot program J2EE applications without being a good J2SE engineer first. We will look at J2EE in the second half of the course.
The Java Development Kit
The Java language is actually fairly simple; as we have mentioned, the syntax is derived from C++, but with extreme simplifications. So, the core of the language is easy to learn - however, the strength and power of Java (and also its complexity) comes from the vast class library that is supplied alongside the core Java language. In this chapter we will look at the main features of the language and the library, and see how they help a software development.
The Java Development Kit (SDK) is Sun's name for the combination of the programming language and a collection of libraries providing a rich source of prewritten functionality to the programmer. The SDK is free to download from the Sun website. The library prevents the need to "reinvent the wheel" on our software developments. If you need to (for example) write a program that uses Networking, the networking functionality is provided as part of the class library. Similarly for Graphics, File Input/Output, Database Programming and Security. Support is even provided for areas such as Cryptography, File Compression, Multimedia – there are far too many areas to list [5]. With some other languages, such libraries have to be purchased from third parties – apart from the cost, this of course means vendor lock-in and your programmers writing non-standard code. With Java, it is all essentially part of the standard language.
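As a small, hedged illustration of the point (the class name and sample text below are invented for this sketch), file compression really does take only the standard java.util.zip package - no third-party library has to be bought or installed:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class LibraryDemo {
    public static void main(String[] args) throws IOException {
        // Build some repetitive sample text to compress.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100; i++) {
            sb.append("the quick brown fox jumps over the lazy dog ");
        }
        byte[] text = sb.toString().getBytes();

        // Compress it using only the standard class library.
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
            gzip.write(text);
        }

        System.out.println("compressed smaller: " + (compressed.size() < text.length));
    }
}
```

Similar ready-made support exists in the library for networking (java.net), file input/output (java.io) and security (java.security).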
Evolution of the SDK
The first release of Java, 1.0, emerged in January 1996 (beta versions had been available before this date). Whilst interesting, the language was in its fledgling state and suffered from many flaws. Version 1.1 (Feb 1997) improved things considerably, and then with the release of 1.2 in December 1998, Java reached what can be considered as the baseline of "maturity". Since then, subsequent versions of Java (which are released every 18 months or so) have added more and more features without fundamentally affecting the core language. Version 1.3 was released in May 2000, and the latest (at the time of writing) is 1.4 which emerged in late 2001.

Java programs are backwards compatible; if you upgrade from (say) version 1.2 to version 1.4, then your programs should still compile. If the Java designers decide to remove something from the class library, then your programs will probably still compile but your engineers will receive warnings from the compiler called deprecations. These warnings indicate that although the engineer can get away with using the feature in the short term, they should reengineer their code to avoid using the removed features as soon as possible.

Confusingly, Versions 1.2 and onwards are referred to as "The Java 2 Platform". This was a bit of a marketing trick by Sun to indicate that Java 1.2 was a major improvement on what had gone before.
[5] The SDK comes with a complete set of documentation describing every aspect of the library. Although these documents are difficult to read for newcomers to the language, experienced Java programmers can often pick up new areas of the library with little or no training.
Java Help
There are thousands of Java Classes available in the standard Java Class Library (a class is basically a Java Module; all of your engineers' programs will be constructed using classes). How can we deal with the complexity of thousands of classes? Thankfully, Java provides a comprehensive help system. Documentation is available for every single Java Class in the Library. This documentation can be a bit frightening for the newcomer to Java, but a programmer with a reasonable amount of experience soon becomes accustomed to the system. Interestingly (and often overlooked) is that your project can also automatically generate help files for your own custom built classes – a very valuable project resource. We'll see how this is done later (see "The Javadoc Tool", page 62).
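As a sketch of where those help files come from, the generated documentation is built from comments written in the source code itself; the class and method below are invented for illustration:

```java
/**
 * A tiny, invented example class: the documentation tool reads
 * comments like this one and generates HTML help pages from them.
 */
public class Account {

    private double balance;

    /**
     * Deposits an amount into the account.
     *
     * @param amount the amount to add to the balance
     * @return the new balance after the deposit
     */
    public double deposit(double amount) {
        balance += amount;
        return balance;
    }

    public static void main(String[] args) {
        System.out.println(new Account().deposit(10.5));
    }
}
```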
Figure 2 - Example Java help file. An experienced Java programmer working with files for the first time would be able to read this help file and quickly get to grips with how the class works
The Key Java Features
We’ll now examine each of the major features of Java in turn and explain how they benefit a software development project:
Platform Independence
Arguably, this is the key feature of the language. Until the advent of Java, almost all programming languages would produce code that could only run on one platform (ie type of computer). A program written for Windows would definitely not run on Solaris or Macintosh, for example. To make a program run on a different platform, the program would have to be converted manually by a programmer or team of programmers – a process known as porting. This is a time consuming and difficult job to perform. Java was designed with platform independence in mind from the start, from its early days as a language for writing programs embedded into set top boxes. The team developing Java realised that each set top box manufacturer would use a different operating system, so this feature would be crucial to its success.
The Java Virtual Machine
Before Java, most programs would be run through a tool called a compiler. The job of the compiler is to convert the program into platform dependent machine code. Once the code is compiled, it would only run on the machine it had been compiled for.

Java takes a different approach. Instead of converting the program into a series of instructions specific to a particular machine, the compiler converts the program into a series of platform independent instructions (Sun call these Bytecodes). So, Java programs are converted into bytecodes. What happens to these bytecodes? Well, the person running the program feeds the bytecodes into a tool called a Virtual Machine [6]. The Virtual Machine reads the bytecodes and converts them into the native instructions that the particular platform understands. This simple trick means that the bytecodes produced by a programmer can be run on any computer, providing a Virtual Machine exists for that computer. Since Virtual Machines have been produced for most common platforms (Windows, Linux, Solaris, Macintosh OS X), we can safely say that a Java Program is truly portable.
[6] Virtual machines are prewritten pieces of software provided by a vendor. Sun produce one as part of the SDK, but there are other implementations provided by other suppliers too.
[Figure 3 diagram: at design time, Java Source is fed through the Compiler to produce Bytecodes; at runtime, the JVM converts the Bytecodes into Native Machine Code]
Figure 3 - How the JVM achieves platform independence

Is Java really portable? Well, there are a couple of catches. Shortly we'll look at Java's Graphical and Multithreading capabilities – these two areas are difficult to make portable (each platform has its own way of doing graphics – compare a Macintosh user interface with a Windows one!), and problems can arise in these areas. Generally, though, everything should be ok. However, the advice is: If you are producing Java software which you intend your customers to run on different machines, make sure you extensively test your software on all target platforms.
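A minimal program (file name invented here) makes the Figure 3 pipeline concrete; the same compiled bytecodes can be carried unchanged to any machine that has a Virtual Machine:

```java
// Save as Greeting.java, then:
//   javac Greeting.java   - design time: the compiler produces bytecodes in Greeting.class
//   java Greeting         - runtime: the JVM converts those bytecodes to native instructions
public class Greeting {
    public static void main(String[] args) {
        System.out.println("Hello from any platform");
    }
}
```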
Graphical User Interfaces
Java features the capabilities to build Graphical User Interfaces (GUI’s) as part of the class libraries. There is no need to use an external library or tool as with other languages. Perhaps the biggest challenge in achieving this was the problem of platform independence. GUI’s on different machines, whilst sharing features in common, usually have marked differences…
Figure 4 - A Macintosh running Mac OS X (The Aqua Look and Feel)
Figure 5 - A PC Running Windows 98

So how can we write a platform independent program that will look the same and behave in the same way on different machines?
The “AWT”
Sun’s first attempt at a solution was the Abstract Windowing Toolkit, or AWT for short. Here, Sun identified a common set of Components – graphical elements that appear on all platforms, regardless of the operating system. For example, Buttons, Menu Items, Scroll Bars, Text Boxes and Labels were some of the common components.
Figure 6 - An AWT User Interface

When a Java Programmer writes an application using the AWT, they construct the GUI using these AWT components. For example, they might request a button with the word "Exit" written on it. The Java Virtual Machine takes that request and asks the underlying operating system to draw a native button.

This simplistic solution had many problems. First of all, it was very difficult for Sun to produce bug free implementations of the AWT for every single platform. Secondly, because the AWT could only feature components that are present in every single platform, the range of the AWT was very limited indeed (components such as dividers, sliders, spin buttons and the like were absent). Perhaps more importantly, the look and feel of an AWT GUI is a little cheap and nasty; see the example in Figure 6 – this is a typical example of an AWT interface.
Swing to the Rescue
Just before the release of Java 1.2, a new approach to building GUI's in Java was devised, called Swing. Using the new approach, Java simply asks that the operating system is able to:

• Present a Window
• Draw on a blank canvas (using simple pixels)
It is safe to say that all graphical operating systems should be able to fulfil the above. Now, when a programmer requests (say) a button, Swing will draw the button, from scratch, using pixels on a blank area of the window. This clever technique means that Swing is able to support almost any conceivable graphical component, even if the operating system you are running on doesn't really support it.

Swing was just the working title for this system; the real name for it is the "Java Foundation Classes" or "JFC", but somehow the name "Swing" has stuck. Sometimes you will hear it referred to as JFC, but Swing is the common term. By the way, as with the name Java, there is no significance in the name. It's just another "cool" word these crazy Americans came up with. Something to do with Jazz I think…
A Swing GUI
As an example, here is a screenshot from a simple Swing application (this is a horse racing game). As you can see, all of the features you would expect to see from a standard windows application are present:
Figure 7 - A Swing GUI

A very interesting feature of Swing is that the way the GUI is presented can be switched by the programmer. Recall that the buttons, text boxes, lists, menus etc are not being drawn by the operating system – it is Java that is laboriously drawing all of these components itself (and as you can see from Figure 7, Swing aims to make these emulated components look like the real ones). Since it is Java that is determining what these components look like, it is a fairly simple job for Swing to draw them in a slightly different way. As such, Swing allows you to choose from a set of predefined Look and Feels. The "System Look and Feel" is a look and feel that emulates the look and feel of the operating system you are running on. In Figure 7, we are running on Windows and we are using the Windows Look and Feel. If we ran the same application on a Mac [7], then the Macintosh Look and Feel would be adopted instead.
[7] Remember – we can do this, because Java programs are platform independent and portable
The “Metal Look and Feel” is a look and feel provided by Sun in an attempt to produce a “platform independent” look and feel. The benefit of using this look and feel is that your application will look the same regardless of what platform you are running on. The drawback is that your users might be a bit confused, because it won’t look like a regular application. Here is Figure 7 redrawn using the Metal look and feel:
Figure 8 - The Metal Look and Feel

As you can see, everything is there as before, but presented in a very different way. Only one line of code was changed to make this happen. A third look and feel, the "Open Look" look and feel, is provided – this look and feel emulates the Unix type of look and feel.
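The "one line of code" mentioned above is a call to Swing's UIManager. The hedged sketch below prints the resulting look and feel name rather than opening a window, so it can run anywhere; in a real GUI, any frames built after this call would be drawn in the chosen style:

```java
import javax.swing.UIManager;

public class LookAndFeelDemo {
    public static void main(String[] args) throws Exception {
        // The single line that switches the whole GUI's appearance:
        // here we pick the cross-platform "Metal" look and feel; swap in
        // UIManager.getSystemLookAndFeelClassName() for the native one.
        UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());

        System.out.println(UIManager.getLookAndFeel().getName());
    }
}
```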
Multi Threading
Every program we write is trying to solve a real world problem; whether we are trying to predict the weather using mathematical algorithms, or trying to provide an on-line shopping facility, the whole point of writing computer programs is to solve problems that exist in the real world. Unfortunately, computers are naturally and inherently (simply due to the way they were originally designed decades ago) poor at doing more than one thing at a time. Computers tend to churn through instructions, monotonously, one at a time. But the real world doesn’t work like that – things happen concurrently.
In a defence system, we must be able to fire torpedoes whilst the commander is still able to track the progress of targets on the screen (it would be a disaster otherwise). Similarly, in an on line shopping system, we must allow two customers to view the catalogue and order goods at the same time – if we could only service one customer at a time, our sales would be adversely affected. The provision of the facility to allow Multi-Tasking is woefully absent in many major modern languages. In particular, C++, a very common and popular language, has no “in built” support for doing more than one thing at the same time. Programmers in C++ have to rely on messy system calls (destroying any chance of portability), or they have to purchase non-standard third party libraries.
Java Multithreading
Java, however, provides multi-threading as part of the standard language, and, relatively speaking, it is easy for a programmer to write a multi-threaded program. However, even with Java’s excellent support for multi-tasking, it is still a minefield. It is very easy, for example, to write programs that can “deadlock” (this is when one task in the program has to wait for the other task to complete and hand over some results – but at the same time, the second task has to stop and wait for the first task to do something!).

This isn’t because Java is poor at multi-tasking; it is a simple fact that designing multi-threaded programs in any language is much harder, and requires a higher skill level, than single-threaded programs. If you are running a project that is likely to employ multi-threading, invest in a good thread analyser tool – it will save you a fortune in the long run.

Multi-threading is required for Enterprise systems of course (the previous example, where we have two customers trying to order at the same time, is a case in point). We will see later, however, that the multi-threading is provided behind the scenes if we are using J2EE, so thankfully your programmers are relieved of this difficult and time consuming task.
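As a sketch of how easily a deadlock can arise (the class and method names below are our own illustration, not from the course):

```java
// Two threads can deadlock if they take the same two locks in opposite
// orders: thread A holds 'first' and waits for 'second', while thread B
// holds 'second' and waits for 'first' - neither can ever proceed.
public class DeadlockSketch {
    private final Object first = new Object();
    private final Object second = new Object();

    // Intended for thread A: locks 'first', then 'second'.
    public String taskA() {
        synchronized (first) {
            synchronized (second) {
                return "A done";
            }
        }
    }

    // Intended for thread B: locks 'second', then 'first' - the OPPOSITE
    // order. Run concurrently with taskA, this is the classic deadlock
    // recipe; the usual fix is to agree one lock ordering everywhere.
    public String taskB() {
        synchronized (second) {
            synchronized (first) {
                return "B done";
            }
        }
    }
}
```

Called from a single thread, both methods complete happily; the danger only appears when two threads run them at the same time.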
Multithreading Example
Here is a simple Java program that is capable of printing out the numbers from 1 to 500. Don’t worry about the Java syntax; the code in the “run” block is the code that will run in parallel with whatever else is happening in the system; the code near the top is simple plumbing to make the threading happen:
public class NumbersThread implements Runnable {
    public NumbersThread() {
        Thread thread = new Thread(this);
        thread.start();
    }

    public void run() {
        for (int i = 1; i <= 500; i++) {
            System.out.println(i);
        }
    }
}
Figure 9 - Java Multithread Program
Running the Multiple Threads
If we now ask Java to run two versions of the code in Figure 9, we will have two parts of the program outputting the numbers from 1 to 500 at the same time.
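The “two versions” can be launched by simply constructing the class twice – a minimal sketch (the `main` method and the demo class name are our own):

```java
// A self-contained version of the Figure 9 thread, plus a main method
// that starts two copies; their printed numbers interleave unpredictably.
class NumbersThread implements Runnable {
    public NumbersThread() {
        Thread thread = new Thread(this);
        thread.start();              // begins executing run() concurrently
    }

    public void run() {
        for (int i = 1; i <= 500; i++) {
            System.out.println(i);
        }
    }
}

public class TwoThreadsDemo {
    public static void main(String[] args) {
        new NumbersThread();         // thread 1 starts counting
        new NumbersThread();         // thread 2 races it to 500
    }
}
```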
The Output
Here is the output from a run of the program where two versions of the code are running concurrently:
…
118
119
120
121
122
123
124
125
1
126
2
127
3
128
4
129
5
130
6
131
7
132
…
Figure 10 - Output from the Program

Interestingly, the program outputs 125 consecutive numbers before the second thread starts to output its numbers. From that point onwards, the two threads are interwoven – one number output from each thread in turn. The 125 consecutive numbers from thread 1 appeared because thread 2 was still busy starting itself up while thread 1 had a head start. Multi-tasking is generally unpredictable – we could not have predicted that this would happen, and we certainly cannot guarantee that the same thing will happen on a future run. It is entirely possible that next time around, 134 consecutive numbers will be output, followed by a block of 4 from thread 2, and so on. It is just this sort of problem that makes working with threads a difficult job.
Error Handling
Java features a mechanism known as exception handling to allow programmers to deal with errors (and unexpected situations) in an elegant way. This mechanism first became popular in the early 1980s with the Ada programming language. Designed for defence systems, Ada required strong error trapping capabilities, so the language enforced tight controls on the programmer. Java has copied the idea behind the system (and has even strengthened it slightly). This means that your programmers are forced to deal with any errors that can arise in your system.
Exception Handling Example8
Here, the programmer is trying to connect to a database using the following line of code:
DatabaseDriver.connectToDatabase ("employees");
Figure 11 - Connecting to the Employees database

However, Java will refuse to even let the program run. Why? Well, in this example, the code behind connectToDatabase might raise an error – for example, if the database is not available for some reason. Since this line of code might throw an error, Java insists that the programmer provides some code to handle the problem if and when it arises. Until the programmer does this, the program will not even compile! The mechanism the programmer must use is called a try…catch block. The full details of this are beyond the scope of this course, but basically the programmer must indicate which line of code might throw an error (the try), and then in a separate block tell Java what to do should something go wrong (the catch). Here’s what the corrected program would look like:
try {
    DatabaseDriver.connectToDatabase ("employees");
} catch (Exception e) {
    // deal with the problem, if it happens
}
Figure 12 - This will now work

So, Java will run the “connectToDatabase” call – if all goes well, then fine. But if something goes wrong, it will run the code in the catch block. The programmer must do something sensible here – correct the error, warn the user, shut the system down, whatever.
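In real Java, the compulsion comes from “checked” exceptions. A minimal genuine example (our own, not from the course) is `Thread.sleep`, which declares `InterruptedException` – the compiler rejects any call that is not wrapped in a try…catch (or declared as thrown):

```java
public class PauseUtil {
    // Pauses for the given number of milliseconds. Without the try...catch
    // below this method would not compile, because InterruptedException is
    // a checked exception that must be handled or declared.
    public static boolean pause(long millis) {
        try {
            Thread.sleep(millis);   // might throw InterruptedException
            return true;            // all went well
        } catch (InterruptedException e) {
            return false;           // deal with the problem: report failure
        }
    }
}
```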
Project Standard

One slight catch here – a lazy programmer could just put nothing in the catch block, and Java won’t mind. This is very poor programming practice that is justified in just a small number of cases. A good project standard would be “no empty catch blocks”.

8 This example doesn’t use 100% real Java code; it has been simplified to get a point across. For full technical details of the exception mechanism, see the full Ariadne Java Fundamentals Course.
Object Orientation
Java is what is known as an Object Oriented (“OO”) programming language. Object Orientation is a different approach to the classic “functional” or “structured” method of software development which “older” languages such as C, Cobol and Fortran support. A full description of Object Orientation is beyond the scope of this single day course (see reference [1] for a downloadable e-book from Ariadne Training for a much more detailed treatment), but we’ll try to describe the “high level” view of OO.

Basically, in OO, instead of creating functions and breaking the tasks our system has to perform into smaller and smaller functions, we look at the real world problem we are trying to solve, and identify the Objects that are present in the problem. For example, in an e-commerce development, the objects might be Customer, Shopping Cart and Credit Card. The designers and programmers will then represent these real world objects in the solution – and Java provides all the necessary tools to allow these objects to be represented in the code.

One of the key benefits of following OO is that the “gap” between the problem and the solution (requirements and code) is much smaller; hopefully then the eventual code should be easier to understand and modify.

The real problem with OO is that there is still a lack of true understanding of Object Orientation – it is certainly difficult to learn at first. Designing real OO systems is a very valuable skill, and OO does not provide a “silver bullet” to solve all problems. Java is a full OO language, and if you are not writing true Object Oriented code in Java, you are essentially wasting much of the power and elegance of the language.
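As a small sketch of what this looks like in code (the class and field names are our own illustration of the e-commerce example, written with modern Java collections):

```java
import java.util.ArrayList;
import java.util.List;

// The real-world objects from the e-commerce example become Java classes:
// each one bundles its data with the operations that belong to it.
class Customer {
    private final String name;
    Customer(String name) { this.name = name; }
    String getName() { return name; }
}

public class ShoppingCart {
    private final List<String> items = new ArrayList<>();

    // The cart knows how to manage its own contents...
    public void add(String item) { items.add(item); }
    public int itemCount() { return items.size(); }
}
```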
Java Garbage Collection
A serious problem in some languages is the need for a programmer to constantly “clear up” after themselves. For example, imagine a programmer has created an “object” to hold the contents of a user’s shopping cart. When the customer logs off, the cart and its contents (in memory) are no longer required. In many languages, the programmer must remember to explicitly delete the object. If the programmer forgets, then a memory leak has been created. This means that the operating system thinks the memory is still in use by the program, and will not reuse that area of memory; for as long as the application runs, the memory is never recovered.

Think how serious this is if your application is running on a server. Your application will probably need to run continuously for weeks on end. If memory is slowly being leaked by your application, eventually the server will crash!

Thankfully, Java programmers never (or rarely) need to worry about memory leaks. A process known as “Garbage Collection” regularly checks the program for unused
memory; when unused memory is found, the memory is returned to the operating system. If you are cross-training existing C++ programmers into Java roles, they will hate this – even though it is a very useful feature. The reason is that (good) C++ programmers keep the problem of memory leakage at the forefront of their minds at all times – it is a difficult habit to “unlearn”!
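A tiny sketch of the point (our own example): the array below is never explicitly freed – there is no delete in Java – yet no leak results, because the garbage collector reclaims it once it becomes unreachable:

```java
public class GcSketch {
    // Allocates a temporary array on every call and never frees it.
    // In C++ this would leak; in Java the array becomes unreachable when
    // the method returns and is reclaimed by the garbage collector.
    public static int sumOfSquares(int n) {
        int[] squares = new int[n + 1];
        for (int i = 1; i <= n; i++) {
            squares[i] = i * i;
        }
        int sum = 0;
        for (int value : squares) {
            sum += value;
        }
        return sum;
        // 'squares' is now garbage; no programmer action is required
    }
}
```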
Java Applets
A final word on Java Applets – we have discussed them already in previous sessions. Applets can be written using any version of Java – but browsers only support a limited range of Java versions. Users running Internet Explorer version 5, for example (a very large percentage of the internet population at the time of writing), will not be able to run Java 1.4 applets. The solution to this problem is provided by Sun in the form of a “Java Plug-in” – this can be downloaded by the user from Sun, and the functionality of the web browser is extended to cover the latest version of Java. The flaw in this idea is that the process requires user intervention (they will have to agree to the download and plug-in installation), and worse, it will take a long time for the plug-in to download. We think it is fair to say that technologies such as Flash currently have the upper hand over Java applets.
Databases in Java
Interfacing to relational database technology is important in any language, and thankfully Java provides the necessary tools as part of the language. The technology is called JDBC, or “Java Database Connectivity”. As long as the programmer knows and understands SQL, connecting to a database and manipulating it is easy in Java. The database you are connecting to must have a JDBC driver, or alternatively Java can talk to an ODBC9-compliant database through a built-in Java tool called the “JDBC-ODBC bridge”.
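A hedged sketch of what JDBC code looks like (the JDBC URL, table name and column name are placeholders – a real program needs the driver for its particular database on the classpath; the try-with-resources syntax is modern Java):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcSketch {
    // Connects to the database at the given JDBC URL, runs a plain SQL
    // query, and prints one column of the results. The table and column
    // names here are invented for illustration.
    public static void listEmployees(String jdbcUrl) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM employees")) {
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        }
    }
}
```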
Summary
• Java provides a wide variety of features as part of the standard language
  No extra libraries, plug-ins or third party tools are required
• The language is very well designed
  It is fully object oriented
9 A standard for connecting to relational databases; most databases will support it
  It is platform independent
  It features built-in GUI capabilities
  Multithreading is an integral feature
  Elegant error handling is also built in
  “Garbage Collection” is automatic and requires no programmer intervention
  Java is database aware
Chapter 3
Java vs Other Languages
In this chapter, we will briefly compare Java to other popular programming languages. Note that we won’t explore languages such as Pascal, Cobol and Fortran in this book. Although these languages are still in very common use (and Cobol is perhaps the most widely used language of all), their use is certainly in decline and they are obsolescent. Although we promised at the outset to avoid as much code as possible, we will even force you to debug a “working” C++ application in this session!

When faced with a choice of language for an application development, the major choices these days are probably:

• Java
• C or C++
• Visual Basic
• C#
Other languages such as ASP and JavaScript are popular, but these are niche languages lacking the general purpose uses of the languages above. JavaScript is a script language that allows some dynamic behaviour to happen inside a web browser – it has absolutely no relation to Java at all! ASP is a Microsoft language enabling server side behaviour to be programmed in an HTML-like form – we’ll be discussing the Java version of this, JSP, in a later session. First of all, we’ll look at VB, C++ and C# in turn and discuss the features of each, before we compare them with Java.
Language 1: Visual Basic
It is claimed (or was once claimed) that Visual Basic is used by over 50% of the world’s programmers. A quick web search has failed to confirm this claim (we can find a few references to it scattered around the web, but we cannot find its source). It was a fairly bold claim, distorted by the fact that Microsoft Office uses Visual Basic as its macro creation language – so if you have ever used a macro in MS Office, you have technically “used” Visual Basic. Whatever – VB is a very popular language indeed.

It is a proprietary language, produced by Microsoft Corp. Its goal is to enable the rapid creation of graphical-user-interface-intensive applications. Not surprisingly, the language is based on the old language BASIC (Beginners All-purpose Symbolic Instruction Code), a language that dates back decades and was originally designed for teaching purposes. VB’s heritage has led many “serious” programmers to deride the
language as being a toy language for unskilled programmers, but the truth is that as VB has evolved, it has become a serious programming language, and one that is now fully object oriented. Despite the serious nature of VB, it is still arguably much easier to learn than Java, and it is certainly easier to use than C++. The modern feel of the language is completed by its central paradigm – applications are constructed using components, small, simple and reusable modules.
An Example VB Program
Your lecturer will now expertly build a full-blown Windows application before your very eyes in just a couple of minutes…
Language 2: C / C++
C is a language that can be traced back to the 1970’s and has a long heritage as a programming language. C++ dates from the early 1980’s, and is an extension of the C language (you can write pure C programs using C++; C++ is very nearly a superset of C). The extensions revolve around Object Orientation (and a few other minor things); so C++ and Java can both be considered Object Oriented languages (see “Object Orientation”, page 23).
Figure 13 - Bjarne Stroustrup, the creator of C++

It is a very widely used language; most people who have used a computer have had some interaction with C++ – Windows is written in C++, and Windows 95 was 30 million lines of it! Unlike VB, C++ is not owned by any organisation in particular, although Bjarne Stroustrup created the language at Bell Labs. Many compilers exist for C++, and in theory an organisation implementing a C++ project is able to switch compilers at will (in practice, however, many variants of C++ have evolved, and so this theory doesn’t hold true). Visual C++ is Microsoft’s version of a C++ compiler, and while their tool allows close integration with the Windows platform, it isn’t a special version of C++. It is a very popular language with programmers; it is much deeper than Visual Basic and basically you can do anything you like in C++ – the power is in the hands of the programmer.
C and C++ are both very terse languages indeed. Code written in either language is very, very difficult to read and understand (this depends on the skill of the author, but generally speaking it is true). The languages are also difficult to learn – an average one-week C++ course could never bring a programmer to the same standard as a one-week Java or VB course – building GUIs, for example, would be out of the question as there would be too much ground to cover first.

The biggest (general) advantage of C++? It generates very fast code. It is potentially the fastest code you can get without actually writing assembler or machine code (no chance of that these days). Recall that earlier in the course we said that the Oak team, James Gosling et al (Page 6), considered C++ as their language of choice for their electronic devices. One of the reasons they rejected the idea was the potential for serious errors in C++ causing run-time crashes (not desirable in an embedded device). Here’s an example:
Example C++ Program
A nice, simple program. Just two lines of executable code. Don’t worry if you don’t know any C++ syntax; your lecturer will take you through it.
int main (void)
{
    // a simple program that fills an array with squares…
    int square_array[10];
    int count;

    for (count = 1; count <= 10; count++)
        square_array[count] = count * count;

    // now print the results…
    for (count = 1; count <= 10; count++)
        cout << square_array[count];
}
Figure 14 - A Simple C++ Program

Will the program run? Your lecturer will give a demo of the program. Your job is to give the program a thumbs up or thumbs down, based on the demo of the program, and your code inspection.
This space is provided to make your own notes about the debugging session:
Food for thought – that’s two lines of executable code. Now imagine the kinds of problems you’ll get with a couple of million!
Language 3: C# (C-Sharp)
C# is another language designed by Microsoft. At the time of writing it is a fledgling language and reports of use in the field are a little thin on the ground at the moment. It is possible that there are legal rather than technical reasons for the development of this language; we will avoid discussing these reasons here, and instead look at the features of the language. It is pitched at a level somewhere between VB and C++. It aims to be more powerful and general purpose than Visual Basic, but “safer” and easier than C++ (ie avoiding the kind of mess we saw on page 28). In fact, the main feature of C# transcends the language itself. It is part of the new Microsoft .NET framework, which we will look at later. One of the main features of .NET is that any language can be used as part of the framework. Java could not be included (for legal reasons), and this is one of the reasons that C# was invented by Microsoft – as a Java replacement.
Now, let’s compare the three languages to Java.
VB vs Java
Visual Basic is clearly stronger at the production of graphical user interfaces. Swing interfaces can be written using pure Java code (time consuming and difficult), or using a tool – but still the tools are a little immature and not particularly easy to use. VB has always provided the tools to build interfaces, and as we saw earlier, the tools are easy to use and stable.
Visual Basic is definitely an easier language to learn and implement than Java. The scope of VB is more restricted, however; if you are building small applications on Windows platforms then VB is a leading choice. Java, whilst being more complex, is able to handle a much larger range of tasks than VB.

Visual Basic is only supplied by one manufacturer, and even a basic version of the compiler is fairly expensive. The full version is very expensive indeed. Java development systems are available from a wide variety of suppliers10 at a wide variety of prices, depending on your needs. Visual Basic only runs on Windows platforms11, whereas Java will run almost anywhere.
C++ vs Java
The Java syntax is based heavily on the C++ syntax. They are definitely not the same language, however – they each have a very different feel. As mentioned at the start of the course, many of the dangerous features of C++ have been removed – the example we worked through (Page 28) could not happen in Java. C++ was never tightly controlled; it evolved in many different directions, with many people adding features in what seemed like an ad-hoc fashion. It finally became an ISO standard in the late 1990’s, but it is still a very painful exercise to move from one compiler to another.

We will look at the performance of the two languages later, but C++ is definitely a much faster performer than Java; things are improving, but not many people would seriously consider building a real-time 3D graphics game using Java.

Perhaps the most important comparison is the relative productivity of Java and C++ programmers. The evidence from the field is that Java programmers are much more productive than C++ programmers. Fewer bugs emerge during the development process, and because Java has cut out many of the dangerous and abused features of C++, programmers can spend more time concentrating on the problem at hand, rather than on obscure syntax problems.
C# vs Java
C# is relatively new, and Ariadne have little experience of C# to date, so it is difficult for us to provide an objective comparison of the two. From a programming view, the two are very difficult to separate. C# has taken a few technical points in Java and improved on them, but that really would be the minimum we could expect after Java has been in the field for seven years.
10 Although Java is still stubbornly owned by Sun Microsystems
11 The new .NET Framework may change this
As a rough comparison, Java has been in the field for much longer than C#, but that situation will soon change. We shall watch the battle with a non-committal interest… Perhaps more important than the differences between Java and C# is where C# fits in with the new Microsoft .NET framework…
Microsoft .NET
.NET is promising to be one of the biggest revolutions in the industry ever. .NET is essentially the pulling together of many of Microsoft’s technologies such as COM, and the languages that Microsoft have been nurturing for so long (like VB and ASP). We’ll look at the “server side” part of .NET in the second part of the course. From the programming side, the most astonishing feature of .NET is that it is language independent. Programmers essentially have the chance to work in whichever language is suitable for them. A GUI developer would probably use VB; a programmer writing complex logic may use C#. All the languages plug into the framework and work together transparently - the C# coder can call the VB code without even knowing they are working with VB code! Coders working on a Java/J2EE project have no choice – it’s Java or nothing…
Summary
In this session, we compared Java to three other major languages:

• VB is best for developing “small”, “simple”, GUI-intensive applications. It is owned by a major vendor and for many years has been proprietary
• Java is best as a more general purpose, powerful but safe applications/server-based language. It is owned by a major vendor, but one who seems to be committed to a free and open use of the language
• C++ has been popular for many years, but it is showing its age and, despite its power, it is difficult and dangerous to use. It isn’t really owned by anyone in particular but has extensive tool support
• C# looks and feels like Java; users on the .NET platform will use C# instead of Java. It is owned by a major vendor; they appear to be making the language free and open to use
Chapter 4
Java Performance
The performance of Java (or perhaps, the lack of it) is perceived to be one of Java’s greatest weaknesses. Here are two quotes we have dredged from the Java web site:

• “86% of users expressed concern about the performance of Java based applications”
• “Java technology seemed to have a knack for turning a fast Pentium into a plodding 386!”
Is the performance of Java Applications really a problem?
Why Should Java be Slow?
Recall from “The Java Virtual Machine” (Page 13) that Java is not converted into a native executable application. Instead, the Java code is compiled into a set of platform-independent bytecodes, a kind of platform-independent machine code. At run time, the Java Virtual Machine (JVM) reads in these bytecodes one by one, and as they are read in, they are converted to machine code as and when required. So, as a Java program is running, a translation process is constantly going on – and this is the reason Java programs run slowly. But how slow is slow? Clearly, this is subjective and depends on your requirements and the domain you are working in…
Slow Java
The first releases of Java were indeed incredibly slow. Applets in particular ran very sluggishly even on the fastest machines (again, this didn’t help Java’s reputation). However, with each release of Java, the performance has become faster and faster. There are two technologies working behind the scenes to help here. A Just In Time Compiler is part of the virtual machine; it basically caches the machine code it generates, removing the need for it to constantly compile the same code again and again. This means that the first pass through your program may be slow, but future passes speed up.
HotSpot VMs are special Virtual Machines that contain performance evaluation code. Where the VM detects bottlenecks in the code (ie “hot spots”), the VM automatically optimises the code around the bottleneck. Both of these technologies are completely transparent to you – they are just there. Both technologies go from strength to strength, and Java’s performance improves with each release.
Demo and Benchmarks
We are now going to run a simple benchmark. This small application adds a million customer records into memory, each record containing a customer id, a name and a credit rating. We have implemented the program using C++ and Java, and while our C++ skills are not first rate, we have aimed for similar implementations of the two. We are going to first of all run the Java version, using the Virtual Machine provided in release 1, back in 1996. Then, we will run the program using C++ (and we expect a much faster performance). Finally, we will run the program using the latest version of Java. The program will, at the end of the run, report how long the records took to create in memory.
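The Java side of the demo might look roughly like this (a sketch under our own assumptions about the record layout; timings will vary wildly by machine and VM):

```java
import java.util.ArrayList;
import java.util.List;

public class CustomerBenchmark {
    // One record: a customer id, a name and a credit rating.
    static class Customer {
        final int id;
        final String name;
        final int creditRating;
        Customer(int id, String name, int creditRating) {
            this.id = id;
            this.name = name;
            this.creditRating = creditRating;
        }
    }

    // Creates 'count' customer records in memory and returns the elapsed
    // time in milliseconds - the figure the demo reports at the end.
    public static long run(int count) {
        long start = System.currentTimeMillis();
        List<Customer> customers = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            customers.add(new Customer(i, "Customer " + i, i % 10));
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        System.out.println("1,000,000 records took " + run(1_000_000) + " ms");
    }
}
```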
Demo
Your results:

Language        Time
Java 1.0
C++
Latest Java
Benchmark Summary
The benchmark seems to suggest that up-to-date Java can compete in the same area of performance as C++. However, this is only one simple benchmark and should not be taken as cast-iron proof. Certainly, on the computational side, Java now runs fairly quickly thanks to Just-In-Time compilers and HotSpot VMs.

Having said that, the Java GUI is still a slow performer (just think of all the drawing work it has to do!), and can give the feeling of a less responsive application. Another performance issue is that Garbage Collection (see page 23) can and will kick in at unpredictable intervals; sometimes this will pause program execution for a perceptible amount of time. A final and important point is that Java is very memory hungry; even 64MB of RAM is not enough for a non-trivial application.
Native Code Compilers
Compilers exist that claim to be able to compile Java into true 100% native code, ie full executable programs. Obviously, this renders your programs non-portable - but for many projects this isn’t a problem. Treat these compilers with caution and ensure a full evaluation is carried out before any commitment is made to licenses. Our evaluations have proved rather frustrating when compiling non-trivial applications.
Summary
• The Java language has a reputation for being slow
• This reputation is based on its early incarnations; the slowness was a necessary consequence of platform independence
• Java is rapidly speeding up, thanks to “Just-In-Time” compilers and “HotSpot” VMs
• Java can now cut it with the fastest languages
• However, C++ will still outstrip it, and there are plenty of outstanding issues
Chapter 5
Introducing J2EE
In this chapter, we introduce the Java 2 Enterprise Edition (J2EE), and describe its main features. In the following chapters, we will go into some deeper detail on the main areas of J2EE.
What is J2EE?
As we described earlier, the Java language was originally conceived as a language for building small programs embedded in electrical devices; its popularity was thanks to this idea being twisted into using Java as a programming language for running small applications (applets) in Web browsers. Java has certainly moved on dramatically since then. Now, Java is being used to build large enterprise systems – however, the Java language alone is not enough to meet the requirements of an Enterprise system. A platform is required to provide some of the services we need in such a system, such as12:

• The ability to store data in a commercial database
• The ability to distribute our application across more than one computer
• The support for transactions, to prevent data from becoming corrupted
• Multithreading, to allow concurrent access to our services
• Connection Pooling, to prevent our database performance from degrading
• Scalability – it is all very well our application working like a dream when we test it in the factory, what happens when hundreds of people try to use it, all at the same time?
An enterprise application cannot be located on a single PC – therefore we need to employ some kind of architecture – a framework which lays down the guidelines for where each component of our system is to be located. Very early systems were monolithic – very thin clients (dumb terminals) would connect to a large database running on a server – the so-called Two-Tier (or Client-Server) Model:
Two-Tier Architecture
The classic two-tier (or client-server) architecture was a move away, during the 1980s and 1990s, from monolithic mainframe applications using dumb clients.
12 All of these topics, and more, will be explained and covered in more detail shortly
In a two-tier architecture, presentation logic (ie the code that produces displays for the user and gathers input) is present on the client side of the application. In addition, the bulk of the business logic is also present on the client. The data is stored on a server, and the client would typically communicate with the database using a language such as SQL. Sometimes, some business logic may be present on the database server, perhaps in the form of stored procedures or daemon (long-running) processes.
[Figure 15 - The Two-Tier Model: a client (eg a PC) containing the presentation logic and business logic, connected directly to a database server]

The chief advantage of the client-server model is that it is fairly simple to work with: the database technology is well established and easy to understand, and the programmers don’t have to worry about the complexities of more than one tier. There are serious disadvantages to the two-tier architecture, however...
Two-Tier Architectures: Disadvantages
The fundamental problem with this architecture is that the business logic and presentation logic are tied together, making for a system that is difficult to understand and maintain. A classic example of the problems with the model is the problem such a system would face if the graphical user interface needed to be changed in any way (for example, changing the GUI from a Visual Basic application to a web based front end). As the business logic is so closely tied to the presentation logic, it is impossible to change the presentation code without impacting on the business code. It is probable that the business logic would need to be completely re-implemented. With the advent of the internet, it became possible to provide front ends to applications through web browsers; however, given that a particular web browser could be running on a myriad of platforms, embedding so much business logic on the client side is not necessarily feasible or desirable.
The Three Tier Model
The three tier model splits the presentation logic and the business logic into two separate tiers.
[Figure 16 - The Three Tier Model: the client (eg a PC) holds the presentation logic, the business server holds the business logic, and the database server holds the data.]
It is common to deploy the presentation tier on the client machines, while the business tier is deployed on a server. The business and presentation tiers are as loosely coupled as possible; in other words, the presentation tier simply dispatches requests off to the business tier without knowing or caring how the job is being done. Similarly, the business tier does its work and passes the results back to the presentation tier without any knowledge of how the data is going to be displayed. The serious disadvantage with this model is the complexity involved in developing such an application - doing so requires a knowledge of distributed computing (for example, using RMI or Corba to call methods on objects on other tiers), multithreading (to allow the business tier to handle requests from multiple clients at the same time), security and so on. To remove the burden on the programmers, an application framework (these days commonly referred to as an application server) could be purchased from a third party to provide such services...
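The loose coupling between the tiers can be sketched in plain Java: the presentation tier programs against a business-tier interface, never a concrete class, so the implementation can change without the presentation code noticing. (All names here are invented for illustration.)

```java
// Hypothetical business-tier contract: all the presentation tier ever sees.
interface OrderService {
    double priceOf(String productCode);
}

// One possible business-tier implementation; it could be swapped for a
// remote stub or an EJB reference without touching the presentation code.
class InMemoryOrderService implements OrderService {
    public double priceOf(String productCode) {
        return "WIDGET".equals(productCode) ? 9.99 : 0.0;
    }
}

// Presentation tier: formats results, neither knowing nor caring how
// the business tier computed them.
class PriceView {
    static String render(OrderService service, String productCode) {
        return "Price of " + productCode + ": " + service.priceOf(productCode);
    }
}
```

The presentation code depends only on `OrderService`, which is exactly the "dispatches requests without knowing or caring" property described above.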
Application Servers
Third party application servers can provide the framework that we need to develop a multi-tier application, and provide the required services such as distribution, multithreading, security and persistence. In the past, each server product was developed independently, and therefore each provided different services, and each service was implemented in a different way (so the programmer would have to jump through different hoops depending on the server in use). Clearly, this would mean a project would have to choose an Application Server and stick with it - changing between application servers would be difficult, if not impossible.
However, in 1997 a group of application server vendors (including BEA, IBM, Oracle, Sybase and Sun themselves) began working together to define a standard for application servers, based on the Java language. The vision was to create a standardised set of services, and a standard API for accessing these services.
The J2EE Standard
The J2EE Standard rigorously defines a set of services that an application server must support, together with a standard API for accessing those services. The standard is closely tied to the Java language, so the method of writing code to access the Application Server’s services is clear. It is important to note that J2EE is not a product; you cannot download a software product called “J2EE” (as you can with J2SE). J2EE is a specification, for which there are many different implementations. The implementations are provided by third-party vendors (such as IBM and BEA). These implementations must conform to the J2EE standard to be able to call themselves “J2EE Compliant”. Sun provide a free product called the reference implementation, which conforms to the J2EE standard - it is free to download and use, and is ideal for learning about J2EE and also for testing or prototyping. It doesn’t do much more than the J2EE standard requires, however, whereas third party commercial implementations are likely to be much more suitable for live applications.
J2EE Components
The J2EE defines three types of component an application programmer can develop:
• Servlets
• JSP’s
• Enterprise Java Beans
We will look at each of these in very brief detail for now – we’ll explore each of them as we progress through the course.
Java Servlets
Servlets provide a method of writing server side Java programs. A common use for servlets is the dynamic generation of web pages. For example, it would be possible to build an interactive guestbook on a web site using a Java servlet. Servlets are a replacement for the traditional “CGI” (Common Gateway Interface) method of providing interactive webpages. The use of CGI is very popular and widespread, although CGI programs can suffer from performance and scalability problems, and are usually written using Perl – a popular niche language but certainly not to everyone’s taste!
Java Server Pages (JSP)
Java Server Pages allow web designers to build interactive web pages without getting into the deep details of the Java language. A JSP looks very much like standard HTML, and so should feel familiar and comfortable to existing HTML programmers. The difference is that JSP allows fragments of Java code to be embedded into the web page.
Enterprise Java Beans (EJBs)
Enterprise Java Beans are perhaps the most significant of the three types of component. An EJB is a Java class that possesses several special features. Enterprise Java Beans are:
• Distributed
• Transaction-aware
• Multi-threaded
• Persistent
Many of the features of an EJB are provided “for free” by the application server, relieving the programmer of the tedious job of implementing these details. Therefore, the programmer can concentrate on the important work of coding the business logic.
The J2EE Architecture
The J2EE Architecture is very similar to the three tier model, although it is extended a little further to accommodate web based applications:
[Figure 17 - The Full J2EE Architecture: the client (eg a PC or browser) holds the presentation logic, the Web Container holds the web application, the EJB Container holds the EJB’s, and the database server holds the data.]
In order to accommodate the needs of building web based enterprise applications, the J2EE spec is based around 4 tiers rather than 3, although the theory and reasoning behind the separation remains the same.
A J2EE Compliant Application Server will provide a Web Container, which is a server that allows servlets to be deployed. We’ll explore this in detail in the chapter on servlets. An EJB Container will also be provided as part of the application server - this is a server that can provide our Enterprise Java Beans with all of the services that we need. Note that a J2EE based application does not need to be a web application - it supports any type of client - in an application not based on the web, the client tier talks directly to the EJB Container, and the Web Container is not used.
Concrete Example – A Warehouse
Consider a warehouse running an automated stock/picking system. Orders are entered into the system using web based pages running on a small intranet. Moving around the warehouse are a collection of workers carrying Radio Data Terminals dishing out instructions on which orders to pick etc. A possible J2EE implementation of this might look as follows:
[Figure 18 - Warehouse J2EE Architecture: HTML data entry forms and queries on the intranet talk to a webserver running control servlets; the servlets and the Radio Data Terminals (RDT’s), which run standard Java applications, both talk to an EJB Server holding Orders, Stock and Customer EJB’s (control and model), which persist their data to an Oracle database.]
The Radio Data Terminals are running standard Java Applications (or possibly even J2ME applications – see page 10). The HTML forms are standard web pages. We would have some J2EE servlets running on the webserver – these provide any control logic that the web forms need. More on this in the next chapter. The EJB Server holds the bulk of our business logic. For example, the functionality to credit check all customers held on the system will be held in a component, written in Java, that has been deployed on this server. These components are the Enterprise
Java Beans. These EJB’s will contain data about the customers – and these will be stored on (in this case) an Oracle database. More on this later.
Summary
• The J2EE Specification provides a standard for Application Server vendors
• The specification enables the construction of applications built around a multi-tier architecture
• The components defined by the specification are written by the Java programmer:
  - Servlets (server side Java classes accessed by browsers)
  - JSP (HTML-like files with embedded Java)
  - Enterprise Java Beans - distributed, transaction-aware, persistent, multithreaded Java components
Chapter 6
Servlets and JSP
In this session, we will look at the Java technology called servlets, and see what they can do. Although they are an interesting idea, there is a serious problem associated with servlets that you need to be aware of – we will present the solution, a technology called JSP. To build elegant, maintainable and scalable J2EE systems, it is critical that your engineers follow certain rules when writing servlets and JSP’s – we will discuss these rules at the end of the session.
The Static Web
The World Wide Web, as it was originally designed, is a fairly static medium. The HyperText Transfer Protocol, or HTTP for short, defines the general process for requesting web pages: the client (web browser) requests a page from the server, and the server responds by sending a stream of data back to the client.
[Figure 19 - The HTTP Protocol simplified: (1) a web browser client sends a “get” request; (2) the webserver, which dishes out predefined webpages, returns the webpage.]
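The exchange in the figure is plain text on the wire: the “get” is a single request line such as GET /index.html HTTP/1.0. A minimal sketch of parsing such a line in Java (class and method names invented for illustration):

```java
class RequestLine {
    final String method;
    final String path;
    final String version;

    RequestLine(String method, String path, String version) {
        this.method = method;
        this.path = path;
        this.version = version;
    }

    // Splits a line such as "GET /index.html HTTP/1.0" into its three parts.
    static RequestLine parse(String line) {
        String[] parts = line.trim().split(" +");
        if (parts.length != 3) {
            throw new IllegalArgumentException("Malformed request line: " + line);
        }
        return new RequestLine(parts[0], parts[1], parts[2]);
    }
}
```

A static webserver does little more than this: parse the path and stream the matching file back to the client.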
Making the Web Dynamic
As the web matured, a demand grew for the web to become more dynamic. We have seen how Java Applets attempted to provide interactivity on the client side – but what if we want to do something dynamic on the server side? Let’s say we want an interactive message board, to which users can post information. We will need to store these messages on a database on the server, and when the users want to view the information, we will need a program to query the database and build the webpage to show the up-to-the-minute collection of messages. The scheme described in Figure 19 is not sufficient to achieve this.
Figure 20 - A dynamic, constantly changing, message board
Solution: CGI
One solution was a technology called CGI, or Common Gateway Interface. This gave programmers the means to write server side programs that could intercept requests from clients, and to generate the response page dynamically. CGI is a very, very popular technology and although it is a bit old now, you will still see many web pages using CGI (look at the address of many web sites – if you see “cgi” or “cgi-bin” appearing somewhere in the address, you are using CGI). There were some problems associated with the approach. First of all, although any language could be used to implement CGI programs, a language called perl was considered the best. Perl is an excellent language, but rather specialist and definitely a niche language (text processing is one of its strengths). CGI is all very well if you are performing relatively simple tasks like building message boards, but moving into the realms of serious software engineering (ecommerce sites, etc) is pushing things a bit far. CGI is also renowned as being a poor performer (due to threading problems), and there are some serious security loopholes with CGI.
The Servlet Solution
A more recent solution, provided by Java (and J2EE), is to build small Java programs called servlets. These small programs are deployed and run on the server. Their operation is simple, and very similar to the CGI scheme described above. The servlet sits on the server, doing nothing, until a request from a web page comes in. The request is intercepted by the servlet, the servlet then does some work (anything the programmer wants to do), and then the servlet sends the results back, in the form of a web page. Although this is a similar scheme to CGI, the main benefits are:
• We can use Java – a more serious and deep language than perl
• We can easily take advantage of Java’s features, such as JDBC
An Example Servlet
We’ll now demo a simple servlet. The code follows below - don’t worry about all of the details, but you should notice that the program is outputting HTML, and one of the outputs is the current date and time. So this is a dynamic webpage (albeit a very dull one), rather than a prewritten static webpage.
public class HelloWorld extends HttpServlet
{
    public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws IOException
    {
        PrintWriter out = response.getWriter();
        out.println("<html><head><title>HelloWorld</title></head><body>");
        out.println("<p>Hello to everyone! The time now is " + (new java.util.Date()) + "</p>");
        out.println("</body></html>");
    }
}
Figure 21 - An Example Servlet (many technical details omitted for clarity)
A Serious Problem Emerges
The simple servlet above is all very well, but there is a major flaw in the idea behind servlets. In any non-trivial servlet, the code very rapidly becomes a nasty tangled combination of Application logic and HTML.
out.println ("<html><head><title>Extension List</title></head>");
out.println ("<body><h1>Extension List</h1><hr><pre>");
try
{
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
}
catch (ClassNotFoundException e)
{
    out.println ("An error occurred loading a driver:<p>");
    out.println ("<pre>" + e + "</pre>");
}
String url = "jdbc:odbc:PERSONELL";
String user = "sysadmin";
String pass = "sysadmin";
Connection con = null;
Figure 22 - A nasty combination of HTML and Java
This creates a maintenance nightmare. What happens if you want the look and feel of your web page to change? You have to trawl through reams of Java code looking for those out.println() statements. Also, it is necessary for your HTML coders to have a reasonable working knowledge of Java to even know how to make the changes. This very unpleasant situation has a possible solution, called JSP – Java Server Pages…
Java Server Pages (JSP)
In JSP, instead of writing a Java Servlet, we write simple text files that look very much like standard HTML. Therefore, these files (called JSP files) should be relatively understandable to HTML coders. The difference is we are able to “drop in” small blocks of Java code anywhere in the file. These areas of Java are carefully delimited using some predefined markers. Consider the following JSP file, a web page that outputs the date and time:
<html>
<head>
<title>What Time Is It?</title>
</head>
<body>
<h3>You requested this page on :
<%= new java.util.Date() %>
</h3>
</body>
</html>
Figure 23 - A Simple JSP
Here, most of the JSP is standard HTML, but notice the block of code between the <%= and %> markers. This is standard Java code.
Figure 24 - The resulting output
JSP Architecture
How does this actually work behind the scenes? It’s all a bit of a trick really. We deploy the JSP onto the server, where it is automatically converted into a servlet! So in essence, JSP is a way of creating servlets without realising you’re writing servlets, and without their inherent complexity! Here’s the process that goes on, step by step:
[Figure 25 - The JSP Architecture: the client browser’s request travels to the server and into the JSP Engine, which reads the source JSP file, generates a Java servlet, compiles it with a Java compiler into a servlet classfile, executes it, and returns the resulting HTML page to the client.]
Step 1) The client requests a JSP page (in other words, the user requests a URL that points at a JSP file).
Step 2) The server catches the request for a JSP page. The server finds the JSP file, and then passes it on to a program, running on the server, called the “JSP Engine”.
Step 3) The JSP Engine reads the JSP file, and from this basic information, it generates a Java Servlet (automatically!).
Step 4) The generated Java Servlet is compiled using a standard Java compiler (remember - all this is happening while the user is waiting for their page!).
Step 5) The new Java servlet is now executed by the JSP engine. The servlet, as in the previous chapter, generates HTML “on the fly”.
Step 6) We now have an HTML page - the page is passed back to the web server and then finally it is returned to the client.
Note that the compilation of the servlet does indeed take some time, and you will notice that the first request of a JSP page is SLOW! The JSP engine stores the resulting class file, however, so subsequent accesses will be much faster.13
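The slow-first-access behaviour boils down to a cache keyed on the JSP’s last-modified time: recompile only when the source has changed. This is roughly what a JSP engine does internally; the sketch below is illustrative only, with invented names, and real engines are considerably more involved.

```java
import java.util.HashMap;
import java.util.Map;

class JspCompileCache {
    // Maps a JSP path to the source timestamp it was last "compiled" at.
    private final Map<String, Long> compiledAt = new HashMap<>();
    int compilations = 0;

    // Returns true if this request had to take the slow path (recompile).
    boolean serve(String jspPath, long sourceLastModified) {
        Long stamp = compiledAt.get(jspPath);
        if (stamp == null || stamp != sourceLastModified) {
            compilations++;                        // slow path: generate + compile the servlet
            compiledAt.put(jspPath, sourceLastModified);
            return true;
        }
        return false;                              // fast path: reuse the cached classfile
    }
}
```

The first request for a page pays the compilation cost; every later request with an unchanged timestamp reuses the stored classfile.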
Another Dynamic Webpage
Here’s another example of a simple JSP. Again, we are demonstrating that we can drop lines of Java code anywhere we like in the HTML:
<html>
<head>
<title>First 10 Squares</title>
</head>
<body>
<h1>The First 10 Squares:</h1>
<hr>
<%
for (int i=1; i<=10; i++)
{
    out.println (i*i);
}
%>
<hr>
</body>
</html>
Figure 26 - Another simple JSP
Another Problem Emerges!
You might have spotted that despite the nice, fluffy feel of JSP’s (and they are certainly at first glance more friendly than servlets), we actually haven’t solved the problems we identified in “A Serious Problem Emerges” (Page 44). We still have them, albeit in a different way.
13 Some Application Servers compile the JSP when the JSP is deployed rather than when it is first accessed. This obviously improves performance, but it isn’t mandated, and the reference implementation does not do this.
If we begin to write JSP pages with Java logic scattered all over them, we have the same maintenance nightmare as before – if we need to make any changes to the way the application works, we are going to have to laboriously hunt through masses of HTML code! Conversely, the HTML is going to be scattered amongst lots of very awkward looking Java fragments. This is one of the biggest problems faced by many J2EE projects.
<%@ page import="java.sql.*" %>
<%
try
{
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
}
catch (ClassNotFoundException e)
{%>
    <h1>An Error Occurred Loading a Driver</h1>
<% }
String url = "jdbc:odbc:PERSONELL";
String user = "sysadmin";
String pass = "sysadmin";
Connection con = null;
try
{
    con = DriverManager.getConnection(url,user,pass);
}
catch (SQLException e)
{
%>
    <h1>An error occurred connecting to the database</h1>
    <hr>
<% }
Figure 27 - An example of some very unpleasant looking JSP - what on earth is going on here??
Solution: Servlets and JSP’s Together!
We now have the situation where:
• Servlets are a good idea: they are good at doing “brain-work” (application logic), but they are poor at the presentation of the results – those nasty out.println()’s
• JSP’s are a good idea: they are excellent at doing the presentation, but degenerate into a mess if they try to do too much brainwork
Well, the answer is to get the two to work together as a team! Let the servlet handle the Java logic, then get it to send its results onto a JSP – which can then do the job of displaying the results.
The MVC Architecture
What we have discussed here is one form of a system architecture known as the Model-View-Controller, or MVC. A discussion of MVC is beyond the scope of the course, but at a simple level, MVC simply says “use separate modules to handle the display, the system flow and the business logic” – which is roughly what we have done with our JSP’s and Servlets:
• The display logic is called the “View” in MVC.
• The system flow is called the “Controller” in MVC.
• The business logic is called the “Model” in MVC.
It is imperative, to achieve success with J2EE on your project, to enforce this model.
Project Standard
JSP’s display results only. Servlets handle the Java logic.
[Figure 28 - The Model View Controller (albeit in a simplified form): the “View” is HTML or JSP, the “Controller” is a servlet, and the “Model” is other Java code.]
Note – this method of design is often called “Model-2” in J2EE circles.
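The separation can be shown in miniature with plain Java classes standing in for the three roles (all class names invented for illustration; in a real J2EE system the controller would be a servlet and the view a JSP):

```java
import java.util.Arrays;
import java.util.List;

// The Model: business logic only - no display code anywhere.
class StockModel {
    List<String> itemsInStock() {
        return Arrays.asList("Widget", "Grommet");
    }
}

// The View: display only - no business logic anywhere.
class StockView {
    String render(List<String> items) {
        StringBuilder html = new StringBuilder("<ul>");
        for (String item : items) {
            html.append("<li>").append(item).append("</li>");
        }
        return html.append("</ul>").toString();
    }
}

// The Controller: wires a request to the model, hands the results to the view.
class StockController {
    String handleRequest() {
        return new StockView().render(new StockModel().itemsInStock());
    }
}
```

Changing the look and feel now means touching only StockView; changing the business rules means touching only StockModel.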
Java in JSP’s
Are there rules about what Java should be allowed in a JSP page? The general answer is “as little as possible”, and the MVC/Model-2 principle must be followed.
Generally, Java will be used:
• To display results – eg outputting the details of a product
• Simple loops – eg running through multiple products, and displaying each one in turn
• Minor cosmetic modifications to data – for example, turning a singular into a plural, changing a greeting depending on the time of day, etc.
The real problem is enforcement. There is nothing in Java or J2EE to prevent programmers from dropping huge blocks of code into JSP pages. We have seen this is a recipe for disaster – the only way to avoid it is to impose project standards and to carry out code inspections.
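Code inspections can be partly automated. A crude check that counts scriptlet blocks in a JSP source - ignoring page directives (<%@) and expressions (<%=), which the rules above permit - might look like the following. This is an illustrative sketch with invented names, not a real tool.

```java
class ScriptletAudit {
    // Counts "<%" scriptlet openings in a JSP source, skipping
    // directives ("<%@") and expressions ("<%="), which are allowed.
    static int countScriptlets(String jspSource) {
        int count = 0;
        int i = jspSource.indexOf("<%");
        while (i >= 0) {
            char next = (i + 2 < jspSource.length()) ? jspSource.charAt(i + 2) : ' ';
            if (next != '@' && next != '=') {
                count++;
            }
            i = jspSource.indexOf("<%", i + 2);
        }
        return count;
    }

    // Flags a page that exceeds the project's scriptlet budget.
    static boolean violatesStandard(String jspSource, int maxScriptlets) {
        return countScriptlets(jspSource) > maxScriptlets;
    }
}
```

Run over the project's JSP files in a nightly build, a check like this catches creeping "brain-work" long before it becomes a maintenance nightmare.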
Custom Tags
A Custom Tag is a new enhancement to JSP that allows the JSP coder to perform complex operations without the need for Java code. A tag is essentially a macro used by the JSP programmer – even though the tag is quite simple, it turns into complex Java “behind the scenes”.14 The concept of custom tags has been, until recently, in its infancy. No standard set of tags was available, and it was really up to projects to write their own “Tag Libraries”. This is quite a difficult process, and of course it means the code your project writes is non-standard. An alternative was to download third-party tag libraries, but these libraries were patchy and sometimes quite complex. At last (and only just, at the time of writing), a standard tag library has been specified. It is called the “JSP Standard Tag Library” or JSTL, and this library will become part of the J2EE specification. Here’s a quick example of where tags might be useful. Again, you don’t need to understand the Java, as long as you get a flavour of what is going on:
JSP Example
This JSP page is responsible for displaying all of the items in stock in a warehouse. We have some standard HTML in here as usual, but the bold code is the Java code that runs through each item of stock and extracts the details of the stock:
14 ie – when the JSP is converted to a servlet.
<i>The following items are in stock</i>
<pre>
<%
Collection results =
    (Collection)request.getAttribute("stockListResult");
Iterator it = results.iterator();
while (it.hasNext())
{
    String next = (String)it.next();
%>
Item : <%= next %>
<% } %>
Figure 29 - JSP to display a stock list
This JSP conforms to the rules we have discussed. It doesn’t do any brain work; it is given a set of data and the Java code simply extracts each result, and the JSP displays the name of the stock on the web page. Even though we have followed the rules, it is still a bit messy, and very heavy on the Java. Here, a Tag from the JSTL will help:
Using a Custom Tag
The syntax of a custom tag is beyond the scope of this book15, but the following code illustrates the benefit of replacing the heavy Java in Figure 29:
<i>The following items are in stock</i>
<pre>
<c:forEach var="item" items="${results}">
Item : <c:out value="${item}"/>
</c:forEach>
Figure 30 - Replacing the Java with a Tag
Summary
• The J2EE specifies two types of web component that can be written in Java: Servlets and JSP’s
• Servlets enable the dynamic creation of web pages
• JSP’s enable the creation of dynamic web pages as well, but rely much less on explicit Java coding
• The MVC or Model-2 architecture must be enforced if maintainability and understandability is a requirement
• Custom Tags and the JSTL provide another way of hiding the Java in a JSP Page

15 Reference [8] is a good guide to the JSTL.
Chapter 7
Enterprise Java Beans (EJB’s)
In this session, we will look at the concept of Enterprise Java Beans, or EJB’s. We’ll look at the different types of EJB, and find out about the features that EJB’s offer.
Java Beans
Sun invented the concept of a Java Bean some years ago, in an attempt to extend Java’s reach into the area of Rapid Application Development. We saw in the “Java vs Other Languages” session (Page 26) that Visual Basic allows developers to rapidly build applications by dragging and dropping prebuilt components on to the screen, and then by customising the components by changing some properties. Sun’s idea was that a Java Bean would become the equivalent of the Visual Basic Components. A Java Bean is a standard Java Class, but one that is simple and easy for programmers to reuse themselves. For example, the “Button” in Swing (see Page 16) is a Java Bean. The programmer can change the colour of the button, the behaviour of the button and so on, but most of the work has already been done for them. Java Beans are often graphical, but not necessarily. As long as the Java Class conforms to a collection of “Design Patterns” – simple guidelines as to how the class is written – then it qualifies as a Java Bean. The key aim is that regardless of what the Bean is, it has a collection of properties that can be accessed and customised by the programmer.
Figure 31 - A JavaBean (in this case, the Button from the Swing classes). The programmer has customised the bean slightly, but most of the work was already done for them
The Purpose of Java Beans
By assembling these Java Bean “Components” we can theoretically build applications without reinventing the wheel. Java Beans were originally envisaged to be graphical, but a Java Bean can actually be any Java Class, as long as it conforms to the “Design Patterns”.
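The “Design Patterns” in question are mostly naming conventions: a property called name is exposed through getName and setName methods, and the class has a public no-argument constructor. A minimal non-graphical bean following the conventions (the class and its properties are invented for illustration):

```java
import java.io.Serializable;

class CustomerBean implements Serializable {
    private String name;        // the "name" property
    private int creditRating;   // the "creditRating" property

    public CustomerBean() { }   // beans need a public no-argument constructor

    public String getName()            { return name; }
    public void setName(String name)   { this.name = name; }

    public int getCreditRating()       { return creditRating; }
    public void setCreditRating(int r) { this.creditRating = r; }
}
```

Because the accessors follow the get/set naming pattern, a builder tool can discover the bean's properties automatically and offer them for customisation.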
From a theoretical point of view, Java Beans are easy to design and build – but in reality, there are some catches that make good Beans very difficult to build well. Therefore, commercial Java Beans can be quite expensive to buy.
Enterprise Java Beans
The next stage of evolution of Java Beans was the development of Enterprise Java Beans, or EJB’s. EJB’s are essentially the same as standard Java Beans – they are small components that perform some kind of job. However, EJB’s live on servers rather than inside standard desktop applications. Since they are server based, they are never graphical, unlike standard Java Beans, which often are. As an example, you might have a “Customer” EJB in an e-commerce application. This EJB would hold data about your customers, such as their credit rating and which products they are most likely to be interested in. As with any other Java Module, an EJB will be able to do things as well – for example we might be able to query the Customer EJB and find out the credit rating for each customer we are interested in. The most striking feature of EJB’s is that they provide your programmers with a whole raft of functionality that is essential when building large scale systems. All of this is done for them, so they can concentrate on the job in hand.
EJB’s Are:
• Persistent
• Transaction Aware
• Distributed
• Multithreaded
We’ll look at each of these concepts in turn. All of these features are provided “for free” – the programmer of the EJB doesn’t need to manually implement any of it. The trick is that your programmer’s EJB’s are “wrapped” in a piece of software called the EJB Container, provided by the Application Server.
EJB’s are Persistent
The data held inside an EJB can be made to be persistent – in other words, it can be stored on a relational database automatically. Any changes to the data in the bean will be reflected in the database, without the need for the programmer to write laborious (and error-prone) JDBC code (see “Databases in Java”, page 24). The SQL required to perform the database transactions will be generated automatically by the deployment tool supplied by your application server (this SQL
can be tailored to your own needs if required, but it is done in the tool rather than polluting the Java code). If required, persistence can be handled manually by the programmer. This would be necessary if particularly complex database persistence is required, but it does mean polluting the Java code with database/JDBC calls. The terminology for the automatic persistence is “Container Managed Persistence”, or CMP – the manual variety is “Bean Managed Persistence”, or BMP. Our general advice is to aim for CMP if at all possible – it is much more elegant and is one of the strongest features of EJB’s. We’ll return to this topic in a short while.
EJB’s are Transaction Aware
EJB’s that hold data can be made to be transaction aware fairly easily. Either the programmer manually demarcates regions of code that must be considered atomic, or entire methods (Java procedures) can be set to be a single transaction, using the deployment tool. For example, we might set a Java method called “buy” to be a single transaction. If anything goes wrong part way through the process, any changes made as part of that method will be automatically rolled back.
Place Order Example
As an example, consider an EJB that is responsible for placing an order for a customer. The EJB’s task is as in the following flow chart:
[Figure 32 - The Place Order Process: credit check the customer; if the check fails, return an error message; otherwise add the order to the orders list, add an invoice to the invoice queue, and send an “order placed” message to the customer.]
Clearly, if the process crashed (for whatever reason) after the order had been added to the list, but before the invoice was added to the queue, we would have a fairly serious business problem. By running the procedure as an EJB, we can make the entire process a transaction. If more fine grained control is required (let’s say we wanted to start the transaction after the credit check, and end the transaction once the invoice is raised), then this can be done by the programmer using a fairly simple “start transaction” and “end transaction” process. The automatic transaction handling is called “Container Managed Transactions” (CMT); the method using programmer intervention is called “Bean Managed Transactions” (BMT).
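What the container does on our behalf can be mimicked in miniature: snapshot the state, attempt every step, and restore the snapshot if any step fails. The following is a plain-Java sketch of the rollback idea with invented names - it is not the EJB transaction API, which the container hides from us entirely under CMT.

```java
import java.util.ArrayList;
import java.util.List;

class OrderSystem {
    final List<String> orders = new ArrayList<>();
    final List<String> invoiceQueue = new ArrayList<>();

    // Place an order "transactionally": if any step fails, undo everything.
    boolean placeOrder(String order, boolean invoicingWorks) {
        List<String> ordersBackup = new ArrayList<>(orders);
        List<String> invoicesBackup = new ArrayList<>(invoiceQueue);
        try {
            orders.add(order);
            if (!invoicingWorks) {     // simulate a crash mid-process
                throw new IllegalStateException("invoice queue unavailable");
            }
            invoiceQueue.add("invoice for " + order);
            return true;               // commit: both changes kept
        } catch (RuntimeException e) {
            orders.clear();            // roll back: restore both snapshots
            orders.addAll(ordersBackup);
            invoiceQueue.clear();
            invoiceQueue.addAll(invoicesBackup);
            return false;
        }
    }
}
```

The business problem described above - an order recorded with no matching invoice - can never be observed, because a failure anywhere in the method restores the pre-call state.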
EJB’s are Distributed
As we have seen, Enterprise Java Beans are located on a server and managed by the Application Server. However, the programmer using the EJB (the client programmer) doesn’t need to know anything about where the EJB is physically located. They follow a fairly simple procedure of “looking up” the EJB to get a handle on the bean, and then they call its methods exactly as they would do with a regular Java class. All of the nasty network mechanics happen behind the scenes in complete transparency (by the way, the technique used is called RMI, or Remote Method Invocation – as any programmer who has used RMI by hand will testify, it is a fairly unpleasant technique!)
EJB’s are Multithreaded
The Application Server manages threading automatically. As a quick example, assume a programmer has written an EJB called “Book” representing a product in an online book store. What happens if two customers try to get information about the book at the same time? The application server will automatically manage threads so that the two tasks can happen in parallel – and once again, no programmer intervention is required to make this happen. In addition, the application server will also intelligently pool resources. For example, a thousand clients might all be trying to access the same EJB at the same time; the application server will be able to share a limited number of EJB’s amongst those thousand clients, thus preserving resources and preventing excessive server load – again, all this happens without the programmer needed to code anything special to make this happen.
© 2002 Ariadne Training Limited
57
An Introduction to Java and J2EE
Types of EJB
Although until now we have considered an EJB to be a single concept, in fact there are actually three different types of Enterprise Java Bean available. The three types are: • • • A Session EJB An Entity EJB Message Driven EJB’s (MDB)
Session beans are in very common use – these are usually beans that are primarily written to do something. For example, we might have a session bean responsible for credit checking customers, or for placing orders (as on Page 55). Session Beans are not persistent – any data they hold will be lost when the system is restarted. Entity Beans are EJB’s that provide the persistence features we discussed earlier. They are usually used to represent the real world “objects” that exist in the system. Examples of Entity EJB’s would be Customer, StockRecord and PurchaseOrder. You could think of an Entity Bean as being “data storage” beans, but they are Java classes, and so can do things just as with a session bean. Message Driven Beans are Java Classes that are capable of responding to “Messages” sent to them. The benefit of an MDB is that they can work asynchronously – in other words the client asks the bean to do something, and the client can go off and carry on with some other task, while the MDB is responding to the message at the same time. This is a powerful facility, but we won’t explore it in any depth on this course.
Entity Beans
The concept and implementation of Entity Beans is very controversial (at the time of writing; things change quickly in J2EE!). Many J2EE applications have studiously avoided implementing entity beans, and have been happy to settle with Servlets, JSP and Session Beans. This will work perfectly well; an entity bean can be replaced by a standard class (you will see these referred to as “Plain Old Java Classes” – POJO’s). These plain Java classes use JDBC calls to manually manipulate the underlying database. Remember, however, that this manual use of JDBC is a pollution of your Java code, it is less maintainable and perhaps less understandable. But on the other hand, many people find Entity Beans difficult to understand and use anyway. Much of the confusion about Entity Beans has been attributed to the original specification of them – it was allegedly poorly written and difficult to understand.
© 2002 Ariadne Training Limited
58
An Introduction to Java and J2EE
The choice is entirely down to your project and its senior designers. However, be aware that a new version of the Entity Bean spec has now been released – it is called EJB 2.0, and it fixes many of the original problems with Entity Beans. As long as certain design principles are maintained (more on this shortly), Entity Beans are now viable for serious developments, and they certainly are a powerful tool.
J2EE Design Patterns
Designing and building J2EE systems is difficult. Or perhaps more accurately speaking, it is easy to design and build poor J2EE systems; applications that perform and scale badly. Help is at hand. A collection of good design principles, called “Design Patterns” have been formulated by the J2EE community. Sun are making a big effort to collect and publish these design patterns – they are all available on-line at the Java website. We’ll look at a quick example of a simple design pattern – our aim is not to teach design patterns, as that is beyond the scope of this course – but we do want to give the flavour of design patterns, and establish that understanding them is critical to building good systems. Our advice is to ensure that your designers and programmers are familiar with at least some of the patterns, and that your project embraces the teachings of the patterns. If you are planning to buy a J2EE Design Patterns book, ensure that the book covers EJB 2.0, as the new version of Entity Beans has made some of the design patterns redundant. An excellent design patterns book is available for free download – see reference [4].
Example Design Pattern
Consider a simple system that allows the user to view a list of all customers in the database – and for each customer, their open purchase orders are also displayed. Here is our first attempt at designing the application…
© 2002 Ariadne Training Limited
59
An Introduction to Java and J2EE
getCustomerDetails
Customer EJB
Web Page (JSP)
getPODetails
Purchase Order EJB
Client
Application Server
Figure 33 - Our first design attempt
This will work. Our client application (in this case it is a web page, but it could have been a standard Java application) makes calls to the Enterprise Java Beans on the server. Remember, this is possible and easy because EJB’s are distributed and the client programmer can easily locate and call the beans. Our design has the client calling the Customer EJB first to get the customer data, and then it follows with a call to the Purchase Order EJB. The information is then displayed on the web page.
Design Problem
Assume that this application works. The J2EE Design Patterns tell us that it isn’t a very good design however – it will perform badly, especially as the application scales up. The reason is that the calls across to the EJB’s are slow. They are travelling across a network, or possibly even the internet. Essentially, having two separate calls is wasteful. Applying a Design Pattern will help here…
Design Solution
The solution is called a Session Façade – all design patterns have recognizable names. The Session Façade guides us to remove the multiple calls across the network and replace with one single call.
© 2002 Ariadne Training Limited
60
An Introduction to Java and J2EE
We will require an extra EJB to make this work, however – usually we will use a Session Bean16. We make a single call to the session bean, and then the session bean can talk to the EJB’s on the server.
getCustomerDetails
Customer EJB
Web Page (JSP)
listCustomers
ListCustomers Session EJB
getPODetails
Purchase Order EJB
Client
Application Server
Figure 34 - The new design, using a Session Façade This has cut down the network “chattiness” and will certainly have a significant effect on performance, especially when we scale the system up, or if the List Customers process becomes more complex with more calls. This is just one design pattern (it happens to be the most common and one of the most important), but as we have mentioned, there are plenty more and investing time and effort into learning them will pay off in spades for your project.
EJB Summary
• • • • • Enterprise Java Beans are similar to standard Java Classes They automatically provide Distribution, Transactions, Multithreading and Persistence There are three different types of EJB – Session, Entity and Message-Driven Entity Beans are the persistent type of Bean and have been quite slow to gain acceptance Use recognised “Design Patterns” when building your applications
16
This makes sense – recall we said that Session Beans usually do something rather than holding onto data
© 2002 Ariadne Training Limited
61
An Introduction to Java and J2EE
© 2002 Ariadne Training Limited
62
An Introduction to Java and J2EE
Chapter 8
The Javadoc Tool
Before we finish, we are going to take the opportunity to introduce the (sometimes neglected) javadoc tool. This remarkable tool, provided for free as part of the Java toolset, automatically generates technical documentation for your project’s code. The documentation follows exactly the same format as the Java Help that we saw in “Java Help” (page 12) – and the files you generate integrates closely with the standard Java help. This is a tremendous benefit to your programmers and designers, and is usually worth the added development effort involved.
How the Tool Works
The only requirement is that your programmers include a special kind of comment in their code, called javadoc comments. These comments are almost the same as standard comments (just a slightly different syntax). In general, everything should be given a javadoc comment, such as the classes themselves, each item of data in the class, each function in the class – as much as is required to understand and maintain the class.
/** This class holds information about an /** This class holds information about an * Employee in the company * Employee in the company */ */ public class Employee public class Employee { { protected int salary; protected int salary; private String employeeCode; private String employeeCode; private private Calendar dateOfJoining; Calendar dateOfJoining; … … } }
Figure 35 - A Java class with a javadoc comment Figure 35 shows an Employee class with a javadoc comment (the rest of class also needs to be commented – we’ve omitted this for space). The javadoc is distinctive because of the “/**” symbol.
© 2002 Ariadne Training Limited
63
An Introduction to Java and J2EE
Running the Tool
When the tool is run, documentation is generated for the classes you wish to be documented (you can document your entire codebase in one single run if required). The output is a collection of indexed HTML files, in exactly the same format as the standard Java Help files (in fact, if you haven’t already guessed, javadoc is how the standard help is produced!). These files can now be browsed by your project – they can also be picked up by your Java development IDE if you are using one (eg – pressing F1 on the Employee class will bring up the Employee help file!).
Figure 36 - Javadoc help for the Employee class
Javadoc Demonstration
We will now demonstrate the generation of javadoc help for a non-trivial project (more than 100 classes).
Summary
• • • • The Javadoc is an inspired addition to the Java toolset Projects that use it report higher productivity and a more maintainable product Clearly, to get value from the tool, an investment of time has to be made with putting the correct comments in the code A drawback is that is does rather obscure the code a little! © 2002 Ariadne Training Limited
64
An Introduction to Java and J2EE
© 2002 Ariadne Training Limited
65
An Introduction to Java and J2EE
Chapter 9
Course Summary
We have explored the principle features of Java, from the “standard features” (GUI’s, Threading, JDBC, etc) through to the more recent and advanced “enterprise features” provided by J2EE. It is important to be proficient in standard Java before diving into the enterprise side of Java – but once programmers reach a reasonable standard of ability in Java, many people report it is a well designed and “Engineer Friendly” language. Certainly, reports from the field suggest that Java projects tend to turn out code faster and more reliably than those working on C++ and similar languages. As always, the IT world is changing quickly; who knows how long Java will continue to hold such an enviable position? C# and .NET are certainly going to be major competitors in the near future…
Java Problems
After dozens of pages extolling Java’s virtues, what are the problems with Java? One problem (which could be considered a benefit by many) is that Java is still a cutting edge language and is riding on a wave of technologies. Check the java.sun.com website, and you will find a raft of acronyms for lots of very fancy ideas (and often it is impossible to tell what these ideas actually are – often the technical enthusiasm gets in the way of human understandable descriptions of the ideas!) Many of these technologies get trumpeted as “the next big thing”; most of them you will never hear of again. It can all be a bit overwhelming, especially when all you want is to get your product delivered on time and on budget. On the subject of Object Orientation, although it is a standard and accepted method of analysing, designing and coding software, there is still a lot of misunderstanding, suspicion and confusion about OO. J2EE is an incredibly difficult topic to get to grips with. Most of the books currently available on the topic were written by the people behind the development of J2EE itself – and in many cases these books prove to be too complex and involved to gain understanding from. At last, “Teach Yourself” books are beginning to emerge, so perhaps at last J2EE will be easier to grasp for most programmers. (see reference [5] and [6]). No matter how complex J2EE is, a grasp of the “Design Patterns” (see page 58) is essential to the success of any J2EE development. Finally, as we have touched on in many areas of this book, .NET is a serious new technology and is likely to change many things. A common view is that both .NET
© 2002 Ariadne Training Limited
66
An Introduction to Java and J2EE
and J2EE will continue to carve out their own niches, and a kind of “stalemate” will eventually occur between the two. Who would dare to predict the outcome?
© 2002 Ariadne Training Limited
67
An Introduction to Java and J2EE
Bibliography
[1] : Knowles, Richard; Dundas, Kenny; Golledge John. 2001 UML Applied : Object Oriented Analysis and Design Ariadne Training
A cracking guide to the UML and Object Orientation, but we would say that. Available for download at.
[2] : Hortstmann/Cornell. Core Java : Volume 1 Fundamentals Sun Microsystems/Prentice Hall
An excellent tutorial covering a wide range of Java topics. Excellent examples and very well explained. The only drawback is a second volume is required to cover many of the advanced topics
[3] : Hortstmann/Cornell. Core Java : Volume 2 Advanced Features Sun Microsystems/Prentice Hall
Another excellent volume, covering Threading, Collections and JDBC; in addition it covers Networking, Remote Objects, JavaBeans, Security, Java 2D, Internationalization, Native Methods and XML. It also goes into more detail on Swing.
[4] : Marinescu, Floyd. EJB Design Patterns John Wiley & Sons
This is superb and readable book; an excellent treatment of this very difficult topic. A downloadable version is available at - but we recommend you buy it!
[5] : Deitel and Deitel. Advanced Java 2 Platform : How to Program Prentice Hall
A very comprehensive and programmer friendly guide to advanced Java (such as Java 2D and 3D), but more significantly covers J2EE in astonishing detail, and avoiding the complex and unnecessary depth the appears in most J2EE books. They describe how to build full J2EE applications, and even cover J2ME.
[6] : Bond, Martin. Teach Yourself J2EE in 21 Days Sams
We haven’t yet had a chance to review this book, but it is pleasing that at last programmer friendly books in the “Teach Yourself” vein are finally appearing for J2EE. As always, 21 days might be a bit optimistic though!
[7] : Collins, Tony. 1998 Crash : Learning from the World’s Worst Computer Disasters Simon&Schuster
An entertaining collection of case studies exploring why so many software development projects fail; referenced on this course to prove that C++ programs can occasionally crash
[8] : Bayern, Shaun. 2002 JSTL In Action Manning Publications Co.
A guide to the new JSTL
© 2002 Ariadne Training Limited | https://www.scribd.com/doc/6942185/java-j2ee-introCourse-Book-v2 | CC-MAIN-2018-17 | refinedweb | 18,429 | 55.47 |
miff man page
MIFF — Magick Image File Format
Synopsis
#include <image.h>
Description
The keys describing the image in text form. The next section is the binary image data. The header is separated from the image data by a : character immediately followed by a newline.
The MIFF header is composed entirely of LATIN key=value combinations that may be found in a MIFF file:
- background-color=color
border-color=color matte-color=color these optional keys reflects the image background, border, and matte colors respectively. A color can be a name (e.g. white) or a hex value (e.g. #ccc).
- class=DirectClass
class=PseudoClass the type of binary image data stored in the MIFF file. If this key is not present, DirectClass image data is assumed.
- colors=value
the number of colors in a DirectClass image. For a PseudoClass image, this key specifies the size of the colormap. If this key is not present in the header, and the image is PseudoClass, a linear 256 color grayscale colormap is used with the image data. The maximum number of colormap entries is 65535. colorspace=CMYK the colorspace of the pixel data. The default is RGB.
- columns=value
the width of the image in pixels. This is a required key and has no default.
- compression=BZip
compression=Fax compression=JPEG compression=LZW compression=RLE compression=Zip the type of algorithm used to compress the image data. If this key is not present, the image data is assumed to be uncompressed.
- delay <1/100ths of a second>
the interframe delay in an image sequence. The maximum delay is 65535.
- depth=8
depth=16 the depth of a single color value representing values from 0 to 255 (depth 8) or 65535 (depth 16). If this key is absent, a depth of 8 is assumed.
- dispose value
GIF disposal method.
Here are the valid methods:={value}
defines a short title or caption for the image. If any whitespace appears in the label, it must be enclosed within braces.
- matte=True
matte=False specifies whether a DirectClass image has matte data. Matte data is generally useful for image compositing. This key has no meaning for pseudo-color images.
- montage=<width>x<height>{+-}<x offset>{+-}<y offset>
size and location of the individual tiles of a composite image. See X(1) for details about the geometry specification..
- colorspace=RGB
- is a required key and has no default.
- scene=value
the sequence number for this MIFF image file. This optional key is used
compression=RunlengthEncoded packets=27601
columns=1280 rows=1024
signature=d79e1c308aa5bbcdeea8ed63df412da9
<FF>
:.
Next comes the binary image data itself. How the image data is formatted depends upon the class of the image as specified (or not specified) by the value of the class key in the header.
DirectClass images (class=DirectClass) are continuous-tone, images stored as RGB (red, green, blue), RGBA (red, green, blue, alpha), or CMYK (cyan, yellow, magenta, black) intensity values as defined by the colorspace key. Each intensity value is one byte in length for images of depth 8 (0..255), whereas, images of depth 16 (0..65535) require two bytes in most significant byte first order.
PseudoClass images (class=PseudoClass) data in a MIFF file may be uncompressed, runlength encoded, Zip compressed, or BZip compressed. The compression key in the header defines how the image data is compressed. Uncompressed pixels are just stored one scanline at a time in row order. Runlength encoded compression counts runs of identical adjacent pixels and stores the pixels followed by a length byte (the number of identical pixels minus 1). Zip and BZip compression compresses each row of an image and preceeds the compressed row with the length of compressed pixel bytes as a word in most significant byte first order.
MIFF files may contain more than one image. Simply concatenate each individual image (composed of a header and image data) into one file.
See Also
display(1), animate(1), import(1), montage(1), mogrify(1), convert(1), more(1), compress(1)
Copyright (C) 2000.
Authors
John Cristy, ImageMagick Studio
Referenced By
ImageMagick(1). | https://www.mankier.com/4/miff | CC-MAIN-2017-17 | refinedweb | 680 | 58.48 |
CodePlexProject Hosting for Open Source Software
Hello,
I think this may be a silly question, but I have been unable to find an answer anywhere...
What are the steps to compile SharpMap 0.9 from source, specifically the latest Trunk (Change Set 95837) or Tagged version “v0.9\2011-11-13”?
I am trying to get to a point where I can run the WinFormSamples project contained within the source.
I have been able to run the WinFormSamples from source for Tagged version “v0.9\2011-08-03” OK, but I have another problem with that version which I am hoping a later version will fix.
The tagged version (\v0.9\2011-11-13) and the latest Truck (Change Set 95837) I am unable to compile fully.
When trying to compile the Trunk in the latest Change Set (95837) I run into the following problems:
Error 8 The type or namespace name 'PrecisionModel' could not be found (are you missing a using directive or an assembly reference?) ... \Tags\v0.9\2011-11-13\SharpMap.Extensions\Data\Providers\NtsProvider.cs
Line 109 Column 40 SharpMap.Extensions.VS2010
Should I be using NuGet somehow? If so how? Or is there something else I should be doing?
Any help would be much appreciated.
You need to get nuget working somehow, since it pulls the unresolved references you are getting.
These steps might help:
Maybe nuget is blocked as it was received from an internet download?
Hth FObermaier
Thanks for the quick reply.
It turns out there is a known problem with NuGet when used with VS2010 Premium (the edition I am using). During NuGet install it thinks VS2010 is Ultimate edition and does not install correctly. There is a workaround so I now have NuGet installed correctly
(I think).
Additionally I am working behind a web proxy which require authentication. It appears NuGet may have some problems with that.
I can see the build output from the NuGet.exe and the paths look correct, but still not working correctly (yet).
I will try building the source from home tonight using my home (non-proxy) internet connection to eliminate any possible proxy issues and will report back the results of your suggestions.
imho you do not need to install nuget visual studio extension.
The nuget command line tool is within the solution folder. It seems to be a plain proxy issue.
FObermaier
It appears our proxy is the reason I was unable to compile. I have just successfully compiled using a system NOT connected to a network which uses a proxy.
Also, I luckily happened to read you note about not needing to install NuGet just before I was about to install it in my test environment and indeed it does not need to be installed in order to successfuly compile!
Thanks for you help.
Some follow up information...
It turns out that the version of the NuGet.exe which is included in the SharpMap source (change set 95837) is version 1.5.20830.9001. This version does have problems with web proxy server that require authentication (i.e. it partially works, but not enough
to compile SharpMap).
The current version of NuGet.exe (1.6.21205.9031) works perfectly!
I have confirmed this by updating the NuGet.exe in my environment and was able to compile the source without any problem.
I suggest that it might be a good idea to update the version of NuGet.exe which is include with SharpMap in the various BuildTools directories to the latest version so that other people don't have this problem.
Thanks for the information, I fixed it
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://sharpmap.codeplex.com/discussions/295107 | CC-MAIN-2016-40 | refinedweb | 640 | 75.4 |
Currently mailman3 servers are running on EL7 and are 'stuck' with a version of python which is not getting any support. Code updates to mailman3 require newer versions of python so we need to either move to EL8 or latest Fedora.
Meh..
Metadata Update from @smooge:
- Issue assigned to smooge
Metadata Update from @smooge:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: lists
Metadata Update from @smooge:
- Issue tagged with: backlog
@ngompa you had started looking at the packaging for EL8 no?
Yes, I've started working on this for almost all of our software. Mailman is already in Fedora, but I don't know where our HyperKitty and Postorius packages are...
@ngompa out of curiosity, have you had a chance to look where it's at now?
I've started looking. I think I found this:
Is this what we're using? Or is it somewhere else?
@abompard d you know? :)
Taking info from associate ticket
present deployed seems to be:
Postorius Documentation • GNU Mailman • Postorius Version 1.1.2
upstream has released up to 1.2.3
Describe what you need us to do:
Update to a later version, as there is a an 'across panels' leak of data when two mailing lists are open and an update occurs in one ... an async status message is displayed in all panels
When do you need this? (YYYY/MM/DD)
no deadline -- nice to have
When is this no longer needed or useful? (YYYY/MM/DD)
no expiration date
If we cannot complete your request, what is the impact?
(From #8629 ) GitHub recently sent us this message:
Hi @FedoraInfra,
On February 6th, 2020 at 00:07 .
It's likely that the library that does social authentication in HyperKitty needs to be upgraded to use more recent APIs.
I'm working on updating the packages. Hopefully I'll have some progress in the coming days.
Metadata Update from @smooge:
- Assignee reset
Metadata Update from @cverna:
- Issue untagged with: backlog
- Issue tagged with: high-trouble, medium-gain
Metadata Update from @nphilipp:
- Issue assigned to nphilipp
I'll track packages and their status here:
As discussed on IRC, building current versions of the missing packages in COPR (and Koji as the case may be). Tracking status here.
Documenting caveats (differences to the old deployment etc.) here.
Documenting caveats (differences to the old deployment etc.) here.
I believe we have a way of shipping alternative versions of pytest... @churchyard might know something here.
But we could also simply just not run the tests on EPEL8. Not sure if we want to do that, given how much modifications we're doing...
I've been summoned. Reading the ticket now...
You can create a "compat" package with pytest4 in EPEL. People will love you for that. I can help.
Or you can use the Python 3.8 module for newer pytest, but that is another can of worms, because you don't have other deps.
Or you can figure out why does pytest-django actually need >=3.6 and have that fixed.
>=3.6
Let me know which way you'd like to proceed.
@churchyard In order of preference:
I don't think using the py38 module is realistic because we'd need to rebuild everything and adjust the packaging for it and deal with module building...
Package pytest-django 3.2.1?
Alternatively, revert this (but there might be more adjustments needed)
Personally, I'd prefer a somewhat current pytest4 compat package over packaging version 3.2.x of pytest-django (which seems to be incompatible with newer pytest versions, i.e. potentially obstruct the way forward). I concur with staying away from a Python 3.8 based stack if we can avoid it (though it would intuitively mesh well with "here be current versions").
@churchyard, how would I go about such a compat package? If I'm not off track, it wouldn't be parallel-installable to the existing version, so wouldn't need special treatment in terms of file conflicts etc. Would it simply get a namespaced (S)RPM name, e.g. pytest4, python3-pytest4 with the proper Python provides so consumers like pytest-django could then do s.th. like BuildRequire: %{py3_dist pytest} >= 3.6 to get the compat package?
pytest4
python3-pytest4
BuildRequire: %{py3_dist pytest} >= 3.6
Pretty much what you say. I can help. In fact, if we get the python-pytest4 component, we might end up reusing it as a "lower" version compat package in Fedora, because stuff is incompatible with pytest 5. I just don't want to update to pytest 5 before the 3.9 bootstrap to avoid having to bootstrap both of them. I can submit the package for review and we can build it first in epel8 only, and after the 3.9 bootstrap, also build it in Fedora when updating pytest to 5.
Sounds good?
Sounds good?
I'm game, sounds good.
I'd need comaintainers for the epel8 branch.
Count me in.
Feel free to add @infra-sig. Since we need it for infra apps, it's just better to make sure everyone can maintain it.
@infra-sig
OK, after some digging, this won't work:
Sorry.
Feel free to add @infra-sig. Since we need it for infra apps, it's just better to make sure everyone can maintain it.
👍
@nphilipp I think we're going to have to go with trying to get things to work with older pytest on EPEL8. The other strategies seem basically unworkable.
@duck let's coordinate reviewing your (OSPO's) more up to date set of packages here. As a reference for others, here's your COPR and associated repo which I understand to be preliminary work for the "reviewing pipeline":
Can you add me to the respective groups so I can try my luck at updating & building the packages? My account on GitLab is @nilsph. Not sure if it's strictly needed, I could just put my changes into another repo but I guess it'd be good to have everything in one place.
@nilsph
@duck My name is @Conan_Kudo there as well...
@Conan_Kudo
I added you to the GL project, please only make changes to the api-3.1 branch (but you can make your own), so that it does not affect our production.
api-3.1
As for Copr, if you wish to be able to trigger builds could you make a request to be a member? (I cannot add people in the UI, it works the other way around only).
@duck I guess I have to be a member of the pizza-cats FAS group to work with the COPR repo, but according to FAS, the group's invite only. As an admin of the group, the group page should show you an "Add User" field where you can put in my FAS user name (nphilipp).
pizza-cats
nphilipp
The pizza-cats group is for the OSPO Comminfra Team, to manage all our resources (even if right now I did not migrate them all yet), so I cannot use this group.
But you can make a request for a Copr Project and I will accept it. Once loggued in if you go to the project's Settings you can check for Request Permissions.
Settings
Request Permissions
The Other way around is to create a new FAS group for the Mailman maintenance that we could use (I suppose) in src.fedoraproject.org too. It would be nice to have proper group maintenance on all resources.
So I finally got mailman3 to build fine again. Interestingly Click decided to uselessly change its quoting it broke the tests and I had to backport a recent version for EL8. So now it's fine on both rawhide and EL8. I'm going to create a PR on the official package.
python-mailman-client requires vcrpy and pytest-services for its test suite so that's my next steps.
The pizza-cats group is for the OSPO Comminfra Team, to manage all our resources (even if right now I did not migrate them all yet), so I cannot use this group.
Understood.
But you can make a request for a Copr Project and I will accept it. Once loggued in if you go to the project's Settings you can check for Request Permissions.
Ah, wasn't aware of that. I've requested build permissions there.
The Other way around is to create a new FAS group for the Mailman maintenance that we could use (I suppose) in src.fedoraproject.org too. It would be nice to have proper group maintenance on all resources.
I think for taking care of stuff in koji, bodhi, Bugzilla, it's fine if we just use the existing groups, i.e. infra-sig on our end (I assume you have something comparable on the OSPO side).
infra-sig
I think for taking care of stuff in koji, bodhi, Bugzilla, it's fine if we just use the existing groups, i.e. infra-sig on our end (I assume you have something comparable on the OSPO side).
That's the goal of the newly created pizza-cats group, but limited to our custom stuff needed for our communities.
Should I join the infra-sig group?
Currently the mailman3 package and possibly some deps (I don't think he ever uploaded hyperkitty and postorius) are owned by abompard, and it would be nice to get a change of ownership since he won't be involved anymore. I guess the Python SIG is taking care of some too, and some deps might not be useful anymore, so some cleanup might be needed.
I hit a build issue and reported a bug to Copr:
No reply yet, but that's quite annoying (or should I say blocking).
If you have any idea, please tell me or comment on the ticket.
Could someone review the mailman3 PR in the meantime?
Generic classes are classes which take a type as a parameter. They are particularly useful for collection classes.
Generic classes take a type as a parameter within square brackets []. One convention is to use the letter A as the type parameter identifier, though any parameter name may be used.
class Stack[A] {
  private var elements: List[A] = Nil
  def push(x: A) { elements = x :: elements }
  def peek: A = elements.head
  def pop(): A = {
    val currentTop = peek
    elements = elements.tail
    currentTop
  }
}
This implementation of a Stack class takes any type A as a parameter. This means the underlying list, var elements: List[A] = Nil, can only store elements of type A. The procedure def push only accepts objects of type A (note: elements = x :: elements reassigns elements to a new list created by prepending x to the current elements).
To use a generic class, put the type in the square brackets in place of A.
val stack = new Stack[Int]
stack.push(1)
stack.push(2)
println(stack.pop) // prints 2
println(stack.pop) // prints 1
The instance stack can only take Ints. However, if the type argument had subtypes, those could be passed in:
class Fruit
class Apple extends Fruit
class Banana extends Fruit

val stack = new Stack[Fruit]
val apple = new Apple
val banana = new Banana

stack.push(apple)
stack.push(banana)
Class Apple and Banana both extend Fruit so we can push instances apple and banana onto the stack of Fruit.
Note: Stack[A] is only a subtype of Stack[B] if and only if B = A. Since this can be quite restrictive, Scala offers a type parameter annotation mechanism to control the subtyping behavior of generic types.
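Concretely (a sketch, not part of the tour's code): the compiler rejects treating a Stack[Apple] as a Stack[Fruit], because if it allowed that, you could push a Banana into a stack that is supposed to hold only Apples:

```scala
val appleStack = new Stack[Apple]
// val fruitStack: Stack[Fruit] = appleStack  // type mismatch: does not compile
// fruitStack.push(new Banana)                // ...which would corrupt appleStack
```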
Get started with Google VR to write immersive apps using Processing.
The Google VR platform lets you use your smartphone as a quick entry point to Virtual Reality. For Daydream, you need the View headset and a Daydream-compatible phone.
If you are using version 4.0-beta3 or newer of the Android mode, all additional packages needed to build VR apps are bundled into the mode, so there are no additional requirements. If you happen to be using an older beta release of the mode, then you will need to install the processing-cardboard library separately. It provides specialized stereo and monoscopic renderers that work with the phone sensors to automatically update Processing's camera. This library is not available in the Contributions Manager, meaning that you need to install it manually. To do so, first download the latest library zip package from here, uncompress it and copy the cardboard folder into the libraries subfolder inside your sketchbook folder, and then restart Processing.
You need to select the VR option in the Android menu to make sure that your sketch is built as a VR app:
A basic sketch for VR needs to include the library import, and the STEREO renderer:
import processing.vr.*;

void setup() {
  fullScreen(STEREO);
}

void draw() {
}
With this code, you should get an empty stereo view on your phone:
The stereo renderer works by drawing each frame twice, once for each eye. You can know what eye is being drawn in each call of the draw() function using the eyeType variable in the VR renderer:
import processing.vr.*;

void setup() {
  fullScreen(STEREO);
}

void draw() {
  PGraphicsVR pvr = (PGraphicsVR)g;
  if (pvr.eyeType == PVR.LEFT) {
    background(200, 50, 50);
  } else if (pvr.eyeType == PVR.RIGHT) {
    background(50, 50, 200);
  } else if (pvr.eyeType == PVR.MONOCULAR) {
    background(50, 200, 50);
  }
}
You can notice that eyeType could also be MONOCULAR, which happens if you use the MONO renderer instead of STEREO. With the monoscopic renderer, the frame is drawn just once, but the camera still responds to the movements of the phone.
There is nothing special about adding 3D objects to the scene: you simply use the same functions for drawing 3D primitives and shapes as with the P3D renderer. You can also add textures, lights, and shader-based effects, as in the following sketch (full code available here):
import processing.vr.*;

PShader toon;
boolean shaderEnabled = true;

void setup() {
  fullScreen(STEREO);
  noStroke();
  fill(204);
  toon = loadShader("ToonFrag.glsl", "ToonVert.glsl");
}

void draw() {
  if (shaderEnabled == true) shader(toon);
  translate(width/2, height/2);
  background(80);
  directionalLight(204, 204, 204, 1, 1, -1);
  sphere(150);
}

void mousePressed() {
  if (shaderEnabled) {
    shaderEnabled = false;
    resetShader();
  } else {
    shaderEnabled = true;
  }
}
Notice the use of translate(width/2, height/2) to make sure that the scene is properly centered in front of our eyes. This is needed because by default Processing sets the origin to be the upper left corner of the screen, which is convenient for 2D drawing, but not so much in VR. The output should look like this:
In this example, we will create a 3D scene with a few more objects. Let's start by defining a 2D grid as a reference.
Because performance is very important in VR to keep the framerate as high as possible and avoid simulation sickness in the user, we could rely on PShape objects to store static geometry and so avoid re-generating it in every frame:
import processing.vr.*;

PShape grid;

void setup() {
  fullScreen(STEREO);
  // ... create the grid shape here ...
}

void draw() {
  background(0);
  translate(width/2, height/2);
  shape(grid);
}
Note how the y coordinate of the grid vertices is +1000, since the Y axis in Processing points down, so objects under our line of vision in VR should have positive coordinates.
Now we can add some objects! In order to optimize performance, we can group several 3D shapes inside a single PShape group, which is faster than handling each object separately, like so:
import processing.vr.*;

PShape grid;
PShape cubes;

void setup() {
  fullScreen(STEREO);
  // ... create the grid shape as before ...
  cubes = createShape(GROUP);
  for (int i = 0; i < 100; i++) {
    float x = random(-1000, +1000);
    float y = random(-1000, +1000);
    float z = random(-1000, +1000);
    float r = random(50, 150);
    PShape cube = createShape(BOX, r, r, r);
    cube.setStroke(false);
    cube.setFill(color(180));
    cube.translate(x, y, z);
    cubes.addChild(cube);
  }
}

void draw() {
  background(0);
  lights();
  translate(width/2, height/2);
  shape(cubes);
  shape(grid);
}
The final result should look like this, depending on your viewpoint:
Continuing the series! Feel welcome to dip in and weigh in on a past question.
Let's say I've never used C++ before. Can anyone give the run down of what the language does and why you prefer it? Feel free to touch on drawbacks as well.
Discussion (30)
I will list the benefits of modern C++ because old C++ kinda sucks
There are many more but these are my favorites
Oh, well... May I change the question?
It has been at least 20 years since I last coded in C++. Maybe something has changed; nevertheless, let me summarize my memories about it in two short phrases
I still remember the somersault with twist that I had to do to define a friend function of a template class...
It has been a long time, but some of the things that I remember that I did not like:
- #include physically (rather than logically) embeds the included file. As a consequence, you must use the ugly #ifdef INCLUDED hack.
- If you use namespace in a header file, you pollute the namespace of whoever includes you.
Yeah, quite a lot. I respectfully submit that you learn modern C++ before commenting — unless you, for example, think comments about life in Italy under Mussolini's rule are still valid. Maybe something has changed in Italy since 1943?
Well, it was pretty clear that my remarks were about the old version.
Although I have some difficulty imagining how many of the defects I remember can be solved in new releases. Usually languages add features with time, and drawbacks due to issues in the original design (in this case, inheritance from C, obscure template syntax, automatic casting, a C-like type system, ...) are difficult to solve because of the need for backward compatibility.
Anyway, now I am curious and maybe I'll give a look to the new version, although I do not anticipate I will move to it soon.
"You must use the ugly #ifdef INCLUDED hack."
For this you have the non-standard "#pragma once", supported by most modern compilers.
And really modern compilers don't even need that.
I have a love-hate relationship with C++.
There's many things I could say about why I hate C++, but that's not what you are asking for.
One of my mentor who taught me C++ during my school days, used to say:
Other than that, It has been 10 years I last wrote C++ during my college days, Few years ago, I was watching a youtube video on C++ and didn't know about
autotype, and many other functionalities.
I am not qualified enough to pitch for C++ but I am subscribed to following channels to keep me updated on the new features of C++ programming languages. So for anyone interested:
Horrible. That's like saying you're a bad builder because you're using machines for building a flat instead of putting bricks on top of each other.
No. He was not saying that. He was saying: you are not a bad builder because you are using machines; you are a bad builder if you are using machines without understanding the basic concepts, like how to put bricks by hand. Because sometimes, to get something done properly, you need to use your hands.
He was a great teacher. And often used these types of jokes to keep us interested in Maths and programming.
First of all, we were in middle school (classes 5-7) in the mid-90s in India. He wasn't insulting us or the languages. This joke kept us focused on practical stuff, not just what's in the syllabus. We wanted to learn the basics of programming instead of being satisfied with just the examples at the back of our books. And our syllabus didn't cover a lot of concepts.
And conceptually we weren't wrong: memory management in C++ is relatively easier than in C, but the amount of control that C provided meant that your skills could directly impact the quality of the resulting code.
That's okay, you should know the basics before you start using more advanced tools.
That's right, but on the other hand, C++ is much more complex language considering it has templates and other advanced constructs compared to C.
It is an old language that is still very popular. This means 2 things:
Best language for performance, low-level programming, memory management, constraints on object-oriented programming, and a good standard library with so many functions.
Obviously there is a minimum requirement... some years of experience and a good knowledge of the language; otherwise there are a lot of ways to do worse than in other languages.
C++ has a rich past and a bright future.
Its past 40 years showed its value when it comes to writing low-level performance-critical code.
Its past 10 years also showed that it's not an old language staying with us for legacy reasons. The industry receives a new version every three years based on the rigorous processes of the standardization committee. These versions brought us things such as smart pointers or concepts that help us write similarly performant but easier to read, easier to write code as before.
The predictable schedules, the new features already in the pipeline and the rich ecosystem guarantee that C++ will stay with us for a long time not only as a must-use but as a want-to-use language.
If you like C#, imagine if the compiler automatically inserted "using" for you. You never needed to remember to write it. C++ does that.
Lots of C++ features are about letting developer hide complexity (almost always from the caller). When used correctly, the higher level ("business logic") code can use instances of types and safely assume that they are doing the right thing. You can then compose those solutions to build up the actual solution that you're trying to do, relying upon each component to do the right thing. (This is not possible in garbage collected languages because you never know when or if the garbage collector will run.)
For a concrete example, imagine that you are writing a program that records your screen while a window is up.
In other languages, you will need to remember to stop the recording, otherwise you will corrupt the file. There may be several ways that the window can close, so you will need to make sure that function is called in every possible control flow.
In C++: Make the handle to the video recorder a field of the window (NOT the memory address of the handle, but the actual handle itself). When the window gets destroyed, it will destroy all fields. If the person who wrote the video recorder made sure to stop any pending recordings when its handle is destroyed, then the compiler will automatically call stop. When the window instance is destroyed, the video recorder has no-where to live, so it is destroyed, and its destruction process safely stops the recording.
The caller didn't need to think about this, even if a (normal) error is encountered.
(Normal error meaning that your program is still in control of execution. It won't help you if the OS forcibly stops your execution, or you encounter a power failure, etc. Basically, if a "finally" would have helped you in another language, then you can rely upon C++ just doing the right thing without the caller needing to remember anything.)
I've never written C++ so I'll be following this thread with excitement!
Hit the comment subscribe button if you haven't! Not everybody knows how that works, but it's great for threads you care about.
Today I Learned. Thanks, Ben!
Being able to write cross-platform libraries that you can share almost everywhere - mobile, backend services, various desktop OSes.
IMHO, that is not completely true, not to say absolutely untrue.
With electron, react-native, react, and node, a js library might have further reach.
And what does that have to do with C++ being "able to write cross-platform libraries that you can share almost everywhere..."? Each version of each library will need its own different C++ code base, even more if they use UI, which is completely different and incompatible between platforms.
I just remember that one company that I worked at had an engine that would compute locations for items, and they wanted to share that engine between desktop apps and mobile apps. There was no UI in the engine. So, they ended up implementing it in a subset of c++ to be able to use the same source code for the engine in those disparate locations. I wasn't directly involved.
I was wondering if they would use js today.
Problem from my side then
It's the oldest language in this series by a good margin. In a way, it fits in more places and teaches you more about certain programming fundamentals than more novel languages.
You can build iOS, Android and desktop (Mac, Linux, and Windows) with one code base and framework. This includes the UI!
I mainly use C++ for competitive coding or DSA practice. Its huge STL library support is what drives me to explore this language more and more.
The link is invalid
Although this seems like a trivial question, I am quite sure it is not :)
I need to validate names and surnames of people from all over the world. How can I do that with a regular expression? If it were only English ones I think that this would cut it:
^[a-z -']+$
I'll try to give a proper answer myself:
The only punctuation marks that should be allowed in a name are full stop, apostrophe and hyphen. I haven't seen any other case in the list of corner cases.
Regarding numbers, there's only one case with an 8. I think I can safely disallow that.
Regarding letters, any letter is valid.
I also want to include space.
This would sum up to this regex:
^[\p{L} \.'\-]+$
This presents one problem, i.e. the apostrophe can be used as an attack vector. It should be encoded.
So the validation code should be something like this (untested):
var name = nameParam.Trim();
if (!Regex.IsMatch(name, @"^[\p{L} \.'\-]+$"))
    throw new ArgumentException("nameParam");
name = name.Replace("'", "&#39;"); // &apos; does not work in IE
Can anyone think of a reason why a name should not pass this test or a XSS or SQL Injection that could pass?
complete tested solution
using System;
using System.Text.RegularExpressions;

namespace test
{
    class MainClass
    {
        public static void Main(string[] args)
        {
            var names = new string[]{ "Hello World", "John", "João", "タロウ", "やまだ",
                "山田", "先生", "мыхаыл", "Θεοκλεια", "आकाङ्क्षा", "علاء الدين",
                "אַבְרָהָם", "മലയാളം", "상", "D'Addario", "John-Doe", "P.A.M.",
                "' --", "<xss>", "\"" };

            foreach (var nameParam in names)
            {
                Console.Write(nameParam + " ");
                var name = nameParam.Trim();
                if (!Regex.IsMatch(name, @"^[\p{L}\p{M}' \.\-]+$"))
                {
                    Console.WriteLine("fail");
                    continue;
                }
                name = name.Replace("'", "&#39;");
                Console.WriteLine(name);
            }
        }
    }
}
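If you want to experiment with the same idea outside .NET, here is a rough Python sketch (an approximation: Python's stdlib re has no \p{L}/\p{M}, so [^\W\d_] stands in for "any Unicode letter" and combining marks are not covered; the third-party regex module would be needed for full fidelity):

```python
import re

# [^\W\d_] = word character minus digits and underscore, i.e. a Unicode
# letter under Python 3's re.  Apostrophe, space, dot and hyphen are the
# only punctuation allowed, mirroring the pattern above.
NAME_RE = re.compile(r"^(?:[^\W\d_]|[' .\-])+$")

def is_valid_name(name: str) -> bool:
    name = name.strip()
    return bool(name) and NAME_RE.fullmatch(name) is not None

# names with letters, apostrophes, hyphens and dots pass
for ok in ["João", "D'Addario", "John-Doe", "山田", "P.A.M."]:
    assert is_valid_name(ok), ok

# markup, quotes and digits are rejected
for bad in ["<xss>", '"', "Name8", ""]:
    assert not is_valid_name(bad), bad

print("all checks passed")
```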
Unity3D: Non-rectangular GUI buttons – Part 3
Posted by Dimitri | Jan 17th, 2011 | Filed under Featured, Programming
This is the last post of this series, which explains how to create non-rectangular GUI buttons for Unity3D games. If you haven’t read any of the other posts yet, it is highly recommended that you take a look at part one and part two before going any further.
So, let’s start from where the second post left: we already have our Unity3D scene set up with the 3D hit test model already imported and placed inside the scene. All we have to do now is import the PNG images and make then appear as a GUI element. To do that, just drag and drop then somewhere inside the Project tab. After the images are inside Unity3D, create a GUISkin by right clicking an empty space in this same tab and select Create->GUISkin as shown:
Click on \’Create\’ and then on \’GUISkin\’
Name this GUISkin IrregularShapeSkin. The next step is to add the images we just imported to the recently created GUISkin. To do this, each state of the button, such as normal, hover and down, must be assigned to a different Custom Styles slot. To do that, expand the Custom Styles tree and set the size to 3, or whichever number of buttons or button states you currently have. Then, just drag and drop the images from the Project tab to the Normal->Background of each slot:
This image shows how to assign an image to a Custom Style slot.
Finally, let’s see the code that makes the non-rectangular buttons work. The code is much simpler than the scene’s setup we’ve being preparing all these posts. It must be attached to the Main Camera. Here it is:
using UnityEngine;
using System.Collections;

public class CustomShapeGUI : MonoBehaviour
{
    //a variable to store the GUISkin
    public GUISkin guiskin;
    //a variable to store the GUI camera
    public Camera cShapeGUIcamera;
    //a variable that is used to check if the mouse is over the button
    private bool isHovering = false;
    //a variable that is used to check if the mouse has been clicked
    private bool isDown = false;
    //a ray to be cast
    private Ray ray;
    //create a RaycastHit to query information about the collisions
    private RaycastHit hit;

    void Update()
    {
        //cast a ray based on the mouse position on the screen
        ray = cShapeGUIcamera.ScreenPointToRay(Input.mousePosition);

        //Check for raycast collisions with anything
        if (Physics.Raycast(ray, out hit, 10))
        {
            //if the name of what we have collided is "irregular_shape"
            if (hit.transform.name == "irregular_shape")
            {
                //set collided variable to true
                isHovering = true;

                //if the mouse button has been pressed while the cursor was over the button
                if (Input.GetButton("Fire1"))
                {
                    //if clicked, mouse button is down
                    isDown = true;
                }
                else
                {
                    //the mouse button has been released
                    isDown = false;
                }
            }
        }
        else //ray is not colliding
        {
            //set collided to false
            isHovering = false;
        }
    }

    void OnGUI()
    {
        //if mouse cursor is not inside the button area
        if (!isHovering)
        {
            //draws the normal state
            GUI.Label(new Rect(10, 10, 161, 145), "", guiskin.customStyles[0]);
            //set mouse click down to false
            isDown = false;
        }
        else //mouse is inside the button area
        {
            //draws the hover state
            GUI.Label(new Rect(10, 10, 161, 145), "", guiskin.customStyles[1]);

            //if the mouse has been clicked while the cursor was over the button
            if (isDown)
            {
                //draws the 'Pressed' state
                GUI.Label(new Rect(10, 10, 161, 145), "", guiskin.customStyles[2]);
            }
        }
    }
}
This is how this code works: the public variables define the Camera that the ray will be cast from and the GUISkin being used. Don't forget to drag and drop the Camera and especially the GUISkin we created earlier before running the code. If you don't, the lack of a defined GUISkin can make Unity3D crash. This is an image of the cShapeGUIcamera variable and the guiskin variable set up, and defined, in the Inspector.
Image of the Main Camera Inspector showing the CustomShapeGUI script attached to it. Don't forget to assign these variables.
The private variables are used to create the ray, set the button states (normal, hover, pressed) and to query information about the object the ray collided with (that’s the purpose of the RaycastHit variable named hit). Actually, for this example, this RaycastHit variable wasn’t needed, since we only have one button. It was added to the script to make it possible to add more buttons later.
Basically, the Update() method checks for objects intersecting the ray within a 10 unit range. The ray has its origin in the dedicated GUI Camera and points forward, and its origin and direction are updated every frame. If there was a collision, the code checks the name of the object it collided with, and then sets the state of the button based on the mouse button input.
Finally, the OnGUI() renders the button on the screen at a specified coordinate and with the current state based on the three boolean variables.
Since the code defines where the button is drawn on the screen, the last thing needed to be done is to scale and place the “irregular shape” 3D model exactly behind the 2D GUI. The 3D model will serve as the hit test area for the button, that’s why it needs to be placed precisely behind the 2D GUI, so it doesn’t appear.
3D model being positioned behind the 2D GUI Image.
The image above shows the 3D hit test area model being aligned with the 2D image. Note that the button's PNG file isn't transparent. The image above looks like this because, to precisely position the 3D hit test model, this line of code was added at the beginning of the OnGUI() method to make all GUI elements semi-transparent.
//place this line at the beginning of the OnGUI method
GUI.color = new Color(0.5f, 0.5f, 0.5f, 0.5f);
After you find the position and size that match the 2D image, just delete this line.
Update Nov/16/2012: As far as the code goes, Unity now features a pair of methods that can detect collisions between the mouse cursor in screen space and an object in world space. This means that casting a ray to see which object is being hit isn't necessary anymore. Check out the official documentation for the MonoBehaviour.OnMouseEnter() and MonoBehaviour.OnMouseExit() methods.
As promised in the first post of this series, here is a Unity3D project with all the images, code and 3D model:
Please run this in the Web or the Standalone resolutions, as this code doesn’t resize the GUI or the 3D model based on the screen resolution. This isn’t an error, just a limitation. It would have required much more code to implement this feature, and at least one more post to explain it.
That’s it!
Thanks for putting this up, and for the update with the new methods. This is very helpful for the project I'm working on.
Suppose we have the following dataset in Python that displays the number of sales a certain shop makes during each weekday for five weeks:
import numpy as np
import pandas as pd
import seaborn as sns

#create a dataset
np.random.seed(0)
data = {'day': np.tile(['Mon', 'Tue', 'Wed', 'Thur', 'Fri'], 5),
        'week': np.repeat([1, 2, 3, 4, 5], 5),
        'sales': np.random.randint(0, 50, size=25)}
df = pd.DataFrame(data, columns=['day', 'week', 'sales'])
df = df.pivot(index='day', columns='week', values='sales')

#view first ten rows of dataset
df[:10]

week   1   2   3   4   5
day
Fri    3  36  12  46  13
Mon   44  39  23   1  24
Thur   3  21  24  23  25
Tue   47   9   6  38  17
Wed    0  19  24  39  37
Create Basic Heatmap
We can create a basic heatmap using the sns.heatmap() function:
sns.heatmap(df)
The colorbar on the righthand side displays a legend for what values the various colors represent.
Add Lines to Heatmap
You can add lines between the squares in the heatmap using the linewidths argument:
sns.heatmap(df, linewidths=.5)
Add Annotations to Heatmap
You can also add annotations to the heatmap using the annot=True argument:
sns.heatmap(df, linewidths=.5, annot=True)
Hide Colorbar from Heatmap
You can also hide the colorbar entirely using the cbar=False option:
sns.heatmap(df, linewidths=.5, annot=True, cbar=False)
Change Color Theme of Heatmap
You can also change the color theme using the cmap argument. For example, you could set the colors to range from yellow to green to blue:
sns.heatmap(df, cmap='YlGnBu')
Or you could have the colors range from red to blue:
sns.heatmap(df, cmap='RdBu')
For a complete list of colormaps, refer to the matplotlib documentation.
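The options above can also be combined in a single heatmap call. For example (the dataset is rebuilt here so the snippet is self-contained; the Agg backend is used so it runs without a display):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the script runs anywhere

import numpy as np
import pandas as pd
import seaborn as sns

# rebuild the same dataset as above
np.random.seed(0)
data = {'day': np.tile(['Mon', 'Tue', 'Wed', 'Thur', 'Fri'], 5),
        'week': np.repeat([1, 2, 3, 4, 5], 5),
        'sales': np.random.randint(0, 50, size=25)}
df = pd.DataFrame(data).pivot(index='day', columns='week', values='sales')

# combine the options shown above in one call:
# color theme, cell borders, annotations, no colorbar
ax = sns.heatmap(df, cmap='YlGnBu', linewidths=.5, annot=True, cbar=False)
```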