Over the last couple of months, we’ve been shipping quite a few hybrid iOS and Android apps.
Initially, we were building these apps with RubyMotion, which is excellent for building native apps. But recently, we decided to give Turbolinks 5 and the accompanying iOS and Android adapters a spin.
In this post, I’ll briefly go over some of the things I ran into when wrapping our Rails app into a native iOS app with the turbolinks-ios adapter.
Preparing your Rails app
For the adapter to work, you need to have Turbolinks 5 set up in your Rails app. Rails 5 will include Turbolinks 5 by default. So, for now, you can either choose to upgrade to the latest Rails 5 beta or install the Turbolinks gem manually.
Form submission
When using Turbolinks, form submissions have to be done via XHR. In your view, this is as easy as adding the remote: true attribute to your form_for tag. However, the response to a form submission does need some more manual changes.
As an example, I’ll take this typical pattern in Rails for saving a model:
def create
  @post = Post.new(post_params)

  if @post.save
    redirect_to @post
  else
    render :new
  end
end
In this case, when the @post is valid, Turbolinks will detect the redirect and just forward the client to the @post. However, when the @post is not valid, we want to execute some JavaScript to update the view and let the user know something went wrong saving the @post.
For this, I’ve added an SJR response that will replace the current form on the page with the updated form:

# app/views/posts/new.js.erb
$(".post-form").html("<%= j render 'form' %>")
Now, when the @post cannot be saved, the view will get updated with the validation errors, assuming you’re rendering those in the form partial, of course.
Bridge between JavaScript and Swift
One of the main advantages of building a hybrid app is having access to the native APIs: you can use native interface components, set up push notifications, and leverage the device's hardware.
By default, the turbolinks-ios adapter uses an implementation of UIViewController to navigate through web pages: it initiates a new ViewController on each request and pushes it onto the NavigationController's stack, which results in a native-like user experience when traversing between screens:
This default iOS navigation bar didn’t match the design of our app, so we decided to hide this navigation bar and let our Rails app render a custom back button. However, we do want this button to behave similarly to the native back button: deallocate the ViewController and slide-animate back to the previous view.
Our desired situation: click link to open new viewcontroller, click back-icon to slide back:
To accomplish this, we need a way for our Rails app’s JavaScript to execute a native (Swift) method. To set this up, we’re using WKScriptMessageHandler to extend our UINavigationController.
Adding the messageHandler to your wrapper
I’ve added the following code to my ApplicationController.swift:

private lazy var webViewConfiguration: WKWebViewConfiguration = {
    let configuration = WKWebViewConfiguration()
    configuration.userContentController.addScriptMessageHandler(self, name: "closeViewController")
    return configuration
}()

extension ApplicationController: WKScriptMessageHandler {
    func userContentController(userContentController: WKUserContentController, didReceiveScriptMessage message: WKScriptMessage) {
        popViewControllerAnimated(true)
    }
}
This code registers the userContentController as the handler for any message the WebView receives with the name closeViewController. The userContentController method calls popViewControllerAnimated(true), which pops the currently visible view controller.
Calling the method from your Rails app
After adding this message handler, we add a back button to our iOS app with a data-behavior="close" attribute and the following iOS-specific JavaScript:

$(document).on "click", "[data-behavior=close]", (e) ->
  webkit.messageHandlers.closeViewController.postMessage('close')
Note that this button should be either a link or a button tag, otherwise the WebView won’t register the taps/clicks.
Questions or comments?
So far, I like the Turbolinks adapters a lot! They provide an excellent starting point for building hybrid apps.
We’re still actively developing with the Turbolinks adapters, so we are going to run into many other challenges from here, but at least we’ll have more stuff to blog about!
If you have any questions or comments, feel free to reach out to me on Twitter: @joshuajansen or email: joshua@firmhouse.com
Nonblocking I/O
What really are descriptors?
The fundamental building block of all I/O in Unix is a sequence of bytes. Most programs work with an even simpler abstraction — a stream of bytes or an I/O stream.
A process references I/O streams with the help of descriptors, also known as file descriptors. Pipes, files, FIFOs, POSIX IPC’s (message queues, semaphores, shared memory), event queues are all examples of I/O streams referenced by a descriptor.
Creation and Release of Descriptors
Descriptors are either created explicitly by system calls like open, pipe, socket and so forth, or are inherited from the parent process.
Descriptors are released when:
— the process exits
— by calling the close system call
— implicitly after an exec when the descriptor is marked as close on exec.
Close-on-exec.
Data transfer happens via a read or a write system call on a descriptor.
File Entry
Every descriptor points to a data structure in the kernel called the file entry. The file entry maintains a per-descriptor file offset in bytes from the beginning of the file. An open system call creates a new file entry.
Fork/Dup and File Entries
A fork system call results in descriptors being shared by the parent and child with share by reference semantics. Both the parent and the child are using the same descriptor and reference the same offset in the file entry. The same semantics apply to a dup/dup2 system call used to duplicate a file descriptor.
#include <unistd.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>

int main(void) {
int fd = open("abc.txt", O_WRONLY | O_CREAT | O_TRUNC, 0666);
fork();
write(fd, "xyz", 3);
printf("%ld\n", lseek(fd, 0, SEEK_CUR));
close(fd);
return 0;
}
which prints:
3
6
More interesting is what the close-on-exec flag does, if the descriptors are only being shared. My guess is setting the flag removes the descriptor from the child’s descriptor table, so that the parent can still continue using the descriptor but the child wouldn’t be able to use it once it has exec-ed.
Offset-per-descriptor
As multiple descriptors can reference the same file entry, the file entry data structure maintains a file offset for every descriptor. Read and write operations begin at this offset and the offset itself is updated after every data transfer. The offset determines the position in the file entry where the next read or write will happen. When a process terminates, the kernel reclaims all descriptors in use by the process. If the process in question was the last to reference the file entry, the kernel then deallocates that file entry.
Anatomy of a File Entry
Each file entry contains:
— the type
— an array of function pointers. This array of function pointers translates generic operations on file descriptors to file-type specific implementations.
Disambiguating this a bit further, all file descriptors expose a common generic API that indicates operations (such as read, write, changing the descriptor mode, truncating the descriptor, ioctl operations, polling and so forth) that may be performed on the descriptor.
The actual implementation of these operations vary by file type and different file types have their own custom implementation. Reads on sockets aren’t quite the same as reads on pipes, even if the higher level API exposed is the same. The open call is not a part of this list, since the implementation greatly varies for different file types. However once the file entry is created with an open call, the rest of the operations may be called using a generic API.
Most networking is done using sockets. A socket is referenced by a descriptor and acts as an endpoint for communication. Two processes can create two sockets each and establish a reliable byte stream by connecting those two end points. Once the connection has been established, the descriptors can be read from or written to using the file offsets described above. The kernel can redirect the output of one process to the input of another on another machine. The same read and write system calls are used for byte-stream type connections, but different system calls handle addressed messages like network datagrams.
Non-Blocking descriptors
By default, read on any descriptor blocks if there’s no data available. The same applies to write or send. This applies to operations on most descriptors except disk files, since writes to disk never happen directly but via the kernel buffer cache as a proxy. The only time when writes to disk happen synchronously is when the O_SYNC flag was specified when opening the disk file.
Any descriptor (pipes, FIFOs, sockets, terminals, pseudo-terminals, and some other types of devices) can be put in the nonblocking mode. When a descriptor is set in nonblocking mode, an I/O system call on that descriptor will return immediately, even if that request can’t be immediately completed (and will therefore result in the process being blocked otherwise). The return value can be either of the following:
— an error: when the operation cannot be completed at all
— a partial count: when the input or output operation can be partially completed
— the entire result: when the I/O operation could be fully completed
A descriptor is put in the nonblocking mode by setting the no-delay flag O_NONBLOCK. This is also called an “open-file” status flag (in glibc, “open-file” flags are flags that dictate the behavior of the open system call. Generally, these options don’t apply after the file is open, but O_NONBLOCK is an exception, since it is also an I/O operating mode).
Readiness of Descriptors
A descriptor is considered ready if a process can perform an I/O operation on the descriptor without blocking. For a descriptor to be considered “ready”, it doesn’t matter if the operation would actually transfer any data — all that matters is that the I/O operation can be performed without blocking.
A descriptor changes into a ready state when an I/O event happens, such as the arrival of new input or the completion of a socket connection or when space is available on a previously full socket send buffer after TCP transmits queued data to the socket peer.
There are two ways to find out about the readiness status of a descriptor — edge triggered and level-triggered.
Level Triggered
I see this as the “pull” or “poll” model. To determine if a descriptor is ready, the process tries to perform a non-blocking I/O operation. A process may perform such operations any number of times. This allows for more flexibility in handling any subsequent I/O operation: if a descriptor is ready, the process could choose to read all the data available, only part of it, or none at all. Let’s see how this works with an example.
At time t0, a process could try an I/O operation on a non-blocking descriptor. If the operation would block, the system call returns an error instead.
Then at time t1, the process could try I/O on the descriptor again. Let’s say the call would block again, so an error is returned.
Then at time t2, the process tries I/O on the descriptor again. Let’s assume the call would block yet again, so an error is returned.
Let’s say at time t3 the process polls for the status of a descriptor and the descriptor is ready. The process can then chose to actually perform the entire I/O operation (read all the data available on the socket, for instance).
Let’s assume at time t4 the process polls for the status of a descriptor and the descriptor is not ready. The I/O operation would block, so an error is returned again.
Let’s assume at time t5 the process polls for the status of a descriptor and the descriptor is ready. The process can subsequently choose to only perform a partial I/O operation (reading only half of all the data available, for instance).
Let’s assume at time t6 the process polls for the status of a descriptor and the descriptor is ready. This time the process may choose to perform no subsequent I/O at all.
Edge Triggered
The process receives a notification only when the file descriptor is “ready” (usually when there is any new activity on the file descriptor since it was last monitored). I see this as the “push” model, in that a notification is pushed to the process about readiness of a file descriptor. Furthermore, with the push model, the process is only notified that a descriptor is ready for I/O, but not provided additional information like for instance how many bytes arrived on a socket buffer.
Thus, a process is only armed with incomplete data as it tries to perform any subsequent I/O operation. To work around this, the process can attempt to perform the maximum amount of I/O it possibly can every time it gets a descriptor readiness notification, since failing to do this would mean the process would have to wait until the next notification arrives, even if I/O is possible on a descriptor before the arrival of the next notification.
Let’s see how this works with the following example.
At time t2, the process gets a notification about a descriptor being ready.
The byte stream available for I/O is stored in a buffer. Let’s assume that 1024 bytes are available for reading when the process gets the notification at time t2.
Let’s assume the process only reads 500 out of the 1024 bytes.
This means that at times t3, t4 and t5, there are still 524 bytes available in the buffer that the process can read without blocking. But since the process can only perform I/O once it gets the next notification, these 524 bytes remain sitting in the buffer for that duration.
Let’s assume the process gets the next notification at time t6, when 1024 additional bytes have arrived in the buffer. The total amount of data available on the buffer is now 1548 bytes — 524 bytes that weren’t read previously, and 1024 bytes that have newly arrived.
Let’s assume the process now reads in 1024 bytes.
This means that at the end of the second I/O operation, 524 bytes still remain in the buffer that the process will be unable to read before the next notification arrives.
While it might be tempting to perform all the I/O immediately once a notification arrives, doing so has consequences. A large I/O operation on a single descriptor has the potential to starve other descriptors. Furthermore, even in the case of level-triggered notifications, an extremely large write or send call has the potential to block.
Multiplexing I/O on descriptors
In the above section, we only described how a process handles I/O on a single descriptor. Often, a process might want to handle I/O on more than one descriptor. An extremely common use case is a program that needs to log to stdout and stderr, while also accepting connections on a socket and making outgoing RPC connections to other services.
There are several ways of multiplexing I/O on descriptors:
— non-blocking I/O (the descriptor itself is marked as non-blocking, operations may finish partially)
— signal driven I/O (the process owning the descriptor is notified when the I/O state of the descriptor changes)
— polling I/O (with select or poll system calls, both of which provide level triggered notifications about the readiness of descriptors)
— BSD specific kernel event polling (with the kevent system call).
Multiplexing I/O with Non-Blocking I/O
What happens to the descriptors?
When we have multiple file descriptors, we could put all of them in the non-blocking mode.
What happens in the process?
The process can try to perform the I/O operation on the descriptors to check if any of the I/O operations result in an error.
What happens in the kernel?
The kernel performs the I/O operation on the descriptor and returns an error or a partial output or the result of the I/O operation if it succeeds.
What are the cons?
Frequent checks: If a process tries to perform I/O operations very frequently, the process has to continuously be retrying operations that returned an error to check if any descriptors are ready. Such busy-waiting in a tight loop could lead to burning CPU cycles.
Infrequent checks: If such operations are conducted infrequently, then it might take a process an unacceptably long time to respond to an I/O event that is available.
When it might make sense to use this approach?
Operations on output descriptors (writes for example) don’t generally block. In such cases, it might help to try to perform the I/O operation first, and revert back to polling when the operation returns an error. It might also make sense to use this approach with edge-triggered notifications, where the descriptors can be put in nonblocking mode, and once a process gets notified of an I/O event, it can repeatedly try I/O operations until the system call would block with EAGAIN or EWOULDBLOCK.
Multiplexing I/O via Signal Driven I/O
What happens to the descriptors?
The kernel is instructed to send the process a signal when I/O can be performed on any of the descriptors.
What happens in the process?
The process waits for signals to be delivered when any of the descriptors is ready for an I/O operation.
What happens in the kernel?
Tracks a list of descriptors and sends the process a signal every time any of the descriptors become ready for I/O.
What are the cons of this approach?
Signals are expensive to catch, rendering signal driven I/O impractical for cases where a large amount of I/O is performed.
When it might make sense to use this approach?
It is typically used for “exceptional conditions” when the cost of handling the signal is lower than that of polling constantly with select/poll/epoll or kevent. An example of an “exceptional case” is the arrival of out-of-band data on a socket or when a state change occurs on a pseudoterminal secondary connected to a master in packet mode.
Multiplexing I/O via Polling I/O
What happens to the descriptors?
The descriptors are put in a non-blocking mode.
What happens in the process?
The process uses the level triggered mechanism to ask the kernel by means of a system call (select or poll) which descriptors are capable of performing I/O. Described below are the interfaces of both select and poll.
The signature of select on Darwin is:
int select(
    int nfds,
    fd_set *restrict readfds,
    fd_set *restrict writefds,
    fd_set *restrict errorfds,
    struct timeval *restrict timeout
);
while on Linux it is:
int select(
    int nfds,
    fd_set *readfds,
    fd_set *writefds,
    fd_set *exceptfds,
    struct timeval *timeout
);
Select monitors three independent sets of descriptors:
— the readfds descriptors are monitored to see if a read will not block (when bytes become available for reading or when encountering an EOF)
— the writefds descriptors are monitored for when a write will not block.
— the exceptfds descriptors are monitored for exceptional conditions
When select returns, the descriptors are modified in place to indicate which file descriptors actually changed status. Any of the three file descriptor sets can be set to NULL if we don’t want to monitor that particular category of events.
The final argument is a timeout value, which specifies for how long the select system call will block:
— when the timeout is set to 0, select does not block but returns immediately after polling the file descriptors
— when timeout is set to NULL, select will block “forever”. When select blocks, the kernel can put the process to sleep until select returns. Select will block until 1) one or more descriptors specified in the three sets described above are ready or 2) the call is interrupted by a signal handler
— when timeout is set to a specific value, then select will block until 1) one or more descriptors specified in the three sets described above are ready or 2) the call is interrupted by a signal handler or 3) the amount of time specified by timeout has expired
The return value of select indicates the total number of file descriptors across all three sets that are ready. Each set is then individually inspected to find out which I/O events occurred.
Poll
Poll differs from select only in terms of how we specify which descriptors to track.
With select, we pass in three sets of descriptors we want to monitor for reads, writes and exceptional cases.
With poll we pass in a set of descriptors each marked with the events it specifically needs to track.
The signature of poll on Darwin is:
int poll(
struct pollfd fds[],
nfds_t nfds,
int timeout
);
And on Linux it is:
int poll(
struct pollfd *fds,
nfds_t nfds,
int timeout
);
The first argument to poll is an array of all the descriptors we want to monitor.
A pollfd structure contains three pieces of information:
— the ID of the descriptor to poll (let’s call this descriptor A)
— bit masks indicating what events to monitor for the given descriptor A (events)
— bit masks set by the kernel indicating the events that actually occurred on the descriptor A (revents)
The second argument is the total number of descriptors we are monitoring (the length of the array of descriptors used as the first argument, in other words).
The third argument, timeout, specifies how long poll will block each time it is invoked:
— when set to -1, poll blocks indefinitely until one of the descriptors listed in the first argument is ready or when a signal is caught
— when set to 0, poll does not block but returns immediately after polling to check which of the file descriptors, if any, are ready
— when set to a value greater than 0, poll blocks until timeout milliseconds or until one of the file descriptors becomes ready or until a signal is caught, whichever one is first.
The return value of poll is the total number of file descriptors in the array on which events have happened. If the array contains 10 file descriptors and events happened on 4 of them, the return value is 4. The revents field can be inspected to find out which events actually occurred on each file descriptor.
What happens in the kernel?
Both select and poll are stateless. Every time a select or a poll system call is made, the kernel checks every descriptor in the input array passed as the first argument for the occurrence of an event and returns the result to the process. This means that the cost of poll/select is O(N), where N is the number of descriptors being monitored.
Furthermore, the implementation of both select and poll comprises two tiers — a top tier that decodes the incoming request, and several device- or socket-specific bottom layers. The bottom layers comprise kernel poll functions and are used by both select and poll.
What are the cons of this approach?
The process performs two system calls — select/poll to find out which descriptors are ready to perform I/O and another system call to do the actual operation (read/write).
Second, both select and poll only allow descriptors to be monitored for three events — reading, writing and exceptional conditions. A process might be interested in knowing about other events as well, such as signals, filesystem notifications or asynchronous I/O completions.
Furthermore, select and poll do not scale very well as the number of descriptors being monitored increases. As stated above, the cost of select/poll is O(N), which means when N is very large (think of a web server handling tens of thousands of mostly sleepy clients), every time select/poll is called, even if there might only be a small number of events that actually occurred (4, in the example above), the kernel still needs to scan every descriptor in the list (10 in the example above) and check for all the three conditions on each descriptor, and invoke the appropriate callbacks registered. This also means that once the kernel responds back to the process with the status of every descriptor, the process must then scan the entire list of descriptors in the response to determine which descriptor is ready.
When it might make sense to use this approach?
Perhaps when the number of descriptors being monitored for I/O is small and those descriptors are mostly busy?
Kernel event polling on BSD
What happens to the descriptors?
The descriptors are put in non-blocking mode.
What happens in the process?
The process is provided a generic notification interface known as kevent that allows it to monitor a wide variety of events and be notified of these kernel event activities in a scalable manner. For example, the process can track:
— changes to file attributes when a file is renamed or deleted
— posting of signals to it when it forks, execs or exits
— asynchronous I/O completion
The kernel only returns a list of the events that have occurred, instead of returning the status of every event the process has registered. The kernel builds the event notification structure just once and the process is notified every time one of the events occur. If a process is tracking N events, and only a small number of those events have occurred (M), the cost is O(M) as opposed to O(N). Thus kevent scales well when only a small subset of events out of all the events the process is interested in occurs. In other words, this is especially good when there are a large number of sleepy clients or slow clients, since the number of events a process needs to be notified remains small, even if the number of events the process is monitoring is large.
The system call used to create a descriptor to use as a handle on which kevents can be registered is kqueue on BSD/Darwin. This descriptor is the means by which the process then gets the notification about the occurrence of events that it registered. This descriptor is also the means by which a process can add or delete events it wishes to be notified about.
What happens in the kernel?
The process subscribes to specific events on descriptors. The kernel provides a system call a process can invoke to find out which events have occurred. If the kernel has a list of events that have occurred, the system call returns. If not, the process is put to sleep until an event occurs.
Async I/O in POSIX
POSIX defines a specification for “parallel I/O”, by allowing a process to instantiate I/O operations without having to block or wait for any to complete. The I/O operation is queued to a file and the process is later notified when the operation is complete, at which point the process can retrieve the results of the I/O.
Linux exposes functions such as io_setup, io_submit, io_getevents, and io_destroy, which allow for submission of I/O requests from a thread without blocking it. This is for disk I/O (especially random I/O on SSDs). Especially interesting is how the addition of eventfd support makes it possible to use this with epoll. Here’s a good description of how this works.
On Linux, another way of performing parallel I/O is to use the thread-based implementation available within glibc:
These functions are part of the library with realtime functions named librt. They are not actually part of the libc binary. The implementation of these functions can be done using support in the kernel (if available) or using an implementation based on threads at user level. In the latter case it might be necessary to link applications with the thread library libpthread in addition to librt.
This approach, however, is not without its cons.
On FreeBSD, POSIX AIO is implemented with the aio system call. An async “kernel process” (also called “kernel I/O daemon” or “AIO daemon”) performs the queued I/O operations.
AIO daemons are grouped into configurable pools. Each pool will add or remove AIO daemons based on load. One pool of AIO daemons is used to service asynchronous I/O requests for sockets. A second pool of AIO daemons is used to service all other asynchronous I/O requests except for I/O requests to raw disks.
To perform an asynchronous I/O operation:
— the kernel creates an async I/O request structure with all the information needed to perform the operation
— if the request cannot be satisfied by the kernel buffers immediately, this request structure is queued
— If an AIO daemon isn’t available at the time of request creation, the request structure is queued for processing and the syscall returns.
— the next available AIO daemon handles the request using the kernel’s sync path
— when the daemon finishes the I/O, the request structure is marked as finished along with a return or error code.
— the process uses the aio_error syscall to poll if the I/O is complete. This call is implemented by inspecting the status of the async I/O request structure created by the kernel
— if a process gets to the point where it cannot proceed until I/O is complete, then it can use the aio_suspend system call to wait until the I/O is complete.
— the process is put to sleep on the AIO request structure and is awakened by the AIO daemon when I/O completes or the process requests a signal be sent when the I/O is done
— once aio_suspend or aio_error indicates completion, or the completion signal has arrived, the aio_return system call retrieves the return value of the I/O operation.
Conclusion
This post only shed light on the forms of I/O possible and didn’t touch on epoll at all, which is by far the most interesting of all, in my opinion. What’s often more interesting is buffer and memory management. My next post explores epoll in much greater detail.
In this series of posts, we already covered that methods can be transformed into procs and as such, can be evaluated later. Furthermore, we've seen that procs can be used as arguments to another methods and that such procs can optionally use curried arguments.
Until now, we have been using methods as a way to represent "blocks" of code:
def multiply(a, b)
  a * b
end
Also, we learned that, in order to evaluate a method later, we must transform it into a proc (method(:some_method)). In Ruby, we can represent blocks of code to be evaluated later not only with methods, but also by creating procs directly:

current_time = Proc.new { Time.now }
current_time.call # => 2021-04-10 17:22:06

# It's quite similar to using methods
def current_time
  Time.now
end

method(:current_time).call # => 2021-04-10 17:22:10
Then, blocks can represent any group of code which will be evaluated later. Blocks can be inline or multiline:
# inline block
Proc.new { Time.now }

# multiline block
Proc.new do
  Time.now
end
Let's take our example in the previous post about map_numbers and, instead of creating a method multiply, we define a Proc directly with a block:

multiply = Proc.new { |a, b| a * b }

multiply.call(2, 3) # => 6
multiply.curry[2].call(4) # => 8
Right. Now, remember that the implementation of map_numbers takes a proc as the last argument? Then we have nothing to do in that method. It will simply work, because the object passed as an argument only needs to respond to a call method, and procs already do!

multiply = Proc.new { |a, b| a * b }
map_numbers([1, 2, 3], multiply.curry[2]) # => [2, 4, 6]
We could also use another way of creating a proc, which is a
lambda. There are slight differences between procs and lambdas, but both belong to the same Ruby class: Proc, with lambda being a "type" of Proc.
multiply_proc = Proc.new { |a, b| a * b }
multiply_proc.call(2, 3) # => 6

multiply_lambda = ->(a, b) { a * b }
multiply_lambda.call(2, 3) # => 6

# let's bring methods into play
def multiply(a, b)
  a * b
end

method(:multiply).call(2, 3) # => 6
"Meta" methods, procs, lambdas...they have little differences in practice but they all:
- take blocks
- respond to .call
- respond to .curry
- and share other similarities... Take a look at the Proc and Method documentation.
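A quick script makes those similarities concrete; the variable names here (multiply_method and friends) are just illustrative:

```ruby
# Methods (wrapped via method(:name)), procs, and lambdas all
# respond to call and curry in the same way.
def multiply(a, b)
  a * b
end

multiply_method = method(:multiply)
multiply_proc   = Proc.new { |a, b| a * b }
multiply_lambda = ->(a, b) { a * b }

[multiply_method, multiply_proc, multiply_lambda].each do |callable|
  puts callable.call(2, 3)       # => 6
  puts callable.curry[2].call(4) # => 8
end
```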
YAY! That's so much power!
A syntactic sugar
Our method map_numbers looks like this:
def map_numbers(numbers, calculation_proc)
  # logic here
  # somewhere, it does `calculation_proc.call(number)`
end
Standard Ruby gives us a syntactic sugar, a keyword called yield, which is similar to calling some_proc.call. If we choose to use yield, we can omit the proc parameter, but we have to trust that whoever calls the method passes a block along with the arguments.
def map_numbers(numbers)
  new_list = []
  for number in numbers
    new_list << yield(number) # <--- similar to doing the proc call
  end
  new_list
end
Now, if we try to call:
map_numbers([1, 2, 3], method(:multiply).curry[2])
Oh,oh:
ArgumentError (wrong number of arguments (given 2, expected 1))
That's because this syntactic sugar has a rule of thumb: the argument cannot be a proc, but must be a BLOCK instead. To do so, we have to transform our proc into a block when passing the argument, by prepending & to the proc object:
map_numbers([1, 2, 3], &method(:multiply).curry[2]) # => [2, 4, 6]
The & prefix can be used to transform procs into blocks ONLY when passing arguments to methods!
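Note that & also works in the other direction: in a method definition, a final &parameter captures the incoming block as a proc. A small sketch (this map_numbers is a hypothetical reimplementation, not the one from the post):

```ruby
def map_numbers(numbers, &calculation) # & captures the block as a proc
  numbers.map { |n| calculation.call(n) }
end

double = Proc.new { |n| n * 2 }

p map_numbers([1, 2, 3], &double)      # => [2, 4, 6]
p map_numbers([1, 2, 3]) { |n| n + 1 } # => [2, 3, 4]
```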
Passing blocks to methods
Just as we use blocks to define procs and lambdas, we can also pass blocks to methods. When a block is passed, Ruby WILL always take the block and use it as the last argument.
map_numbers([1, 2, 3]) { |number| number * 2 } # => [2, 4, 6]
map_numbers([1, 2, 3]) { |number| number * 3 } # ...
map_numbers([1, 2, 3]) { |number| number + 55 } # ...

# multiline
map_numbers([1, 2, 3]) do |number|
  number * 10
end
Thankfully, we don't need to create such a map_numbers method in our codebase. Ruby has a lot of useful methods in its standard library, and map is one of them, being part of the Array class:
[1, 2, 3].map { |number| number * 2 } # => [2, 4, 6]
And, since we know that methods can be transformed into procs:
multiply_by_two = 2.method(:*) # 2 is an object, don't forget!

[1, 2, 3].map(&multiply_by_two)
So, unleash the madness and abuse the syntactic sugar!
[1, 2, 3].map(&2.method(:*)) # multiply by 2
[1, 2, 3].map(&6.method(:+)) # add 6
Reducing structures
What if we wanted to sum all numbers in a list? Well, that's a simple algorithm:
def sum_all(numbers)
  sum = 0
  for number in numbers
    sum += number
  end
  sum
end
But how can we write more flexible and robust code that allows applying any transformation, reducing the entire list into a single accumulated value, no matter whether the desired output is a sum or a product?
Yes, we can rely on blocks!
def reduce(numbers, initial_acc)
  accumulator = initial_acc
  for number in numbers
    accumulator = yield(accumulator, number)
  end
  accumulator
end
Then, we can use our method to apply a bunch of reducers:
# sum all numbers
reduce([1, 2, 3], 0) { |acc, number| acc + number }

# multiply all numbers
reduce([1, 2, 3], 1) { |acc, number| acc * number }

# syntactic sugar
reduce([1, 2, 3], 0, &:+) # sum all numbers
reduce([1, 2, 3], 1, &:*) # multiply all numbers
Similar to map, Ruby also provides a reduce method in its standard library:
[1, 2, 3].reduce(0) { |acc, number| acc + number }
[1, 2, 3].reduce(1) { |acc, number| acc * number }

[1, 2, 3].reduce(&:+)
[1, 2, 3].reduce(&:*)
Wrapping up
In this series of blog posts, we tried to cover the fundamentals behind Ruby blocks, such as:
- how Ruby evaluates expressions
- how we can use methods to evaluate expressions later
- methods and procs
- procs as arguments
- curry arguments in procs
- blocks in procs, lambdas and methods
- bonus point to syntactic sugar and the Ruby standard library
I hope you enjoyed this series and now understand a bit more about how Ruby blocks work and how to use them more effectively on a daily basis!
Overview
CDLibrary - a driver to Dacal CD Library 2 devices
This driver allows you to access Dacal CD Library 2 devices connected to the USB ports, either via HTTP or the filesystem
Copyright (C) 2009 Florian Boesch <pyal <>.
Features
- plug&play recognizes connected devices
- list unique device ids
- open specified slots
- retract slots
Download
Dependencies
- (which depends on libusb)
-
-
- (which is included in the cdlibrary package)
Install
cdlibrary console is a python package installable by any of the following commands: command line:
cd cdlibrary; python setup.py install
easy_install cdlibrary
easy_install
Once you have the package installed (as root) you need to add a daemon script: command line:
sudo su -
wget
tar xzvf tip.tar.gz
cp cdlibrary-*/cdlibd /etc/init.d
update-rc.d -f cdlibd defaults
you can also use the code below as a start script: code:
import sys
from cdlibrary import http, filesystem

daemon = filesystem.daemon
#daemon = http.daemon

if __name__ == '__main__':
    command = sys.argv[1]
    if command == 'start':
        daemon.start()
    elif command == 'stop':
        daemon.stop()
    elif command == 'restart':
        daemon.stop()
        daemon.start()
    elif command == 'debug':
        daemon.debug()
Usage
command line:
/etc/init.d/cdlibd start|stop|restart|debug
HTTP interface
- --> lists the device IDs in json format
-<device-id>/123 --> opens the slot 123
-<device-id> --> closes the slot of this device
File System Interface
- ls /mnt/cdlibrary --> lists the device IDs as directory listing
- echo 123 > /mnt/cdlibrary/<device-id> --> opens slot 123
- echo close > /mnt/cdlibrary/<device-id> --> closes the slot of this device | https://bitbucket.org/pyalot/cdlibrary | CC-MAIN-2015-48 | refinedweb | 272 | 55.34 |
Introduction
Make sense of this: “applicatives compose, monads do not.”
More formally, the statement is: let f and g be applicative functors. Then the composition f g is also an applicative functor. On the other hand, there exist monads f and g such that f g is not a monad.
Composing functors
First we will show how to compose two functors. To make things concrete, let’s do some examples with two data types that are instances of the Functor class in Haskell: [] (lists), and Maybe. We can check this using ghci:
Prelude> :set prompt "ghci> "
ghci> :set +t
ghci> :m +Data.Maybe
ghci> :info Maybe
data Maybe a = Nothing | Just a -- Defined in `Data.Maybe'
instance Eq a => Eq (Maybe a) -- Defined in `Data.Maybe'
instance Monad Maybe -- Defined in `Data.Maybe'
instance Functor Maybe -- Defined in `Data.Maybe'
instance Ord a => Ord (Maybe a) -- Defined in `Data.Maybe'
instance Read a => Read (Maybe a) -- Defined in `GHC.Read'
instance Show a => Show (Maybe a) -- Defined in `GHC.Show'
ghci> :info []
data [] a = [] | a : [a] -- Defined in `GHC.Types'
instance Eq a => Eq [a] -- Defined in `GHC.Classes'
instance Monad [] -- Defined in `GHC.Base'
instance Functor [] -- Defined in `GHC.Base'
instance Ord a => Ord [a] -- Defined in `GHC.Classes'
instance Read a => Read [a] -- Defined in `GHC.Read'
instance Show a => Show [a] -- Defined in `GHC.Show'
In particular, note these lines:
instance Functor Maybe -- Defined in `Data.Maybe'
...
instance Functor [] -- Defined in `GHC.Base'
Composing [] with Maybe means that we have a list of Maybe values, for example:
ghci> [Just 1, Nothing, Just 42] :: [] (Maybe Int)
[Just 1,Nothing,Just 42]
Our goal is to write a Functor instance declaration for the general form of this composition, which means having a data type that represents the composition itself. Often it’s easier to start from a specific example and work up to the general case. So let’s start with a list of Maybe values:
> {-# LANGUAGE FlexibleInstances, InstanceSigs #-}
>
> module Compose01 where
>
> import Data.Maybe()
>
> data Compose1 = MkCompose1 ([] (Maybe Int))
where I have prefixed the data constructor with “Mk” to disambiguate it from the data type which is Compose (this can be helpful for newcomers to Haskell who are not familiar with the usual practice of giving the data type and data constructors the same name). Now generalise on the inner-most type, Int, by making it a parameter:
> data Compose2 x = MkCompose2 ([] (Maybe x))
Next, generalise on the inner-most data constructor, Just, by making it a parameter:
> data Compose3 g x = MkCompose3 ([] (g x))
Finally, generalise on the list constructor [], making it a parameter:
> data Compose f g x = MkCompose (f (g x))
and this is our definition of Compose. It lets us represent any composition of data constructors. We can play around with it in ghci:
Compose01> :t MkCompose
MkCompose :: f (g x) -> Compose f g x
Compose01> :t MkCompose [[42]] -- list of list
MkCompose [[42]] :: Num x => Compose [] [] x
Compose01> :t MkCompose [Just 3, Just 42, Nothing] -- list of Maybe
MkCompose [Just 3, Just 42, Nothing] :: Num x => Compose [] Maybe x
Next, we have to fill in the definition of fmap in an instance declaration for Functor:
> instance (Functor f, Functor g) => Functor (Compose f g) where
>   fmap f (MkCompose x) = ...
Again, use a concrete example and ghci to guide us. The inner-most type is a Maybe, and being an instance of Functor means that we can use fmap to apply a function to a "boxed" value:
Compose01> fmap (\x -> x + 1) (Just 42)
Just 43
Compose01> fmap (\x -> x + 1) Nothing
Nothing
Compose01> :t fmap (\x -> x + 1)
fmap (\x -> x + 1) :: (Functor f, Num b) => f b -> f b
So this function, fmap (\x -> x + 1), can be applied to a list using fmap again:

Compose01> fmap (fmap (\x -> x + 1)) [Just 3, Just 42, Nothing] :: [] (Maybe Int)
[Just 4,Just 43,Nothing]
Generalise this by replacing the function (\x -> x + 1) with f and the value [Just 3, Just 42, Nothing] with the value z, and we get what turns out to be the correct definition for the instance declaration:
> instance (Functor f, Functor g) => Functor (Compose f g) where
>   fmap f (MkCompose x) = MkCompose (fmap (fmap f) x)
An exercise for the reader is to check that with this definition of fmap, the functor laws hold:
> fmap id = id
> fmap (p . q) = (fmap p) . (fmap q)
Now that Compose is an instance of Functor, we can use a single fmap to apply a function on values that are wrapped up in Compose:
Compose01> fmap (\x -> x + 1) (MkCompose [Just 3, Just 42, Nothing])
MkCompose [Just 4,Just 43,Nothing]
Applicatives compose
To show that applicatives compose, we need to write the instance declaration for Applicative:
> instance (Applicative f, Applicative g) => Applicative (Compose f g) where
>   pure x = ...
>   f <*> x = ...
This one is a bit more complicated than the Functor instance so I made a short screencast on how to use hole-driven development to find the answer. With hole-driven development we have a bit of a conversation with the type system and this is easier to show in a narrated screencast compared to a linear written text.
(Be sure to watch in 720p fullscreen otherwise the text is illegible.)
If you don’t want to watch the screencast, just take my word that we can fill in the definition for the Compose instance of Applicative. (Or, sneak a peek at the source code for Control.Applicative.Compose.) Another exercise for the reader: verify that the following functor laws hold.
> pure id <*> v = v                            -- Identity
> pure (.) <*> u <*> v <*> w = u <*> (v <*> w) -- Composition
> pure f <*> pure x = pure (f x)               -- Homomorphism
> u <*> pure y = pure ($ y) <*> u              -- Interchange
> fmap f x = pure f <*> x                      -- Fmap (on the Functor instance)
Monads do not compose
To show that “monads do not compose”, it is sufficient to find a counterexample, namely two monads f and g such that f g is not a monad. In particular, we will show that one of the monad laws is violated for any possible instance declaration.
The following is just an expanded version of Conor McBride’s answer on stackoverflow so all credit goes to him, and any mistakes here are my responsibility. Conor’s proof is the shortest and easiest to explain counterexample that I could find.
First, define the terminal monad Thud:
> data Thud a = MkThud deriving (Show)
Note that it has an unused type parameter. We have to do this so that the kind is correct for the Monad instance. The instance declaration for Monad is quite easy because we only have a single way of creating a Thud value:
> instance Monad Thud where
>   return _ = MkThud
>   _ >>= _ = MkThud
Playing around with ghci, we see that anything turns into a Thud:
ghci> return 0 :: Thud Int
MkThud
ghci> (return 0 :: Thud Int) >>= (\x -> return (x + 1))
MkThud
The other data type is Flip, which wraps a value along with a boolean:
> data Flip a = MkFlip Bool a deriving (Show)
The Monad instance is of a writer monad with an xor structure:
> instance Monad Flip where
>   return :: a -> Flip a
>   return = MkFlip False -- or, return x = MkFlip False x
>
>   (>>=) :: Flip a -> (a -> Flip b) -> Flip b
>   MkFlip False x >>= f = f x
>   MkFlip True x >>= f = MkFlip (not b) y
>     where MkFlip b y = f x
Informally, return wraps a value along with the False value. The bind (>>=) function will apply the monadic function f if we have a False value, otherwise it will apply f but flip its boolean component for the final result.
Some example values and computations:
ghci> (return "boo" :: Flip String) MkFlip False "boo" ghci> (return "boo" :: Flip String) >>= (x -> return $ x ++ " hey!") MkFlip False "boo hey!" ghci> (return "boo" :: Flip String) >>= (x -> return $ x ++ " hey!") >>= (x -> return $ x ++ " Huh?") MkFlip False "boo hey! Huh?" ghci> (return "boo" :: Flip String) >>= (x -> MkFlip True (x ++ " hey!")) MkFlip True "boo hey!" ghci> (return "boo" :: Flip String) >>= (x -> MkFlip True (x ++ " hey!")) >>= (x -> return $ x ++ " What?") MkFlip True "boo hey! What?" ghci> (return "boo" :: Flip String) >>= (x -> MkFlip True (x ++ " hey!")) >>= (x -> MkFlip True (x ++ " What?")) MkFlip False "boo hey! What?"
Finally we come to the Monad instance for Compose for the specific case of a Flip of Thud:
> instance Monad (Compose Flip Thud) where
>   return x = undefined
>   x >>= f = undefined
Let’s start with return. It has to produce something of type Compose Flip Thud a, so we begin with the type constructor:
> return x = MkCompose (MkFlip ??? MkThud)
This is all we can do – we are constrained by the types. Now what can go in the place of the three question marks? Perhaps a function of x, say
> return x = MkCompose (MkFlip (h x) MkThud)
where h :: a -> Bool. However, Haskell has the parametricity property. Quoting the Haskell wiki:
Since a parametrically polymorphic value does not “know” anything about the unconstrained type variables, it must behave the same regardless of its type. This is a somewhat limiting but extremely useful property known as parametricity.
So parametricity implies that the function h can’t be something like
h x = if (x is of type blah) then True else ...
which means that h must be a constant, and therefore return is also a constant. Without loss of generality, suppose that the definition is
> instance Monad (Compose Flip Thud) where
>   return :: a -> Compose Flip Thud a
>   return x = MkCompose (MkFlip True MkThud)
The left identity monad law says that
> return x >>= f = f x
for any appropriately typed f and x. Since return is a constant, we have
> (MkCompose (MkFlip True MkThud)) >>= f = f x
Let f = id, then we have two equations using the two values that exist of type Compose Flip Thud:
> (MkCompose (MkFlip True MkThud)) >>= id = id (MkCompose (MkFlip True MkThud))
> (MkCompose (MkFlip True MkThud)) >>= id = id (MkCompose (MkFlip False MkThud))
which implies that
> id (MkCompose (MkFlip True MkThud)) = id (MkCompose (MkFlip False MkThud))
which is a contradiction. So it is not possible to define return and >>= in a consistent manner for the Compose Flip Thud instance of the Monad typeclass. We conclude that in general it is not true that the composition f g will be a monad for any two monads f and g.
Further reading
- Another Stack Overflow question:
- Philip Wadler’s papers on parametricity:
- A paper about conditions under which monads do compose: Eugenia Cheng: Iterated distributive laws. See also the first Stack Overflow link for comments about a swap function for reversing the nesting of the monads.
Archived Comments
Date: 2014-12-10 13:50:17.111839 UTC
Author: paluh
Thanks for this detailed insight into Applicative and Monad composition (wonderful analysis of Conor McBride’s answer) – it was really helpful for me!
Date: 2014-12-11 13:59:02.180167 UTC
Author: paluh
I have one question regarding your monad composition counterexample. Does this:
(MkCompose (MkFlip True MkThud)) >>= id
typecheck? Can we use id function which has type a -> a in place of function which should have type Monad m => a -> m a?
Date: 2014-12-11 21:28:56.99053 UTC
Author: Carlo
Here is a stand-alone file with the expression that you asked about: TypeCheckId.hs.
Note the type signature for >>= in the Monad instance for Compose Flip Thud:
(>>=) :: Compose Flip Thud a -> (a -> Compose Flip Thud b) -> Compose Flip Thud b
If we let a = Compose Flip Thud b then the type signature is:
(>>=) :: Compose Flip Thud (Compose Flip Thud b) -> (Compose Flip Thud b -> Compose Flip Thud b) -> Compose Flip Thud b
So the expression (MkCompose (MkFlip True MkThud)) >>= id does type check.
It does look odd to me. I think the degeneracy of Thud is the cause of this strange situation: we have data Thud a = MkThud so the parameter a can take any value. | https://carlo-hamalainen.net/2014/01/02/applicatives-compose-monads-do-not/ | CC-MAIN-2017-51 | refinedweb | 1,977 | 66.47 |
Rev Up the Drools 5 Java Rule Engine
Brief Overview of Drools Flow

Drools Flow is based on flow charts. It represents a process engine that supports the integration of processes and rules. Practically, a flow chart will allocate ruleflow groups to rules so that you can include or exclude rules. (A ruleflow is a process that describes the order in which a series of steps need to be executed, using a flow chart). In addition, you may run rules in loops, execute rules in parallel, use timers, use events, and so on. Figure 2 shows the set of components that can construct a flow chart, as presented in the Drools plug-in for Eclipse IDE.
Figure 2: Flow Chart Components in Drools Plug-in for Eclipse
For the voting example, you will build a simple flow chart that will force the engine to avoid checking the "WrongValue less than 0" and "WrongValue bigger than 10" rules. For this, you will place these two rules in a group named IgnoreGroup, while the rest of the rules will be under a group named ValidGroup. For this, you should edit the vote.drl by using the ruleflow-group keyword, like this:
vote.drl package com.sample import com.sample.DroolsTest.Vote; rule "WrongValue less than 0" ruleflow-group "IgnoreGroup" when m : Vote( average < 0.0, vote : vote ) then m.setAverage(0.0f); update( m ); end rule "WrongValue bigger than 10" ruleflow-group "IgnoreGroup" when m : Vote( average > 10.0, vote : vote ) then m.setAverage(10.0f); update( m ); end rule "BadValue between 0-3" ruleflow-group "ValidGroup" when m : Vote( average >= 0.0 && average <=3.0, vote : vote ) then m.setVote("Bad!"); System.out.println( m.getVote() ); end rule "GoodValue between 3-6" ruleflow-group "ValidGroup" when m : Vote( average >3.0 && average <=6.0, vote : vote ) then m.setVote("Good!"); System.out.println( m.getVote() ); end rule "VeryGoodValue between 6-9" ruleflow-group "ValidGroup" when m : Vote( average >6.0 && average <=9.0, vote : vote ) then m.setVote("Very Good!"); System.out.println( m.getVote() ); end rule "ExcellentValue between 9-10" ruleflow-group "ValidGroup" when m : Vote( average >9.0 && average <=10.0, vote : vote ) then m.setVote("Excellent!"); System.out.println( m.getVote() ); end
Figure 3: A Flow Chart Example
In the background, this is an XML document. Therefore, you can manually edit it.
Every flow chart starts with a Start node and ends with an End node. Between these nodes, you have the flow components. In this case, you have a single component (a RuleFlowGroup component). This component will indicate the ValidGroup group of rules.
Next, you have to set some properties for the RuleFlowGroup and for the flow chart. First, though, you should know that a flow chart exposes the following properties:
- Id The unique id of the process
- Name The display name of the process
- Version The version number of the process
- Package The package (namespace) in which the process is defined
- Variables Can be defined to store data during the execution of your process
- Swimlanes Specify the actor responsible for the execution of human tasks
- Exception Handlers Specify the behavior when a fault occurs in the process
- Connection Layout Specify how the connections are visualized on the canvas using the connection layout property:
- 'Manual' always draws your connections as lines going straight from their start points to their end points (although it's possible to use intermediate break points)
- 'Shortest path' similar to 'Manual', but tries to go around any obstacles it might encounter between the start and end points to avoid lines crossing nodes
- 'Manhattan' draws connections by using only horizontal and vertical lines
In addition, a RuleFlowGroup node exposes the following properties:
- Id The id of the node (which is unique within one node container)
- Name The display name of the node
- RuleFlowGroup The name of the ruleflow group that represents the set of rules of this RuleFlowGroup node
- Timers Timers that are linked to this node
In the case of the RuleFlowGroup component, the properties that you want to set are Name and RuleFlowGroup. For the Name property, use the VoteFlow value. For the RuleFlowGroup property, use the ValidGroup value (as you can see, it corresponds to the value of ruleflow-group from vote.drl).
For the flow chart, you should set Connection Layout to Shortest Path, Id to votes, Name to ruleflow, and Package to com.sample.
You can accomplish these tasks from the Drools plug-in for Eclipse or by entering them manually. The result is an XML document with the .rf extension. It should look like this:
vote.rf

<?xml version="1.0" encoding="UTF-8"?>
<process xmlns="" xmlns:
  <header>
  </header>
  <nodes>
    <start id="1" name="Start" x="16" y="16" />
    <end id="3" name="End" x="240" y="16" />
    <ruleSet id="4" name="VoteFlow" x="125" y="17" width="80" height="40" ruleFlowGroup="ValidGroup" />
  </nodes>
  <connections>
    <connection from="4" to="3" />
    <connection from="1" to="4" />
  </connections>
</process>
Save this file as vote.rf in the project directory. Now you can use the Drools API to connect the flow chart (vote.rf) with the rule resource (vote.drl) and test it. For this, you will make two small modifications to the DroolsTest.java application:
- Load the flow chart as you have loaded the rule resource.
- Call the StatefulKnowledgeSession.startProcess before you fire any rules.
This method gets a String argument that represents the ruleflow id (in this case, votes). After making these modifications, you get the following application:
RuleFlowTest.java

/**
 * This is a sample file to launch a process.
 */
public class RuleFlowTest {

    public static final void main(String[] args) {
        try {
            // load up the knowledge base
            KnowledgeBase kbase = readKnowledgeBase();
            StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
            KnowledgeRuntimeLogger logger =
                KnowledgeRuntimeLoggerFactory.newFileLogger(ksession, "log");
            // go !
            Vote vote = new Vote();
            vote.setAverage(8.4f);
            // insert the "vote"
            ksession.insert(vote);
            // start the "votes" process
            ksession.startProcess("votes");
            // "fire all rules"
            ksession.fireAllRules();
            // clean up
            logger.close();
            ksession.dispose();
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }

    private static KnowledgeBase readKnowledgeBase() throws Exception {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("vote.drl", RuleFlowTest.class),
                ResourceType.DRL);
        kbuilder.add(ResourceFactory.newClassPathResource("vote.rf", RuleFlowTest.class),
                ResourceType.DRF);
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        return kbase;
    }

    public static class Vote {

        private String vote;
        private float average;

        public String getVote() { return this.vote; }
        public void setVote(String vote) { this.vote = vote; }
        public float getAverage() { return this.average; }
        public void setAverage(float average) { this.average = average; }
    }
}
The output, conforming to vote.drl, will be the text "Very Good!", because 8.4 falls in the 6-9 range. As an exercise, you may want to try a value that is in IgnoreGroup.
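For that exercise, here is a plain-Java sketch (no Drools required; class and method names are mine) of what the flow chart implies: only the ValidGroup rules are reachable, so an out-of-range average is never clamped and no vote string gets set.

```java
public class VoteFlowSketch {

    // Mirrors the ValidGroup rules from vote.drl; the IgnoreGroup
    // clamping rules are unreachable because the flow chart skips them.
    static String classify(double average) {
        if (average >= 0.0 && average <= 3.0) return "Bad!";
        if (average > 3.0 && average <= 6.0) return "Good!";
        if (average > 6.0 && average <= 9.0) return "Very Good!";
        if (average > 9.0 && average <= 10.0) return "Excellent!";
        return "(no rule fired)"; // IgnoreGroup would have clamped this value
    }

    public static void main(String[] args) {
        System.out.println(classify(-3.0)); // (no rule fired)
        System.out.println(classify(11.0)); // (no rule fired)
    }
}
```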
Author's Note: For a complete tutorial on Drools Flow, check out this write-up.
Flexibility, Readability, and Functionality

Drools 5 provides access to many powerful new features that, although not always easy to use, can bring your Java projects to a new level of flexibility, readability, and functionality. This article was just a brief introduction to get you familiar with Drools; there is much more to explore than you have seen here. With the basic knowledge you now have, you are ready to rev up this rule engine.
Introduction to statistical data analysis
Posted February 18, 2013 at 09:00 AM | categories: statistics | tags:
Updated February 27, 2013 at 02:34 PM
Given several measurements of a single quantity, determine the average value of the measurements, the standard deviation of the measurements and the 95% confidence interval for the average.
import numpy as np
y = [8.1, 8.0, 8.1]
ybar = np.mean(y)
s = np.std(y, ddof=1)
print ybar, s
8.06666666667 0.057735026919
Interestingly, we have to specify the divisor in numpy.std via the ddof argument. The default in Matlab is 1; the default for this function is 0.
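The ddof distinction is easy to check against the standard-library statistics module, which exposes the two conventions as separate functions:

```python
import statistics

y = [8.1, 8.0, 8.1]

# Population standard deviation (divides by n), like np.std(y)
print(statistics.pstdev(y))

# Sample standard deviation (divides by n - 1), like np.std(y, ddof=1)
print(statistics.stdev(y))
```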
Here is the principle of computing a confidence interval.
The interval is ybar plus or minus T_multiplier * s / sqrt(n), where T_multiplier comes from the Student's t-distribution for the desired confidence level with n - 1 degrees of freedom.
from scipy.stats.distributions import t

ci = 0.95
alpha = 1.0 - ci
n = len(y)
T_multiplier = t.ppf(1.0 - alpha / 2.0, n - 1)
ci95 = T_multiplier * s / np.sqrt(n)
print 'T_multiplier = {0}'.format(T_multiplier)
print 'ci95 = {0}'.format(ci95)
print 'The true average is between {0} and {1} at a 95% confidence level'.format(ybar - ci95, ybar + ci95)
T_multiplier = 4.30265272991
ci95 = 0.143421757664
The true average is between 7.923244909 and 8.21008842433 at a 95% confidence level
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | https://kitchingroup.cheme.cmu.edu/blog/2013/02/18/Introduction-to-statistical-data-analysis/ | CC-MAIN-2021-31 | refinedweb | 213 | 55.1 |
I have written the following Java source file (Hello.java):

package com;

public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello!");
    }
}

It is saved as C:/tmpjava/Hello.java, and compiling with:

javac Hello.java

works fine; dir now shows:

Hello.class
Hello.java

But when I try to run the class with:

java Hello.class

I get this error:

Exception in thread "main" java.lang.NoClassDefFoundError: Hello/class
Caused by: java.lang.ClassNotFoundException: Hello.class
Could not find the main class: Hello.class. Program will exit.

Why does javac succeed while java fails?
Your class Hello belongs to the package com. So the fully qualified name of your class is com.Hello. When you invoke a program using java on the command-line, you should supply the fully-qualified class name of the class that contains your main method and omit the .class, like so:

java com.Hello
The java program needs this fully-qualified class name to understand which class you are referring to.
But you have another problem. The java program locates packages, sub-packages, and the classes that belong to them using the filesystem. So if you have a package structure like com.Hello, the java program expects to find a class file named Hello.class in a directory named com, like this: com/Hello.class. In fact you can observe this behavior in the Exception that you see; you've incorrectly used Hello.class, which java is interpreting as a package named Hello and a class named class, and is looking for the directory structure Hello/class:

java.lang.NoClassDefFoundError: Hello/class
But the compiler javac doesn't set up this directory structure by default. See the documentation for javac, but the important bit is this: when you do your compiles, you can specify a destination directory using the -d flag:
For example, if you specify -d c:\myclasses and the class is called com.mypackage.MyClass, then the class file is called c:\myclasses\com\mypackage\MyClass.class.
If -d is not specified, javac puts the class file in the same directory as the source file.
The last bit in bold is the source of much confusion for beginners, and is part of your own problem.
So you have two alternatives:
In your case, it's fine if you supply the current directory as the destination directory, like so (the period . means the current directory):
javac -d . Hello.java
If you invoke the compiler like this, it will create the com directory for you, and put your compiled class file in it, the way that the java program expects to find it. Then when you run java as above, from c:\tmpJava, your program should execute.
You could set up your source code using a directory structure that mirrors your package structure: put your source file Hello.java inside a directory called com, in your case: c:\tmpJava\com\Hello.java. Now, from c:\tmpJava you can run your javac compile like this:
javac com\Hello.java
You haven't supplied the -d flag, but that's fine, because you've created the directory structure yourself, and quoting again from the documentation above:
If -d is not specified, javac puts the class file in the same directory as the source file.
Again, when you run java as above, your program should execute.
Note that this second alternative is one that is commonly employed by java programmers: the source code files are organized in a directory structure that mirrors the package structure.
In this explanation we've ignored the concept of the classpath. You'll also need to understand that to write java programs, but in your case of simply compiling a program in the current directory - if you follow one of the two alternatives above when compiling your class - you can get away without setting a classpath because, by default, the java program has the current directory as a classpath. Another quote, this one from the documentation for java:
If -classpath and -cp are not used and CLASSPATH is not set, the user class path consists of the current directory (.).
Note that when you use an IDE like Eclipse to run your java code, this is mostly handled for you, but you'll still run into classpath issues. | https://codedump.io/share/jT3zapBrhcUx/1/java-can39t-find-main-class | CC-MAIN-2017-43 | refinedweb | 655 | 56.35 |
NumPy allows easy standard mathematics to be performed on arrays, as well as more complex linear algebra such as array multiplication.
Let's begin by building a couple of arrays. We'll use the np.arange method to create an array of the numbers 1 to 12, and then reshape it into a 3 x 4 array.
import numpy as np

# Note that the arange method is 'half open':
# that is, it includes the lower number, and goes up to,
# but not including, the higher number
array_1 = np.arange(1, 13)
array_1 = array_1.reshape(3, 4)
print(array_1)

OUT:
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]]
Maths on a single array
We can multiply an array by a fixed number (or add, subtract, divide, raise to a power, etc.):
print(array_1 * 4)

OUT:
[[ 4  8 12 16]
 [20 24 28 32]
 [36 40 44 48]]

print(array_1 ** 0.5) # square root of array

OUT:
[[1.         1.41421356 1.73205081 2.        ]
 [2.23606798 2.44948974 2.64575131 2.82842712]
 [3.         3.16227766 3.31662479 3.46410162]]
We can define a vector and multiple all rows by that vector:
vector_1 = [1, 10, 100, 1000] print (array_1 * vector_1) OUT: [[ 1 20 300 4000] [ 5 60 700 8000] [ 9 100 1100 12000]]
To multiply by a column vector we will transpose the original array, multiply by our column vector, and transpose back:
vector_2 = [1, 10, 100] result = (array_1.T * vector_2).T print (result) OUT: [[ 1 2 3 4] [ 50 60 70 80] [ 900 1000 1100 1200]]
Maths on two (or more) arrays
Arrays of the same shape may be multiplied, divided, added, or subtracted.
Let’s create a copy of the first array:
array_2 = array_1.copy() # If we said array_2 = array_1 then array_2 would refer to array_1. # Any changes to array_1 would also apply to array_2
Multiplying two arrays:
print (array_1 * array_2) OUT: [[ 1 4 9 16] [ 25 36 49 64] [ 81 100 121 144]]
Matrix multiplication (’dot product’)
See for an explanation of matrix multiplication, if you are not familiar with it.
We can perform matrix multiplication in numpy with the np.dot method.
array_2 = np.arange(1,13) array_2 = array_1.reshape (4,3) print ('Array 1:') print (array_1) print ('\nArray 2:') print (array_2) print ('\nDot product of two arrays:') print (np.dot(array_1, array_2)) OUT: Array 1: [[ 1 2 3 4] [ 5 6 7 8] [ 9 10 11 12]] Array 2: [[ 1 2 3] [ 4 5 6] [ 7 8 9] [10 11 12]] Dot product of two arrays: [[ 70 80 90] [158 184 210] [246 288 330]]
One thought on “35. Array maths in NumPy” | https://pythonhealthcare.org/2018/04/10/35-array-maths-in-numpy/ | CC-MAIN-2020-29 | refinedweb | 440 | 65.25 |
csShaderExpression Class Reference
An evaluable expression attached to a shader variable. More...
#include <csgfx/shaderexp.h>
Detailed Description
An evaluable expression attached to a shader variable.
Definition at line 45 of file shaderexp.h.
Member Function Documentation
Evaluate this expression into a variable.
It will use the symbol table it was initialized with.
Retrieve the error message if the evaluation or parsing failed.
Definition at line 294 of file shaderexp.h.
Parse in the XML in the context of a symbol table.
The documentation for this class was generated from the following file:
- csgfx/shaderexp.h
Generated for Crystal Space 2.1 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/api/classcsShaderExpression.html | CC-MAIN-2015-22 | refinedweb | 107 | 53.98 |
I was thinking this’d make a good Java quiz question. It’s something I have to do all the time in order to make collections accessible in a streaming fashion. Getting an iterator is the easiest way for a client to visit the elements of a collection one at a time. The usual reason is to apply some operation to them. If the collection’s large (sometimes offline, even), it’s less memory intensive to iterate.
Problem: Suppose you have a standard binary tree implementation (see below) and want to implement an instance of java.util.Iterator<Integer> that iterates over the elements. To satisfy the spirit of the exercise, the implementation must take constant time per item iterated and may take at most an amount of memory proportional to the depth of the tree.
Here’s a standard CS 101 binary tree implementation, with the
add() and
toString() methods for convenience:
class Tree { public final int mVal; public Tree mLeft; public Tree mRight; public Tree(int val) { mVal = val; } public void add(int val) { if (val < mVal) { if (mLeft == null) mLeft = new Tree(val); else mLeft.add(val); } else if (val > mVal) { if (mRight == null) mRight = new Tree(val); else mRight.add(val); } } public String toString() { return "(" + mVal + " " + mLeft + " " + mRight + ")"; }
To see how a tree gets built, here’s an example:
public static void main(String[] args) { Integer val = Integer.valueOf(args[0]); Tree t = new Tree(val); for (int i = 1; i < args.length; ++i) t.add(Integer.valueOf(args[i])); System.out.println("Tree toString= " + t); }
If I compile and run, here’s what I get:
c:carpblogs>java TreeIterator 3 7 8 1 4 5 2 5 Tree toString= (3 (1 null (2 null null)) (7 (4 null (5 null null)) (8 null null)))
Hint: It’s easy to walk over the nodes in order using a recursive program. Here’s a simple string builder:
public String inOrder() { return ((mLeft == null) ? "" : mLeft.inOrder()) + mVal + " " + ((mRight == null) ? "" : mRight.inOrder()); }
If I add the following to my
main():
System.out.println("Values in order = " + t.inOrder());
I get the following line of output:
Values in order = 1 2 3 4 5 7 8
February 2, 2009 at 6:13 pm |
Hint: (call-with-current-continuation)
February 7, 2009 at 3:38 pm |
preOrder() should really be named inOrder()
February 8, 2009 at 12:52 pm |
My bad — the traditional name’s indeed “in order” for visiting a node’s left daughter, the node, then it’s right daughter. Post-order visits the node after both daughters and pre-order visits the node before its daughters.
I took the liberty of correcting the body of the post to cover up my mistake.
February 20, 2009 at 4:35 am |
Interesting quiz question. Here’s a related implementation over the file system:
Not quite what your looking for, I admit. One’s a push- and the other a pull-interface, but it does meet your requirement that the memory overhead be linear in the depth of the tree structure.
February 20, 2009 at 1:32 pm |
Babak’s classes provide a nice example of the other way of solving this problem — with a visitor (in this case, util.io.file.TraverseListener). The visitor implementation gets callbacks for each of the nodes visited for both pre-order and post-order.
I’d think about simplifying the listener to just be a file listener and then have two ways of calling the listener, pre-order or post-order. Then you can generify the listener to something like LingPipe’s corpus.ObjectHandler<E>. So you’d get a static parser method
visitPreOrder(ObjectHandler<File> visitor, File path, FileFilter filter). I like the
FileFilterargument — it’s what we used to have in LingPipe as a file visitor. I’d never thought about sibling ordering, but a comparator’s a nice general way to do that.
But, if you need to have one object visit in both pre-order and post-order, then you need to have the more refined listener interface.
.
February 20, 2009 at 8:04 pm |
Good point, lingpipe. I think I’ll include your single-method visitor interface idea in the next release. Under the current design, if a user wants to visit pre-order, they only implement
TraverseListener.preorder(File f), leaving the body of the other method empty.
Like you mention, there are situations where you can use both pre-and post-order events. For example, if a user wants to visit files in pre-order but also wants to count how many files exist under each subdirectory, then they can use the post-order events to implement that count. It’s kinda analogous to capturing a SAX parser’s begin- and end-element events. | https://lingpipe-blog.com/2009/01/25/java-quiz-iterating-over-binary-tree-elements/ | CC-MAIN-2019-35 | refinedweb | 797 | 61.56 |
On 08/02/2013 10:02 AM, Emmanuel Lécharny wrote:
> Le 8/2/13 9:20 AM, Emmanuel Lécharny a écrit :
>> Hi guys,
>>
>> there is something that is extremelly weird i server-integ test : we
>> have some kind of 1 second delay for each test we run. For instance,
>> when we run the SearchIT tests, any test is taking 1 second at least to
>> execute.
>>
>> This should not be the case, because we don't inject a lot of entries.
>>
>> I suspect that MINA is the culprit here : the select() loop is waiting 1
>> second before acting.
>>
>> This slows down the tests a lot, as we have around 600 of them.
>>
>> To be investigated...
>>
> After investigation, I found that the following line :
>
> public class SearchIT extends AbstractLdapTestUnit
> {
> @Rule
> public MultiThreadedMultiInvoker i = new MultiThreadedMultiInvoker(
> MultiThreadedMultiInvoker.NOT_THREADSAFE );
>
> cause the delay.
>
> With this line, the SearchIT test runs in rouglhy 49 seconds, withhout
> it, it runs in 13 seconds !
>
> The reason is that the MultiThreadMultiInvoker introduce a 1 second sleep :
>
> while(counter.get() > 0)
> {
> Thread.sleep( 1000 );
> }
> (line 187).
Well, I must admit that the implementation of that class is not very
smart, it uses runnables that count down a counter when they are done.
When using Futures instead the counter and the sleep can be avoided.
I'll try up fix that.
> We shoumd probably get rid of this @rules, unless we have very good
> reason to use it...
Yes do so. They are helpful to find concurrency issues by running tests
multiple times in parallel.
Kind Regards,
Stefan | http://mail-archives.apache.org/mod_mbox/directory-dev/201308.mbox/%3C51FBEDC5.5020102@stefan-seelmann.de%3E | CC-MAIN-2014-15 | refinedweb | 257 | 66.94 |
On Wed, 26 Feb 2003 02:17 am, Hanasaki JiJi wrote:
> Any comparisons on VFS vs JNDI? seems very similar to me.
They are very similar. JNDI is a little more general: a namespace of Objects.
VFS is a little more specific: a hierarchy of files.
VFS does not try to be as universal as JNDI does, even though there is going
to be plenty of overlap (find by name, create, delete, get/set attribute,
etc). VFS adds things that don't make sense under JNDI's more general model
(get content as a stream, content signing, copy a tree, converting to/from
java.io.File, etc), and does things in a way that reflects how files get used
(as opposed to how generic namespaces of Objects get used).
--
Adam
---------------------------------------------------------------------
To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: commons-dev-help@jakarta.apache.org | http://mail-archives.apache.org/mod_mbox/commons-dev/200302.mbox/%3C200302261327.09890.adammurdoch@apache.org%3E | CC-MAIN-2016-26 | refinedweb | 151 | 66.54 |
:Creating screencasts with mm
In the last months I have been publishing my applications at Wiki with some screencasts. These screencasts are really useful and may help people to understand the application purpose. In this article I want to show how these screencasts are created.
MMaker
The strategy behind screencasts are simple: just a sequence of screenshots, presented as a slide show. A Python application called mmaker is used to save all screenshots and other application like Microsoft Movie Maker may be employed to create the movie. There are commercial applications that could be used to the same task but mmaker is a free alternative even with some additional work involved.
The process can be resume as:
- Define sample rate and directory for saving images and start MMaker. Phone memory may be faster for * saving images.
- Start you application or whatever you want to show.
- Finished ? Stop mmaker and copy all images to your PC. Remove images that you do not want to see in your movie.
- Using Movie Maker or similar application, copy all images to the movie timeline, in the same order they were created. Do not forget to set the transition time and slide time before. I have been using 0.25s for transition and 0.5s for slide time.
- Create and publish your movie.
Source code
The code (Python 1.4.5 but it should work with Python 1.9.6) and screenshots are below. You can download the sis or source code from the program repository.
# -*- coding: cp1252 -*-
# (c) Marcelo Barros de Almeida
# marcelobarrosalmeida@gmail.com
# License: GPL3
import graphics
import time
from appuifw import *
import os
import e32
class MMaker(object):
NL = u"\u2029"
def __init__(self,dir=u"e:\mmaker",etime=0.25):
self.dir = dir
self.etime = etime
self.cnt = 0
self.running = False
self.timer = e32.Ao_timer()
menu = [(u"Start", self.start),
(u"Stop", self.stop),
(u"Set dir", self.set_dir),
(u"Set time", self.set_time),
(u"Clean dir",self.clean_dir),
(u"About", self.about),
(u"Quit", self.close_app)]
self.body = Text()
app.screen = "normal"
app.body = self.body
app.menu = menu
app.tile = u"MMaker"
self.lock = e32.Ao_lock()
app.exit_handler = self.close_app
def close_app(self):
self.running = False
self.timer.cancel()
self.lock.signal()
def run(self):
self.lock.wait()
app.set_tabs( [], None )
app.menu = []
app.body = None
app.set_exit()
def set_dir(self):
dir = query(u"Images dir:","text",self.dir)
if dir is not None:
if not os.path.isdir(dir):
try:
os.makedirs(dir)
except:
self.body.add((u"Can't create %s" % dir) + self.NL)
return
self.dir = dir
self.body.add((u"New dir is %s" % self.dir) + self.NL)
def clean_dir(self):
yn = popup_menu([u"No",u"Yes"],u"Clean %s?" % self.dir)
if yn is not None:
if yn == 1:
self.cnt = 0
files = os.listdir(self.dir)
for f in files:
fp = os.path.join(self.dir,f)
try:
os.remove(fp)
self.body.add((u"%s deleted" % fp) + self.NL)
except:
self.body.add((u"Cant´t delete %s" % fp) + self.NL)
def set_time(self):
tm = query(u"Time","float",self.etime)
if tm is not None:
self.body.add((u"New time is %f" % tm) + self.NL)
self.etime = tm
def start(self):
self.timer.after(self.etime,self.take_screenshot)
self.running = True
self.body.add(u"Started" + self.NL)
def stop(self):
self.running = False
self.timer.cancel()
self.body.add(u"Stopped" + self.NL)
def take_screenshot(self):
if self.running:
ss = graphics.screenshot()
name = os.path.join(self.dir,"mm%04d.png" % self.cnt)
self.cnt += 1
ss.save(name)
self.body.add((u"Screenshot %s" % name) + self.NL)
self.timer.after(self.etime,self.take_screenshot)
def about(self):
note(u"MMaker by Marcelo Barros (marcelobarrosalmeida@gmail.com)","info")
mm = MMaker()
mm.run() | http://developer.nokia.com/community/wiki/Archived:Creating_screencasts_with_mmaker | CC-MAIN-2014-52 | refinedweb | 635 | 56.72 |
On 27/10/2013 9:58 p.m., Ahmad wrote:
> hi ,
> about smp and workers
> ,
> just want to understand
>
> 1- i want an equation that equal the number of instances for squid relative
> with cache dir and worker number ???
>
> ex:
> With 3 workers and 1 rock cache = 5 processes running:
>
> i want general fourm for this above
"Sum of all componentes configured which cause instances to start" is
the closest we can give in the way of general formula.
Squid has not been a single-process system since v2.0, possibly earlier.
There are always background helper processes doing things for some
component or another. The new Disker processes are equivalent to the old
cache_dir diskd helpers but for rock storage format and are SMP-aware.
Also, it *will* be changing as new SMP support is added/modified.
Get your configuration working the way you want it to go before trying
to tune instances to cores. By that time you should have cache.log
records to identify what each process number is doing to base the
affinity on.
The SMP macros and if-else-endif should always be treated with care and
used as sparingly as possible. They are temporary workarounds for the
current incomplete or missing SMP support of some components, and we
expect to break configurations that use them in future as SMP-support
improves.
> 2-wts the difference between squid instance and worker ???
"squid instance" is a fuzzy term. Its meaning changes depending on
whether you enabled SMP mode or non-SMP mode, or debug non-daemon mode.
I think of it as meaning any instance of process running out of the
"squid" binary.
worker is a specific type of squid instance....
> im
> misunderstanding !! why its better to give worker a core not like cores of
> "rock disk" ???
... worker process does all HTTP I/O and protocol handling. Currently
they also do a lot of non-HTTP protocols actions like SNMP. But that is
planned to change eventually.
> isnt worker is squid instance ??!!!
Yes. Worker is a type of squid instance.
> 3-which will scan and read squid.conf first , is it the instance of squid ??
> or the worker of squid ??
The first one to read the config file is something else you may not have
noticed yet. The first instance of squid binary to run is the daemon
manager. It reads the config file and determines how many processes to
fork into (if any). The result is all the kidN processes, forked at
different times as determined by the configuration.
It then performs the high-availability monitoring to ensure that
either the coordinator (SMP-mode) or the single worker process (non-SMP
mode) always stays running.
FYI: this design is one reason why Squid is so unfriendly with upstart
and systemd. They try to be daemon managers themselves, and things dont
work well if you have two managers giving conflicting HA instructions to
a set of processes (one manager says shutdown, the other notices outage
and auto-starts a replacement ... etc, etc). Or one manager is managing
the other, which has no useful effects.
Amos
Received on Sun Oct 27 2013 - 11:58:58 MDT
This archive was generated by hypermail 2.2.0 : Sun Oct 27 2013 - 12:00:06 MDT | http://www.squid-cache.org/mail-archive/squid-users/201310/0538.html | CC-MAIN-2022-21 | refinedweb | 546 | 65.73 |
CGTalk
>
Software Specific Forums
>
Autodesk Maya
>
Maya Character Setup
> Bi-Directional Constraint Plugin
PDA
View Full Version :
Bi-Directional Constraint Plugin
wigal
08-12-2007, 06:25 PM
HI,
So I bought the maya learning tool about bi directional constraining, and was wondering if anybody here compiled the plugin that comes with it against Maya 8.5???
would be cool!
thank you
nottoshabi
08-14-2007, 12:47 AM
Can you elaborate a little about bi directional constraining. Or give us a link?
scroll-lock
08-14-2007, 08:40 AM
I think he mean this:
futurcraft
08-14-2007, 11:58 AM
If I am right, I guess they used the Bi-directional constraint plugins for the transformer rigs...? has anybody come up with a way of setting up the animation keys using the bidirectional plugin using Maya's default animation hotkeys? cause the dvd shows the process of setting animation keys using this constraint using its own UI..
It would be great if we have a community here developing this plugin together.. over whats already been done..
meagane
08-14-2007, 04:49 PM
are the bidirectional plugins (or source code) available only to those who bought the DVDs?
nilslerin
08-14-2007, 08:31 PM
If you read/watch trough the learning tool I think you will learn how to compile it for newer versions of Maya
nottoshabi
08-14-2007, 10:14 PM
Is it worth the money? What do you think about it?
futurcraft
08-15-2007, 08:56 AM
It sure is worth the money... but the only issue I have with it is that of the animation keys not yet being built into Maya atleast with the code which is provided.. they use a custom UI for it.. but the concept of breakin the cyclic dependency using the plugin is really worth knowing about even for just understanding the whole process really well..
scroll-lock
08-15-2007, 09:28 AM
Did you bought Part I and Part II or just one of them ? And can you make a little resume of what is included in them ? Should I believe the resume on the Autodesk site? :) Thanks
futurcraft
08-15-2007, 11:03 AM
well yeah both part 1 & 2 is what you need to get.. the autodesk website is pretty much whts in there in the dvds... and actually speaking it has more info in the docs too provided with the dvds.. have any of you out there compiled the plugins using VC++ 2005 express edition?? I get a whole bunch of errors.. emanating from the header files when the VC++ compiler runs the code. Its as if the header files are not having a syntax which is compatible with the VC++ 2005 express edition compiler. I read the docs in May 8.5 and it says that maya 8.5 API has been compiled with the Visual studio 2005 edition. does this mean that the compiling will not work with the express editions of Visual studio?
BoostAbuse
08-15-2007, 03:36 PM
In order to compile using VC++ Express Edition you need to make a few alterations to the API
That will give you the outline on how to modify the wizard and library to compile using Express Edition, though I would highly suggest compiling using the non Express Edition but if funds are tight it will work the same.
As for the bi-directional constraining, think of it like a non-hierarchial parent whereby you can pull on the child and the parent will follow or pull on the parent and the child will follow (hence bi-directional). Motionbuilder has the exact same theory inside of it though it's far more efficient and precise. The code provided works, albeit it's what I would call still an early beta for bi-directional constraining inside Maya and even the presentation mentions that it's a very early stage mock-up intended to introduce people to the concept in hopes of turning it into a real solid production tool. Jazz Tigan (sp?) I believe compiled a version that he included a flash interface with, if you search the boards for the bi-directional constraint stuff or search Jonhandhisdog.com I believe the plugin is available for download.
-s
nottoshabi
08-15-2007, 03:41 PM
Isn't the express edition a compact version of the visual studio 2005? I dont think there should be something there that should not work. I have not tried what you are trying to do I'm just thinking out loud. I have the express edition my self. I have been getting errors my self with the header files. I think that is a microsoft question. Write to them and ask, see what they say.
meagane
08-15-2007, 04:44 PM
what does mean: it`s not precise?
so it cannot be used for accurate production??
BoostAbuse
08-15-2007, 08:46 PM
nottoshabi: yeah, express is just a watered down version of the full package but for some reason the Maya plugin information just doesn't work the same with Express Edition. Go figure.
meagane: It's a cool plug-in and an interesting concept, but due to the nature of the workflow and necessity of breaking the hierarchy and setting keys in a not so standard fashion it doesn't quite fit into a production environment out of the box which is the way most Masterclass material should be (should be able to take the concepts and spin them into your own ideas and creations). That said, you probably wouldn't be able to hand the plug-in off to an animator and expect them to be able to work with it in the most regular of fashions.
-s
nottoshabi
08-16-2007, 07:01 AM
Ahh crap so what now I have to go buy the software in order to make plugins?
meagane
08-16-2007, 10:09 AM
BoostAbuse: does the plugin has a weight attribute, so the parenting can be turned of?
scroll-lock
08-16-2007, 10:43 AM
well.. we used the Express edition before in our studio and we successfully compiled a lot of plugins for different versions of Maya...now we use the full Visual Studio, but haven`t noticed to be any different. Of course I haven`t tried that particular plugin....Anyway, I`m gonna buy the dvds and I`ll check if I can recompile it for 8 and 8.5...
futurcraft
08-16-2007, 12:39 PM
Hopefully this post should complete the picture for the Setup of the VC++ Express edition for Maya plugin development..
Cheers!
Nikhil.
BoostAbuse
08-16-2007, 08:07 PM
You can compile just fine with Express Edition, we've got a few copies here for our FX guys and some other mild developers. You just need to alter some values that the Wizard relies on and that the compiler looks to, mainly just identifier codes.
The link I posted to highend3d will suffice in letting you compile Maya plugins using Express Edition :) If all else fails, can't go wrong with Linux.
futurcraft
08-16-2007, 09:23 PM
nottoshabi: I have been getting errors my self with the header files. I think that is a microsoft question. Write to them and ask, see what they say.
I have again encountered the same errors with respect to the header files... I followed the steps exactly on the high end site & also the MSDN tutorial on making changes in the VC++ express edition to support the PSDK installation. I even made the changes in the project settings to source the respective include, bin & lib files from both PSDK & Maya. Though now I end up with the same errors with the same header files during compile time..
I'd really appreciate some help here.. I'm attaching the buildlog html file as a reference. Here is the jist of the bunch of errors:
c:\program files\autodesk\maya8.5\include\maya\mstatus.h(161) : error C2653: 'std' : is not a class or namespace name
c:\program files\autodesk\maya8.5\include\maya\mstatus.h(161) : error C2143: syntax error : missing ';' before '&'
c:\program files\autodesk\maya8.5\include\maya\mstatus.h(161) : error C2433: 'ostream' : 'friend' not permitted on data declarations2653: 'std' : is not a class or namespace name
c:\program files\autodesk\maya8.5\include\maya\mstatus.h(161) : error C2061: syntax error : identifier 'ostream'2805: binary 'operator <<' has too few parameters
I get the same set of errors for all the maya header files which are called by the program code. Can anybody help me out here....?
Thanks so much...
Nikhil.
BoostAbuse
08-16-2007, 10:45 PM
I don't have access to Maya 8.5 at the moment but what version of Visual Studio are you using? 2005 Express Edition? We're still on 8.0 at the office as I've yet to really find a need for switching to 8.5 but in testing I noticed a whole wack of problems in the conversion due to the Nucleus change. Comparison between the two MStatus.h files for 8.0 and 8.5 32bit just shows that there have been a few things added and changed for openMaya support.
futurcraft
08-16-2007, 11:31 PM
Yes I am using the 2005 express edition of Visual studio & Maya 8.5 - 32 bit is what I am working on.. if changes have been made, shouldn't autodesk help to provide support for developers who work on plugins for 8.5? Or is there any other way to develop plugins for 8.5 by using the source files from an earlier version of Maya like 8 or 7?
I also put up a posting on the MSDN forums and this is the reply I got from one of the Software developers of Microsoft
I would make sure that your include path contains the location of iostream and that the code actually includes this header file. But even after including this line of code I get the same errors again.
#include <maya/MIOStream.h>
the io.h file too is within the default project settings directory path for the VC++ include files
BoostAbuse
08-17-2007, 10:10 PM
The only thing I can think of is if it's just not able to link the folders together for some ridiculous reason. My workstation at the office has all of my installs mapped to C:\aw\maya8.#_##bit\ but I'm using VS2005. I might be able to do a dry test-run with VSEE2005 on a home machine to see if I can recreate the header errors, just compile a stupid little plugin and see what happens.
-s
futurcraft
08-18-2007, 11:11 AM
Hey Shawn...!
Thanks a lot for your time n advice on this issue.. But I am really happy to say that i finally got it to work after getting advice from another thread... ()
This thread has all the info one needs to set up VCE2005 for Maya 8.5.
Thanks once again to all of you guys who poured ur thoughts into problem solving this issue...
Cheers!
Nikhil.
CGTalk Moderation
08-18-2007, 11. | http://forums.cgsociety.org/archive/index.php/t-528211.html | CC-MAIN-2014-52 | refinedweb | 1,886 | 71.75 |
Warning: this page refers to an old version of SFML.
Playing sounds and music
Sound or music?
SFML provides two classes for playing audio: sf::Sound and sf::Music. They both provide more or less the same features; the main difference is how they work.
sf::Sound is a lightweight object that plays loaded audio data from a sf::SoundBuffer. It should be used for small sounds that can fit in memory and should suffer no lag when they are played. Examples are gun shots, foot steps, etc.

sf::Music doesn't load all the audio data into memory; instead it streams it on the fly from the source file. It is typically used to play compressed music that lasts several minutes, and would otherwise take many seconds to load and eat hundreds of MB in memory.
Loading and playing a sound

As mentioned above, the sound data is not stored directly in sf::Sound but in a separate class named sf::SoundBuffer. This class encapsulates the audio data, which is basically an array of 16-bit signed integers (called "audio samples"). A sample is the amplitude of the sound signal at a given point in time, and an array of samples therefore represents a full sound.
In fact, the sf::Sound/sf::SoundBuffer classes work the same way as sf::Sprite/sf::Texture from the graphics module. So if you understand how sprites and textures work together, you can apply the same concept to sounds and sound buffers.
You can load a sound buffer from a file on disk with its loadFromFile function:

#include <SFML/Audio.hpp>

int main()
{
    sf::SoundBuffer buffer;
    if (!buffer.loadFromFile("sound.wav"))
        return -1;

    ...

    return 0;
}
As with everything else, you can also load an audio file from memory (loadFromMemory) or from a custom input stream (loadFromStream).
SFML supports most common audio file formats. The full list is available in the API documentation.
You can also load a sound buffer directly from an array of samples, in case they originate from another source:

std::vector<sf::Int16> samples = ...;
buffer.loadFromSamples(&samples[0], samples.size(), 2, 44100);
Since loadFromSamples loads a raw array of samples rather than an audio file, it requires additional arguments in order to have a complete description of the sound. The first one (third argument) is the number of channels; 1 channel defines a mono sound, 2 channels define a stereo sound, etc. The second additional attribute (fourth argument) is the sample rate; it defines how many samples must be played per second in order to reconstruct the original sound.
Now that the audio data is loaded, we can play it with a sf::Sound instance.

sf::SoundBuffer buffer;
// load something into the sound buffer...

sf::Sound sound;
sound.setBuffer(buffer);
sound.play();
The cool thing is that you can assign the same sound buffer to multiple sounds if you want. You can even play them together without any issues.
Sounds (and music) are played in a separate thread. This means that you are free to do whatever you want after calling play() (except destroying the sound or its data, of course); the sound will continue to play until it's finished or explicitly stopped.
Playing music
Unlike sf::Sound, sf::Music doesn't pre-load the audio data; instead it streams the data directly from the source. The initialization of music is thus more direct:

sf::Music music;
if (!music.openFromFile("music.ogg"))
    return -1; // error
music.play();
It is important to note that, unlike all other SFML resources, the loading function is named openFromFile instead of loadFromFile. This is because the music is not really loaded: this function merely opens it. The data is only loaded later, when the music is played. It also helps to keep in mind that the audio file has to remain available as long as it is played.
The other loading functions of sf::Music follow the same convention: openFromMemory, openFromStream.
What's next?
Now that you are able to load and play a sound or music, let's see what you can do with it.
To control playback, the following functions are available:
- play: starts or resumes playback
- pause: pauses playback
- stop: stops playback and rewinds
- setPlayingOffset: changes the current playing position
Example:
// start playback
sound.play();

// advance to 2 seconds
sound.setPlayingOffset(sf::seconds(2));

// pause playback
sound.pause();

// resume playback
sound.play();

// stop playback and rewind
sound.stop();
The getStatus function returns the current status of a sound or music; you can use it to know whether it is stopped, playing or paused.
Sound and music playback is also controlled by a few attributes which can be changed at any moment.

The pitch is a factor that changes the perceived frequency of the sound: greater than 1 plays the sound at a higher pitch, less than 1 plays the sound at a lower pitch, and 1 leaves it unchanged. Changing the pitch has a side effect: it impacts the playing speed.
sound.setPitch(1.2);
The volume is... the volume. The value ranges from 0 (mute) to 100 (full volume). The default value is 100, which means that you can't make a sound louder than its initial volume.
sound.setVolume(50);

The loop attribute controls whether the sound/music automatically loops or not. If it loops, it will restart playing from the beginning when it's finished, again and again until you explicitly call stop. If not set to loop, it will stop automatically when it's finished.
sound.setLoop(true);
More attributes are available, but they are related to spatialization and are explained in the corresponding tutorial.
Common mistakes
Destroyed sound buffer
The most common mistake is to let a sound buffer go out of scope (and therefore be destroyed) while a sound still uses it.
sf::Sound loadSound(std::string filename) { sf::SoundBuffer buffer; // this buffer is local to the function, it will be destroyed... buffer.loadFromFile(filename); return sf::Sound(buffer); } // ... here sf::Sound sound = loadSound("s.wav"); sound.play(); // ERROR: the sound's buffer no longer exists, the behavior is undefined
Remember that a sound only keeps a pointer to the sound buffer that you give to it, it doesn't contain its own copy. You have to correctly manage the lifetime of your sound buffers so that they remain alive as long as they are used by sounds.
Too many sounds
Another source of error is when you try to create a huge number of sounds. SFML internally has a limit; it can vary depending on the OS, but
you should never exceed 256. This limit is the number of
sf::Sound and
sf::Music instances that can exist
simultaneously. A good way to stay below the limit is to destroy (or recycle) unused sounds when they are no longer needed.
This only applies if you have to manage a really large amount of sounds and music, of course.
Destroying the music source while it plays
Remember that a music needs its source as long as it is played. A music file on your disk probably won't be deleted or moved while your application plays it, however things get more complicated when you play a music from a file in memory, or from a custom input stream:
// we start with a music file in memory (imagine that we extracted it from a zip archive) std::vector<char> fileData = ...; // we play it sf::Music music; music.openFromMemory(&fileData[0], fileData.size()); music.play(); // "ok, it seems that we don't need the source file any longer" fileData.clear(); // ERROR: the music was still streaming the contents of fileData! The behavior is now undefined
sf::Music is not copyable
The final "mistake" is a reminder: the
sf::Music class is not copyable, so you won't be allowed to do that:
sf::Music music; sf::Music anotherMusic = music; // ERROR void doSomething(sf::Music music) { ... } sf::Music music; doSomething(music); // ERROR (the function should take its argument by reference, not by value) | https://en.sfml-dev.org/tutorials/2.0/audio-sounds.php | CC-MAIN-2021-49 | refinedweb | 1,368 | 63.09 |
The Hyperscript Tagged Markup (HTM) library, which proposes an alternative to JSX, released its second major iteration. HTM 2.0 is a rewrite that is faster and smaller than HTM 1.x, has a syntax closer to JSX, and now supports Server-Side Rendering (SSR). With HTM 2.0, developers may enjoy simplified React/Preact workflows for modern browsers.
HTM 2.0 contributors have optimized for performance and size. The release notes explain:
HTM is now 20 times faster, 10% smaller, has 20% faster caching, runs anywhere (native SSR!). The Babel plugin got an overhaul too! It's 1-2 orders of magnitude faster, and no longer requires JSDOM (...).
HTM also ships with a prebuilt, optimized bundle of htm with preact. The library's authors explain:
The original goal for htm was to create a wrapper around Preact that felt natural for use untranspiled in the browser. I wanted to use Virtual DOM, but I wanted to eschew build tooling and use ES Modules directly.
HTM 2.0 strives to deliver a syntax closer to JSX, building on HTM's primary goal of developer-friendliness, as detailed by the library's authors.
HTM 2.0 tag template syntax mostly mirrors that of JSX. It features spread properties (
<div ...${props}>), self-closing tags: (
<div />), components (
<${Foo}>, where Foo is a component reference), and boolean attributes (
<div draggable />). HTM 2.0 syntax is also similar to alternative templating syntaxes hyperHMTL and lit-html.
Luciano Mammino, a developer who successfully used HTM for quick prototyping with Preact, comments:
I will also use
htm, a library that can be easily integrated with Preact to define DOM elements in a very expressive and react-like way (like JSX), without having to use transpilers like Babel.
HTM relies on standard JavaScript's Tagged Templates, which are natively implemented in all modern browsers. With HTM (see
html template tag in following example), developers can describe DOM trees in render functions, with a syntax similar to that of JSX:
render(props, state) { return html` <div class="container mt-5"> <div class="row justify-content-center"> <div class="col"> <h1>Hello from your new App</h1> <div> ${state.loading && html` <p> Loading time from server...</p> `} ${state.time && html` <p>⏱ Time from server: <i>${state.time}</i></p> `} </div> <hr /> </div> </div> </div> ` }
The HTM module must be used in connection with a HyperScript function:
import htm from 'htm'; function h(type, props, ...children) { return { type, props, children }; } const html = htm.bind(h); console.log( html`<h1 id=hello>Hello world!</h1>` ); // { // type: 'h1', // props: { id: 'hello' }, // children: ['Hello world!'] // }
The HyperScript
h function can be customized for server-side rendering to render HTML strings instead of virtual DOM trees. Developers can use HTM anywhere they can use JSX.
HTM 2.0 is ready for production use. HTM is available under the Apache 2.0 open source license. Contributions and feedback may be provided via the htm GitHub project.
Community comments
Thanks for the mention
by Luciano Mammino /
Re: Thanks for the mention
by Bruno Couriol /
Thanks for the mention
by Luciano Mammino /
Your message is awaiting moderation. Thank you for participating in the discussion.
I am glad you found my article and example indicative on how to use HTM :)
I might experiment with this topic even more in the future, trying to cover how to achieve universal rendering (server + client side rendering) and other similar topics
Re: Thanks for the mention
by Bruno Couriol /
Your message is awaiting moderation. Thank you for participating in the discussion.
Thanks for the comment! Those are indeed exciting topics to address and cover. Feel free to stay in touch! I may in the future write an article dedicated to universal rendering, and it is always nice to have good examples to illustrate the approach to our readers. | https://www.infoq.com/news/2019/02/htm-alternative-markup-jsx/ | CC-MAIN-2019-30 | refinedweb | 635 | 56.55 |
Bravo
From HaskellWiki
Latest revision as of 15:20, 4 May 2010
[edit] 1 What is Bravo?
Bravo is a general-purpose text template library inspired by the PHP template engine Smarty and the Haskell template library chunks, providing the parsing and generation of templates at compile time. Templates can be read from strings or files and for each a new record data type is created, allowing convenient access to all template variables in a type-safe manner.
All releases of Bravo are available on Hackage; there's no darcs repository for Bravo at the moment.
[edit] 2 Features
- Static template processing: All templates are read, parsed and processed at compile time, so no extra file access or error handling at runtime is necessary. If the application is supposed to be released in binary form, it's not necessary to ship the template files with the application.
- Multiple templates per file: Some template libraries require the user the create a file for each template, what may be very annoying for a lot of small templates. Bravo allows the user to define multiple templates per file with arbitrary comments between them. This means template documentation can be embedded directly into the template files.
- Conditional template evaluation: Bravo provides conditional template splices (see below), allowing to check conditions at runtime. This removes misplaced logic from the program source code and leads to a cleaner separation of program logic and layout. Beyond that, it helps to decrease the number of templates.
- Embedding of Haskell expressions: Bravo allows the user to embed arbitrary Haskell expressions combined with template variables.
- Customized data type generation: For the data type creation by default the following naming scheme is used:
- For a given template name
"name", the created data type/constructor name is
"TplName".
- For a template
"name"and a template variable
"var"a record field
"nameVar"is created.
- This behaviour can be changed by the
mkTemplatesWithOptionsand
mkTemplatesFromFileWithOptionsfunctions, that allow you to customize this naming scheme.
[edit] 3 Comparison to other template systems
[edit] 3.1 HStringTemplate
The main difference between HStringTemplate and Bravo is that Bravo parses the templates at compile time, while HStringTemplate does at runtime. Besides that, HStringTemplate provides several methods for rendering different data types in different styles. This can also be accomplished in Bravo by writing the appropriate rendering function in your Haskell code and using it in the template.
[edit] 3.2 chunks
chunks and Bravo share the basic idea of parsing templates and generating new data types at compile time. However, Bravo uses a richer syntax allowing conditional template evaluation and the use of Haskell expressions. Moreover chunks doesn't provide a customization of the generated data types. In contrast with that, chunks allows nesting of templates which is not provided by Bravo.
[edit] 4 Usagecondition_1
}} ... [ {{ elseifcondition_2
}} ... ]* [ {{
else }} ... ] {{
endif }}where each
...stands for an arbitrary number of inner splices. Multiple
elseifsplices and/or a single
elsespl. The passed value of type TplOptions contains a
tplMkName function to create the data type/contructior name and the
tplMkFieldName function to create the field names. Additionally, it contains a
tplModifyText function that is applied to each template text splice (see below), allowing e.g. the stripping of extra whitespace or other post-processing.
Finally, use the
show function on a value of the created data type to convert it into a string.
[edit] 5 Security concerns
Bravo allows the template writer to use arbitrary Haskell functions, even the use of
unsafePerformIO is permitted. You may think this is a huge security problem – no, it's not! Using Haskell's module system will fix this problem, as you can see in the following example from the Bravo package:
The template file:
{{tpl safe}} {{: "Hello" ++ " world!" }} {{endtpl}} {{tpl unsafe}} {{: unsafePerformIO (putStrLn "Doing some very bad stuff ..." >> return "Hello world!") }} {{endtpl}}
The templates module:
{-# LANGUAGE TemplateHaskell #-} module Example02Templates ( TplSafe (..), TplUnsafe (..) ) where import Text.Bravo $(mkTemplatesFromFile "Example02.tpl")
The main module:
module Main (main) where import Text.Bravo import Example02Templates main :: IO () main = return ()
This program won't compile, since
unsafePerformIO is not visible in the templates module, whereas e.g.
putStrLn is. All Prelude functions can be hidden, but at least
Show,
show and
concat have to be imported to make Bravo work properly.
[edit] 6 Future work
- Performance: Processing huge template files (> 1MB) sometimes fails. Usually a single template is not that large, so large files can be split and this should not be a big problem. Nevertheless, maybe this could be fixed by the use of attoparsec.
- Input encoding: Support for different input encodings is not yet implemented.
- Custom template splice delimiters:
{{and
}}as delimiters are suitable in most cases. One goal of future releases is to remove these fixed delimiters and provide a wide range of possible delimiters.
- Processing of lists with section splices: Sometimes it is necessary to process a list of values of type A in the same way, leading to format a template for each value, concatenate all the produced strings and pass the result to the variable of a second template. A convenient way to handle this would be a `section splice', that is mapped on a variable with a list of values (see also Smarty sections).
- Caching: Caching of evaluated templates is not provided at the moment.
[edit] 7 Known issues
Bravo depends on haskell-src-meta, where the last stable version 0.0.6 on Hackage doesn't support TH >= 2.4. This is a problem since GHC 6.12 ships with TH 2.4. Though, there's a patched version that supports TH 2.4 available from this darcs repository. Bravo presumes haskell-src-meta >= 0.1.0 to be compatible with TH >= 2.4. | https://wiki.haskell.org/index.php?title=Bravo&diff=34653&oldid=34270 | CC-MAIN-2015-32 | refinedweb | 948 | 56.05 |
My
Android
ListView
ArrayList
You need to do it through an
ArrayAdapter which will adapt your ArrayList (or any other collection) to your items in your layout (ListView, Spinner etc.).
This is what the Android developer guide says:
A
ListAdapterthat manages a
ListViewbacked by an array of arbitrary objects. By default this class expects that the provided resource id references a single
TextView. If you want to use a more complex layout, use the constructors that also takes a field id. That field id should reference a
TextViewin the larger layout resource.
However the
TextViewis referenced, it will be filled with the
toString()of each object in the array. You can add lists or arrays of custom objects. Override the
toString()method of your objects to determine what text will be displayed for the item in the list.
To use something other than
TextViewsfor the array display, for instance
ImageViews, or to have some of data besides
toString()results fill the views, override
getView(int, View, ViewGroup)to return the type of view you want.
So your code should look like:
public class YourActivity extends Activity { private ListView lv; public void onCreate(Bundle saveInstanceState) { setContentView(R.layout.your_layout); lv = (ListView) findViewById(R.id.your_list_view_id); // Instanciating an array list (you don't need to do this, // you already have yours). List<String> your_array_list = new ArrayList<String>(); your_array_list.add("foo"); your_array_list.add("bar"); // This is the array adapter, it takes the context of the activity as a // first parameter, the type of list view as a second parameter and your // array as a third parameter. ArrayAdapter<String> arrayAdapter = new ArrayAdapter<String>( this, android.R.layout.simple_list_item_1, your_array_list ); lv.setAdapter(arrayAdapter); } } | https://codedump.io/share/qut1Fz6XATSg/1/populating-a-listview-using-an-arraylist | CC-MAIN-2017-51 | refinedweb | 278 | 63.9 |
APIBridge Plug-in API
Package Description
The APIBridge plug-in creation package includes the following components:
- APIBridge_Emulator.zip: This file contains the binaries and header files needed to set up the development environment for creating plug-ins.
- Examples: This directory contains sample plug-in code that can be used as a starting point for any plug-in. The EchoServlet sample is used in this document to illustrate the steps for creating a plug-in.
Setting Up the Development Environment
To create a plug-in for the APIBridge, the following tools are required:
- S60 SDK (3rd Edition or later)
- Carbide.c++
The steps outlined below should be followed to set up the development environment.
Add the APIBridge files to your SDK
- Locate the root directory for the SDK. This is the one containing the epoc32 directory (for example, C:\S60\devices\S60_5th_Edition_SDK_v1.0).
- Unzip APIBridge_Emulator.zip in this directory.
Build the EchoServlet plug-in
- Using Carbide.c++, import the project contained in the \Examples\EchoServlet directory.
- Add the SDK you chose in the previous step to the Build Configurations.
- Build the software.
- Create a new Debug Configuration for the project and set the Process to Launch to: <SDK Directory>\epoc32\release\winscw\udeb\APIBridge_20023710.exe.
Test the plug-in
- Launch the project in Debug using the Debug configuration created in the previous step.
- When the emulator has finished loading, install the EchoTest.wgz located in Examples\EchoServlet\test. Do this by opening the file from the emulator file menu.
- Launch the test application by navigating to it inside the emulator: Menu -> Applications -> EchoTest.
Anatomy of a Plug-in
A plug-in is composed of two files:
- The ECOM resource file that tells the framework which requests is the plug-in going to serve
- The Servlet DLL, which implements the code necessary to fulfill the request.
Any plug-in requires the use of two or more UIDs:
- The UID for the DLL (It’s called <DLL UID> in the example);
- The UID(s) for each of the request types serviced by the plug-in (It’s called <request UID> in the example).
In the Example directory of the package you will find the EchoServlet plug-in used in this section.
ECOM resource file
The source for this can be found in \data\20024644.rss. This resource file defines which requests are served by the plug-in.
#include <RegistryInfo.rh>
// Declares info for implementations
RESOURCE REGISTRY_INFO theInfo
{
// UID for the DLL. Must me same as in mmp file
dll_uid = <DLL_UID>; //Change this with your dll UID
// Declare array of interface info
interfaces =
{
INTERFACE_INFO
{
// UID of interface that is implemented. UID of the ApiBridge interface
// DO Not Change
interface_uid = 0x20023711;
implementations =
{
IMPLEMENTATION_INFO
{
//The UID of your implementation
// same as in proxy.cpp implementation
implementation_uid = <request UID>;
version_no = 1;
display_name = "Sample Echo Servlet";
//This is the url that this servlet will serve
default_data = "/sample/echo";
}
};
}
};
}
The three areas that are highlighted above are:
- DLL UID: Tells the system which DLL implements this interface. Make sure that the UID used here is the same as the UID indicated in your DLL’s .mmp file.
TARGET EchoServlet_20024644.dll
TARGETTYPE PLUGIN
UID 0x10009D8D
- Implementation UID: Tells the system how to find the class that would take care of the request. Make sure that this UID is specified in your proxy.cp file:
- IMPLEMENTATION_PROXY_ENTRY(<request uid>, CEchoServlet::NewL)
- Default Data: A string that defines which request URL this plug-in will serve.
The IMPLEMENTATION_INFO block can be repeated as many times as requests are served by the plug-in.
Servlet object
The servlet object will be responsible for creating the instance that will process the request. In the EchoServlet example, it can be found in the \src\EchoServlet.cpp file. Mainly, two functions are required. For more information, see the API description.
- NewL function: The function responsible for creating the object. It is called by the framework when it has been detected that a client has made a request which the plug-in is registered to process. Your proxy file will make this connection.
- CEchoServlet* CEchoServlet::NewL()
- ServiceL function: The function that is the entry point for each request, and it passes the request object as a parameter. The request is completed by using the request object passed.
void CEchoServlet::ServiceL( MHttpRequest* req )
ECOM proxy file
The proxy file can be found in \src\proxy.cpp in the sample project. Its function is to link the request types with the C++ object that will process it.
const TImplementationProxy ImplementationTable[] =
{IMPLEMENTATION_PROXY_ENTRY(<request uid>, CEchoServlet::NewL) };
There can be as many entries in this table as requests processed by this plug-in.
Installation file
There are two files to copy into a device when installing a plug-in: the ECOM resource file in c:\resource\plugins and the dll file in c:\sys\bin. Also the dependency with the APIBridge needs to be stated. The .pkg for the sample can be found in the \sis directory.
;Dependency with APIBridge
[0x2002373F],1, 0, 6,{"APIBridge"}
;Servlet files
"EchoServlet_20024644.rsc" - "C:\Resource\Plugins\EchoServlet_20024644.rsc"
"\EchoServlet_20024644.dll" - "C:\sys\bin\EchoServlet_20024644.dll"
Binding code
For the plug-in to be accessible from the different runtimes, binding code needs to be created. See the following pages:
- Creating APIBridge JavaScript Binding Code
- Binding code for Flash from Adobe (coming soon)
- Binding code for Java™ (coming soon)
APIBridge Plug-in API
This chapter describes the Symbian API available for APIBridge plug-ins.
CServlet class
This is the base class for all servlets; it controls the entry point into the plug-in and the security.
ServiceL function
virtual void ServiceL ( MHttpRequest* req ) = 0
This function must be implemented by the plug-in. It is the entry point into the plug-in code and receives the HTTP request from the binding code.
GetSecurityType function
TSecurityType GetSecurityType ()
The implementation of this function is optional. It informs the framework about which security mechanism would be required by the plug-in.
IsClientValid function
TBool IsClientValid(const TDesC8& hash )
This function must be implemented only if the security mode is set to Secure.
MCookie class
This class represents a cookie in the HTTP request or response.
GetName function
TPtrC8 GetName()
This function returns the name of the cookie. The name cannot be changed after creation.
GetValue function
TPtrC8 GetValue()
This function returns the value of the cookie.
GetVersion function
TInt GetVersion()
This function returns the version of the protocol with which this cookie complies. Version 1 complies with RFC 2109, and version 0 complies with the original cookie specification drafted by Netscape. Cookies provided by a browser use and identify the browser's cookie version.
SetVersion function
void SetVersion (TInt v)
This function sets the version of the cookie protocol to which this cookie complies. Version 0 complies with the original Netscape cookie specification. Version 1 complies with RFC 2109. Since RFC 2109 is still somewhat new, consider version 1 to be experimental; do not use it yet on production sites.
GetPath function
TPtrC8 GetPath()
This function returns the path on the server to which the browser returns this cookie. The cookie is visible to all subpaths on the server.
SetPathL function
SetPathL(const TDesC8& uri)
This function specifies a path for the cookie to which the client should return the cookie..
GetMaxAge function
TInt GetMaxAge()
This function returns the maximum age of the cookie, specified in seconds; by default, -1 indicates the cookie will persist until browser shutdown.
SetMaxAge function
void SetMaxAge(TInt expiry)
This function sets the maximum age of the cookie in seconds. 0 value causes the cookie to be deleted. Returns the maximum age of the cookie, specified in seconds; by default, -1 indicates the cookie will persist until browser shutdown.
MHttpRequest class
This class represents the HTTP request and response objects.
GetRequest function
MHttpServletRequest* GetRequest()
This function provides the request object, which contains the information that was initiated at the binding layer.
GetResponse function
MHttpServletResponse* GetResponse()
This function provides the response object, which contains the information that has to be populated and sent back to the binding layer.
CreateCookieLC function
CreateCookieLC(const TDesC8& name, const TDesC8& value )
This is a helper function that allows an MCookie object to be created.
MHttpServletRequest class
This class represents the HTTP request object, which contains the information that was passed to the plug-in by the binding layer.
GetHeader function
TPtrC8 GetHeader(const TDesC8& name )
This function returns the value of the specified request header as a string. If the request did not include a header of the specified name, this method returns null. The header name is case insensitive. You can use this method with any request header.
GetIntHeaderL function
TInt GetIntHeaderL(const TDesC8& name )
This function returns the value of the specified request header as an integer. If the request does not have a header of the specified name, this method returns -1. If the header cannot be converted to an integer, this method leaves.
GetHeadersL function
void GetHeadersL(const TDesC8& name, RStringCollection& values )
This function returns all the values of the specified request header as an enumeration of string objects.
GetHeaderNamesL function
void GetHeaderNamesL ( RStringCollection& names )
This function returns an enumeration of all the header names this request contains. If the request has no headers, this method returns an empty enumeration.
GetMethod function
TPtrC8 GetMethod()
This function returns the name of the HTTP method with which this request was made, for example, GET, POST, or PUT.
GetPath function
TPtrC8 GetPath()
This function returns the portion of the request URI that indicates the context of the request. The context path always comes first in a request URI. The path starts with a '/' character but does not end with a '/' character. For servlets in the default (root) context, this method returns "".
GetQuery function
TPtrC8 GetQuery()
This function returns the query string that is contained in the request URL after the path. The method returns an empty string if the URL does not have a query string.
GetContent function
TPtrC8 GetContent()
This function returns the content of the request body.
GetCookiesL function
void GetCookiesL( RCookieCollection& cookies )
This function returns an array containing all of the cookie objects the client sent with this request. The method returns an empty array if no cookies were sent.
MHttpServletResponse class
This class represents the HTTP response object that contains the information that will be passed to the binding layer by the plug-in.
AddCookieL function
void AddCookieL( const MCookie& cookie )
This function adds the specified cookie to the response. This method can be called multiple times to set more than one cookie.
AddHeaderL function
void AddHeaderL( const TDesC8& name, const TDesC8& value )
This function adds a response header with the given name and value. The method allows response headers to have multiple values.
AddIntHeaderL function
void AddIntHeaderL ( const TDesC8& name, TInt value )
This function adds a response header with the given name and integer value. The method allows response headers to have multiple values.
ContainsHeader function
TBool ContainsHeader( const TDesC8& name )
This function returns a Boolean indicating whether the named response header has already been set.
SetHeaderL function
void SetHeaderL(const TDesC8& name, const TDesC8& value )
This function sets a response header with the given name and value. If the header had already been set, the new value overwrites the previous one. The containsHeader method can be used to test for the presence of a header.
SetIntHeaderL function
void SetIntHeaderL( const TDesC8& name, TInt value )
This function sets a response header with the given name and integer value. If the header had already been set, the new value overwrites the previous one. The containsHeader method can be used to test for the presence of a header before setting its value.
SetStatus function
void SetStatus( TInt sc )
This function sets the status code for this response. This method is used to set the return status code when there is no error. If there is an error, the sendError method should be used instead.
SendErrorL function
void SendErrorL( TInt sc )
This function sends an error response to the client using the specified status. The server generally creates the response to look like a normal server error page.
void SendErrorL( TInt sc, const TDesC8& msg )
This sends an error response to the client using the specified status code and descriptive message. The server generally creates the response to look like a normal server error page.
SendL function
void SendL( const TDesC8& content )
This function sends a response to the client with specified content.
void SendL( const TUint8* ptr, TInt length )
This sends a response to the client with specified content.
SetListener function
void SetListener( MHttpResponseListener* listener )
This function assigns a listener object to the request. This will be used to get informed when the request is done.
MHttpResponseListener class
This class represents a listener object that is used to get notifications regarding the completion of the request.
OnSendDone function
virtual void OnSendDone() = 0
This function needs to be implement by the object and gets called when the request has been sent to the binding layer. The object needs to be assigned to the response object via the SetListener function in the MHttpServletResponse object.
RQueryParser class
This is a helper class used to parse URI parameters.
ParseL function
void ParseL(const TDesC8& query )
This function initialises the object by ingesting the query string.
GetValue function
TPtrC8 GetValue( const TDesC8& name )
This function retrieves the value for a URI parameter.
HasValue function
TBool HasValue( const TDesC8& name )
This function informs us of the existence of a parameter.
Count function
TInt Count()
This function returns the number of parameters in the query.
GetName function
TPtrC8 GetName(TInt idx)
This function returns the name of the parameter in position idx.
GetValue function
TPtrC8 GetValue(TInt idx)
This function returns the value of the parameter in position idx.
Close function
void Close()
This function closes the object and destroys all internal structures. It must be called when the object is no longer needed. | http://developer.nokia.com/community/wiki/index.php?title=APIBridge_Plug-in_API&oldid=162494 | CC-MAIN-2014-15 | refinedweb | 2,320 | 55.13 |
quandl-api
Quandl.com API library
See all snapshots
quandl-api appears in
quandl-api
This library provides an easy way to download data from Quandl.com in Haskell.
Installation
- Install the Haskell Platform
cabal install quandl-api
Basic Usage
The
getTable function is all you need to download tables.
To get all data points for the dataset FRED/GDP:
import Data.Quandl getTable "FRED" "GDP" Nothing
Registered users should include their auth_token, like this:
import Data.Quandl getTable "FRED" "GDP" (Just "dsahFHUiewjjd")
Advanced Usage
The
getTableWith function allows you to use query a subset of the data,
query multiple tables (multisets), apply frequency conversions,
and apply transformations supported by Quandl.
For example, here is the annual percentage return for AAPL stock
over the previous decade, in ascending date order:
import Data.Quandl import Data.Time (fromGregorian) getTableWith (defaultOptions {opSortAscending = True, opStartDate = Just (fromGregorian 2000 1 1), opEndDate = Just (fromGregorian 2010 1 1), opFrequency = Just Annual, opTransformation = Just RDiff}) [("WIKI", "AAPL", Just 4)] -- Just 4 means we only want the 4'th column (Close price)
You can pull data from multiple datasets (or from multiple columns in a single dataset) using this function as well. In the example below, we combine US GDP from FRED/GDP, crude oil spot prices from DOE/RWTC, and Apple closing prices from WIKI/AAPL. We are going to convert all of them to annual percentage changes, and look only at data for the last 10 years:
import Data.Quandl getTableWith (defaultOptions {opNumRows = Just 10, opFrequency = Just Annual, opTransformation = Just RDiff}) [("FRED", "GDP", Just 1), ("DOE", "RWTC", Just 1), ("WIKI", "AAPL", Just 4)]
See for detailed information on the Quandl API.
Changes
quandl-api changes
0.2.1.0
- Fixed issue #2: fixed build on GHC 7.10
0.2.0.0
- Add a function to access the Quandl search API. Contributed by mwm / fpcomplete. | https://www.stackage.org/lts-6.35/package/quandl-api-0.2.1.0 | CC-MAIN-2018-51 | refinedweb | 310 | 53.31 |
A STREAMS line-discipline module called ldterm (see ldterm(7M)) is a key part of the STREAMS-based terminal subsystem. Throughout this chapter, the terms line discipline and ldterm(7M) are used interchangeably and refer to the STREAMS version of the standard line discipline and not the traditional character version. ldterm performs the standard terminal I/O processing traditionally done through the linesw mechanism.
The termio(7I) and termios(3) specifications describe four flags that are used to control the terminal:
c_iflag (defines input modes)
c_oflag (defines output modes)
c_cflag (defines hardware control modes)
c_lflag (defines terminal functions used by ldterm(7M))
To process these flags elsewhere (for example, in the firmware or in another process), a mechanism is in place to turn on and off the processing of these flags. When ldterm(7M) is pushed, it sends an M_CTL message downstream that asks the driver which flags the driver will process. The driver sends back that message in response if it needs to change the ldterm default processing. By default, ldterm(7M) assumes that it must process all flags except c_cflag, unless it receives a message indicating otherwise.
When ldterm is pushed on the Stream, the open routine initializes the settings of the termio flags. The default settings are:
In canonical mode (ICANON flag in c_lflag is turned on), read from the terminal file descriptor is in message non-discard (RMSGN) mode (see streamio(7I)). This implies that in canonical mode, read on the terminal file descriptor always returns at most one line, regardles of how many characters have been requested. In non-canonical mode, read is in byte-stream (RNORM) mode. The flag ECHOCTL has been added for SunOS 4.1 compatibility.
See termio(7I) for more information on user-configurable settings.
The open routine of the ldterm(7M) module allocates space for holding the TTY structure (see ldtermstd_state_t in ldterm.h) by allocating a buffer from the STREAMS buffer pool. The number of modules that can be pushed on one stream, as well as the number of TTY's in use, is limited. The number of instances of ldterm that have been pushed is limited only by available memory. The open also sends an M_SETOPTS message upstream to set the Stream head high-water and low-water marks to 1024 and 200, respectively. These are the current values (although they may change over time).
The ldterm module identifies itself as a TTY to the stream head by sending an M_SETOPTS message upstream with the SO_ISTTY bit of so_flags set. The Stream head allocates the controlling TTY on the open, if one is not already allocated.
To maintain compatibility with existing applications that use the O_NDELAY flag, the open routine sets the SO_NDELON flag on in the so_flags field of the stroptions(9S) structure in the M_SETOPTS message.
The open routine fails if there are no buffers available (cannot allocate the internal state structure) or when an interrupt occurs while waiting for a buffer to become available.
The close routine frees all the outstanding buffers allocated by this Stream. It also sends an M_SETOPTS message to the Stream head to undo the changes made by the open routine. The ldterm(7M) module also sends M_START messages downstream to undo the effect of any previous M_STOP messages.
The ldterm(7M) module's read side processing has put and service procedures. High-waterand low-water marks for the read queue are 1024 and 200, respectively. These are the current values and may be subject to change.
ldterm(7M) can send the following messages upstream:
M_DATA, M_BREAK, M_PCSIG, M_SIG, M_FLUSH, M_ERROR, M_IOCACK, M_IOCNAK, M_HANGUP, M_CTL, M_SETOPTS, M_COPYOUT, and M_COPYIN.
The ldterm(7M) module's read side processes M_BREAK, M_DATA, M_CTL, M_FLUSH, M_HANGUP, M_IOCACK and M_IOCNAK messages. All other messages are sent upstream unchanged.
The put procedure scans the message for flow-control characters (IXON), signal-generating characters, and after (possible) transformation of the message, queues the message for the service procedure. Echoing is handled completely by the service procedure.
In canonical mode, if the ICANON flag is on in c_lflag, canonical processing is performed. If the ICANON flag is off, non-canonical processing is performed (see termio(7I) for more details). Handling of VMIN/VTIME in the STREAMS environment is somewhat complicated, because read needs to activate a timer in the ldterm module in some cases; hence, read notification becomes necessary. When a user issues an ioctl(2) to put ldterm(7M) in non-canonical mode, the module sends an M_SETOPTS message to the Stream head to register read notification. Further reads on the terminal file descriptor will cause the Stream head to issue an M_READ message downstream and data will be sent upstream in response to the M_READ message. With read notification, buffering of raw data is performed by ldterm(7M). It is possible to canonize the raw data when the user has switched from raw to canonical mode. However, the reverse is not possible.
To summarize, in non-canonical mode, the ldterm(7M) module buffers all data until VMIN or VTIME criteria are met. For example, if VMIN=3 and VTIME=0, and three bytes have been buffered, these characters are sent to the stream head regardless of whether there is a pending M_READ, and no M_READ needs to be sent down. If an M_READ message is received, the number of bytes sent upstream is the argument of the M_READ message, unless VTIME is satisfied before VMIN (for example. the timer has expired) in which case whatever characters are available will be sent upstream.
The service procedure of ldterm(7M) handles STREAMS related flow control. Since the read side high-waterand low-water marks are 1024 and 200 respectively, placing 1024 characters or more on the read queue causes the QFULL flag be turned on, indicating that the module below should not send more data upstream.
Input flow control is regulated by the line-discipline module which generates M_STARTI and M_STOPI high priority messages. When sent downstream, receiving drivers or modules take appropriate action to regulate the sending of data upstream. Output flow control is activated when ldterm(7M) receives flow control characters in its data stream. The module then sets an internal flag indicating that output processing is to be restarted/stopped and sends an M_START/M_STOP message downstream.
Write side processing of the ldterm(7M) module is performed by the write-side put and service procedures.
The ldterm module supports the following ioctls:
TCSETA, TCSETAW, TCSETAF, TCSETS, TCSETSW, TCSETSF, TCGETA, TCGETS, TCXONC, TCFLSH, and TCSBRK.
All ioctl(2) not recognized by the ldterm(7M) module are passed downstream to the neighboring module or driver.
The following messages can be received on the write side:
M_DATA, M_DELAY, M_BREAK, M_FLUSH, M_STOP, M_START, M_STOP, M_START, M_READ, M_IOCDATA, M_CTL, and M_IOCTL.
On the write side, the ldterm module processes M_FLUSH, M_DATA, M_IOCTL, and M_READ messages, and all other messages are passed downstream unchanged.
An M_CTL message is generated by ldterm(7M) as a query to the driver for an intelligent peripheral and to decide on the functional split for termio(7I) processing. If all or part of termio(7I) processing is done by the intelligent peripheral, ldterm(7M) can turn off this processing to avoid computational overhead. This is done by sending an appropriate response to the M_CTL message, as follows:
If all of the termio(7I) processing is done by the peripheral hardware, the driver sends an M_CTL message back to ldterm(7M) with ioc_cmd of the structure iocblk(9S) set to MC_NO_CANON. If ldterm(7M) is to handle all termio(7I) processing, the driver sends an M_CTL message with ioc_cmd set to MC_DO_CANON. The default is MC_DO_CANON.
If the peripheral hardware handles only part of the termio(7I) processing, it informs ldterm(7M) in the following way:
The driver for the peripheral device allocates an M_DATA message large enough to hold a termios(3) structure. The driver then turns on those c_iflag, c_oflag, and c_lflag fields of the termios(3) structure that are processed on the peripheral device by ORing the flag values. The M_DATA message is then attached to the b_cont field of the M_CTL message it received. The message is sent back to ldterm(7M) with ioc_cmd in the data buffer of the M_CTL message set to MC_PART_CANON.
One difference between AT&T STREAMS and SunOS 5 is that AT&T's line discipline module does not check if write side flow control is in effect before forwarding data downstream. It expects the downstream module or driver to add the messages to its queue until flow control is lifted. This is not true in SunOS 5.
The screen width of characters (that is, how many columns are taken by characters from each given code set on the current physical display) and it takes this width into account when calculating tab expansions. When using multi-byte characters or multi-column characters ldterm automatically handles tab expansion (when TAB3 is set) and does not leave this handling to a lower module or driver.
By default, multi-byte handling by ldterm is turned off. When ldterm receives an EUC_WSET ioctl(2), it turns multi-byte processing on, if it is essential to properly handle the indicated code set. Thus, if you use single byte 8-bit codes and has no special multi-column requirements, the special multi-column processing is not used at all. This means that multi-byte processing does not reduce the processing speed or efficiency of ldterm unless it is actually used.
The following describes how the EUC handling in ldterm works:
First, the multi-byte and multi-column character handling is only enabled when the EUC_WSET ioctl indicates that one of the following conditions is met:
Code set consists of more than one byte (including the SS2 and/or SS3) of characters
Code set requires more than one column to display on the current device, as indicated in the EUC_WSET structure
Assuming that one or more of the previous conditions exists, EUC handling is enabled. At this point, a parallel array (see ldterm_mod structure) used for other information is allocated and a pointer to it is stored in t_eucp_mp. The parallel array that it holds is pointed to by t_eucp. The t_codeset field holds the flag that indicates which of the code sets is currently being processed on the read side. When a byte with the high bit arrives, it is checked to see if it is SS2 or SS3. If yes, it belongs to code set 2 or 3. Otherwise, it is a byte that comes from code set 1. Once the extended code set flag has been set, the input processor retrieves the subsequent bytes, as they arrive, to build one multi-byte character. The counter field t_eucleft tells the input processor how many bytes remain to be read for the current character. The parallel array t_eucp holds its display width for each logical character in the canonical buffer. During erase processing, positions in the parallel array are consulted to determine how many backspaces need to be send to erase each logical character. (In canonical mode, one backspace of input erases one logical character, no matter how many bytes or columns that character consumes.) This greatly simplifies erase processing for EUC.
The t_maxeuc field holds the maximum length, in memory bytes, of the EUC character mapping currently in use. The eucwioc field is a substructure that holds information about each extended code set.
The t_eucign field aids in output post-processing (tab expansion). When characters are output, ldterm(7M) keeps a column to indicate what the current cursor column is supposed to be. When it sends the first byte of an extended character, it adds the number of columns required for that character to the output column. It then subtracts one from the total width in memory bytes of that character and stores the result in t_eucign. This field tells ldterm(7M) how many subsequent bytes to ignore for the purposes of column calculation. (ldterm(7M) calculates the appropriate number of columns when it sees the first byte of the character.)
The field t_eucwarn is a counter for occurrences of bad extended characters. It is mostly useful for debugging. After receiving a certain number of illegal EUC characters (perhaps because of some problem on the line or with declared values), a warning is given on the system console.
There are two relevant files for handling multi-byte characters: euc.h and eucioctl.h. eucioctl.h contains the structure that is passed with EUC_WSET and EUC_WGET calls. The normal way to use this structure is to get CSWIDTH from the locale using a mechanism such as getwidth(3C) or setlocale(3C) are used to set and get EUC widths (these functions assume the environment where the eucwidth_t structure is needed and available):
#include <eucioctl.h> /* need others,like stropts.h*/ struct eucioc eucw; /*for EUC_WSET/WGET to line disc*/ eucwidth_t width; /* ret struct from _getwidth() */ /* * set_euc Send EUC code widths to line discipline. */ set_euc(struct eucioc *e) { struct strioctl sb; sb.ic_cmd = EUC_WSET; sb.ic_timout = 15; sb.ic_len = sizeof(struct eucioc); sb.ic_dp = (char *) e; if (ioctl(0, I_STR, &sb) < 0) fail(); } /* * euclook. Get current EUC code widths from line discipline. */ euclook(struct eucioc *e) { struct strioctl sb; sb.ic_cmd = EUC_WGET; sb.ic_timout = 15; sb.ic_len = sizeof(struct eucioc); sb.ic_dp = (char *) e; if (ioctl(0, I_STR, &sb) < 0) fail(); printf("CSWIDTH=%d:%d,%d:%d,%d:%d", e->eucw[1], e->scrw[1], e->eucw[2], e->scrw[2], e->eucw[3], e->scrw[3]); }
For more detailed descriptions, see System Interface Guide. | http://docs.oracle.com/cd/E19620-01/805-4038/termsub15-25833/index.html | CC-MAIN-2017-30 | refinedweb | 2,269 | 61.26 |
string.maketrans() method not working
Today, I tried to code in pythonista for ios, everything seems okay. But, when I use maketrans method , it not working at all. What happened? I tried the code in pycharm, it worked perfectly.
Please help.
This is my code:
import string
str = 'map.html'
i = 'abcdefghijklmnopqrstuvwxyz'
o = 'cdefghijklmnopqrstuvwxyzab'
table = string.maketrans(i, o)
print (str.translate(table))
stris the name of Python's string type. In your code, you're overwriting it with another object. This is not a good idea (it means that you cannot use the
strtype properly anymore) but normally it would only affect your own code. Here in Pythonista it also breaks some internal
sor
nameor
urlor whatever you like.
I had just done this
import string #str = 'map.html' s = 'map.html' i = 'abcdefghijklmnopqrstuvwxyz' o = 'cdefghijklmnopqrstuvwxyzab' table = string.maketrans(i, o) print str.translate(s, table)
I see. Now, it worked perfectly.
So, in pythonista, this word already taken.
Many thanks!
In Python this word is taken.
Exactly.
stris the string class, and you can use it to convert an object to a string. For example,
str(42)is the same as
'42'. This is important when you want to do something like this:
print("Your score is: " + str(42))
Without using
str, you would get a
TypeError, because you can't add a string and a number.
This means that if you assign something different to the name
str, you cannot use the original
strclass anymore. The same goes for other standard types like
int,
float,
list,
dict, etc. | https://forum.omz-software.com/topic/3062/string-maketrans-method-not-working | CC-MAIN-2021-17 | refinedweb | 261 | 77.74 |
Regression analysis is used to quantify the relationship between one or more explanatory variables and a response variable.
The most common type of regression analysis is simple linear regression, which is used when a predictor variable and a response variable have a linear relationship.
However, sometimes the relationship between a predictor variable and a response variable is nonlinear.
For example, the true relationship may be quadratic:
Or it may be cubic:
In these cases it makes sense to use polynomial regression, which can account for the nonlinear relationship between the variables.
This tutorial explains how to perform polynomial regression in Python.
Example: Polynomial Regression in Python
Suppose we have the following predictor variable (x) and response variable (y) in Python:
x = [2, 3, 4, 5, 6, 7, 7, 8, 9, 11, 12] y = [18, 16, 15, 17, 20, 23, 25, 28, 31, 30, 29]
If we create a simple scatterplot of this data, we can see that the relationship between x and y is clearly not linear:
import matplotlib.pyplot as plt #create scatterplot plt.scatter(x, y)
Thus, it wouldn’t make sense to fit a linear regression model to this data. Instead, we can attempt to fit a polynomial regression model with a degree of 3 using the numpy.polyfit() function:
import numpy as np #polynomial fit with degree = 3 model = np.poly1d(np.polyfit(x, y, 3)) #add fitted polynomial line to scatterplot polyline = np.linspace(1, 12, 50) plt.scatter(x, y) plt.plot(polyline, model(polyline)) plt.show()
We can obtain the fitted polynomial regression equation by printing the model coefficients:
print(model) poly1d([ -0.10889554, 2.25592957, -11.83877127, 33.62640038])
The fitted polynomial regression equation is:
y = -0.109x3 + 2.256x2 – 11.839x + 33.626
This equation can be used to find the expected value for the response variable based on a given value for the explanatory variable. For example, suppose x = 4. The expected value for the response variable, y, would be:
y = -0.109(4)3 + 2.256(4)2 – 11.839(4) + 33.626= 15.39. = numpy.polyfit(x, y, degree) p = numpy.poly1d(coeffs) #calculate r-squared yhat = p(x) ybar = numpy.sum(y)/len(y) ssreg = numpy.sum((yhat-ybar)**2) sstot = numpy.sum((y - ybar)**2) results['r_squared'] = ssreg / sstot return results #find r-squared of polynomial model with degree = 3 polyfit(x, y, 3) {'r_squared': 0.9841113454245183}
In this example, the R-squared of the model is 0.9841. This means that 98.41% of the variation in the response variable can be explained by the predictor variables. | https://www.statology.org/polynomial-regression-python/ | CC-MAIN-2022-21 | refinedweb | 432 | 57.47 |
Lead Image © Claudiarndt, photocase.com
Controlling Amazon Cloud with Boto
Free Floating
The Amazon Cloud offers a range of services for dynamically scaling your own server-based services, including the Elastic Compute Cloud (EC2), which is the core service, various storage offerings, load balancers, and DNS. It makes sense to control these via the web front end if you only manage a few services and the configuration rarely changes. The command-line tool, which is available from Amazon for free [1], is better suited for more complex setups.
If you want to write your own scripts to control your own cloud infrastructure, however, Boto provides an alternative in the form of an extensive Python module that largely covers the Amazon API.
Python Module
Boto was written by Mitch Garnaat, who lists a number of application examples in his blog [2]. Boto is available under an undefined free license from GitHub [3]. You can install the software using the Python Package Manager, Pip:
pip install boto
To use the library, you first need to generate a connection object that maps the connection to the respective Amazon service:
from boto.ec2 import EC2Connection conn = EC2Connection(access_key, secret_key)
As you can see, the
EC2Connection function expects two keys that you can manage via Amazon Identity and Access Management (IAM). In the Users
section of the IAM Management Console, you can go to User Actions
and select Manage Access Keys
to generate a new passkey (Figure 1). The key is immediately ready for downloading but disappears later for security reasons. In other words, you want to download the CSV file and store it in a safe place where you can find it again even six months later.
The preceding call connects to the default region of the Amazon Web Service, which is located in the United States. To select a different region, Boto provides a
connect_to_region() call. For example, the following command connects to the Amazon datacenter in Ireland:
conn = boto.ec2.connect_to_region("eu-west-1")
Thanks to the connection object, users can typically use all the features a service provides. In the case of EC2, this equates to starting an instance, which is similar to a virtual machine. To do this, you need an image (AMI) containing the data for the operating system. Amazon itself offers a number of images, including free systems as well as commercial offerings, such as Windows Server or Red Hat Enterprise Linux. Depending on the license, use is billed at a correspondingly higher hourly rate.
AWS users have created images that are preconfigured for web services, for example, and any kind of application you can imagine. For the time being, you can discover the identifier you need to start an image instance in the web console, which also includes a search function. The following function launches the smallest Amazon instance on offer using the
ami-df9b8bab image:
conn.run_instances('ami-df9b8bab', instance_type='m1.small')
If you take a look at the EC2 front end, you can see the instance launch (Figure 2). That's it basically; you can now use the newly created virtual server. Amazon points out that users themselves are responsible for ensuring that the selected instance type and the AMI match. The cloud provider does not perform any checks.
Reservation
To stop an instance you have started, it would be useful to have direct access to it in Boto. This access can be achieved by storing the return value for the above call in a variable:
run_instances() returns a reservation containing an array field named
instances that stores all the concurrently started instances. Listing 1 shows the complete script for starting and stopping an instance.
Listing 1
launch.py
01 import boto.ec2 02 03 conn = boto.ec2.connect_to_region("eu-west-1") 04 reservation = conn.run_instances('ami-df9b8bab', instance_type='m1.small') 05 instance = reservation.instances[0] 06 07 raw_input("Press ENTER to stop instance") 08 09 instance.terminate()
For better orientation with a large number of instances, the Amazon Cloud provides tags. Thus, you can introduce a name tag that you assign when you create the instance:
instance.add_tag("Name", "Mailserver")
The instance object's
tags attribute then stores the tags as a dictionary.
To help you stop multiple instances, the EC2 module provides the
terminate_instances() method, which expects a list of instance IDs as parameters. If you did not store the reservations at the start, you can now retrieve them with the somewhat misleadingly named
get_all_instances(). Again, each reservation includes a list of instances – even the ones that have been terminated in the meantime:
>>> reservations = conn.get_all_instances() >>> reservations[0].instances[0].id u'i-1df79851' >>> reservations[0].instances[0].state u'terminated'
If you add a few lines, like those in Listing 2, to a script, you can call the script with
python -i. You then end up in an interactive Python session in which you can further explore the EC2 functions – for example, using the
dir<(object)> command:
$ python -i interact.py r-72ffb53e i-1df79851 terminated r-9b6b61d4 i-3021247f terminated >>>
Listing 2
interact.py
01 # call with python -i 02 import boto.ec2 03 04 conn = boto.ec2.connect_to_region("eu-west-1") 05 reservations = conn.get_all_instances() 06 07 for r in reservations: 08 for i in r.instances: 09 print r.id, i.id, i.state
To save yourself the trouble of repeatedly typing the region and the authentication information, Boto gives you the option of storing these variables in a configuration file, either for each user as
~/.boto or globally as
/etc/boto.cfg.
The configuration file is divided into sections that usually correspond to a specific Amazon service. Access credentials are stored in the
Credentials section, which is not really convenient, because in addition to the region, the user also needs to enter the matching endpoint; however, you only need to do this work the first time. You can retrieve the required data (e.g., as in Listing 3). A complete configuration file containing a region, an endpoint, and access credentials is shown in Listing 4.
Listing 3
Regions
01 >>> import boto.ec2 02 >>> regions = boto.ec2.regions() 03 >>> regions 04 [RegionInfo:ap-southeast-1, RegionInfo:ap-southeast-2, \ RegionInfo:us-west-2, RegionInfo:us-east-1, \ RegionInfo:us-gov-west-1, RegionInfo:us-west-1, \ RegionInfo:sa-east-1, RegionInfo:ap-northeast-1, \ RegionInfo:eu-west-1] 05 >>> regions[-1].name 06 'eu-west-1' 07 >>> regions[-1].endpoint 08 'ec2.eu-west-1.amazonaws.com'
Listing 4
~/.boto
01 [Boto] 02 ec2_region_name = eu-west-1 03 ec2_region_endpoint = ec2.eu-west-1.amazonaws.com 04 05 [Credentials] 06 aws_access_key_id=AKIABCDEDFGEHIJ 07 aws_secret_access_key=ZSDSLKJSDSDLSKDJTAJEm+7fjU23223131VDXCXC+
As mentioned before, the regional data is service specific; after all, you want to run the services in different regions. The region for EC2 in this example is thus
ec2_<region_name>, whereas the S3 storage service uses
s3_<region_name>, and the load balancer uses
elb_<region_name>, and so on.
The API is a little confusing because some functions are assigned to individual services and can be found in the matching Python modules, whereas others with the same tasks are located in the
boto module as partially static functions. For example, to set up a connection, you have
boto.ec2.connect_to_region() and
boto.connect_ec2(). Although
boto.connect_s3 follows the same pattern, the corresponding call in the S3 module is named
S3Connection. The API documentation [4] provides all the details.
Load Balancers
To set up a load balancer in the Amazon Cloud with Boto, you first need a Connection object, which you can retrieve with the
boto.ec2.elb.connect_to_region() or
boto.connect_elb() calls. Another function creates a load balancer, to which you can assign individual nodes:
lb = conn.create_load_balancer('lb', zones, ports) lb.register_instances(['i-4f8cf126'])
Alternatively, you can use the Autoscaling module that automatically adds nodes to the load balancer or removes them depending on the load.
The Boto library includes modules for every service Amazon offers. Thus, besides EC, it includes the basic services, such as Secure Virtual Network, Autoscaling, DNS, and Load Balancing. For simple web applications with greater scalability and availability requirements, these tools will take you quite a long way. Additionally, Amazon offers a wide range of possibilities for storing data, including the Dynamo distributed database, the Elastic Block Store, a relational database, and a large-scale system for a "data warehouse."
Of course, you can always install a MySQL, PostgreSQL, or NoSQL database on a number of instances of Linux and replicate it on other PCs. For load distribution, you would then use the load balancing service, possibly in cooperation with the Amazon CloudFront content delivery network.
When it comes to managing your own cloud installation, you have too many choices, rather than too few. In addition to autoscaling, your options include Elastic Beanstalk, OpsWorks, and Cloud Formation, all of which offer similar features. This list of services, however, is not complete; the PDF on Amazon's website [5] provides an overview. | http://www.admin-magazine.com/Archive/2013/18/Controlling-Amazon-Cloud-with-Boto | CC-MAIN-2018-05 | refinedweb | 1,493 | 55.54 |
'if' Statement - Braces
Java Notes'if' Statement - Braces
Braces { } not required for one... statement,
you do not need to use braces (also called "curly brackets").
Form
The if statement doesn't need braces if there is only one controlled statement.
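A quick sketch of the rule (the class and method names here are mine, not from the original notes):

```java
public class BraceDemo {
    // One controlled statement: braces are optional.
    static int abs(int n) {
        if (n < 0)
            n = -n;
        return n;
    }

    // More than one controlled statement: braces are required.
    static int clampToZero(int n) {
        if (n < 0) {
            System.out.println("clamping " + n);
            n = 0;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(abs(-5));         // 5
        System.out.println(clampToZero(-3)); // 0
    }
}
```

Many style guides recommend always using braces anyway, since later adding a second statement to a brace-less if is a classic source of bugs.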
Java Review: Control Flow
Java Review: Control Flow
Java uses the dominant imperative control flow paradigm.
Other paradigms are declarative programming and data flow.
Structured Programming control flow primitives within a method:
Sequence
Counting Objects Clandestinely - Java Tutorials
Counting Objects Clandestinely
2001-12-28 The Java Specialists' Newsletter [Issue 038b] - Counting Objects Clandestinely - Follow-up
Author:
Dr. Heinz M... you use a "boolean counting" variable?
A: I want to avoid counting objects
Counting Objects Clandestinely - Java Tutorials
Counting Objects Clandestinely
2001-12-28 The Java Specialists' Newsletter [Issue 038a] - Counting Objects Clandestinely
Author:
Dr. Heinz M. Kabutz... Java programmers ;-).
Counting Objects Clandestinely
A few months ago I
Loop Control flow enhancement discussion - Java Tutorial
Loop Control flow enhancement discussion
2001-04-28 The Java Specialists... product for practicing for the SUN Certified Java Programmer Examination.../implement regular program flow.
I just picked up some code from another group
Summary - Control Flow
Java: Summary - Control Flow
Each control statement is one logical statement,
which often encloses a block of statements in curly braces... (testExpression);
Other Flow Control Statements
Method Return
numbers - Java Beginners
numbers Write a program to create a file named "numbers.dat". Then create an algorithm that adds all even numbered integers from 1 to 100, separated by a comma. After the file has been created, close and reopen the file
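A minimal sketch of one way to answer this (the file name comes from the question; the class and method names are my own):

```java
import java.io.FileWriter;
import java.io.IOException;

public class EvenNumbersFile {
    // Writes the even integers in [1, max] to the file, comma-separated,
    // and returns their sum.
    static int writeEvens(String path, int max) throws IOException {
        StringBuilder sb = new StringBuilder();
        int sum = 0;
        for (int i = 2; i <= max; i += 2) {
            if (sb.length() > 0) sb.append(",");
            sb.append(i);
            sum += i;
        }
        try (FileWriter out = new FileWriter(path)) {
            out.write(sb.toString());
        }
        return sum;
    }

    public static void main(String[] args) throws IOException {
        int sum = writeEvens("numbers.dat", 100);
        System.out.println("Sum of even numbers from 1 to 100: " + sum); // 2550
    }
}
```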
word and character counting - Java Beginners
word and character counting here is the java code i made but i have to add something where it will read the inFile and display the number of words and number of characters.. can you help me with it? thanks.. :)
import
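The code in the snippet above is cut off, so here is a hedged sketch of the counting part (names are my own; in the original question the text would first be read from the input file):

```java
public class WordCharCounter {
    // Returns {words, characters} for the given text.
    static int[] count(String text) {
        String trimmed = text.trim();
        int words = trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
        return new int[] { words, text.length() };
    }

    public static void main(String[] args) {
        int[] c = count("the quick brown fox");
        System.out.println("Words: " + c[0] + ", characters: " + c[1]);
    }
}
```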
counting the positive,negative and zero nos. - Java Beginners
counting the positive, negative and zero nos. Create a program that will input n numbers and determine the number of positive, negative and zero values.
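A small sketch of such a counter (names are my own):

```java
public class SignCounter {
    // Returns {positives, negatives, zeros} for the given numbers.
    static int[] count(int[] nums) {
        int[] c = new int[3];
        for (int n : nums) {
            if (n > 0) c[0]++;
            else if (n < 0) c[1]++;
            else c[2]++;
        }
        return c;
    }

    public static void main(String[] args) {
        int[] c = count(new int[] { 3, -1, 0, 7, -4, 0 });
        System.out.println("positive=" + c[0] + " negative=" + c[1] + " zero=" + c[2]);
    }
}
```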
Counting bytes on Sockets,java tutorial,java tutorials
Counting bytes on Sockets
2002-10-09 The Java Specialists' Newsletter [Issue 058] - Counting bytes on Sockets
Author:
Dr. Heinz M. Kabutz
If you... to the 58th edition of The Java(tm) Specialists' Newsletter sent to 4814 Java
Numbers
Java NotesNumbers
Two kinds of numbers.
There are basically two kinds of numbers in Java and most other programming languages:
binary integers (most commonly int) and binary floating-point numbers (most commonly double).
Even though numbers are stored in binary, you will usually use decimal numbers in your
Java source program
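A tiny illustration of the practical difference between the two kinds (my own example, not from the original notes):

```java
public class NumberKinds {
    public static void main(String[] args) {
        int i = 7;                      // binary integer, stored exactly
        double d = 0.1;                 // binary floating point: 0.1 has no exact binary form
        System.out.println(i / 2);      // 3, integer division truncates
        System.out.println(0.1 + 0.2);  // 0.30000000000000004, not 0.3
    }
}
```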
defining numbers in Java Script
defining numbers in Java Script Explain about defining numbers in Java Script
Interview Tips - Java Interview Questions
Interview Tips Hi,
I am looking for a job in Java/J2EE. Please give me interview tips: which topics should I be strong in, and how should I prepare? Looking for a job, 3.5 yrs experience.
prime numbers - Java Beginners
prime numbers Write a java program to find the prime numbers between n and m
automorphic numbers
automorphic numbers how to find automorphic number in java
Hi Friend,
Please visit the following link:
Automorphic numbers
Thanks
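Since the linked page is not reproduced here, a short sketch of the idea: a number is automorphic when its square ends in the number itself, for example 5*5 = 25 and 76*76 = 5776 (names are my own):

```java
public class Automorphic {
    // True when n's square ends with the digits of n.
    static boolean isAutomorphic(long n) {
        long square = n * n;
        long mod = 1;
        for (long t = n; t > 0; t /= 10) mod *= 10;  // 10^(digit count of n)
        return square % mod == n;
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++)
            if (isAutomorphic(i)) System.out.print(i + " ");  // 1 5 6 25 76
        System.out.println();
    }
}
```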
even more circles - Java Beginners
even more circles Write an application that compares two circle objects.
- You need to include a new method equals(Circle c) in Circle class. The method compares the radiuses.
- The program reads two radiuses from the user
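A sketch of the comparison described above (the field and class layout are assumptions):

```java
public class Circle {
    private final double radius;

    public Circle(double radius) { this.radius = radius; }

    // The exercise asks for equals(Circle c), which overloads rather than
    // overrides Object.equals; a production class would instead override
    // equals(Object) and hashCode together.
    public boolean equals(Circle c) {
        return c != null && Double.compare(radius, c.radius) == 0;
    }

    public static void main(String[] args) {
        Circle a = new Circle(2.5), b = new Circle(2.5), c = new Circle(4.0);
        System.out.println(a.equals(b)); // true
        System.out.println(a.equals(c)); // false
    }
}
```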
Easy Counting Java Program...Please Help? - Java Beginners
Easy Counting Java Program...Please Help? As I said, I am a beginner at Java. I came across this problem that I'm sure uses Java. Could you please tell me the code to do this and possibly explain?
How many ways
flow charts
flow charts draw a flow chart program with a user prompt of 5 numbers computing the maximum, minimum and average
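The computation the flow chart describes, sketched as code (names are mine):

```java
public class Stats {
    // Returns {max, min, average} of the given numbers.
    static double[] stats(int[] nums) {
        int max = nums[0], min = nums[0], sum = 0;
        for (int n : nums) {
            if (n > max) max = n;
            if (n < min) min = n;
            sum += n;
        }
        return new double[] { max, min, (double) sum / nums.length };
    }

    public static void main(String[] args) {
        double[] s = stats(new int[] { 4, 9, 1, 7, 4 });
        System.out.println("max=" + s[0] + " min=" + s[1] + " avg=" + s[2]);
    }
}
```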
random numbers - Java Beginners
random numbers write a program to accept 50 numbers and display 5 numbers randomly Hi Friend,
Try the following code:

import java.util.Random;
import java.util.Scanner;

public class RandomNumbers {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int[] numbers = new int[10];
        System.out.println("Enter 10 numbers: ");
        for (int i = 0; i < 10; i++) {
            numbers[i] = input.nextInt();
        }
        Random random = new Random();
        System.out.println("5 of them picked at random:");
        for (int i = 0; i < 5; i++) {
            System.out.println(numbers[random.nextInt(numbers.length)]);
        }
    }
}
Prime Numbers
Prime Numbers Create a complete Java program that allows the user to enter a positive integer n, and which then creates and populates an int array with the first n prime numbers. Your program should then display the contents
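One hedged sketch of such a program (reading n from the user is replaced with a fixed n here so the example is self-contained; names are mine):

```java
public class FirstPrimes {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    // Creates and populates an int array with the first n primes.
    static int[] firstPrimes(int n) {
        int[] primes = new int[n];
        int found = 0;
        for (int candidate = 2; found < n; candidate++)
            if (isPrime(candidate)) primes[found++] = candidate;
        return primes;
    }

    public static void main(String[] args) {
        for (int p : firstPrimes(10)) System.out.print(p + " ");
        // 2 3 5 7 11 13 17 19 23 29
        System.out.println();
    }
}
```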
Perfect Numbers - Java Beginners
+ 2 + 3
Write a java program that finds and prints the three smallest perfect numbers. Use methods
Hi Friend,
Try the following code:
public
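The answer's code is cut off above; a sketch of one way to do it (6 = 1 + 2 + 3 is the first perfect number; names are mine):

```java
public class PerfectNumbers {
    // A perfect number equals the sum of its proper divisors.
    static boolean isPerfect(int n) {
        if (n < 2) return false;
        int sum = 1;  // 1 divides every n > 1
        for (int d = 2; d * d <= n; d++) {
            if (n % d == 0) {
                sum += d;
                if (d != n / d) sum += n / d;  // add the paired divisor once
            }
        }
        return sum == n;
    }

    public static void main(String[] args) {
        int found = 0;
        for (int n = 2; found < 3; n++) {
            if (isPerfect(n)) {
                System.out.println(n);  // 6, 28, 496
                found++;
            }
        }
    }
}
```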
GUI Tips
Java NotesGUI Tips
[Beginning of list of GUI tips -- needs much more]
Program structure
main can be in any class, but it's often
simplest to understand if it's in a separate class.
main should do very little work
Java program - convert words into numbers?
Java program - convert words into numbers? convert words into numbers?
had no answer sir
Generating random numbers in a range with Java
Generating random numbers in a range with Java Generating random numbers in a range with Java
Tips & Tricks
6 allows an application to show a splash screen even before the Java runtime... to be displayed more quickly to the user i.e. even before starting of Java runtime.
Read...Tips & Tricks
Random numbers - Introduction
Java NotesRandom numbers - Introduction
When to use random numbers
There are many types of programs that use random numbers.
Game programs use them... numbers.
You might want to use random numbers to change the
appearance
Textbox allows only numbers in java wicket
Textbox allows only numbers in java wicket Please provide me wicket code for text box that allows only numbers to type.
Thank you
adding two numbers - Java Beginners
information :
Thanks
generating random numbers - Java Beginners
Add two big numbers - Java Beginners
Add two big numbers - Java Beginners Hi,
I am beginner in Java and leaned basic concepts of Java. Now I am trying to find example code for adding big numbers in Java.
I need basic Java Beginners example. It should easy
Divide 2 numbers
Divide 2 numbers Write a java program to divide 2 numbers. Avoid division by zeor by catching the exception.
class Divide
{
public static void main(String[] args)
{
try{
int num1=8
Add Complex Numbers Java
How to Add Complex Numbers Java
In this Java tutorial section, you will learn how to add complex Numbers in Java Programming Language. As you are already aware of Complex numbers. It is composed of two part - a real part and an imaginary
Applet for add two numbers
Applet for add two numbers what is the java applet code for add two numbers?
import java.awt.Graphics;
import javax.swing.*;
public...);
add(text2);
label3 = new Label("Sum of Two Numbers
Finding all palindrome prime numbers - Java Beginners
Finding all palindrome prime numbers How do i write a program to Find all palindrome prime numbers between two integers supplied as input (start and end points are excluded
Printing numbers in pyramid format - Java Beginners
Printing numbers in pyramid format Q) Can you please tel me the code to print the numbers in the following format:
1
2 3
4 5 6
7 8 9 10
Hi Friend,
Try
Outsourcing Communication Tips,Useful Cultural Tips in Offshore Outsourcing,Communication and Culture Tips
Communication and Culture Tips in Offshore Outsourcing Relationships... management is to ensure adequate flow of information among all the stakeholders... aspect is often underestimated, even in situations where a lot of potential
Random Numbers - shuffling
Java NotesRandom Numbers - shuffling
A standard approach to shuffling the elements of an array is to write some
code as described below. As of Java 2...
is described at the bottom.
Shuffling: Random numbers without replacement
Tips and Tricks
Tips and Tricks
... in Java
Java provides a lot of fun while programming. This article shows you how... and keyboard related operation through java code for the purposes of test automation, self
Tips for Increasing Money Making Abilities of Your Articles
Tips for Increasing Money Making Abilities of Your Articles... get a chance of receiving more sales.
6 tips for increasing your article... on the capabilities of your article in helping the reader. The flow of words should
Tips & Tricks
Tips & Tricks
Here are some basic implementations of java language, which you would... screen button on the keyboard, same way we can do it through java programming
Add Two Numbers in Java
Add Two Numbers in Java
... these
arguments and print the addition of those numbers. In this example, args....
These passed arguments are of String types so these can't be added
as numbers
Programming
Java NotesProgramming
Here are some tips on making programming student... a program that's vaguely similar
in it's interface, even it it solves... and you will be rewarded. Eg, to get it to indent
your code, match braces
Useful Negotiation Tips on Outsourcing, Helpful Negotiation Tips
Outsourcing-Negotiation Tips
Introduction
The principles for negotiations are the same as the things
you need to keep in mind while... and vendor. However this is
easier said than done. Some of the issues can get even.
Comparing Two Numbers
Comparing Two Numbers
This is a very simple example of Java that teaches you the method of
comparing two numbers and finding out the greater one. First of all, name a
class
Tips and Tricks
Tips and Tricks
Send
data from database in PDF file as servlet response... is a java library containing classes to generate documents in PDF, XML, HTML, and RTF
java
or close braces properly or semicolon is missing somewhere. Check it.
Even...java when compiling a java program,following error messages are shown on command prompt.identify the resons to appear those error.
a)DoTest.java:13
greatest of 3 numbers using classes and functions.
greatest of 3 numbers using classes and functions. WAP to calculate greatest of 3 numbers using classes and functions with parameters through input in main?
Java Find Largest Number
import java.util.*;
class
Converting Numbers to Strings
Java: Converting Numbers to Strings
Vanilla Java: Converting numbers... by the half even algorithm
Floating-point numbers may be rounded when..., ...
Concatenation works with all objects, not just numbers.
When Java needs to convert
Swapping of two numbers
Swapping of two numbers
This Java programming tutorial will teach you the
methods for writing program to calculate swap of two numbers. Swapping
is used where you want
Java Break continue
' statement flow of control inside a loop can be continued even
when the specified... Java Break continue
Java has two keywords break and continue in its
branching
Java Switch Statement
Java Switch
Statement
In java, switch is one of the control statement which
turns the normal flow..., that further follows cases, all enclosed in braces. The switch
statement executes
Top 10 Tips for Good Website Design
flow and business conversion is really what determines the success of your web presence. Good website design tips reflect on the art of mastering the traffic... are our picks on top 10 tips for good website design.
Enhance your page speed
Java Glossary Term - D
Java Glossary Term - D
... that is used for converting the decimal numbers into Strings.
This class is also used for parsing the Strings back in to binary numbers.
It works with a String
Tips 'n' Tricks
Tips 'n' Tricks
Download files data from many
URLs
This Java program... arguments separated by space. Java provides
URLConnection class that represents
Offshore Outsourcing Tips,Useful Offshore Outsourcing Tips,Helpful Outsourcing Tips
Offshore Outsourcing Tips
Introduction
What is the perfect... in such a situation. Even seasoned players make
mistakes at times... outsourcing.
However here are some tips that can help you | http://www.roseindia.net/tutorialhelp/comment/82588 | CC-MAIN-2014-15 | refinedweb | 1,923 | 53.41 |
This such a model can be described. If, for instance, we use a language such as RDF, this would be modeled as follows:
What you see is a set of assertions that identify that there exists a class named “person”, and that this class has two properties. The domain and range assertions on each property are important, because they indicate what class the property is used on (the domain) and what is expected as the value of the property (the range).Note also that <person> is a class, but <person1> is an instance of that class.It’s important to note that this description is in fact agnostic with regard to a physical representation. For instance, the model above could just as readily identify JSON or XML, as is shown in Listing 3.
So far, so good. Now, suppose that Jane decides to get married, and changes her name to her husband’s lastName, (he’s James Dean). This is where modelers face a conundrum. You could of course simply change the name, but what if you wanted to actually track all of the names that a person has.This occurs all the time, by the way - people exist over time, and things change.
Most relational database designers have to make a conscious decision about whether to create a new table or not, but this is again a physical model issue, not a logical one. You also can’t just say you have a new firstName1, firstName2, etc., because that makes a limiting assumption about how often people change their name. Instead, the usual solution is to bite the bullet and create a table (or, in modeling terms, set up a first normal form).
This may solve one problem - acknowledging that some items can have cardinality (the range of potential numbers of items) beyond 1 - but there’s a much more subtle issue that can cause a great degree of grief if not properly handled.
The model given above makes no assumption about the order of items - if you have three items, the system will not preferentially return them in the order that they are entered (e.g., (“a”, “b”, “c”, “d”) could just as readily be returned as (“c”, “b”, “d”, “a”)). In order to facilitate this, it becomes necessary to dig a little deeper into the data model.
An array is an abstract data structure, though one used so often that people tend to forget that it is in fact an abstraction. I can describe that array at a data model level by defining a couple of classes and associated properties:
This may seem like overkill, but in practice what it does is provide an abstract layer that says that anything that is a subclass of an array can hold anything that is identified as a subclass of an array item. I use the notation <_index_> to indicate that these would likely be “consumed” by the conversion process to a given format such as JSON or XML (these would likely be in a different namespace if we used them here, rather than using a potentially breakable syntactic convention).Now, with the above defined, it becomes possible to change the model from above:
That may seem like a lot of work, but there are some major benefits to be gained by going into the deeper model. This creates a consistent data model for SQL, XML and JSON, making each of these interchangeable. Listing 7 illustrates how each of these gets rendered through a semantically neutral process:
The two JSON formats represent two different approaches, the first being a class key oriented approach, where the class name acts as the key, while the second takes an identifier approach, which treats the class as another attribute (given by _type_). In the XML format, the _type_ information is given by the XML tag, while _id_ is treated as an attribute.So what happened to the indexes?
Remember that both XML and JSON do respect ordering within a component, the first as an implicit (albeit due to a historical artifact) characteristic of the language, the second through the explicit array structure. This means that the ordering is used to determine how the lists are reassembled, but once this is done, their presence makes it possible that the implicit ordering and an explicit attribute can get out of sync very easily.
SQL and CSV-like outputs would retain the indexes, because they don’t have that explicit guarantee of ordering.The SQL approach produces an extra table that has a one-to-one relationship between (Person) and (PersonNames). This redundancy helps to preserve round tripping. The rather awkward personKeyRef and personNameKeyRef are ways of making it clear that these are the pointers to the array and base elements respectively, since SQL can only incorporate inbound links when dealing with references to objects (RDF is not limited by this restriction, which is one of the more compelling arguments for using an RDF triple store).
The above points to one deficiency with the way that most data is modeled - the fact that we often fail to think about the fact that a “table” is most often the representation of an entity, and that entity changes over time, necessitating changes in cardinality. It is as often as not cardinality changes, rather than needing to add new properties to a model, that cause the biggest integration headaches, because such changes necessitate the creation of new tables.
In effect, there is a tug of war occurring here between normalization and denormalization. NoSQL formats such as XML and JSON are intrinsically denormalized, meaning that is is trivially easy to create deep nested structures, and there is no real penalty for going from having a named field hold one value and one that holds two or more. SQL, on the other hand, incurs a fairly significant penalty - you have to construct a new table any time you change cardinality from "1" to "many", and an "0" (optional) value requires that you store an arbitrary NULL value, and if you do go from one to many, all of your pointers change from outbound pointers (pointers that originate in the source (reference) template and end in the target (child) template), to inbound pointers (pointers from target to source).
This last point is significant, because key representations no longer represent semantic relationships. A person has one or more person-names is an outbound relationship (and represents what modelers call a composition). However, SQL can only say "a person-name belongs to a person", which is a weaker relationship (called an association).
In a graph language, this distinction is useful because it clearly identifies when a given data structure is explicitly bound to an entity - if I remove that person from the data structure, I know that I can also remove its compositional components, because they have no real existence except relative to their associated person. Because SQL forces a distinct direction for a relationship, determining whether something is a composition or an association becomes a human activity, rather one that can be determined programmatically. This in turn makes automated integration FAR harder.
One of the major advantages of RDF is that you can model things more naturally, and because of that its easier to set up relationships which can be cleanly automated - and that provide the ability to go from a hierarchical structured format (NoSQL) to a table based flat structured format (SQL). I like to call this process "zipping". The unzipped form is the normalized one, where you have flat tables and relational keys (SQL or CSV), while the zipped form is demormalized, an XML or JSON structure. I hope to discuss the process of zipping in part two of this series.
As you may have noticed, the discussion here talks about names because they usually hide a lot of unstated assumptions, but the reality is this is as relevant to most data structures that you have within enterprise level ontologies. The bigger take-away here is to understand that we're moving into a world where data conversions and data integration dominate, and the kind of thinking that wants to just add a couple of fields to an object to represent a name or other singleton entries (addresses, contact information, emails, companies, stores, the list is pretty much endless) is likely to get you into trouble, especially when data worlds collide.
In my next article in this series (check back here for the new link) I’ll look at the temporal aspect of change, as well as exploring controlled vocabularies and how they fit into contemporary data modeling.
Kurt Cagle is Products Director for Capital BPM, and founder of Semantical LLC.
Views: 1743
Comment
© 2019 Data Science Central ®
Badges | Report an Issue | Privacy Policy | Terms of Service
You need to be a member of Data Science Central to add comments!
Join Data Science Central | https://www.datasciencecentral.com/profiles/blogs/the-art-of-modeling-names | CC-MAIN-2019-47 | refinedweb | 1,482 | 52.43 |
My favorite analogy for explaining variables is the "bucket" analogy. Think of a variable as a bucket. Into that bucket, you can place data. Some buckets don't care what kind of data you place in them, and other buckets have specific requirements on the type of data you can place in them. You can move data from one bucket to another. Unfortunately, the bucket analogy gets a little confusing when you take into account that one bucket can contain a little note inside that reads "see Bucket B for actual data" (you'll read about reference types shortly in the section "Value Types vs. Reference Types").
To declare a variable in C#, you can use the following syntax:
type variable_name;
You can initialize a variable on the same line:
type variable_name = initialization expression
where type is a .NET type. The next section lists some of the core .NET types.
Table 1.1 shows some of the basic .NET data types. As you will see in later chapters, this is just the beginning. When you start using classes, the variety of types available to you will be virtually unlimited.
Data Type
Description
System.Boolean
Provides a way to store true/false data.
System.Byte
Represents a single byte of data.
System.Char
A single character. Unlike other languages, this character is a 2-byte Unicode character.
System.Decimal
A decimal value with 28 to 29 significant digits in the range ±1.0 x 1028 to ±7.9 x 1028.
System.Double
A double-precision value that represents a 64-bit floating-point value with 15 to 16 significant digits in the range ±5.0 x 10324 to ±1.7 x 10308.
System.Single
A single-precision value that represents a 32-bit floating point number in the range ±1.5 x 1045 to ±3.4 x 1038.
System.Int32
Represents a 32-bit signed integer in the range 2,147,483,648 to 2,147,483,647.
System.Int64
Represents a 64-bit signed integer in the range 9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
System.SByte
A signed 8-bit integer.
System.Int16
A signed 16-bit integer.
System.UInt32
An unsigned 32-bit integer.
System.UInt64
An unsigned 64-bit integer.
System.UInt16
An unsigned 16-bit integer.
System.String
An arbitrary-length character string that can contain Unicode strings.
If you aren't familiar with .NET or C#, you may be wondering what the "System" is in the data types listed in Table 1.1. .NET organizes all types into namespaces. A namespace is a logical container that provides name distinction for data types. These core data types all exist in the "System" namespace. You'll see more namespaces throughout the book as you learn about more specific aspects of .NET.
C# provides you with some shortcuts to make declaring some of the core data types easier. These shortcuts are simple one-word lowercase aliases that, when compiled, will still represent a core .NET type. Table 1.2 lists some data type shortcuts and their corresponding .NET types.
Shortcut
.NET Type
bool
byte
char
decimal
double
float
int
long
sbyte
short
uint
ulong
ushort
Up to this point, this chapter has just been illustrating data types in one category. Earlier in the chapter, I mentioned a "bucket" analogy where data in one bucket could actually refer to data contained in some other bucket. This is actually the core point to illustrate the difference between value types and reference types.
A value type is a type whose data is contained with the variable on the stack. Value types are generally fast and lightweight because they reside on the stack (you will read about the exceptions in Chapter 16, "Optimizing your .NET 2.0 Code").
A reference type is a type whose data does not reside on the stack, but instead resides on the heap. When the data contained in a reference type is accessed, the contents of the variable are examined on the stack. That data then references (or points to, for those of you with traditional C and C++ experience) the actual data contained in the heap. Reference types are generally larger and slower than value types. Learning when to use a reference type and when to use a value type is something that comes with practice and experience.
Your code often needs to pass very large objects as parameters to methods. If these large parameters were passed on the stack as value types, the performance of the application would degrade horribly. Using reference types allows your code to pass a "reference" to the large object rather than the large object itself. Value types allow your code to pass small data in an optimized way directly on the stack. | https://flylib.com/books/en/1.237.1.12/1/ | CC-MAIN-2020-40 | refinedweb | 800 | 66.84 |
A Flutter plugin for launching a URL in the mobile platform. Supports iOS and Android.
To use this plugin, add
url_launcher as a dependency in your pubspec.yaml file..
closeWebViewfunction to programmatically close the current WebView.
launchto enable javascript in Android WebView.
launchto set iOS status bar brightness.
launchnow returns
Future<void>instead of
Future<Null>.
canLaunchmethod.
launchto a top-level method instead of a static method in a class.
example/README.md
Demonstrates how to use the url_launcher plugin.
For help getting started with Flutter, view our online documentation.
Add this to your package's pubspec.yaml file:
dependencies: url_launcher: ^4.0.2
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:url_launcher/url_launcher.dart';
We analyzed this package on Dec 5, 2018, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries. | https://pub.dartlang.org/packages/url_launcher | CC-MAIN-2018-51 | refinedweb | 180 | 53.07 |
DevelopersHow-TosOSFY A Python Program to Help You Back Up Your Files Automatically By Mohit Raj - December 17, 2018 0 23520 Lost or deleted files are a common phenomenon. To ensure file and folder security, prudence dictates you take a backup. To avoid the drudgery of physically doing so every now and then, it is best to automate the process. The author has created a Python program to help Backup Files Automatically. We often create new documents, files and folders in our computers. Sometimes we accidentally or mistakenly delete documents. Consider that you have made an important presentation after spending a lot of time on it. But accidentally, your computer hard disk crashes. It is a painful situation. People often take a backup of files. But it is very difficult to back up after every one hour or after every minute. So, to overcome this problem, I am going to demonstrate a Python script that I’ve created, which will keep taking backups of your file or folder after a specified period of time (specified by you). The name of the program is sync.py. It is for Windows and is compatible with Python 2 and Python 3. The python program to help Backup Files Automatically contains the following three files: Sync.py: The main program Sync1.ini: The configuration file Logger1.py: The module for logger support Sync.log is a file created by the sync.py. Let us now understand the code of sync.py and look at how it works. 1. Import the essential library to be used. Import configparser. Import time. Import shutil. Import hashlib. From the distutils.dir_util import copy_tree. From the collections, import OrderedDict. Import the OS. Import logger1 as log1. 
The following code reads the Sync1.ini file: def ConfRead(): config = configparser.ConfigParser() config.read(“Sync1.ini”) return (dict(config.items(“Para”))) Shown below are some of the variables obtained from the Sync.ini file: All_Config = ConfRead() Freq = int(All_Config.get(“freq”))*60 Total_time = int(All_Config.get(“total_time”))*60 repeat = int(Total_time/Freq) Figure 1: Making an exe file using pyinstaller Figure 2: Place the exe file in the Windows folder The following function md5 is used to calculate the hash of the file. If you modify a file, then the name remains the same but the hash gets changed. def md5(fname,size=4096): hash_md5 = hashlib.md5() with open(fname, “rb”) as f: for chunk in iter(lambda: f.read(size), b””): hash_md5.update(chunk) return hash_md5.hexdigest() The following function copies the whole directory with intermediaries: def CopyDir(from1, to): copy_tree(from1, to) The following function just copies one file to the destination location: def CopyFiles(file, path_to_copy): shutil.copy(file,path_to_copy) Figure 3: CMD default path Figure 4: Sync command The following function creates a dictionary, which contains the file names with the hash of the files. The function takes the source location and makes a dictionary of all the files present: def OriginalFiles(): drive = All_Config.get(“from”) Files_Dict = OrderedDict() print Original files: {0}’.format(e)) return Files_Dict The following function creates a dictionary that contains file names with a hash of the files. The function takes the destination location and gets all present files and makes a dictionary. If the root folder is not present, then it calls the CopyDir function. 
def Destination(): Files_Dict = OrderedDict() from1 = All_Config.get(“from”) to= All_Config.get(“to”) dir1= from1.rsplit(“\\”,1)[1] drive = to+dir1 #print (drive) try: if os.path.isdir Destination foor loop: {0}’.format(e)) return Files_Dict else : CopyDir(from1,drive) log1.logger.info(‘Full folder: {0} copied’.format(from1)) return None except Exception as e : log1.logger.error(‘Error Destination: {0}’.format(e)) The following functions define the logic: If file has been created with a folder. If file has been modified. Figure 5: Full folder is copied Figure 6: After modifying the file In both the cases, the following piece of code just compares the original and the destination’s dictionaries. If any file gets created or modified, then the interpreter copies the file from source and pastes it into the destination. def LogicCompare(): from1 = All_Config.get(“from”) to= All_Config.get(“to”) Dest_dict = Destination() if Dest_dict: Source_dict = OriginalFiles() remaining_files = set(Source_dict.keys())- set(Dest_dict.keys()) remaining_files= [Source_dict.get(k) for k in remaining_files] for file_path in remaining_files: try: log1.logger.info(‘File: {0}’.format(file_path)) dir, file = file_path.rsplit(“\\”,1) rel_dir = from1.rsplit(“\\”,1)[1] rel_dir1 = dir.replace(from1,””) dest_dir = to+rel_dir+”\\”+rel_dir1 if not os.path.isdir(dest_dir): os.makedirs(dest_dir) CopyFiles(file_path, dest_dir) except Exception as e: log1.logger.error(‘Error LogicCompare: {0}’.format(e)) The following piece of code uses loop to run the code again and again: i = 0 while True: if i >= repeat: break LogicCompare() time.sleep(Freq) i = i +1 Let us see the content of file Sync1.ini [Para] From = K:\testing1 To = E:\ Freq = 1 Total_time = 5 In the above code: From: Specifies what the source means and takes the backup of the testing1 folder. To: Specifies where to take the backup. Freq: Takes the backup after a specified minute. Total_time: Runs the code for Total_time minutes. 
Let us look at the code of logger1.py: import logging logger = logging.getLogger(“Mohit”) logger.setLevel(logging.INFO) fh = logging.FileHandler(“Sync.log”) formatter = logging.Formatter(‘%(asctime)s - %(levelname)s - %(message)s’) fh.setFormatter(formatter) logger.addHandler(fh) The above code is very simple and will work in INFO mode. If you don’t want to bother running the code using the interpreter, make a Windows exe file and this will work as a command. In order to convert this, let us take the help of pyinstaller. I have already installed that module. The command in Figure 1 converts your code into an exe file. To run it, the Python interpreter is not needed. Figure 7: After creating the new file Figure 8: Output when the pen -drive is ot present How to run the program After executing the command as shown in Figure 1, check the folder named Sync. In this folder, check the folder named dist, where you will get the .exe file. Now copy this .exe file and paste it into the C:/Windows folder, as shown in Figure 2. Now open the command prompt. Check the current folder as shown in Figure 3. In my PC, the default prompt path is c:/user/Mohit. In your PC, it may be different. So copy the Sync1.ini file and paste it in the c:/user/<your-name> folder. Now plug in an external pen-drive. Check the pen-drive drive letter, which in my PC is E. Based on your PC configuration, change the parameter of Sync1.ini placed in the C:/user/<your-name> directory. Now open the command prompt and type the command as shown in Figure 4. Now check your pen-drive. Look at sync.log, which was created in the folder c:/user/<your-name> Four cases are possible: 1. When the whole folder is not present in the pen-drive (Figure 5). 2. When you modify the existing file (Figure 6). 3. When you create a new file (Figure 7). The last case is a negative test case when the pen-drive is not present (Figure 8). 
RELATED ARTICLES Rasa Releases Open Source 3.0 To Help Build Better Conversational AI Abbinaya Kuzhanthaivel - November 26, 2021 Private Clouds: An Insight Bhagvan Kommadi - November 26, 2021 The Significance of Neural Networks in NLP Dibyendu Banerjee - November 25, 2021 | https://www.opensourceforu.com/2018/12/a-python-program-to-help-you-back-up-your-files-automatically/ | CC-MAIN-2021-49 | refinedweb | 1,253 | 61.43 |
Beaver 36.2.0
python daemon that munches on logs and sends their contents to logstash

======
Beaver
======
Installation
============

From Github::

    pip install git+git://github.com/python-beaver/python-beaver.git@36.2.0#egg=beaver
From PyPI::

    pip install beaver==36.2.0
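Once installed, Beaver reads an INI-style configuration file: a ``[beaver]`` section holds global transport settings, and each additional section names a file (or glob) to watch. The sketch below is illustrative only; the transport, Redis URL, and watched path are placeholder assumptions to adapt to your setup::

    [beaver]
    ; global settings: ship lines over the redis transport
    transport: redis
    redis_url: redis://localhost:6379/0
    redis_namespace: logstash:beaver

    ; one section per watched file or glob; values here are examples
    [/var/log/syslog]
    type: syslog
    tags: sys,example

With a file like this saved as ``beaver.conf``, the daemon is typically started with ``beaver -c beaver.conf``.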
Documentation
=============
Changelog
=========
36.2.0 (2016-04-12)
-------------------
- Replaced Mosquitto with Paho-mqtt for mqtt transport. [Justin van
Heerde]
- Add log file rotate in limit size. [soarpenguin]
- Fix README.rst docs link. [Andrew Grigorev]
36.0.1 (2016-01-13)
-------------------
- Fix README.rst formatting for rst-lint. [Jose Diaz-Gonzalez]
- Remove tabs and use "except .. as .." [EYJ]
- Try to fix hanging test by joining leftover thread. [EYJ]
- Fix Queue timeout occurring after successful reconnect. [EYJ]
- Fix rabbitmq reconnection behaviour to use the beaver reconnect
mechanism. [EYJ]
- Migrating repo locations. [Jamie Cressey]
- Fixups for CONTRIBUTING. [Jamie Cressey]
- Fixing formatting. [Jamie Cressey]
- Changes to guidelines and adding reference to README. [Jamie Cressey]
- Adding contributing guidelines. [Jamie Cressey]
36.0.0 (2015-12-15)
-------------------
- Adding max_queue_size to docs. [Jamie Cressey]
- Pinning kafka-python version. [Jamie Cressey]
- Ensure we test against the latest version of kafka-python. [Jamie
Cressey]
- Attempt to reconnect to Kafka on failure. [Jamie Cressey]
- Adding SQS tests. [Jamie Cressey]
- Exclude gh-pages from TravisCI runs. [Jamie Cressey]
- Adding coverage results to README. [Jamie Cressey]
- Adding coverage tests. [Jamie Cressey]
- We say Py2.6+ is a requirement, but do the tests actually pass on 2.6?
[Jamie Cressey]
- Dont test py3, yet... [Jamie Cressey]
- Testing python 3.x. [Jamie Cressey]
- Using new travis config. [Jamie Cressey]
- Added requests as a dependency. [Jose Diaz-Gonzalez]
Closes #304
- Bump debian version on release. [David Moravek]
- Support both older and newer pika. [Tim Stoop]
- Make reconnecting to a lost RabbitMQ work. [Tim Stoop]
- Remove old worker code in favor of the - now non-experimental -
TailManager. [Jose Diaz-Gonzalez]
35.0.2 (2015-12-03)
-------------------
- Write to the SQS object not the dict when using sqs_bulk_lines flag.
[Jamie Cressey]
35.0.1 (2015-11-26)
-------------------
- Remove autospec attribute. [Jose Diaz-Gonzalez]
For some reason, this broke attribute setting on the mock SelectConnection.
- Fix pika version to version with all named parameters. [Jose Diaz-
Gonzalez]
- Peg kafka to a known-good version. [Jose Diaz-Gonzalez]
35.0.0 (2015-11-26)
-------------------
- Remove gitchangelog.rc. [Jose Diaz-Gonzalez]
- Merging changes. [Jamie Cressey]
- Added configuration option ignore_old_files. [Ryan Steele]
Files older than n days are ignored
- Support writes into multiple redis namespaces. [Andrei Vaduva]
- Adding support for multiple SQS queues. [Jamie Cressey]
- Ensure log lines confirm to utf-8 standard. [Jamie Cressey]
We've come across cases when certain characters break Beaver transmitting log lines. This PR ensures all log lines correctly conform to UTF-8 when they're formatted for transmission.
- Set timeout to 1 second. [Tim Stoop]
Apparantly, it needs to be an integer, so we cannot use pika's default
of .25.
- Revert "Lower the default to .25, which is pika's default." [Tim
Stoop]
This reverts commit 17157990a272e458cc9253666f01c6002b84bda8.
- Lower the default to .25, which is pika's default. [Tim Stoop]
As suggested by @kitchen.
- Pieter's patch for rabbitmq timeout. [Tim Stoop]
- Typo in config variable default value. [Jamie Cressey]
- Fix regressed change. [Jamie Cressey]
- Ability to send multiple log entries per single SQS message. [Jamie
Cressey]
- Adding AWS profile authentication to SQS transport. [Jamie Cressey]
34.1.0 (2015-08-10)
-------------------
- Adding AWS SNS as a transport option. [Jamie Cressey]
34.0.1 (2015-08-07)
-------------------
- Revert some breakages caused by
d159ec579c01b8fab532b3814c64b0ff8b2063ff. [Jose Diaz-Gonzalez]
Closes #331
- Set default for command. [Jose Diaz-Gonzalez]
- #323 - fix tests to run with pika SelectConnection. [Tom Kregenbild]
- #323 - fix RabbitMQ transport _on_open_connection_error function to
print connection errors. [Tom Kregenbild]
- #323 1. Add clear debug prints with queue size (one print every 1000
items in order not to hurt performance) 2. If main queue is empty keep
running and do nothing 3. In case of a timeout from main queue restart
queue. [Tom Kregenbild]
- #323 - Change RabbitMQ pika to use Asynchronous SelectConnection
instead of BlockingConnection for better performance. [Tom Kregenbild]
- #323 - add the ability to increase the number of Queue consumers by
creating additional processes while running with --experimental flag.
[Tom Kregenbild]
- #323 - add the ability to increase the number of Queue consumers by
creating additional processes. [Tom Kregenbild]
- #323 - print current queue size and number of total number transports
in debug mode in order to find problem in transport rate. [Tom
Kregenbild]
34.0.0 (2015-07-24)
-------------------
- Added ssl-tcp key file support. [babbleshack]
- Rename configuration dir and debian bin to python-beaver. [David
Moravek]
- Rename debian package back to python-beaver. [David Moravek]
- Debian packaging code review; thx @mnicky. [David Moravek]
- Improves debian packaging. [David Moravek]
- Fix tests when ZMQ is not installed. [David Moravek]
- Fix tests for python 2.7 (add funcsigs test dependency) [David
Moravek]
- Move badge to below header. [Jose Diaz-Gonzalez]
- Add constants for data types, validate in init, use callback map.
[Hector Castro]
- Move data type method conditional outside of loop. [Hector Castro]
- Add channel support to Redis transport. [Hector Castro]
This changeset adds support for publishing log entries to a Redis
channel, which is also supported by Logstash's Redis input.
Beaver configuration files can now supply a `redis_data_type` key. Valid
values for this key are `list` and `channel`. If left unset, the default
is `list`.
Attempts to resolve #266.
- Introduced a stomp transport for beaver using stomp.py. [Peter
Lenderyou]
- Fix references to ConfigParser error classes. [Jose Diaz-Gonzalez]
- Redis transport: handle multiple connections and use them in round
robin style. [musil]
- Fixes GELF format according to specs. [Marvin Frick]
GELF formatted messages need to be \0 ended. At least for sending over
TCP.
- Kafka round robin partitioner. [David Moravek]
- Solve error: cannot convert argument to integer. [Theofilis George-
Nektarios]
See at #312
33.3.0 (2015-04-08)
-------------------
- Basic docs for GELF formatter. [Oleg Rekutin]
Also fixes formatting issues with the immediately-preceding HTTP
transport example section.
- Adds a GELF formatter. [Oleg Rekutin]
short_message is truncated to 250 characters and only the first line is
retained. Pair with the HTTP POST output to write directly to graylog2.
- Issue #305, accept any 2xx code for http_transport. [Oleg Rekutin]
33.2.0 (2015-03-11)
-------------------
- Improved kafka test. [Marcel Casado]
- Added example of kafka transport usage in the user docs. [Marcel
Casado]
- Added placeholder "dist" directory to download kafka binaries. [Marcel
Casado]
- Added integration test support for Kafka transport. [Marcel Casado]
- Wrapped kafka client init in a try catch. [Marcel Casado]
- Initial kafka transport impl. [Marcel Casado]
- Updating config examples and docs. [Jonathan Sabo]
- Adding support for sqs queues in different accounts. [Jonathan Sabo]
33.1.0 (2015-02-04)
-------------------
- Improved error message for missing logstash_version. [Florian Hopf]
Added a comment that the version needs to be set in the config
- Specify stricter dependency on python-daemon, fixes #286. [Graham
Floyd]
- Add message_batch size checking since SQS can only handle 256KiB in a
batch. Flush queue if message_batch is 10 messages or >= 250KiB.
[Lance O'Connor]
- Explained valid values and meaning for rabbitmq_delivery_mode. [Fabio
Coatti]
- Added documentation for rabbitmq_delivery_mode configuration
parameter. [Fabio Coatti]
- A small change in except syntax. This should make happy python3 and
work also in 2.6 and later. [Fabio Coatti]
- When sending a message, now we can tell rabbitmq which delivery mode
we want, according to main configuration option
rabbitmq_delivery_mode. [Fabio Coatti]
- Added configuration option for rabbitmq deliveryMode. Basically it
works like a boolean, but having 1 and 2 as allowed values, we
consider it integer and validate it as such. [Fabio Coatti]
- Newline removed. [Fabio Coatti]
- Added stanzas specific redis_namespace key to documentation. [Fabio
Coatti]
- Added a space after comma, more compliant with python style guide.
[Fabio Coatti]
- Revert "ignored eric files" [Fabio Coatti]
This reverts commit ea2a6b27437570aeda3ee53b6c6ebd7ebb1f4f2a.
as suggested, leave alone .gitignore :)
- This small commit allows to specify a redis namespace in file section
of configuration file (stanzas). Basically, beaver checks if a
redis_namespace is defined for the current file. If yes, it is used
for the redis payload. If not (or null), beaver uses the
redis_namespace value specified in global section. [Fabio Coatti]
- Added a section (stanza) configuration option in order to be able to
specify a redis namespace. If set, it will override the namespace set
in main section. Default is null. [Fabio Coatti]
- Ignored eric files. [Fabio Coatti]
- Remove `python-daemon` from requirements on win32. [Ryan Davis]
If we're installing on windows, don't require `python-daemon`. This
fixes a problem where trying to `pip install beaver` errors out when
trying to install `python-daemon`.
refs #141
- Use new repository name for travis-ci badge. [Jose Diaz-Gonzalez]
33.0.0 (2014-10-14)
-------------------
- Extend release script to support new, semver-tagged releases. [Jose
Diaz-Gonzalez]
- Add gitchangelog.rc to fix changelog generation. [Jose Diaz-Gonzalez]
32 (2014-10-14)
---------------
- Allow for the config file to override the logfile's setting. [Aaron
France]
- Force update of sincedb when beaver stop. [Pierre Fersing]
- Fixed sincedb_write_interval (Bugs #229). [Pierre Fersing]
- Fix config.get('ssh_options') [svengerlach]
ssh_options could never be returned due to a wrong type check
- Add debian packaging based on dh-virtualenv. [Jose Diaz-Gonzalez]
- Zmq3 split HWM into SNDHWM/RCVHWM. Closes #246. [Pete Fritchman]
- Fix typo in usage.rst. [Hugo Lopes Tavares]
s/logstash_verion/logstash_version/
- Fixed badge to point to master branch only. [Jose Diaz-Gonzalez].
- Fix redis_transport.py redis exception handling. Fixes #238. [Hugo
Lopes Tavares]
- Attempt to fix memory leaks. Closes #186. [Jose Diaz-Gonzalez]
- Allow for newer versions of boto to be used. Closes #236. [Jose Diaz-
Gonzalez]
- 'rabbit
the use of `sys.exit` inside the signal handlers means that a
`SystemExit` exception is raised
() which can be caught
by try/except blocks that might have been executing at time of signal
handling, resulting in beaver failing to quit
-\n.
- CONFIG_DIR to CONFD_PATH. [iyingchi]
- Added doc for -C option for config directory. [iyingchi]
- Fixed example in Readme.rst for sqs_aws_secret_key. [Jonathan Quail]
- Allow path to be None. [Lars Hansson]
Allow path to be set empty (None) in the configuration filer. This way
all files and globs can be configured in files in confd_path.
- Fix zmq transport tests. [Scott Smith]
- Move zmq address config parsing into _main_parser. [Scott Smith]
- Allow specifying multiple zmq addresses to bind/connect. [Scott Smith]
- Redis 2.4.11 is no longer available on Pypi. [Andrew Gross]
Fixes issue #167
- Add a TCP transport. [Kiall Mac Innes]
- Isolate connection logic. Provide proper reconnect support. [Chris
Roberts]
-]
- 'type' instead of 'exchange: 'Non ]
- Auto-reconnect mechanism for the SSH tunnel. [Michael Franz Aigner]
- Use an alternative method of reading in requirements. Refs #120. [Jose
Diaz-Gonzalez]
- 'sin]
- Add test requirements to setup. [Paul Garner]
- Allow beaver to accept custom transport classes. [Paul Garner]
- Rabbitmq_exchange_type option fixed in the README. [Xabier de Zuazo]
-]
- Adding exception when redis connection can't be confirmed. [William
Jimenez]
- Add '-]
-']
8 (2012-11-28)
--------------
- Removed deprecated usage of e.message. [Rafael Fonseca]
- Fixed exception trapping code. [Rafael Fonseca]
- Added some resiliency code to rabbitmq transport. [Rafael Fonseca]
7 (2012-11-28)
--------------
- Added a helper script for creating releases. [Jose Diaz-Gonzalez]
- Partial fix for crashes caused by globbed files. [Jose Diaz-Gonzalez])]
- Better exception handling for unhandled exceptions. [Michael D'Auria]
- Handle the case where the config file is not present.
- Support globs in file paths. [Darren Worrall]
- When sending data over the wire, use UTC timestamps. ]
- First commit. [Jose Diaz-Gonzalez]
- Author: Jose Diaz-Gonzalez
- Bug Tracker:
- License: LICENSE.txt
- Requires six
- Categories
- Package Index Owner: savant
- DOAP record: Beaver-36.2.0.xml | https://pypi.python.org/pypi/Beaver | CC-MAIN-2016-22 | refinedweb | 1,923 | 62.54 |
Tips
Tips
.NET tutorials, guides and quizzes
Book Review: Understanding .NET, Second Edition
Ed Tittel calls this book an effective .NET tutorial for software developers and their managers. It covers the My namespace, ASP.NET, the CLR and other important topics. Continue Reading
Using data sources with databases: Ch. 14 of Murach's VB 2005
This book excerpt offers tips for binding data to Windows Forms, using data sources with DatGridView and TextBox controls and performing data queries.
FxCop lets developers check their managed code up against the .NET Framework. Continue Reading
The polymorphism debate, part 3
The polymorphism debate is refueled by this programmer's commentary. Continue Reading
Review of basic book on .NET. Continue Reading
Use inheritance on forms for simpler and more robust source code
By using inheritance, source code can be made simpler and more robust. Continue Reading
Add an assembly to GAC
This tip shows how to add an assembly to GAC (Global Assembly Cache) so it can be referred globally by any other applications/assemblies. Continue Reading
Mapping properties to a key in configuration file using dynamic properties
This is of great help when you need to customize an application for a particular user or environment. Continue Reading
Using JavaScript to save roundtrips
Client code to set a check box to true when the user types a search string into a text box without having to make a roundtrip to the server. Continue Reading
Form calling
You can dynamically load a VB form from any string value. This is useful when you have an app that has to load one of many different forms based on different user-defined criteria. Continue Reading | http://searchwindevelopment.techtarget.com/tips/NET-tutorials-guides-and-quizzes | CC-MAIN-2018-13 | refinedweb | 278 | 54.12 |
5
Functions
Written by Jonathan Sande
Heads up... You're reading this book for free, with parts of this chapter shown beyond this point astext.
You can unlock the rest of this book, and our entire catalogue of books and videos, with a raywenderlich.com Professional subscription.
Each week, there are tasks that you repeat over and over: eat breakfast, brush your teeth, write your name, read books about Dart, and so on. Each of those tasks can be divided up into smaller tasks. Brushing your teeth, for example, includes putting toothpaste on the brush, brushing each tooth and rinsing your mouth out with water.
The same idea exists in computer programming. A function is one small task, or sometimes a collection of several smaller, related tasks that you can use in conjunction with other functions to accomplish a larger task.
In this chapter, you’ll learn how to write functions in Dart. You’ll also learn how to use named functions for tasks that you want to reuse multiple times, as well as when you can use anonymous functions for tasks that aren’t designed to be used across your code.
Function basics
You can think of functions like machines in a factory; they take something you provide to them (the input), and produce something different (the output).
There are many examples of this even in daily life. With an apple juicer, you put in apples and you get out apple juice. The input is apples; the output is juice. A dishwasher is another example. The input is dirty dishes, and the output is clean dishes. Blenders, coffee makers, microwaves and ovens are all like real-world functions that accept an input and produce an output.
Don’t repeat yourself
Assume you have a small, useful piece of code that you’ve repeated in multiple places throughout your program:
// one place if (fruit == 'banana') { peelBanana(); eatBanana(); } // another place if (fruit == 'banana') { peelBanana(); eatBanana(); } // some other place if (fruit == 'banana') { peelBanana(); eatBanana(); }
Anatomy of a Dart function
In Dart, a function consists of a return type, a name, a parameter list in parentheses and a body enclosed in braces.
void main() { const input = 12; final output = compliment(input); print(output); } String compliment(int number) { return '$number is a very nice number.'; }
12 is a very nice number.
More about parameters
Parameters are incredibly flexible in Dart, so they deserve their own section here.
Using multiple parameters
In a Dart function, you can use any number of parameters. If you have more than one parameter for your function, simply separate them with commas. Here’s a function with two parameters:
void helloPersonAndPet(String member, String pet) { print('Hello, $member, and your furry friend, $pet!'); }
helloPersonAndPet('Fluffy', 'Chris'); // Hello, Fluffy, and your furry friend, Chris!
Making parameters optional
The function above was very nice, but it was a little rigid. For example, try the following:
helloPersonAndPet();
2 positional argument(s) expected, but 0 found.
String fullName(String first, String last, String title) { return '$title $first $last'; }
String fullName(String first, String last, [String title]) { if (title != null) { return '$title $first $last'; } else { return '$first $last'; } }
print(fullName('Ray', 'Wenderlich')); print(fullName('Albert', 'Einstein', 'Professor'));
Ray Wenderlich Professor Albert Einstein
Providing default values
In the example above, you saw that the default value for an optional parameter was
null. This isn’t always the best default value, though. That’s why Dart also gives you the power to change the default value of any parameter in your function, by using the assignment operator.
bool withinTolerance(int value, [int min = 0, int max = 10]) { return min <= value && value <= max; }
withinTolerance(5) // true withinTolerance(15) // false
withinTolerance(9, 7, 11) // true
withinTolerance(9, 7) // true
Naming parameters
Dart allows you to use named parameters to make the meaning of the parameters more clear in function calls.
bool withinTolerance({int value, int min = 0, int max = 10}) { return min <= value && value <= max; }
withinTolerance(value: 9, min: 7, max: 11) // true
withinTolerance(value: 9, min: 7, max: 11) // true withinTolerance(min: 7, value: 9, max: 11) // true withinTolerance(max: 11, min: 7, value: 9) // true
withinTolerance(value: 5) // true withinTolerance(value: 15) // false withinTolerance(value: 5, min: 7) // false withinTolerance(value: 15, max: 20) // true
Making named parameters required
The fact that named parameters are optional by default causes a problem, though. A person might look at your function declaration, assume that all of the parameters to the function are optional, and call your function like so:
print(withinTolerance());
NoSuchMethodError: The method '>' was called on null.
import 'package:meta/meta.dart';
pub get
bool withinTolerance({ @required int value, int min = 0, int max = 10, }) { return min <= value && value <= max; }
Writing good functions
People have been writing code for decades. Along the way, they’ve designed some good practices to improve code quality and prevent errors. One of those practices is writing DRY code as you saw earlier. Here are a few more things to pay attention to as you learn about writing good functions.
Avoiding side effects
When you take medicine to cure a medical problem, but that medicine makes you fat, that’s known as a side effect. If you put some bread in a toaster to make toast, but the toaster burns your house down, that’s also a side effect. Not all side effects are bad, though. If you take a business trip to Paris, you also get to see the Eiffel Tower. Magnifique!
void hello() { print('Hello!'); }
String hello() { return 'Hello!'; }
var myPreciousData = 5782; String anInnocentLookingFunction(String name) { myPreciousData = -1; return 'Hello, $name. Heh, heh, heh.'; }
Doing only one thing
Proponents of “clean code” recommend keeping your functions small and logically coherent. Small here means only a handful of lines of code. If a function is too big, or contains unrelated parts, consider breaking it into smaller functions.
Choosing good names
You should always give your functions names that describe exactly what they do. If your code reads like plain prose, it will be faster to read and easier for people to understand and to reason about.
Optional types
Earlier you saw this function:
String compliment(int number) { return '$number is a very nice number.'; }
compliment(number) { return '$number is a very nice number.'; }
dynamic compliment(dynamic number) { return '$number is a very nice number.'; }
Mini-exercises
- Write a function named
youAreWonderful, with a String parameter called
name. It should return a string using
name, and say something like “You’re wonderful, Bob.”
- Add another
intparameter to that function called
numberPeopleso that the function returns something like “You’re wonderful, Bob. 10 people think so.” Make both inputs named parameters.
- Make
namerequired and set
numberPeopleto have a default of
30.
Anonymous functions
All the functions you’ve seen previously in this chapter, such as
main,
hello, and
withinTolerance are named functions, which means, well, they have a name.
First-class citizens
In Dart, functions are first-class citizens. That means you can treat them like any other other type, assigning functions as values to variables and even passing functions around as parameters or returning them from other functions.
Assigning functions to variables
When assigning a value to a variable, functions behave just like other types:
int number = 4; String greeting = 'hello'; bool isHungry = true; Function multiply = (int a, int b) { return a * b; };
Function myFunction = int multiply(int a, int b) { return a * b; };
Function expressions can't be named.
Passing functions to functions
Just as you can write a function to take
int or
String as a parameter, you can also have
Function as a parameter:
void namedFunction(Function anonymousFunction) { // function body }
Returning functions from functions
Just as you can pass in functions as input parameters, you can also return them as output:
Function namedFunction() { return () { print('hello'); }; }
Using anonymous functions
Now that you know where you can use anonymous functions, have a hand at doing it yourself. Take the multiply function from above again:
final multiply = (int a, int b) { return a * b; };
print(multiply(2, 3));
Returning a function
Have a look at a different example:
Function applyMultiplier(num multiplier) { return (num value) { return value * multiplier; }; }
final triple = applyMultiplier(3);
print(triple(6)); print(triple(14.0));
18 42.0
Anonymous functions in forEach loops
Chapter 4 introduced you to
forEach loops, which iterate over a collection. Although you may not have realized it, that was an example of using an anonymous function.
const numbers = [1, 2, 3];
numbers.forEach((number) { final tripled = number * 3; print(tripled); });
3 6 9
Closures and scope
Anonymous functions in Dart act as what are known as closures. The term closure means that the code “closes around” the surrounding scope, and therefore has access to variables and functions defined within that scope.
Function applyMultiplier(num multiplier) { return (num value) { return value * multiplier; }; }
var counter = 0; final incrementCounter = () { counter += 1; };
incrementCounter(); incrementCounter(); incrementCounter(); incrementCounter(); incrementCounter(); print(counter); // 5
Function countingFunction() { var counter = 0; final incrementCounter = () { counter += 1; return counter; }; return incrementCounter; }
final counter1 = countingFunction(); final counter2 = countingFunction();
print(counter1()); // 1 print(counter2()); // 1 print(counter1()); // 2 print(counter1()); // 3 print(counter2()); // 2
Mini-exercises
- Change the
youAreWonderfulfunction in the first mini-exercise of this chapter into an anonymous function. Assign it to a variable called
wonderful.
- Using
forEach, print a message telling the people in the following list that they’re wonderful.
const people = ['Chris', 'Tiffani', 'Pablo'];
Arrow functions
Dart has a special syntax for one-line functions, either named or anonymous. Consider the following function
add that adds two numbers together:
// named function int add(int a, int b) => a + b;
// anonymous function (parameters) => expression;
Refactoring example 1
The body of the anonymous function you assigned to
multiply has one line:
final multiply = (int a, int b) { return a * b; };
final multiply = (int a, int b) => a * b;
print(multiply(2, 3)); // 6
Refactoring example 2
You can also use arrow syntax for the anonymous function returned by
applyMultiplier:
Function applyMultiplier(num multiplier) { return (num value) { return value * multiplier; }; }
Function applyMultiplier(num multiplier) { return (num value) => value * multiplier; }
Refactoring example 3
You can’t use arrow syntax on the
forEach example, though:
numbers.forEach((number) { final tripled = number * 3; print(tripled); });
numbers.forEach((number) => print(number * 3));
Mini-exercise
Change the
forEach loop in the previous “You’re wonderful” mini-exercise to use arrow syntax.
Challenges
Before moving on, here are some challenges to test your knowledge of functions. It is best if you try to solve them yourself, but solutions are available if you get stuck in the challenge folder of this chapter.
Challenge 1: Prime time
Write a function that checks if a number is prime.
Challenge 2: Can you repeat that?
Write a function named
repeatTask with the following definition:
int repeatTask(int times, int input, Function task)
Challenge 3: Darts and arrows
Update Challenge 2 to use arrow syntax.
Key points
- Functions package related blocks of code into reusable units.
- A function signature includes the return type, name and parameters. The function body is the code between the braces.
- Parameters can be positional or named, and required or optional.
- Side effects are anything, besides the return value, that change the world outside of the function body.
- To write clean code, use functions that are short and only do one task.
- Anonymous functions don’t have a function name, and the return type is inferred.
- Dart functions are first-class citizens and thus can be assigned to variables and passed around as values.
- Anonymous functions act as closures, capturing any variables or functions within its scope.
- Arrow syntax is a shorthand way to write one-line functions.
Where to go from here?
This chapter spoke briefly about the Single Responsibility Principle and other clean coding principles. Do a search for SOLID principles to learn even more. It’ll be time well spent. | https://www.raywenderlich.com/books/dart-apprentice/v1.0.ea1/chapters/5-functions | CC-MAIN-2021-49 | refinedweb | 1,971 | 52.29 |
snd_mixer_sort_gid_table()
Sort a list of group ID structures
Synopsis:
#include <sys/asoundlib.h> void snd_mixer_sort_gid_table( snd_mixer_gid_t *list, int count, snd_mixer_weight_entry_t *table );
Since:
BlackBerry 10.0.0
Arguments:
- list
- The list of snd_mixer_gid_t structures that you want to sort.
- count
- The number of entries in the list.
- table
- A pointer to an array of snd_mixer_weight_entry_t structures that defines the relative weights for the groups.
Most applications use the default table weight structure, snd_mixer_default_weights.
Description:
The snd_mixer_sort_gid_table() function sorts a list of gid (group id structures) based on the names and the relative weights specified by the weight table.
Classification:
QNX Neutrino
Last modified: 2014-05-14
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.audio.lib_ref/topic/snd_mixer_sort_gid_table.html | CC-MAIN-2020-34 | refinedweb | 121 | 59.9 |
Talk:Proposed features/left name (2)
< Talk:Proposed features(Redirected from Talk:Rejected features/left name)
Discuss Proposed features/left name here:
Left of what?
Left of what ?? What happens with a horizontal way ??? What happens when someone reverse the way ??? Like on your example street sign , the street has just two names. -- User:PhilippeP 18:22, 1 May 2008
- The left is defined by the order of the nodes like for the tag oneway. If someone reverse the way the editor can ask the user if he want to exange the tags. --Patou 18:59, 1 May 2008 (UTC)
Key name
I think left:name and right:name is better because it's already used in boundary=* --Jttt 20:01, 19 May 2008 (UTC)
- I chosen left_name and right_name to be similar to old_name and alt_name which are widely used. --Patou 21:45, 22 May 2008 (UTC)
- Actually name:left would make more sense (similar to source:ref etc). The ':' symbol means name is namespace, so for example namefinder would know name:left is some special kind of name and will include it in names database. --Jttt 06:56, 27 May 2008 (UTC)
name_left and name_right seem to be in use instead of left_name and right_name--Kurt Roeckx 22:11, 29 May 2008 (UTC)
- You're right, I didn't find any left_name or right_name, but I've found about 120 name_left/right. Anyway I think name:left shoud be used. There is already 120 different keys containg name_ or _name applied on ways and 60 applied on nodes. How are namefinder or renderers suppose to guess which key means some kind of name and which key is somethink else? --Jttt 06:52, 30 May 2008 (UTC) | https://wiki.openstreetmap.org/wiki/Talk:Rejected_features/left_name | CC-MAIN-2018-43 | refinedweb | 288 | 79.19 |
Unico Hommes <Unico@hippo.nl> wrote:
> Guido Casper wrote:
>>
>>
>> The only change to your proposal to allow for both behaviours would
>> be to not prohibit ambiguities.
>>
>> Deal? :-)
>>
>
> Yes, that was my bad. It had been a long day.
No no, you were the one opening my (and hopefully other's) eyes how
powerful all this is :-)
>
>>>
>>> This would handle for example the situation we currently have with
>>> the GIF- and JPEGSourceInspectors that actually deal with the same
>>> type of properties but define separate namespaces for them
>>> respectively. I'd like to instead use the same property namespace
>>> for these: .
>>
>> Exactly. An example where dynamic behaviour would be beneficial.
>>
>
> So, can I make this change then? Making the two use the same
> namespace I mean.
+1 from me
>
>>> ...
>
> The thing is that, if you allow inspectors to handle arbitrary
> property types, you need to communicate certain information between
> inspectors.
>
> Consider a propfind for the image:width property on a source.
> Currently there are two inspectors that deal with this property.
> However it depends on the type of the source whether one, the other
> or neither will be able to determine the property value. What if you
> tried to get this property on a source of mime type text/xml? The
> manager will loop thru the set of inspectors that assert they handle
> properties like that. First comes GIFSourceInspector, then comes
> JPEGSourceInspector. Both return a value of null since the mime type
> is beyond their epistemic scope. Next in the list is an inspector
> that can in principal handle any type because it just stores them as
> name-value pairs say. If this inspector now chooses to query for the
> image:width property that would certainly be undesirable behavior.
>
> So how to solve that?
One can blame the one trying to read image:width on a XML resource :-)
I can see the problem of several inspectors not checking the namespace.
Maybe the implementor should make sure there is no more than 1 of them
(but then again you have to ensure the correct sequence of calling (the
same applies for setting props)). Maybe another mechanism for ensuring
more robustness is needed. Hmm ... keeping on thinking about that.
Hope I can work on it for my next project (don't know yet) and gather
some more experiences.
>
> My idea to guard against unexpected performance hits like this was to
> require each inspector to know in advance what property types they
> deal with. Making the requirement explicit by defining a method by
> which they expose their respective property types. This would impact
> as you say the flexibility in that the application must in advance
> know what properties a client is going to use. And so we came up with
> a wildcard syntax that would be tried if no exact match was available.
>
> But I don't really like this. It complicates things too much.
>
> So now I think, until we come up with something better, the best way
> for the moment is to just let the inspectors decide for themselves
> like they did before. If they decide to just try each and every
> property request they get, then they must accept the impact this will
> have on the overall performance of the system. The responsibility
> lies with them. I'll provide an abstract base class that can
> configure what properties the concrete inspector will deal with.
Thanks a lot for your work.
Guido | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200310.mbox/%3C045001c39c72$df4a9e40$22020a0a@hw0393%3E | CC-MAIN-2013-20 | refinedweb | 569 | 64.61 |
Editor's note: this document is out of date and remains here for historic interest. See Synopsis 7 for the current design information.
What a piece of work is Perl 6!
How noble in reason!
How infinite in faculty!
In form how express and admirable!
– W. Shakespeare, "Hamlet" (Perl 6 revision)
Formats are Perl 5's mechanism for creating text templates with fixed-width fields. Those fields are then filled in using values from prespecified package variables. They're a useful tool for generating many types of plaintext reports – the r in Perl, if you will.
The general idea is the same as for Perl's two other built-in string
formatting functions:
sprintf and
pack. The first argument
represents a template with N placeholders to be filled in, and the
next N arguments are the data that is to be formatted and
interpolated into those placeholders:
$text = sprintf $format_s, $datum1, $datum2, $datum3;
$text = pack    $format_p, $datum1, $datum2, $datum3;
$text = form    $format_f, $datum1, $datum2, $datum3;
Of course, these three functions use quite different mini-languages to specify the templates they fill in, and all three fill in those templates in quite distinct ways.
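If the Perl is unfamiliar, the same "one template, N data arguments" shape exists in most languages. Here is the sprintf/pack half of the analogy in Python, with `struct.pack` standing in for Perl's `pack` (an illustration, not part of the original article):

```python
import struct

# sprintf-style: a text template with placeholders, then the data
text = "%-10s %03d" % ("Hamlet", 5)
assert text == "Hamlet     005"

# pack-style: a binary layout template, then the data
raw = struct.pack("<hh", 5, 7)
assert struct.unpack("<hh", raw) == (5, 7)
```

`form` plays the third role: a picture-like template, then the data to pour into it.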
Apart from those differences in semantics,
form has a syntactic
difference too. With
form, after the first N data arguments we're
allowed to put a second format string and its corresponding data, then a
third format and data, and so on:
$text = form $format_f1, $datum1, $datum2,
             $format_f2, $datum4,
             $format_f3, $datum5;
And if we prettify that function call a little, it becomes obvious that it has
the same basic structure as a Perl 5
format:
form $format_f1, $datum1, $datum2, $datum3,
     $format_f2, $datum4,
     $format_f3, $datum5;
But the Perl 6 version is implemented as a vanilla Perl 6 subroutine,
rather than hard-coded into the language with a special keyword and
declaration syntax. In this respect it's rather like Perl 5's
little-known
formline function – only much, much better.
So, whereas in Perl 5 we might write:
# Perl 5 code...
our ($name, $age, $ID, $comments);
format STDOUT =
===================================
| NAME     | AGE        | ID NUMBER |
|----------+------------+-----------|
| @<<<<<<< | @||||||||| | @>>>>>>>> |
$name,       $age,        $ID,
|===================================|
| COMMENTS                          |
|-----------------------------------|
| ^<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< |~~
$comments,
===================================
.
write STDOUT;
in Perl 6 we could write:

print form
    "===================================",
    "| NAME     | AGE        | ID NUMBER |",
    "|----------+------------+-----------|",
    "| {<<<<<<} | {||||||||} | {>>>>>>>} |",
              $name, $age, $ID,
    "|===================================|",
    "| COMMENTS                          |",
    "|-----------------------------------|",
    "| {[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[} |",
              $comments,
    "===================================";
At first glance the Perl 6 version may seem like something of a backwards step – all those extra quotation marks and commas that the Perl 5 format didn't require. But the new formatting interface does have several distinct advantages:
- it needs no special
write function – and hence frees up
write to be used as the true opposite of
read, should Larry so desire.
Of course, this is Perl, not Puritanism. So those folks who happen to like package variables, global accumulators, and mysterious writes, can still have them. And, if they're particularly nostalgic, they can also get rid of all the quotation marks and commas, and even retain the dot as a format terminator. For example:
sub myster_rite {
    our ($name, $age, $ID, $comments);
    print form :interleave, <<'.'
===================================
| NAME     | AGE        | ID NUMBER |
|----------+------------+-----------|
| {<<<<<<} | {||||||||} | {>>>>>>>} |
|===================================|
| COMMENTS                          |
|-----------------------------------|
| {[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[} |
===================================
.
    $name, $age, $ID, $comments;
}
# and elsewhere in the same package...
($name, $age, $ID, $comments) = get_data();
myster_rite();

($name, $age, $ID, $comments) = get_more_data();
myster_rite();
Let's take a look...
What's in a name?
But before we do, here's a quick run-down of some of the highly arcane technical jargon we'll be using as we talk about formatting:
Why, how now, ho! From whence ariseth this?
Unlike
sprintf and
pack, the
form subroutine isn't built into Perl 6.
It's just a regular subroutine, defined in the Form.pm module:
module Form {
    type FormArgs ::= Str|Array|Pair;

    sub form (FormArgs *@args is context(Scalar)) returns Str is exported {
        ...
    }

    ...
}
That means that if we want to use
form we need to be sure we:
use Form;
first.
Note that the above definition of
form specifies that the subroutine takes
a list of arguments (
*@args), each of which must be a string, array
or pair (
type FormArgs ::= Str...?
Like all Perl subroutines,
form can be called in a variety of contexts.
When called in a scalar or list context,
form returns a string
containing the complete formatted text:
my $formatted_text = form $format, *@data;
@texts = ( form($format, *@data1), form($format, *@data2) ); # 2 elems
When called in a void context,
form waxes lyrical about human
frailty, betrayal of trust, and the pointlessness of calling out
when nobody's there to heed the reply, before dying in a highly
theatrical manner.
He doth fill fields...
The format strings passed to
form determine what the resulting
formatted text looks like. Each format consists of a series
of field specifiers, which are usually separated by literal characters.
form understands a far larger number of field specifiers than
format did,
but they're easy to remember because they obey a consistent set of rules.
The fields are fragrant...
That may still seem like quite a lot to remember, but the rules have
been chosen so that the resulting fields are visually mnemonic. In other
words, they're supposed to look like what they do. The intention is that
we simply draw a (stylized) picture of how we want the finished text to
look, using fields that look something like the finished product
– left or right brackets showing horizontal alignments,
a middlish = or a bottomed-out _ indicating middle or bottom vertical
alignment, and so on. Then
form fits our data into the fields so it
looks right.
The typical field specifications used in a
form format look like this:.
What a block art thou...

for @reasons.kv -> $index, $reason {
And mark what way I make...

Follow-on fields are marked with Unicode ellipses or ASCII colons at either end. So instead of
{<<<<>>>>}, we'd write
{…<<<>>>…}
or
{:<<<>>>:}.
Note that each ellipsis is a single, one-column wide Unicode HORIZONTAL
ELLIPSIS character (
\c[2026]), not three separate dots. The
connotation of the ellipses is "...then keep on formatting from where
you previously left off, remembering there's probably still more to
come...". And the colons are the ASCII symbol most like a single
character ellipsis (try tilting your head and squinting).
Follow-on fields are most useful when we want to split a formatting task into distinct stages – or iterations – but still allow the contents of the follow-on field to flow uninterrupted from line to line. For example:
print "The best Shakespearean roles are:\n\n";
for @roles -> $role { ... }

The ellipses in the second field tell each call to start
extracting data from
$disclaimer at the offset indicated by
$disclaimer.pos, and then to update
$disclaimer.pos.
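The effect of `.pos`-based follow-on fields can be mimicked in any language by remembering an offset between calls. A minimal Python sketch, where `take` and the mutable offset cell are inventions for illustration only:

```python
def take(text, state, width):
    """Return the next chunk of at most `width` characters, resuming
    where the previous call left off; `state` is a mutable [offset]."""
    chunk = text[state[0]:state[0] + width]
    state[0] += len(chunk)
    return chunk

disclaimer = "abcdefghij"
pos = [0]
assert take(disclaimer, pos, 4) == "abcd"
assert take(disclaimer, pos, 4) == "efgh"  # resumes where it left off
assert take(disclaimer, pos, 4) == "ij"
```

Each call picks up exactly where the previous one stopped, which is all a follow-on field really asks of its data source.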
Therefore, put you in your best array...
Data, especially numeric data, is often stored in arrays.
So
form also accepts arrays as data arguments. It can
do so because its parameter list is defined as:
sub form (Str|Array|Pair *@args is context(Scalar)) {...}
which means that although its arguments may include one or more arrays, each such array argument is nevertheless evaluated in a scalar context. Which, in Perl 6, produces an array reference.
In other words, array arguments don't get flattened automatically, so
form doesn't lose track of where in
the argument list one array finishes and the next begins.
Note, however, that this block-based approach wouldn't work so well if
one of the elements of
@roles was too big to fit on a single line. In
that case we might end up with something like the following: * *
rather * *
That's because the
"*" that's being used as a bullet for the first
column is a literal (i.e. mere decoration),
and so it will be repeated on every line that
is formatted, regardless of whether that line is the start of a new
element of
@roles or merely the broken-and-wrapped remains of the
previous element. Happily, as we shall see later, this particular
problem has a simple solution.
Despite these minor complications, array data sources are particularly useful when formatting, especially if the data is known to fit within the specified width. For example:
print form
    '-------------------------------------------',
    'Name           Score  Time |  Normalized',
    '-------------------------------------------',
    '{[[[[[[[[[[[[}  {III}  {II} |  {]]].[[}   ',
    @name, @score, @time, [@score »/« @time];
is a very easy way to produce the table:
-------------------------------------------
Name           Score  Time |  Normalized
-------------------------------------------
Thomas Mowbray    88    15 |    5.867
Richard Scroop    54    13 |    4.154
Harry Percy       99    18 |    5.5
Note the use of the Perl6-ish listwise division (
»/«)
to produce the array of data for the "Normalized" column.
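The hyper-operator `»/«` simply divides the two arrays elementwise. As a cross-check, the "Normalized" column above can be reproduced with an ordinary comprehension (Python used here only as neutral pseudocode, not Perl 6):

```python
score = [88, 54, 99]
time = [15, 13, 18]

# Elementwise division, the moral equivalent of @score »/« @time
normalized = [s / t for s, t in zip(score, time)]

# Rounded to three places, these match the table above
assert [round(x, 3) for x in normalized] == [5.867, 4.154, 5.5]
```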
More particulars must justify my knowledge....
What, is't too short?
Command our present numbers be muster'd...
The points are all aligned, the minimal number of decimal places are
shown, and the decimals are rounded (using the same rounding protocol that
printf employs). Note in particular that, even though both
1 and
1.0001 would normally convert to the same 3-decimal-place value
(
1.000), a
form call only shows all three zeros in the second case
since only in the second case are they "significant".
In other words, unless we tell it otherwise,
form tries to avoid displaying a
number with more accuracy than it actually possesses (within the
constraint that it must always show at least one decimal place).
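One way to capture that "no more accuracy than the value actually possesses, but always at least one decimal place" rule is sketched below. This is an illustration of the behaviour described, not `form`'s actual algorithm:

```python
def fmt(x, places=3):
    s = f"{x:.{places}f}"
    # Trailing zeros are trimmed only when they are insignificant,
    # i.e. when the rounded form is exactly equal to the value.
    if float(s) == x:
        s = s.rstrip("0")
        if s.endswith("."):
            s += "0"  # always keep at least one decimal place
    return s

assert fmt(1) == "1.0"         # insignificant zeros trimmed
assert fmt(1.0001) == "1.000"  # here the three zeros are significant
```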
Here are only numbers ratified...

#####.###
Note, however, that it is possible to change this behaviour should we
need to, using the unary numerification operator (shown here in its
list form –
+« – since we have an array of
values to be numerified):
print form 'Thy score be: {]]]].[[}', +« @mixed_data;
This version would print:
Thy score be:     1.0
Thy score be:     2.0
Thy score be:     0.0
Thy score be:     1.0
Thy score be:     6.0
Thy score be:     7.0
(The
1.0 on the fourth line appears because Perl 6 hashes numerify to the
number of entries they contain).
See how the giddy multitude do point...

Or else be impudently negative...
print form 'Thy dividend be: ({]]]].[[})', @nums;
and we'd get:
Thy dividend be: (    1.0  )
Thy dividend be: (   -1.2  )
Thy dividend be: (    1.23 )
Thy dividend be: (  -11.234)
Thy dividend be: (  111.235)
Thy dividend be: (#####.###)
Wait a minute...
Where exactly did we conjure that
:locale syntax from?
And what, exactly, did it create? What is an "option"?
Well, we're passing
:locale as an argument to
form, and
form's
signature guarantees us
that it can only accept a
Str, or an
Array, or a
Pair as an
argument. So an "option" must be one of those three types, and that
funky
:identifier syntax must be a constructor for the equivalent data
structure.
And indeed, that's the case. An "option" is just a pair, and the
funky
:identifier syntax is just another way of writing a pair
constructor.
The standard "option" syntax is:
:key( "value" )
which is identical in effect to:
key => "value"
Both specify an autoquoted key; both associate that key with a value; both evaluate to a pair object that contains the key and value. So why have a second syntax for pairs?
Because it allows us to optimize the pair constructor syntax in two different ways. The now-familiar "fat arrow" pair constructor takes a key and a value, each of which can be of any type. In contrast, the key of an "option" pair constructor can only be an identifier, which is always autoquoted at compile-time. So, if we use the "option" syntax we're guaranteed that the key of the resulting pair is a string, that the string contains a valid identifier, and that the compiler can check that validity before the program starts.
Moreover, whereas the "fat arrow" has only one syntax, "options" have several highly useful syntactic variations. For example, "fat arrow" pairs can be especially annoying when we want to use them to pass named boolean arguments to a subroutine. For example:
duel( $person1, $person2, to_death=>1, no_quarter=>1, left_handed=>1, bonetti=>1, capoferro=>1 );
In contrast, "options" have a special default behaviour. If we leave off their
parenthesized value entirely, the implied value is
1. So we could rewrite
the preceding function call as:
duel( $person1, $person2, :to_death, :no_quarter, :left_handed, :bonetti, :capoferro );
Better still, when we have a series of options, we don't have to put commas between them:
duel( $person1, $person2, :to_death :no_quarter :left_handed :bonetti :capoferro );
That makes them even more concise and uncluttered, especially in
use statements:
use POSIX :errno_h :fcntl_h :time_h;
There are other handy "option" variants as well, all of which simply substitute the parentheses following their key for some other kind of bracket (and hence some other kind of value). The full list of "option"...err...options is:
Option syntax        Is equivalent to
==================   =============================
:key("some value") key => "some value"
:key key => 1
:key{ a=>1, b=>2 } key => { a=>1, b=>2 }
:key{ $^arg * 2; } key => { $^arg * 2; }
:key[ 1, 2, 3, 4 ] key => [ 1, 2, 3, 4 ]
:key«eat at Joe's» key => ["eat", "at", "Joe's"]
Despite the deliberate differences in conciseness and flexibility, we can use "options" and "fat arrows" interchangeably in almost every situation where we need to construct a pair (except, of course, where the key needs to be something other than an identifier string, in which case the "fat arrow" is the only alternative). To illustrate that interchangeability, we'll use the "option" syntax throughout most of the rest of this discussion, except where using a "fat arrow" is clearly preferable for code readability.
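The closest everyday analogue to named boolean "options" in other languages is keyword arguments. In Python, the duel example might read as follows (a hypothetical function, for flavour only):

```python
def duel(p1, p2, **options):
    # Every option defaults to "absent"; naming one supplies it,
    # much as a bare :option implies the value 1 in Perl 6.
    return options

opts = duel("Inigo", "Westley", to_death=True, no_quarter=True)
assert opts == {"to_death": True, "no_quarter": True}
```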
Meanwhile, back in the fields...
Some tender money to me...

for %format.kv -> $nationality, $layout {
Nice, eh?
Able verbatim to rehearse...
But sometimes too nice.
And now at length they overflow their banks...
Great floods have flown from simple sources...
When it comes to specifying the data source for each field in a format,
form offers several alternatives as to where that data is placed,
several alternatives as to the order in which that data is extracted, and
an option that lets us control how the data is fitted into each field.
A man may break a word with you, sir....
(Of course, that might actually correspond to less than W characters
if the string contains wide characters. However, for the sake of exposition
we'll pretend that all characters are one column wide here.)

sub ($data is rw, $width, $ws) {
    given $data {
        # Treat any squeezed or vertical whitespace as a single character
        # (since they'll subsequently be squeezed to a single space)
        my rule single_char { <$ws> | \v+ | . }

        # Give up if there are no more characters to grab...
        return ("", 0) unless m:cont/ (<single_char><1,$width>) /;

        # Squeeze the resultant substring...
        (my $result = $1) ~~ s:each/ <$ws> | \v+ /\c[SPACE]/;

        # Check for any more data still to come...
        my bool $more = m:cont/ <before: .* \S> /;

        # Return squeezed text and "more" flag...
        return ($result, $more);
    }
}

sub break_word ($data is rw, $width, $ws) {
    given $data {
        # Locate the next word (no longer than $width cols)
        my $found = m:cont/ \s* $?word:=(\S<1,$width>) /;

        # Fail if no more words...
        return ("", 0) unless $found{word};

        # Check for any more data still to come...
        my bool $more = m:cont/ <before: .* \S> /;

        # Otherwise, return broken text and "more" flag...
        return ($found{word}, $more);
    }
}

print form :break(&break_word),
           "|{[[[[[}|",
           $data;

producing:

|You    |
|can    |
|play   |
|no     |
|part   |
|but    |
|Pyramus|
|;      |
|for    |
|Pyramus|
|is     |
|a      |
|sweet-f|
|aced   |
|man    |
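The behaviour of a word-at-a-time breaker like break_word is easy to model in other languages: grab whitespace-separated words of at most width characters, splitting any word longer than the field. A Python sketch (an analogy, not the Perl 6 code) that reproduces the output above:

```python
def break_word(data, width):
    """Yield word-sized chunks no wider than `width`, splitting
    words that are longer than the field."""
    for word in data.split():
        while word:
            yield word[:width]
            word = word[width:]

text = "You can play no part but Pyramus; for Pyramus is a sweet-faced man"
chunks = list(break_word(text, 7))
assert chunks[:8] == ["You", "can", "play", "no", "part", "but", "Pyramus", ";"]
assert chunks[-3:] == ["sweet-f", "aced", "man"]
```

Note how "Pyramus;" splits into "Pyramus" and ";", and "sweet-faced" into "sweet-f" and "aced", exactly as in the formatted output.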
We'll see yet another application of user-defined breaking when we discuss user-defined fields.
He, being in the vaward, placed behind...

The data for a field might be returned by a subroutine call
(get_info(...)) or might be stored in a hash
(%person{«...»}).
Of course, in this example we're also taking advantage of the new indenting behaviour of heredocs. The "Name:", "Status:", and "Comments:" titles are actually at the very beginning of their respective lines, because the start of a Perl 6 heredoc terminator marks the left margin of the entire heredoc string.
Would they were multitudes....
Lay out. Lay out..
For the table, sir, it shall be served....
Normally in Perl 6, if we wanted to preset a particular optional argument we'd simply make an assumption:
my &down_form := &form.assuming(:layout«down»);
But, of course,
form collects all of its arguments in a single slurpy array, so it
doesn't actually have a
$layout parameter that we can prebind.
Fortunately, the
.assuming method is smart enough to recognize when it is
being applied to a subroutine whose arguments are slurped. In such cases,
it just prepends any prebound arguments to the resulting subroutine's argument
list. That is, the binding of
down_form shown above is equivalent to:
my &down_form := sub (FormArgs *@args is context(Scalar)) returns Str {
    return form( :layout«down», *@args );
};
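Readers who know Python can think of `.assuming` as `functools.partial`: the prebound arguments are simply prepended to the slurped argument list. Here `form` is a stand-in recorder, not the real formatter:

```python
from functools import partial

def form(*args):
    # Stand-in: just report what it was called with
    return args

# Prebind the layout option; later arguments are appended after it
down_form = partial(form, ("layout", "down"))
assert down_form("fmt", [1, 2]) == (("layout", "down"), "fmt", [1, 2])
```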
This was your default...
form provides one other mechanism by which options can be prebound.
To use it, we (re-)load the Form module with an explicit argument list:
use Form :layout«down», :locale, :interleave;
This causes the module to export a modified version of
form in which the
specified options are prebound. That modified version of
form is exported
lexically, and so
form only has the specified defaults preset for the
scope in which the
use Form statement appears.
These default options are handy if we have a series of calls
to
form that all need some consistent non-standard behaviour.
For example:
use Form :layout«across», :interleave, :page{ :header("Draft $(localtime)\n\n") };
print form $introduction_format, *@introduction_data;
for @sections -> $format, @data {
    print form $format, *@data;
}
print form $conclusion_format, *@conclusion_data;
Another use is to set up a fixed formatting string into which different data
is to be interpolated (much in the way Perl 5 formats are typically used).
For example, we might want a standard format for errors in a
CATCH block:
CATCH {
    use Form :interleave, <<EOFORMAT;
Error {<<<<<<<}: {[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[}
___________________________________________________
EOFORMAT

    when /Missing datum/ { warn form "EMISSDAT", $_.msg }
    when /too large/     { warn form "ETOOBIG",  $_.msg }
    when .core           { warn form "EINTERN",  "Internal error" }
    default              { warn form "EUNKNOWN", "Seek help" }
}
And welcome to the wide fields...
All the fields we've seen so far have been exactly as wide as their specifications. That's the whole point of having fields – they allow us to lay out formats "by eye".
But
form also allows us to specify field widths in other ways. And better
yet, to avoid specifying them at all and let
form work out how big
they should be.
The measure then of one is easily told..
When such fields are part of a larger format, errors like that can
easily result in a call to
form producing, say, 81-column lines. That
would merely be messy if the extra characters wrapped, but could
be disastrous if they happened to be chopped instead. Suppose,
for example, that the last 4 columns of output contain nuclear reactor
core temperatures, and then consider the difference between an
apparently normal reading of 567 Celsius and what might actually be
happening if the reading were in fact a truncated 5678 Celsius.
What you will command me will I do....
Imperative fields disrupt the WYSIWYG layout of a format, so they're generally
only used when the format itself is being generated programmatically. For
example, when we were counting down the top ten reasons not to do one's English Lit homework, we used a fixed-width
{>} field to format each number:
for @reasons.kv -> $index, $reason {
    my $n = @reasons - $index ~ '.';
    print form "  {>}  {[[[[[[[[[[[[[[[[[[[[[[[[[[[[}",
               $n, $reason, "";
}
But, of course, there's no reason (theoretically, at least) why we couldn't
find more than 99 reasons not to do our homework, in which case we'd
overflow the
{>} field.
So instead of limiting ourselves that way, we could just tell
form to make
the first field wide enough to enumerate however many reasons we come up with,
like so:
my $width = length(+@reasons)+1;
for @reasons.kv -> $index, $reason {
    my $n = @reasons - $index ~ '.';
    print form "  {>>{$width}>>}  {[[[[[[[[[[[[[[[[[[[[[[[[[[[[}",
               $n, $reason, "";
}
By evaluating
@reasons in a numeric context (
+@reasons) we determine the
number of reasons we have, and hence the largest number that need ever fit
into the first field. Taking the length of that number
(length(+@reasons)) gives the number of columns the largest number
requires; adding one more column leaves room for the trailing dot.
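The same width computation, written out in Python for comparison (the extra column is for the dot appended to each number):

```python
reasons = ["some reason"] * 120   # suppose we found 120 reasons

# Widest number we will ever print, plus one column for the dot
width = len(str(len(reasons))) + 1
assert width == 4                 # "120." needs four columns
```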
From the crown of his head to the sole of his foot...
These paper bullets of the brain...
Bulleted lists of items are a very common feature of reports, but as we saw earlier they're surprisingly hard to get right.
Suppose, for example, we iterate over the items:

for @items -> $item { ... }
bumped the version number to 0.3.1
gforth-makeimage now makes an executable file and uses $GFORTH
documentation changes
fixed bug involving locals and recurse

: { ( -- lastxt wid 0 ) \ gforth open-brace
    dp old-dpp !
    locals-dp dpp !
    lastxt get-current
    also new-locals
    also locals definitions locals-types
    0 TO locals-wordlist
    0 postpone [ ; immediate

locals-types definitions

: } ( lastxt wid 0 a-addr1 xt1 ... -- ) \ gforth close-brace
    \ ends locals definitions
    ] old-dpp @ dpp !
    begin
	dup
    while
	execute
    repeat
    drop
    locals-size @ alignlp-f locals-size ! \ the strictest alignment
    previous previous
    set-current lastcfa !
    locals-list TO locals-wordlist ;

: -- ( addr wid 0 ... -- ) \ gforth dash-dash
    }
    [char] } parse 2drop ;

forth definitions

\ A few thoughts on automatic scopes for locals and how they can be
\ implemented:

\ We have to combine locals with the control structures. My basic idea
\ was to start the life of a local at the declaration point. The life
\ would end at any control flow join (THEN, BEGIN etc.) where the local
\ is not live on both input flows (note that the local can still live in
\ other, later parts of the control flow). This would make a local live
\ as long as you expected and sometimes longer (e.g. a local declared in
\ a BEGIN..UNTIL loop would still live after the UNTIL).

\ The following example illustrates the problems of this approach:

\ { z }
\ if
\   { x }
\ begin
\   { y }
\ [ 1 cs-roll ] then
\ ...
\ until

\ x lives only until the BEGIN, but the compiler does not know this
\ until it compiles the UNTIL (it can deduce it at the THEN, because at
\ that point x lives in no thread, but that does not help much). This is
\ solved by optimistically assuming at the BEGIN that x lives, but
\ warning at the UNTIL that it does not. The user is then responsible
\ for checking that x is only used where it lives.

\ The produced code might look like this (leaving out alignment code):

\ >l ( z )
\ ?branch <then>
\ >l ( x )
\ <begin>:
\ >l ( y )
\ lp+!# 8 ( RIP: x,y )
\ <then>:
\ ...
\ lp+!# -4 ( adjust lp to <begin> state )
\ ?branch <begin>
\ lp+!# 4 ( undo adjust )

\ The BEGIN problem also has another incarnation:

\ AHEAD
\ BEGIN
\   x
\ [ 1 CS-ROLL ] THEN
\   { x }
\   ...
\ UNTIL

\ should be legal: The BEGIN is not a control flow join in this case,
\ since it cannot be entered from the top; therefore the definition of x
\ dominates the use. But the compiler processes the use first, and since
\ it does not look ahead to notice the definition, it will complain
\ about it. Here's another variation of this problem:

\ IF
\   { x }
\ ELSE
\   ...
\ AHEAD
\ BEGIN
\   x
\ [ 2 CS-ROLL ] THEN
\ ...
\ UNTIL

\ In this case x is defined before the use, and the definition dominates
\ the use, but the compiler does not know this until it processes the
\ UNTIL. So what should the compiler assume does live at the BEGIN, if
\ the BEGIN is not a control flow join? The safest assumption would be
\ the intersection of all locals lists on the control flow
\ stack. However, our compiler assumes that the same variables are live
\ as on the top of the control flow stack. This covers the following case:

\ { x }
\ AHEAD
\ BEGIN
\   x
\ [ 1 CS-ROLL ] THEN
\ ...
\ UNTIL

\ If this assumption is too optimistic, the compiler will warn the user.

\ Implementation:

\ explicit scoping

: scope ( compilation -- scope ; run-time -- ) \ gforth
    cs-push-part scopestart ; immediate

: endscope ( compilation scope -- ; run-time -- ) \ gforth
    scope?
    drop
    locals-list @ common-list
    dup list-size adjust-locals-size
    locals-list ! ; immediate

\ adapt the hooks

: locals-:-hook ( sys -- sys addr xt n )
    \ addr is the nfa of the defined word, xt its xt
    DEFERS :-hook
    last @ lastcfa @
    clear-leave-stack
    0 locals-size !
    locals-buffer locals-dp !
    0 locals-list !
    dead-code off
    defstart ;

: locals-;-hook ( sys addr xt sys -- sys )
    def?
    0 TO locals-wordlist
    0 adjust-locals-size ( not every def ends with an exit )
    lastcfa ! last !
    DEFERS ;-hook ;

\ THEN (another control flow from before joins the current one):
\ The new locals-list is the intersection of the current locals-list and
\ the orig-local-list. The new locals-size is the (alignment-adjusted)
\ size of the new locals-list. The following code is generated:
\ lp+!# (current-locals-size - orig-locals-size)
\ <then>:
\ lp+!# (orig-locals-size - new-locals-size)

\ Of course "lp+!# 0" is not generated. Still this is admittedly a bit
\ inefficient, e.g. if there is a locals declaration between IF and
\ ELSE. However, if ELSE generates an appropriate "lp+!#" before the
\ branch, there will be none after the target <then>.

: (then-like) ( orig -- )
    dead-orig =
    if
	>resolve drop
    else
	dead-code @
	if
	    >resolve set-locals-size-list dead-code off
	else \ both live
	    over list-size adjust-locals-size
	    >resolve
	    locals-list @ common-list dup list-size adjust-locals-size
	    locals-list !
	then
    then ;

: (begin-like) ( -- )
    dead-code @ if
	\ set up an assumption of the locals visible here. if the
	\ users want something to be visible, they have to declare
	\ that using ASSUME-LIVE
	backedge-locals @ set-locals-size-list
    then
    dead-code off ;

\ AGAIN (the current control flow joins another, earlier one):
\ If the dest-locals-list is not a subset of the current locals-list,
\ issue a warning (see below). The following code is generated:
\ lp+!# (current-local-size - dest-locals-size)
\ branch <begin>

: (again-like) ( dest -- addr )
    over list-size adjust-locals-size
    swap check-begin POSTPONE unreachable ;

\ UNTIL (the current control flow may join an earlier one or continue):
\ Similar to AGAIN. The new locals-list and locals-size are the current
\ ones. The following code is generated:
\ ?branch-lp+!# <begin> (current-local-size - dest-locals-size)

: (until-like) ( list addr xt1 xt2 -- )
    \ list and addr are a fragment of a cs-item
    \ xt1 is the conditional branch without lp adjustment, xt2 is with
    >r >r
    locals-size @ 2 pick list-size - dup if ( list dest-addr adjustment )
	r> drop r> compile,
	swap <resolve ( list adjustment ) ,
    else ( list dest-addr adjustment )
	drop
	r> compile, <resolve
	r> drop
    then ( list )
    check-begin ;

: (exit-like) ( -- )
    0 adjust-locals-size ;

' locals-:-hook IS :-hook
' locals-;-hook IS ;-hook

' (then-like) IS then-like
' (begin-like) IS begin-like
' (again-like) IS again-like
' (until-like) IS until-like
' (exit-like) IS exit-like

\ The words in the locals dictionary space are not deleted until the end
\ of the current word. This is a bit too conservative, but very simple.

\ There are a few cases to consider: (see above)

\ after AGAIN, AHEAD, EXIT (the current control flow is dead):
\ We have to special-case the above cases against that. In this case the
\ things above are not control flow joins. Everything should be taken
\ over from the live flow. No lp+!# is generated.

: (local) ( addr u -- ) \ local paren-local-paren
    \ a little space-inefficient, but well deserved ;-)
    \ In exchange, there are no restrictions whatsoever on using (local)
    \ as long as you use it in a definition
    dup
    if
	nextname POSTPONE { [ also locals-types ] W: } [ previous ]
    else
	2drop
    endif ;

: >definer ( xt -- definer )
    \ this gives a unique identifier for the way the xt was defined
    \ words defined with different does>-codes have different definers
    \ the definer can be used for comparison and in definer!
    dup >does-code
    ?dup-if
	nip 1 or
    else
	>code-address
    then ;

: definer! ( definer xt -- )
    \ gives the word represented by xt the behaviour associated with definer
    over 1 and if
	swap [ 1 invert ] literal and does-code!
    else
	code-address!
    then ;

:noname
    ' dup >definer [ ' locals-wordlist ] literal >definer =
    if
	>body !
    else
	-&32 throw
    endif ;
:noname
    0 0 0. 0.0e0 { c: clocal w: wlocal d: dlocal f: flocal }
    comp' drop dup >definer
    case
	[ ' locals-wordlist ] literal >definer \ value
	OF >body POSTPONE Aliteral POSTPONE ! ENDOF
	[ comp' clocal drop ] literal >definer
	OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF
	[ comp' wlocal drop ] literal >definer
	OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF
	[ comp' dlocal drop ] literal >definer
	OF POSTPONE laddr# >body @ lp-offset, POSTPONE 2! ENDOF
	[ comp' flocal drop ] literal >definer
	OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF
	-&32 throw
    endcase ;
interpret/compile: TO ( c|w|d|r "name" -- ) \ core-ext,local

: locals|
    \ don't use 'locals|'! use '{'! A portable and free '{'
    \ implementation is compat/anslocals.fs
    BEGIN
	name 2dup s" |" compare 0<>
    WHILE
	(local)
    REPEAT
    drop 0 (local) ; immediate restrict
Hi,

I am working on the design of my own numerical library (with il::Vector, il::StaticVector, il::Matrix, il::StaticMatrix) with some improvements over the STL (indices are signed, il::Vector<double> is initialised to NaN in debug mode, etc.). I've been thinking for a long time about a good design for custom allocators but I can't make up my mind. The STL way of doing it invades the type system and makes it a pain to use. The Bloomberg way of doing it uses dynamic dispatch for memory allocation and does not invade the type system. It looked like the right solution to me before I saw Chandler Carruth from Google saying that compilers can optimise away memory allocation if they can see it. This kind of optimisation is obviously lost in the Bloomberg model. It took me a while to find an example where memory allocations are optimised away, but the following code
#include <chrono>
#include <iostream>
#include <vector>

std::vector<double> f_val(std::size_t i, std::size_t n) {
  auto v = std::vector<double>(n);
  for (std::size_t k = 0; k < v.size(); ++k) {
    v[k] = static_cast<double>(k + i);
  }
  return v;
}

int main(int argc, char const *argv[]) {
  const auto n = std::size_t{10};
  const auto nb_loops = std::size_t{300000000};
  auto v = std::vector<double>(n, 0.0);
  auto start_time = std::chrono::high_resolution_clock::now();
  for (std::size_t i = 0; i < nb_loops; ++i) {
    auto w = f_val(i, n);
    for (std::size_t k = 0; k < v.size(); ++k) {
      v[k] += w[k];
    }
  }
  std::cout << v[0] << " " << v[n - 1] << std::endl;
  return 0;
}
blew my mind when compiled with clang. There is only one memory allocation in the whole program: the one for v. It seems that the call to f_val is inlined, the loops are fused, and then the memory allocation for w is completely removed! Both gcc and icpc don't do this kind of optimisation. It's not really clear whether this kind of optimisation is allowed by the standard, but there is a proposal to clarify that point: . Is there any work at Intel on doing this kind of optimisation?
Link Copied
Hi Velvia,
Will changing the below line
auto w = f_val(i, n);
to
auto w = std::move(f_val(i, n));
force the compiler to use move constructor which will avoid the extra memory allocation for "w".
Thanks and Regards
Anoop
Hi Anoop,
There is a misunderstanding.
You can check the assembly, but the "easy way" is to run the program. If it takes less than one second, that means the allocation is optimised away. But neither gcc nor icpc can do that right now.
In this notebook I am going to show you how we can do Blind Source Separation (BSS) using algorithms available in the Shogun Machine Learning Toolbox. What is Blind Source Separation? BSS is the separation of a set of source signals from a set of mixed signals.
My favorite example of this problem is known as the cocktail party problem, where a number of people are talking simultaneously and we want to separate each person's speech so we can listen to it separately. Now, the caveat with this type of approach is that we need as many mixtures as we have source signals; in terms of the cocktail party problem, we need as many microphones as there are people talking in the room.
Let's get started, this example is going to be in python and the first thing we are going to need to do is load some audio files. To make things a bit easier further on in this example I'm going to wrap the basic scipy wav file reader and add some additional functionality. First I added a case to handle converting stereo wav files back into mono wav files and secondly this loader takes a desired sample rate and resamples the input to match. This is important because when we mix the two audio signals they need to have the same sample rate.
from scipy.io import wavfile
from scipy.signal import resample

def load_wav(filename, samplerate=44100):
    # load file
    rate, data = wavfile.read(filename)

    # convert stereo to mono
    if len(data.shape) > 1:
        data = data[:,0]/2 + data[:,1]/2

    # re-interpolate samplerate
    ratio = float(samplerate) / float(rate)
    data = resample(data, len(data) * ratio)

    return samplerate, data.astype(np.int16)
Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all would it?). In the next bit of code I've defined a wavPlayer class that takes the signal and the sample rate and then creates a nice HTML5 webplayer right inline with the notebook.
import sys
import StringIO
import base64
import struct

from IPython.core.display import HTML, display

def wavPlayer(data, rate):
    """ will display html 5 player for compatible browser

    The browser need to know how to play wav through html5.
    there is no autoplay to prevent file playing when the browser opens

    Adapted from SciPy.io. and
    github.com/Carreau/posts/blob/master/07-the-sound-of-hydrogen.ipynb
    """
    buffer = StringIO.StringIO()
    buffer.write(b'RIFF')
    buffer.write(b'\x00\x00\x00\x00')
    buffer.write(b'WAVE')
    buffer.write(b'fmt ')
    if data.ndim == 1:
        noc = 1
    else:
        noc = data.shape[1]
    bits = data.dtype.itemsize * 8
    sbytes = rate*(bits // 8)*noc
    ba = noc * (bits // 8)
    buffer.write(struct.pack('<ihHIIHH', 16, 1, noc, rate, sbytes, ba, bits))

    # data chunk
    buffer.write(b'data')
    buffer.write(struct.pack('<i', data.nbytes))
    if data.dtype.byteorder == '>' or (data.dtype.byteorder == '=' and sys.byteorder == 'big'):
        data = data.byteswap()
    buffer.write(data.tostring())
    # return buffer.getvalue()

    # Determine file size and place it in correct
    # position at start of the file.
    size = buffer.tell()
    buffer.seek(4)
    buffer.write(struct.pack('<i', size-8))

    val = buffer.getvalue()

    src = """
    <head>
    <title>Simple Test</title>
    </head>

    <body>
    <audio controls="controls" style="width:600px" >
      <source controls src="data:audio/wav;base64,{base64}" type="audio/wav" />
      Your browser does not support the audio element.
    </audio>
    </body>
    """.format(base64=base64.encodestring(val))
    display(HTML(src))
Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here: among other places on the web or from your Starcraft install directory (come on I know its still there).
Another good source of data (although lets be honest less cool) is ICA central and various other more academic data sets:. Note that for lots of these data sets the data will be mixed already so you'll be able to skip the next few steps.
Okay lets load up an audio file. I chose the Terran Battlecruiser saying "Good Day Commander". In addition to the creating a wavPlayer I also plotted the data using Matplotlib (and tried my best to have the graph length match the HTML player length). Have a listen!
# change to the shogun-data directory
import os
os.chdir('../../../data/ica')
import pylab as pl

# load
fs1,s1 = load_wav('tbawht02.wav') # Terran Battlecruiser - "Good day, commander."

# plot
pl.figure(figsize=(6.75,2))
pl.plot(s1)
pl.title('Signal 1')
pl.show()

# player
wavPlayer(s1, fs1)
Now let's load a second audio clip:
# load
fs2,s2 = load_wav('TMaRdy00.wav') # Terran Marine - "You want a piece of me, boy?"

# plot
pl.figure(figsize=(6.75,2))
pl.plot(s2)
pl.title('Signal 2')
pl.show()

# player
wavPlayer(s2, fs2)
and a third audio clip:
# load
fs3,s3 = load_wav('PZeRdy00.wav') # Protoss Zealot - "My life for Aiur!"

# plot
pl.figure(figsize=(6.75,2))
pl.plot(s3)
pl.title('Signal 3')
pl.show()

# player
wavPlayer(s3, fs3)
/usr/lib/python2.7/dist-packages/scipy/io/wavfile.py:31: WavFileWarning: Unfamiliar format bytes warnings.warn("Unfamiliar format bytes", WavFileWarning)
Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together!
First, another nuance: what if the audio clips aren't the same length? The solution I came up with for this was to simply resize them all to the length of the longest signal; the extra length will just be filled with zeros, so it won't affect the sound.
The signals are mixed by creating a mixing matrix \(A\) and taking the dot product of \(A\) with the signals \(S\).
Afterwards I plot the mixed signals and create the wavPlayers, have a listen!
import numpy as np

# Adjust for different clip lengths
fs = fs1
length = max([len(s1), len(s2), len(s3)])
s1.resize((length,1))
s2.resize((length,1))
s3.resize((length,1))

S = (np.c_[s1, s2, s3]).T

# Mixing Matrix
#A = np.random.uniform(size=(3,3))
#A = A / A.sum(axis=0)
A = np.array([[1, 0.5, 0.5],
              [0.5, 1, 0.5],
              [0.5, 0.5, 1]])
print 'Mixing Matrix:'
print A.round(2)

# Mix Signals
X = np.dot(A,S)

# Mixed Signal i
for i in range(X.shape[0]):
    pl.figure(figsize=(6.75,2))
    pl.plot((X[i]).astype(np.int16))
    pl.title('Mixed Signal %d' % (i+1))
    pl.show()
    wavPlayer((X[i]).astype(np.int16), fs)
Mixing Matrix:
[[ 1.   0.5  0.5]
 [ 0.5  1.   0.5]
 [ 0.5  0.5  1. ]]
Arduino: LCD with native I2C support and AIP31068L Driver (liquid crystal I2C)
Surenoo and Grove offer Liquid Crystal Displays with native I2C support. Also variants with RGB backlights are available. The AIP31068 uses a similar character set like the well known HD44870 ROM A00.
The Character Set of the AIP31068L LCD
The character set of the AIP31068L is similar to the Hitachi HD44780U A00 ROM.
The "Noiasca Liquid Crystal" library does a character mapping from UTF-8 to the existing characters in the LCD ROM. As example some special characters in the second row of the display:
(Degree, Division, middle Point, n with tilde, Pound, Yen, Tilde, Sqareroot, Proportional to, Infinity, Left Arrow, Right Arrow, Backslash)
You can read more about this character mapping in the Introduction or in the German section of this site. For the beginning you should know that you don't need to enter the octal ROM addresses of the special characters manually and these print can be done by a simple:
lcd.print("°÷·ñ£¥~√∝∞←→\\");
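As a rough illustration of what such a UTF-8-to-ROM mapping involves (this is not the library's actual implementation; the ROM addresses below follow the commonly documented HD44780 A00 table and should be checked against your display's datasheet):

```cpp
#include <cstdint>
#include <string>

// Map a Unicode code point to an HD44780-A00-style ROM address.
// The addresses (degree -> 0xDF, a-umlaut -> 0xE1, ...) are assumptions
// based on the widely cited A00 character ROM.
uint8_t mapCodepoint(uint32_t cp) {
  switch (cp) {
    case 0x00B0: return 0xDF;  // degree sign
    case 0x00E4: return 0xE1;  // a with diaeresis
    case 0x00F6: return 0xEF;  // o with diaeresis
    case 0x00FC: return 0xF5;  // u with diaeresis
    case 0x00DF: return 0xE2;  // sharp s
    default:     return (cp < 0x80) ? static_cast<uint8_t>(cp) : 0xFF;  // ASCII, else blank
  }
}

// Minimal UTF-8 decoder: converts a UTF-8 string to ROM bytes.
// Malformed input is not validated; this is a sketch only.
std::string toRomBytes(const std::string& utf8) {
  std::string out;
  for (std::size_t i = 0; i < utf8.size();) {
    unsigned char b = utf8[i];
    uint32_t cp;
    std::size_t len;
    if      (b < 0x80) { cp = b;        len = 1; }
    else if (b < 0xE0) { cp = b & 0x1F; len = 2; }
    else if (b < 0xF0) { cp = b & 0x0F; len = 3; }
    else               { cp = b & 0x07; len = 4; }
    for (std::size_t j = 1; j < len && i + j < utf8.size(); ++j)
      cp = (cp << 6) | (utf8[i + j] & 0x3F);
    out.push_back(static_cast<char>(mapCodepoint(cp)));
    i += len;
  }
  return out;
}
```

The library performs this kind of translation internally, which is why a plain print of the UTF-8 source string works.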
The Hardware Driver for Native Wire/I2C Displays
The library offers a basic class for native Wire/I2C displays. It uses 8-bit mode and slightly adapted wait times after commands, according to the datasheet.
The necessary #include and the constructor are:
#include <Wire.h>
#include <NoiascaLiquidCrystal.h>
#include <NoiascaHW/lcd_wire.h>

LiquidCrystal_Wire_base lcd(lcdAddr, cols, rows);
Remember to add a Wire.begin() to your setup() like in the examples, as this will not be done in the library.
Arduino Library for the Surenoo
The Surenoo RGB LCD uses a dedicated PCA9633 at 0x60 as backlight LED driver.
The necessary specific constant, #include and the constructor for the Surenoo display are:
const byte lcdAddr = 0x3E;

#include <NoiascaLiquidCrystal.h>
#include <NoiascaHW/lcd_wire.h>

LiquidCrystal_Wire_RGB lcd(lcdAddr, cols, rows);
You can set the RGB color with
lcd.setRGB(255, 128, 64);
the method always takes three parameters, for red, green, and blue; each value can be 0..255.
Hack for the Sureeno RGB
To get reliable communication with the Surenoo RGB LCD, my display needed a small hardware hack: I added a 100nF decoupling capacitor between GND and VCC.
The Grove Display
The Grove RGB Display uses a different I2C address for the RGB IC. Therefore you have to hand over the RGB address 0x62 as second parameter to the display.
The necessary #include and the constructor are:
const byte lcdAddr = 0x3E;
const byte rgbAddr = 0x62;

#include <NoiascaLiquidCrystal.h>
#include <NoiascaHW/lcd_wire.h>

LiquidCrystal_Wire_RGB lcd(lcdAddr, rgbAddr, cols, rows);
Additional (and modified) methods for the Surenoo and Grove displays
Backlight control is done via the three RGB colors. After startup, activating the backlight switches to a medium "greyish" look (127, 127, 127).
You can set the RGB color with
lcd.setRGB(255, 128, 64);
To switch off the backlight
lcd.noBacklight();
To reactivate the previous colors switch on the backlight:
lcd.backlight();
You can dim your backlight to control its brightness. The value can range from 0 - 255:
lcd.setBacklight(255);
You can set a 1Hz blink mode completely done by the RGB chip. This means it will not need resources from the Arduino for blinking:
lcd.blinkBacklight(); // blink the backlight
To stop the backlight blinking call
lcd.noBlinkBacklight(); // stop blinking backlight
Build in Examples
All examples regarding displays with native I2C interface (like the Surenoo RGB or the Grove RGB) can be found in the example folder 05_wire.
Your own Character Converter
If you need a different converter for the characters you can hand over a callback function as optional last parameter.
Obviously - you also have to implement the callback function to do all the magic.
See the general example "50 Convert" how to write your own character converter.
German Umlauts
For my German readers: the default constructor enables the support for the small letters ä ö ü and ß. The large German umlauts Ä Ö Ü (with diaeresis) will be converted to their counterpart A O U. If you want to try other variants, you can hand over different callback converter functions:
LiquidCrystal_Wire_RGB lcd(addr, cols, rows, csPin);                    // Ä becomes A
//LiquidCrystal_Wire_RGB lcd(addr, cols, rows, csPin, convert_ae);      // Ä becomes Ae
//LiquidCrystal_Wire_RGB lcd(addr, cols, rows, csPin, convert_small);   // Ä becomes ä
//LiquidCrystal_Wire_RGB lcd(addr, cols, rows, csPin, convert_special); // Ä becomes Ä
The Wire class in "Noiasca Liquid Crystal" enables you to run LCDs with native I2C interface. For the Surenoo and Grove display the support of the RGB backlight was added. | https://werner.rothschopf.net/202009_arduino_liquid_crystal_wire_en.htm | CC-MAIN-2022-40 | refinedweb | 747 | 62.78 |
GHC Commentary: Packages
This section documents how GHC implements packages. You should also look at
- The Packages section of the Users Guide
- The Cabal documentation
- Various Distribution.* modules in the Cabal package, eg. Distribution.Package.
- ModIface, ModDetails, ModGuts: Details of some of the types involve in ghc's handling of modules and packages.
- Packages: A commentary on packages at a slightly higher level, user orientated view.
Overview
A package consists of zero or more Haskell modules, compiled to object code and placed in a single library file libHS<pkg>.a. GHCi's linker can't load .a files, so there is also a version of the .a file linked together as a single object file, usually named HS<pkg>.o.
Package databases
GHC draws its information about what packages are installed from one or more package databases. A package database is a file containing a value of type [ InstalledPackageInfo ], rendered as text via show. Also, GHC allows the system package databases to be in the form of a directory of files, each of which contains a [InstalledPackageInfo] (in the future this may be extended to allow all packages databases to have this form). Note: the exact representation of a package database is intended to be private to GHC, which is why we provide the ghc-pkg tool to manipulate it.
Package-related types
Source files: compiler/main/PackageConfig.hs, compiler/main/Packages.lhs
- PackageId
- The most important package type inside ghc is PackageId, representing the full name of a package (including its version). It is represented as a FastString for fast comparison.
- PackageConfig
- The information contained in the package database about a package. Currently this is a synonym for InstalledPackageInfo, later it might contain extra GHC-specific info, or have a more optimised representation.
- PackageConfigMap
- A mapping (actually UniqFM) from PackageId to PackageConfig.
- PackageState
- Everything the compiler knows about the package database. This is built by initPackages in compiler/main/Packages.lhs, and stashed in the DynFlags.
Modules
GHC (from version 6.6) allows a single program to contain multiple modules with the same name, as long as the duplicates all come from different packages. In other words, the pair (package name, module name) must be unique within a program. GHC implements this with the Module type, which contains a PackageId and the ModuleName of a module. For any Module, we can therefore ask which package it comes from.
This means that the Module type is not Uniqable, so we can't use Module as the key in a UniqFM, which is sad. We explored various schemes for extracting uniques from Modules, but didn't find anything attractive enough. Another problem with the current scheme is that every time we refer to a Module in an interface file, it gives rise to two words in the binary representation. Our current plan is to improve the binary representation in .hi files to mitigate this, but this is currently one reason why in GHC 6.6 interface files are larger than in 6.4.
Source code: compiler/basicTypes/Module.lhs.
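The shape described above can be sketched roughly like this (an illustration of the idea, not GHC's actual source):

```haskell
-- A Module pairs the package it comes from with its module name,
-- so the pair (package name, module name) is unique within a program.
data Module = Module
  { modulePackageId :: PackageId   -- e.g. "base"
  , moduleName      :: ModuleName  -- e.g. "Data.List"
  }
```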
The current package
There is a notion of which package we are compiling, set by the -package-name flag on the command line. In the absence of a -package-name flag, the default package main is assumed.
To find out what the current package is, grab the field thisPackage from DynFlags (see compiler/main/DynFlags.hs).
Special packages
Certain packages are special, in the sense that GHC knows about their existence and something about their contents. Any Name that is wired-in (see compiler/prelude/PrelNames.lhs) must by definition contain a Module, and that module must therefore contain a PackageId. But the PackageId is a full package name, including the version, so does this mean we have to somehow find out the version of the base package (for example) and bake it into the GHC binary?
We took the view that it should be possible to upgrade these packages independently of GHC, so long as you don't change any of the information that GHC knows about the package (eg. the type of fromIntegral or what module things come from). Therefore we shouldn't bake in the version of any packages. So the PackageId for the base package inside GHC is simply base: we explicitly strip off the version number for special packages wherever they occur.
This does have the consequence that you cannot use multiple versions of a special package simultaneously in a program, but we believe that is unlikely for these packages anyway. Another consequence is that symbol names for entities from special packages will not include the version number, which saves some space in the object files.
The following packages are special in GHC 6.6:
- base
- rts
- haskell98
- template-haskell
The special PackageIds are defined in compiler/main/PackageConfig.hs, and the stripping of versions from special packages in the package database happens in initPackages in compiler/main/Packages.lhs.
Symbol names
All symbol names in the object code have the package name prepended (plus an underscore) so that modules of the same name from different packages do not clash. We assume the symbol namespace is global, which is the worst case - allegedly there are ways to have semi-private namespaces on some platforms but we haven't explored that.
There is one exception: we don't prepend main_ to symbols from the main package, because there can only ever be one main package. This is a small optimisation.
Source code: see the Outputable instance for Module in compiler/basicTypes/Module.lhs.
Dynamic linking
Packages have another purpose when it comes to dynamic linking: each package is a single dynamically-linked library. This is an important property on systems where making intra-library calls is different from inter-library calls (eg. Windows DLLs). Even on systems where we only need to generate a single kind of call, making a data reference within a single library is cheaper than a data reference in another library, so knowing which is which is important.
At the time of writing (GHC 6.6) GHC doesn't have working support for generating multi-DLL Haskell programs, but it worked in the past and work is underway to resurrect it. Dynamic libraries currently only work on MacOS X/PowerPC.
Packages in a GHC build
When GHC is building, it constructs two package databases:
- driver/package.conf: the package database that will be installed if you say make install. To inspect or modify this database, use utils/ghc-pkg/ghc-pkg-inplace -f <somewhere>/driver/package.conf.
- driver/package.conf.inplace: the same, but paths points to the build tree so that GHC can be run without installing. To inspect or modify this database, use utils/ghc-pkg/ghc-pkg-inplace.
Both of these databases start empty: make boot in driver creates an empty database in each file. Then, packages are registered into each database when make boot runs in a package directory.
NOTE: packages must be registered in dependency order. The build system arranges this normally, but if you build parts of the tree by hand you might violate this rule. If a package is registered before its dependencies, you might not get an error message, but something will go wrong later (probably a missing package dependency). The reason is, to make it easier to register packages, we don't specify full version numbers in the depends field of a package configuration, leaving ghc-pkg to fill it in from the database, but if the dependency isn't present in the database, ghc-pkg silently registers it anyway (because we use --force... that's another story).
Refreshing your package databases
Sometimes things can get out of sync in your build tree, if a package version was bumped for example. If you get into trouble, just make clean in your tree. | https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Packages?version=5 | CC-MAIN-2016-18 | refinedweb | 1,306 | 54.93 |
This is going to be a long, language lawyerish question, so I'd like to quickly state why I find it relevant. I am working on a project where strict standard compliance is crucial (writing a language that compiles to C). The example I am going to give seems like a standard violation on the part of clang, and so, if this is the case, I'd like to confirm it.
gcc says that a conditional with a pointer to a restrict qualified pointer can not co-inhabit a conditional statement with a void pointer. On the other hand, clang compiles such things fine. Here is an example program:
#include <stdlib.h>
int main(void){
int* restrict* A = malloc(8);
A ? A : malloc(8);
return 0;
}
gcc, invoked with -std=c11 -pedantic, rejects the program, while clang, invoked with -std=c11 -Weverything, compiles it without complaint. gcc reports:
tem-2.c: In function ‘main’:
tem-2.c:7:2: error: invalid use of ‘restrict’
A ? A : malloc(8);
^
6.5.15 Conditional operator
...
...
- If both the second and third operands are pointers or one is a null pointer constant and the
other is a pointer, the result type is a pointer to a type qualified with all the type qualifiers
of the types referenced by both operands. Furthermore, if both operands are pointers to
compatible types or to differently qualified versions of compatible types, the result type is
a pointer to an appropriately qualified version of the composite type; if one operand is a
null pointer constant, the result has the type of the other operand; otherwise, one operand
is a pointer to void or a qualified version of void, in which case the result type is a
pointer to an appropriately qualified version of void.
...
6.7.3 Type qualifiers, paragraph 2
Types other than pointer types whose referenced type is an object type shall not be restrict-qualified.
5.1.1.3 Diagnostics, paragraph 1.
Yes it looks like a bug.
Your question more briefly: can void be restrict-qualified? Since void is clearly not a pointer type, the answer is no. Because this violates a constraint, the compiler should give a diagnostic.

I was able to trick clang to confess its sins by using a _Generic expression
puts(_Generic(A ? A : malloc(8), void* : "void*"));
and clang tells me

static.c:24:18: error: controlling expression type 'restrict void *' not compatible with any generic association type
    puts(_Generic(A ? A : malloc(8), void* : "void*"));

which shows that clang here really tries to match a nonsense type restrict void*.
Please file them a bug report. | https://codedump.io/share/Sh9vINN2AFQv/1/is-this-behavior-of-clang-standard-compliant | CC-MAIN-2017-09 | refinedweb | 421 | 64.2 |
Daniel Lezcano [dlezcano@fr.ibm.com] wrote:
> Hi,
>
> I am facing a problem with the pid namespace when I launch the following
> lxc commands:
>
> lxc-execute -n foo sleep 3600 &
> ls -al /proc/$(pidof lxc-init)/exe && lxc-stop -n foo
>
> All the processes related to the container are killed, but there is
> still a refcount on the pid_namespace which is never released.

Thanks for the bug report.

Did you notice any leak in 'struct pids' also, or just the pid_namespace?
If the pids are not leaking, this may be slightly different from the problem
Catalin Marinas ran into: the pid_namespace leak does not seem to reproduce
for me without the 'ls -al /proc/...' above, or with the simpler 'ns_exec'
approach to creating the pid namespace.

I am going through the code for lxc-execute, but does it remount /proc in
the container?

Sukadev
Triple Quoted Strings
00:00 Strings: Triple-Quoted Strings.
00:04
Triple-quoted strings are most easily demonstrated within a program itself, so that’s what you’re going to see here. As you can see onscreen, the triple-quoted string has three quote characters (
""") at each end and they can be single or double as long as they match each other.
00:22 Printing this out, it just looks like a normal string,
00:28 but triple-quoted strings obviously have more to offer us just than extra typing. It’s possible to put any kind of quote inside. As you can see here, the same quotes are being used inside the string as that enclose it.
00:42 As you can see printed out here, the double quotes work perfectly normally, but it’s also possible to use single quotes inside this triple-quoted string as well. So you can see, they offer an alternative to using escape characters, which you may find simpler to understand. However, their main use isn’t to avoid the use of escape characters, but it’s to allow strings which cover multiple lines, as you’re going to see next onscreen.
01:20 And once that gets printed out, you can see it works exactly as you’d expect. While this is a convenience, it probably isn’t the most common use for triple-quoted strings.
01:31 Their most common use is providing docstrings for functions. Onscreen, first, a function is being created, as you can see. It's a simple function that just takes a value and prints it out.
01:50 Now, while this is clearly a simple function and doesn’t need much explanation, most functions you create will be more complicated than this, and when you come back to them it may take you a while to understand what they are or what they do, particularly when they’re named similarly.
02:04 A good way to deal with this is to insert a docstring straight after the definition of the function. This will take the form of a triple-quoted string which provides a description of what the function does and what the parameters are that you feed to it. While this may make sense while you’re reading through the code, it’s also useful while you’re writing code. With most editors, when you use a function it will pop up a hint which is made from the docstring that tells you about the function, as you can see here. Takes a value and prints it to the screen has been replicated.
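The pattern being described looks like this in code (a minimal sketch; the exact wording of the docstring is illustrative): the docstring is a triple-quoted string placed straight after the def line.

```python
def my_function(value):
    """Take a value and print it to the screen."""
    print(f"{value} is a nice value")

# Editors build their pop-up hints from this string, and it is also
# available programmatically:
print(my_function.__doc__)
```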
02:38 This may not be that much use in a simple program, but where you have a complicated one with multiple imports from multiple files, this can be really useful in telling you what the functions do. And notice that this is where the information that you see when using standard functions such as print() also comes from.
02:56 Any imported modules you’ve used in your programs will also have docstrings, which will help you understand how each function works.
Sir if i write the same code in cmd i am not able to get the result instead i am getting the result as
def my_function(value):
...     print(f"{value} is a nice value")
  File "<stdin>", line 2
    print(f"{value} is a nice value")
I am getting this as the result could you please explain why this is happening?
Ana - I think you need to indent your print line - from the printout you have presented, it looks as if you need to hit TAB before typing
print(f"{value} is a nice value")
Cristian Palau on April 8, 2020
Great explanation Darren, if we need to add \n or ” in a strings what is the best option, escape sequences or directly use triple quoted strings? Maybe there is a convention or a good python practice for this. Thanks! | https://realpython.com/lessons/triple-quoted-strings/ | CC-MAIN-2021-17 | refinedweb | 667 | 65.66 |
ConstructorsConstructors
MembersMembers
Subclass of Application for applicability tests with type arguments and value argument trees.
Subclass of Application for applicability tests with type arguments and value argument trees.
Subclass of Application for applicability tests with value argument types.
Subclass of Application for type checking an Apply node with typed arguments.
Subclass of Application for type checking an Apply node with untyped arguments.
Subclass of Application for the cases where we are interested only in a "can/cannot apply" answer, without needing to construct trees or issue error messages.
Subclass of Application for type checking an Apply node, where types of arguments are either known or unknown.
If
app is a
this(...) constructor call, the this-call argument context,
otherwise the current context.
Compare to alternatives of an overloaded call or an implicit search.
Compare owner inheritance level.
Rewrite
new Array[T](....) if T is an unbounded generic to calls to newGenericArray.
It is performed during typer as creation of generic arrays needs a classTag.
we rely on implicit search to find one.
Overridden in ReTyper to handle primitive operations that can be generated after erasure
Apply a transformation
harmonize on the results of operation
op,
unless the expected type
pt is fully defined.
If the result is different (wrt eq) from the original results of
op,
revert back to the constraint in force before computing
op.
This reset is needed because otherwise the original results might
have added constraints to type parameters which are no longer
implied after harmonization. No essential constraints are lost by this because
the result of harmomization will be compared again with the expected type.
Test cases where this matters are in pos/harmomize.scala.
If
trees all have numeric value types, and they do not have all the same type,
pick a common numeric supertype and convert all constant trees to this type.
If the resulting trees all have the same type, return them instead of the original ones.
If all
types are numeric value types, and they are not all the same type,
pick a common numeric supertype and widen any constant types in
tpes to it.
If the resulting types are all the same, return them instead of the original ones.
Does
tp have an extension method named
name with this-argument
argType and
result matching
resultType?
Is given method reference applicable to type arguments
targs and argument trees
args?
Is given method reference applicable to argument types
args?
Is given type applicable to type arguments
targs and argument trees
args,
possibly after inserting an
apply?
Is given type applicable to argument types
args, possibly after inserting an
apply?
Is given method reference applicable to type arguments
targs and argument trees
args without inferring views?
Try to typecheck any arguments in
pt that are function values missing a
parameter type. If the formal parameter types corresponding to a closure argument
all agree on their argument types, typecheck the argument with an expected
function or partial function type that contains these argument types,
The result of the typecheck is stored in
pt, to be retrieved when its
typedArgs are selected.
The benefit of doing this is to allow idioms like this:
def map(f: Char => Char): String = ??? def map[U](f: Char => U): Seq[U] = ??? map(x => x.toUpper)
Without
pretypeArgs we'd get a "missing parameter type" error for
x.
With
pretypeArgs, we use the
Char => ? as the expected type of the
closure
x => x.toUpper, which makes the code typecheck.
Resolve overloaded alternative
alts, given expected type
pt and
possibly also type argument
targs that need to be applied to each alternative
to form the method type.
Two trials: First, without implicits or SAM conversions enabled. Then,
if the fist finds no eligible candidates, with implicits and SAM conversions enabled.
This private version of
resolveOverloaded does the bulk of the work of
overloading resolution, but does not do result adaptation. It might be
called twice from the public
resolveOverloaded method, once with
implicits and SAM conversions enabled, and once without.
Typecheck application. Result could be an
Apply node,
or, if application is an operator assignment, also an
Assign or
Block node. | http://dotty.epfl.ch/api/dotty/tools/dotc/typer/Applications.html | CC-MAIN-2019-13 | refinedweb | 689 | 55.95 |
Platform independent extensible log class
Introduction:
The ability to log is commonly needed in every software project on every platform. I wrote this class to save time.
There are two basic log classes provided for easy use. One is CFileLog, which implements a file logging system. The other is CRegFileLog that implements a registry controled file logging system. The whole logging system is quit easy to extended for any propose.
How to use:
Demo code:
#include "MjLog.h" int main() { MjTools::CFileLog m_Log("test.log"); std::string a="aaa"; m_Log.Clear(); m_Log.AddLog("Abc"); m_Log.AddLog(a); MjTools::CFileLog m_Log1=m_Log; m_Log1.AddLog("From Log1"); #ifdef WIN32 //RegistryLogControl only valid in Windows system // construct a registry key controled log object. If // the specified registry key is found,the log is enabled MjTools::CRegFileLog m_regLog("reglog.log", "HKEY_LOCAL_MACHINE\\Software\\YourControlKeyName"); m_regLog.AddLog("reglog"); m_regLog.Pause(); m_regLog.AddLog("reglog1"); m_regLog.Resume(); m_regLog.AddLog("reglog2"); #endif return 0; }
How to compile:
The source code itself can be compiled and executed. You can use command line tool to compile it.
Under VC++:
CL /D"_TEST_" MjLog.cpp
This one may cause a link error. I don't know why.But if you use a win32 console project, no error occurs
Under BCC:
bcc32 /D_TEST_ mjlog.cpp
Under Linux:
g++ /D_TEST_ MjLog.cpp
Future Updates:
1. Make the class thread_safe.
2. Still thinking...
very crude solutionPosted by Legacy on 07/24/2003 12:00am
Originally posted by: Matthew Pasko
Keep in mind a quick solution to this type of problem would be to simply find the log file (FindFirstFile (CE)), look at the file size, and if the file size is half of the total space of how big the log should grow, move the file name (rename) the file to an archive log file. As time progresses, the old data overwritten and the log file(s) grows only so big.
Obvious crude solution, but effective for some applications.. and a snap to write.
;)
growth of log, over write old info!Posted by Legacy on 04/16/2002 12:00am
Originally posted by: Matthew Pasko
There is often limited space on a device for logging. How about function to limit growth size of the log to be adjustable and over writes old log information?
This would be great..
fix for link errors (VC++)Posted by Legacy on 02/06/2002 12:00am
Originally posted by: Edmond Nolan
Why would you want to add "Logging" overhead to the Registry?Posted by Legacy on 12/09/2001 12:00am
Originally posted by: Hector Santos
Maybe I miss something here, but why would you want to add "Logging" overhead to the Registry?
Just curious
Reply | http://www.codeguru.com/cpp/cpp/cpp_mfc/article.php/c4117/Platform-independent-extensible-log-class.htm | CC-MAIN-2015-14 | refinedweb | 447 | 67.76 |
pyramid.response¶
- class
Response(body=None, status=None, headerlist=None, app_iter=None, content_type=None, conditional_response=None, **kw)[source]¶
accept_ranges¶
Gets and sets the
Accept-Rangesheader (HTTP spec section 14.5).
age¶
Gets and sets the
Ageheader (HTTP spec section 14.6). Converts it using int.
allow¶
Gets and sets the
Allowheader (HTTP spec section 14.7). Converts it using list.
app_iter¶
Returns the app_iter of the response.
If body was set, this will create an app_iter from that body (a single-item list)
app_iter_range(start, stop)[source]¶
Return a new app_iter built from the response app_iter, that serves up only the given
start:stoprange.
body_file¶
A file-like object that can be used to write to the body. If you passed in a list app_iter, that app_iter will be modified by writes.
cache_control¶
Get/set/modify the Cache-Control header (HTTP spec section 14.9)
conditional_response_app(environ, start_response)[source]¶
Like the normal __call__ interface, but checks conditional headers:
- If-Modified-Since (304 Not Modified; only on GET, HEAD)
- If-None-Match (304 Not Modified; only on GET, HEAD)
- Range (406 Partial Content; only on GET, HEAD)
content_disposition¶
Gets and sets the
Content-Dispositionheader (HTTP spec section 19.5.1).
content_encoding¶
Gets and sets the
Content-Encodingheader (HTTP spec section 14.11).
content_language¶
Gets and sets the
Content-Languageheader (HTTP spec section 14.12). Converts it using list.
content_length¶
Gets and sets the
Content-Lengthheader (HTTP spec section 14.17). Converts it using int.
content_location¶
Gets and sets the
Content-Locationheader (HTTP spec section 14.14).
content_md5¶
Gets and sets the
Content-MD5header (HTTP spec section 14.14).
content_range¶
Gets and sets the
Content-Rangeheader (HTTP spec section 14.16). Converts it using ContentRange object.
content_type¶
Get/set the Content-Type header not be applied otherwise)
date¶
Gets and sets the
Dateheader (HTTP spec section 14.18). Converts it using HTTP date.
Delete a cookie from the client. Note that path and domain must match how the cookie was originally set.
This sets the cookie to the empty string, and max_age=0 so that it should expire immediately.
encode_content(encoding='gzip', lazy=False)[source]¶
Encode the content with the given encoding (only gzip and identity are supported).
etag¶
Gets and sets the
ETagheader (HTTP spec section 14.19). Converts it using Entity tag.
expires¶
Gets and sets the
Expiresheader (HTTP spec section 14.21). Converts it using HTTP date.
from_file(fp)[source]¶
Reads a response from a file-like object (it must implement
.read(size)and
.readline()).
It will read up to the end of the response, not the end of the file.
This reads the response as represented by
str(resp); it may not read every valid HTTP response properly. Responses must have a
Content-Length
last_modified¶
Gets and sets the
Last-Modifiedheader (HTTP spec section 14.29). Converts it using HTTP date.
location¶
Gets and sets the
Locationheader (HTTP spec section 14.30).
md5_etag(body=None, set_content_md5=False)[source]¶
Generate an etag for the response object using an MD5 hash of the body (the body parameter, or
self.bodyif not given)
Sets
self.etagIf
set_content_md5is True sets
self.content_md5as well
Merge the cookies that were set on this response with the given resp object (which can be any WSGI application).
If the resp is a
webob.Responseobject, then the other object will be modified in-place.
pragma¶
Gets and sets the
Pragmaheader (HTTP spec section 14.32).
retry_after¶
Gets and sets the
Retry-Afterheader (HTTP spec section 14.37). Converts it using HTTP date or delta seconds.
server¶
Gets and sets the
Serverheader (HTTP spec section 14.38).
Set (add) a cookie for the response.
Arguments are:
nameThe cookie name.
valueThe cookie value, which should be a string or
None. If
valueis
None, it's equivalent to calling the
webob.response.Response.unset_cookie()method for this cookie key (it effectively deletes the cookie on the client).
max_ageAn integer representing a number of seconds,.
pathA string representing the cookie
Pathvalue. It defaults to
/.
domainA string representing the cookie
Domain, or
None. If domain is
None, no
Domainvalue will be sent in the cookie.
secureA boolean. If it's
True, the
secureflag will be sent in the cookie, if it's
False, the
secureflag will not be sent in the cookie.
httponlyA boolean. If it's
True, the
HttpOnlyflag will be sent in the cookie, if it's
False, the
HttpOnlyflag will not be sent in the cookie.
commentA string representing the cookie
Commentvalue, or
None. If
commentis
None, no
Commentvalue will be sent in the cookie.
expiresA.
overwriteIf this key is
True, before setting the cookie, unset any existing cookie.
Unset a cookie with the given name (remove it from the response).
vary¶
Gets and sets the
Varyheader (HTTP spec section 14.44). Converts it using list.
www_authenticate¶
Gets and sets the
WWW-Authenticateheader (HTTP spec section 14.47). Converts it using
parse_authand
serialize_auth.
-.
Functions¶
response_adapter(*types_or_ifaces)[source]¶
Decorator activated via a scan which treats the function being decorated as a response adapter for the set of types or interfaces passed as
*types_or_ifacesto the decorator constructor.
For example, if you scan the following response adapter:.
import json from pyramid.response import Response from pyramid.response import response_adapter @response_adapter(dict, list) def myadapter(ob): return Response(json.dumps(ob))
This method will have no effect until a scan is performed agains the package or module which contains it, ala:
from pyramid.config import Configurator config = Configurator() config.scan('somepackage_containing_adapters') | https://docs.pylonsproject.org/projects/pyramid/en/1.5-branch/api/response.html | CC-MAIN-2021-21 | refinedweb | 908 | 60.21 |
- Simple)
Overview
Pinouts
Power Pins
- V pins of the i2c address. There are pull-down resistors on the board so connect them to VDD to set the bits to '1'. They are read on power up, so de-power and re-power to reset the address
Arduino Code
- Connect Vdd to the power supply, 3V or 5V is fine. Use the same voltage that the microcontroller logic is based off of. For most Arduinos, that is 5V
-
Download Adafruit_MCP9808To begin reading sensor data, you will need to download Adafruit_MCP9808 from our github repository. You can do that by visiting the github repo and manually downloading or, easier, just click this button to download the zip
Place the Adafruit_MCP9808 library folder your arduinosketchfolder/libraries/ folder.
You may need to create the libraries subfolder if its your first library. Restart the IDE.
We also have a great tutorial on Arduino library installation at:
Load DemoOpen up File->Examples->Adafruit_MCP9808->mcp9808test and upload to your Arduino wired up to the sensor
Python & CircuitPython
It's easy to use the MCP9808 sensor with Python or CircuitPython and the Adafruit CircuitPython MCP9808 module. This module allows you to easily write Python code that reads the temperature from the sensor.
You can use this sensor with any CircuitPython microcontroller board or with a computer that has GPIO and Python thanks to Adafruit_Blinka, our CircuitPython-for-Python compatibility library.
CircuitPython Microcontroller Wiring
First wire up a MCP9808 to your board exactly as shown on the previous pages for Arduino. Here's an example of wiring a Feather M0 to the sensor:
Python Computer Wiring
Since there's dozens of Linux computers/boards you can use we will show wiring for Raspberry Pi. For other platforms, please visit the guide for CircuitPython on Linux to see whether your platform is supported.
Here's the Raspberry Pi wired with I2C:
CircuitPython Installation of MCP9808 Library
Next you'll need to install the Adafruit CircuitPython MCP9808 library on your CircuitPython. For example the Circuit Playground Express guide has a great page on how to install the library bundle for both express and non-express boards.
Remember for non-express boards like the Trinket M0, Gemma M0, and Feather/Metro M0 basic you'll need to manually install the necessary libraries from the bundle:
- adafruit_mcp9808.mpy
- adafruit_bus_device
Before continuing make sure your board's lib folder or root filesystem has the adafruit_mcp9808.mpy, and adafruit_bus_device files and folders copied over.
Next connect to the board's serial REPL so you are at the CircuitPython >>> prompt.
Python Installation of MCP9808 Library-mcp9808 temperature. First initialize the I2C connection and library by running:
import board import busio import adafruit_mcp9808 i2c = busio.I2C(board.SCL, board.SDA) mcp = adafruit_mcp9808.MCP9808(i2c)
import board import busio import adafruit_mcp9808 i2c = busio.I2C(board.SCL, board.SDA) mcp = adafruit_mcp9808.MCP9808(i2c)
Now you can read the temperature property to retrieve the temperature from the sensor in degrees Celsius:
print('Temperature: {} degrees C'.format(mcp.temperature))
print('Temperature: {} degrees C'.format(mcp.temperature))
That's all there is to reading temperature with the MCP9808 and CircuitPython code!) | https://learn.adafruit.com/adafruit-mcp9808-precision-i2c-temperature-sensor-guide?view=all | CC-MAIN-2019-22 | refinedweb | 519 | 54.32 |
ablo Rocha10,142 Points
I do not get "intellisense-like" auto complete when defining the attributes for any com.google.android defined tag?
I don't get "intellisense-like" auto complete when defining the attributes for any com.google.android defined tag, is there some extra step I have to perform to get that?
2 Answers
Harry James14,780 Points
Hey Pablo!
I seem to have reproduced your problem at a bit of testing.
Can you try closing down all open code windows and then running an Invalidate and Restart. After that, open up the layout_listing.xml file again.
Let me know how it goes! :)
Harry James14,780 Points
Well I'm not exactly sure how I did get there in the end.
I tried deleting the cache file under Recommendations\build\intermediates\dex-cache which seemed to flag up the issue but even after replacing the file again I still was unable to get the correct content assist results (I would get something about a namespace template when typing an...)
It looks like this is a bug with Android Studio and is difficult to reproduce - if it's not causing a big deal then I wouldn't worry about it too much and just hope that it gets resolved in the future. I'll see if I can find a way to reproduce my fix again tomorrow but can't promise anything!
Pablo Rocha10,142 Points
Thanks so much for looking into this Harry James. I will look into it some more as well and update you if I find a solution.
Harry James14,780 Points
Hey Pablo!
The content assistant should come up with the tag. If it doesn't come up at all, you can activate it again with Ctrl+Space.
If even then no tags are displayed, it's likely because you don't have the dependency for Google Play Services. Make sure you have this line in your build.gradle file for the app module under dependencies:
compile 'com.google.android.gms:play-services-plus:7.8.0'
After that, run a Gradle sync by clicking its button on the Toolbar:
Wait for the sync to complete and then try again.
If you're still having problems after this, give me a shout and I'll see what else can be done :)
Pablo Rocha10,142 Points
Thanks for answering Harry James.
I do have the dependency synced and tried hitting Ctrl+Space. I have already built and run the project and it runs just fine. However if I go back to the layout file I still do not get the content assist. Very strange. I assume it has something to do with some settings I have somewhere, since the tag is coming from a gradle dependency it is not loading it into the content assist compiler.
Pablo Rocha10,142 Points
There is a rendering problem in the Design view, not sure if that is related.
java.lang.NullPointerException at com.google.android.gms.plus.PlusOneDummyView$zzb.isValid(Unknown Source) at com.google.android.gms.plus.PlusOneDummyView.zzyF(Unknown Source) at com.google.android.gms.plus.PlusOneDummyView.<init>(Unknown Source) at com.google.android.gms.plus.internal.zzg.zza(Unknown Source) at com.google.android.gms.plus.PlusOneButton.zzah(Unknown Source) at com.google.android.gms.plus.PlusOneButton.<init>(Unknown Source).jetbrains.android.uipreview.ViewLoader.createNewInstance(ViewLoader.java:437) at org.jetbrains.android.uipreview.ViewLoader.loadClass(ViewLoader.java:154) at org.jetbrains.android.uipreview.ViewLoader.loadView(ViewLoader.java:93)
Harry James14,780 Points
Hey again Pablo!
The rendering problem is normal as it's not built-in to Android Studio, and it therefore does not know how to render it.
The fact that it does not come up in the Content Assist however, is not.
Are you able to run your project with the +1 button completely fine? Is it just that the Content Assist for whatever reason does not come up with it as an option and you have to type the full tag out?
Pablo Rocha10,142 Points
That is correct, I can use all in app features and everything works like it is supposed to. It is just the content assist does not work for any com.google prefixed tab.
It is not a big deal, it just requires more typing (or cut and pasting). I thought it was odd though since in the video they are getting the content assist.
Harry James14,780 Points
Interesting. It should come up after a Gradle Sync so I'm not sure what's hiding it. There isn't an option in the settings to limit this (Either Content Assist is enabled or it's not).
Have you tried Invalidating Android Studio's caches and restarting?:
To do this, click on the File tab then Invalidate Caches / Restart and press Invalidate and Restart:
Let me know if that makes any difference or not.
Pablo Rocha10,142 Points
That didn't do the trick :(
I also get no suggestions when I click Ctrl+space.
Harry James14,780 Points
Hey again Pablo!
Do you have Power Save Mode enabled under the File menu?
Pablo Rocha10,142 Points
Hey Harry - power save mode is off.
Pablo Rocha10,142 Points
Pablo Rocha10,142 Points
Still not getting it. What did you do to reproduce it? | https://teamtreehouse.com/community/i-do-not-get-intellisenselike-auto-complete-when-defining-the-attributes-for-any-comgoogleandroid-defined-tag | CC-MAIN-2022-40 | refinedweb | 885 | 66.74 |
On Sun, Dec 07, 2008 at 11:27:01PM -0800, Stephan Richter wrote: > On Friday 05 December 2008, Martin Aspeli wrote: > > Is there any indication on when we'll see a 2.0 release of z3c.form? > > > > We need a widget that's not in the 1.9 release, but is on trunk (for a > > list with textline value type), and are wondering whether to roll our > > own or wait for a new z3c.form release. > > I am considering the code feature complete. I would like to get confirmation > from people that (a) the z3c.pt integration works well
I have an issue with this. z3c.pt.compat creates a nasty issue when trying to package it as a Debian package. The root cause seems to be that z3c.pt.compat declares z3c.pt as a namespace package. These are defined in the setuptools documentation to be "merely a container for modules and subpackages." [1]. z3c.pt doesn't conform to that as it contains files. This doesn't matter till you try to install it using the --single-version-externally-managed option, at which point you get file conflicts. I've been thinking about it a while, and I think the only solution is to rename z3c.pt.compat to z3c.ptcompat, make a new release of that and migrate z3c.form. I'm willing to implement that if there's enough support. [1] > and (b) the object > widget is useful. Oh yes, and since I have not done the development, are we > 100% test covered? > > BTW, at Keas we are currently using z3c.form trunk and it all looks okay. > > Regards, > Stephan > -- > Stephan Richter > Web Software Design, Development and Training > Google me. "Zope Stephan Richter" > _______________________________________________ > Zope-Dev maillist - Zope-Dev@zope.org > > ** No cross posts or HTML encoding! ** > (Related lists - > > ) -- Brian Sutherland _______________________________________________ Zope-Dev maillist - Zope-Dev@zope.org ** No cross posts or HTML encoding! ** (Related lists - ) | https://www.mail-archive.com/zope-dev@zope.org/msg27040.html | CC-MAIN-2017-17 | refinedweb | 321 | 69.18 |
BOSTON -- Behind the sudden industry interest in Ajax (Asynchronous JavaScript and XML), clearly, is a new interest in frameworks that handle some of the complexity of developing more responsive Web applications. Microsoft has focused its energies on ASP.NET AJAX Extensions, formerly known as Atlas. The software just went beta last week, and individuals from the company have said it should formally ship this year. (Yes, if you developed using the last CTP with Atlas nomenclature, it is time to go in and change those names.)
Among useful traits in ASP.NET AJAX Extensions and the associated tool kit are improved Web service proxy handling and JSON-based serialization, suggested Fritz Onion, technical staff member, PluralSight. Onion spoke this week at VSLive 2006 in Boston.
Chief among capabilities of the new software for .NET developers may be an abstraction level that ensures your AJAX apps work on a variety of popular browsers. This layered browser capability is significant, said Onion.
"You can write client-side JavaScript using the ASP.NET AJAX Library extensions in a browser-independent way, so that you don't have to worry about your application breaking when different browsers hit your pages," he said.
Among the elements Microsoft created for this AJAX framework were some core runtime JavaScript additions, these include declared 'namespaces' and certain classes with abstract functions. Also, some helper classes, such as StringBuilder, were brought in from .NET.
JSON serializer
With ASP.NET AJAX Extensions, Microsoft chose to use Java Script Object Notation (JSON) to move data between the server and the Ajax client. The company implemented serializer and deserializer types on both the client and the server to move data in the JSON format. It provides a means for the browser to call Web service methods on the server This provides a new asynchronous communication layer to connect a browser to connect to network end-points.
"I think this Web service [proxy capability] is one of the most compelling aspects. Of ASP.NET AJAX," said Onion. It generates a JavaScript class that will pass the type across to ASMX endpoints." This trait belies the fact that Ajax does not always mean XML.
"They have given your ASMX endpoints the ability to serialize as JSON rather than SOAP or XML. You have the choice," Onion said, adding, "a lot of work went into this JSON serializer.
Also supported in the ASP.NET AJAX kit is an Update Panel Control that lets ASP.NET developers do a lot of "Ajax" style of work within basically familiar confines.
This control supports partial page rendering, an AJAX-style trait, without the need to write special client script. Of the Update Panel Control Onion says: "It's sort of the ultimate implementation of AJAX within ASP.NET." But he cautions that it should not be used everywhere.
Back to JS drawing board
Also behind the surge in Ajax interest is renewed interest in JavaScript itself. As always, the fact that a framework can shield developers from complexity does not excuse developers from the need to know what is going on under the covers. Frameworks reduce work but don't take the developer's place.
Like cohorts in Java world, ASP.NET developers, too, will be visiting or revisiting their JavaScript skills.
"Anyone who wants to write in Ajax will have to hone up on JavaScript skills," said Onion. That is something Onion himself has done of late.
"JavaScript is interesting," he mused. "It's one of those languages where you think something and suddenly It's there So you have to be careful in JavaScript. You have to be sure you have good thoughts." | http://searchwindevelopment.techtarget.com/news/article/0,289142,sid8_gci1226796,00.html | crawl-002 | refinedweb | 607 | 65.42 |
C# Program to Find the Value of Sin(x)
Sin(x) is also known as Sine. It is a trigonometric function of an angle. In a right-angled triangle, the ratio of the length of the perpendicular to the length of the hypotenuse is known as the sine of an angle.
sin θ = perpendicular / hypotenuse
The values of sine of some of the comman angles are given below,
- sin 0° = 0
- sin 30° = 1 / 2
- sin 45° = 1 / √2
- sin 60° = √3 / 2
- sin 90° = 1
This article focuses upon how we can calculate the sine of an angle by in C#.
Method 1
We can calculate the sine of an angle by using the inbuilt sin() method. This method is defined under the Math class and is a part of the system namespace. Math class is quite useful as it provides constants and some of the static methods for trigonometric, logarithmic, etc.
Syntax:
public static double Sin (double angle);
Parameter:
- angle: A double value (angle in radian)
Return type:
- double: If “angle” is double
- NaN: If “angle” is equal to NaN, NegativeInfinity, or PositiveInfinity
Example 1:
C#
The value of sin(0) = 0 The value of sin(45) = 0.707106781186547 The value of sin(90) = 1 The value of sin(135) = 0.707106781186548
Example 2:
C#
Output
Sine of angle1: NaN Sine of angle2: NaN Sine of angle3: NaN
Method 2
We can calculate the value of sine of an angle using Maclaurin expansion. So the Maclaurin series expansion for sin(x) is:
sin(x) = x - x3 / 3! + x5 / 5! - x7 / 7! + ....
Follow the steps given below to find the value of sin(x):
- Initialize a variable angleInDegree that stores the angle (in degree) to be calculated.
- Initialize another variable terms that stores the number of terms for which we can approximate the value of sin(x).
- Declare a global function findSinx.
- Declare a variable current. It stores the angle in radians.
- Initialize a variable answer with current. It will store our final answer.
- Initialize another variable temp with current.
- Iterate from i = 1 to i = terms. At each step update temp as temp as ((-temp) * current * current) / ((2 * i) * (2 * i + 1)) and answer as answer + temp.
- Eventually, return the answer from findSinX function.
- Print the answer.
This formula can compute the value of sine for all real values of x.
Example:
C#
The value of sin(45) = 0.707106781186547 The value of sin(90) = 1 The value of sin(135) = 0.707106781186548 The value of sin(180) = 2.34898825287367E-16 | https://www.geeksforgeeks.org/c-sharp-program-to-find-the-value-of-sinx/?ref=rp | CC-MAIN-2022-05 | refinedweb | 423 | 64.81 |
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
> Date: Mon, 26 Sep 2005 10:08:41 -0400 > From: Christopher Faylor <me@cgf.cx> > > I think I get it. This patch just modifies chew so that it always > outputs '\n'. Then you see '\n' on input no matter what. > > Would it be possible to just link with binmode.o under mingw (and cygwin for > that matter)? If the problem is with MSYS's makeinfo, then it's IMHO wrong to solve it in chew. Besides, the suggested change, viz +#ifdef __MINGW32__ +/* Prevent \r\n\ line endings */ +#include <fcntl.h> +unsigned int _CRT_fmode = _O_BINARY; +#endif is too general: it switches _all_ file I/O for _all_ files to binary mode, in _all_ MinGW builds. Thus, if the file chew reads is edited with some Windows editor that doesn't preserve line-endings (Emacs does), then chew itself might fail. Even if currently this change doesn't cause any trouble, it could be a time bomb: imagine that at a later date someone adds code to chew that reads some other file--we don't want to remember then to open it explicitly in text mode to prevent bugs. So I'm against this fix. If MSYS makeinfo is the culprit, let them fix it, or let them use another port, which does TRT. | http://cygwin.com/ml/gdb-patches/2005-09/msg00231.html | CC-MAIN-2019-13 | refinedweb | 230 | 82.54 |
If you have been using computers for some time, you have probably come across files with the .zip extension. They are special files that can hold the compressed content of many other files, folders, and subfolders. This makes them pretty useful for transferring files over the internet. Did you know that you can use Python to compress or extract files?
This tutorial will teach you how to use the zipfile module in Python, to extract or compress individual or multiple files at once.
Compressing Individual Files
This one is easy and requires very little code. We begin by importing the zipfile module and then open the ZipFile object in write mode by specifying the second parameter as 'w'. The first parameter is the path to the file itself. Here is the code that you need:
import zipfile

jungle_zip = zipfile.ZipFile('C:\\Stories\\Fantasy\\jungle.zip', 'w')
jungle_zip.write('C:\\Stories\\Fantasy\\jungle.pdf', compress_type=zipfile.ZIP_DEFLATED)
jungle_zip.close()
Please note that I will specify the path in all the code snippets in a Windows style format; you will need to make appropriate changes if you are on Linux or Mac.
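As a small, hedged aside, one way to avoid editing the paths by hand for each operating system is to build them with Python's own os.path module instead of hard-coding separators. The folder and file names below are placeholders for the ones used in this tutorial:

```python
import os.path

# Hypothetical folder names; os.path.join inserts the right
# separator ('\\' on Windows, '/' on Linux and macOS).
archive_path = os.path.join('Stories', 'Fantasy', 'jungle.zip')
source_path = os.path.join('Stories', 'Fantasy', 'jungle.pdf')

print(archive_path)
print(source_path)
```

The same script then runs unchanged on any platform.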
You can specify different compression methods to compress files. The newer methods BZIP2 and LZMA were added in Python version 3.3, and some older tools don't support these two compression methods. For this reason, it is safe to just use the DEFLATED method. You should still try out these methods to see the difference in the size of the compressed file.
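As a rough, self-contained sketch (the member name and sample data are made up), the following compresses the same data with each method into an in-memory buffer and prints the resulting archive sizes, so you can compare them without touching the disk. ZIP_BZIP2 and ZIP_LZMA require Python 3.3 or later:

```python
import io
import zipfile

data = b'spam and eggs ' * 5000  # repetitive data compresses well

for method, name in [(zipfile.ZIP_STORED, 'STORED'),
                     (zipfile.ZIP_DEFLATED, 'DEFLATED'),
                     (zipfile.ZIP_BZIP2, 'BZIP2'),
                     (zipfile.ZIP_LZMA, 'LZMA')]:
    # Write one archive per method into an in-memory buffer.
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, 'w', compression=method) as archive:
        archive.writestr('sample.txt', data)
    print(name, len(buffer.getvalue()), 'bytes')
```

STORED applies no compression at all, so it gives you a baseline to measure the other three against.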
Compressing Multiple Files
This is slightly complex as you need to iterate over all files. The code below should compress all files with the extension pdf in a given folder:
import os
import zipfile

fantasy_zip = zipfile.ZipFile('C:\\Stories\\Fantasy\\archive.zip', 'w')

for folder, subfolders, files in os.walk('C:\\Stories\\Fantasy'):
    for file in files:
        if file.endswith('.pdf'):
            fantasy_zip.write(os.path.join(folder, file),
                              os.path.relpath(os.path.join(folder, file), 'C:\\Stories\\Fantasy'),
                              compress_type=zipfile.ZIP_DEFLATED)

fantasy_zip.close()
This time, we have imported the os module and used its walk() method to go over all files and subfolders inside our original folder. I am only compressing the pdf files in the directory. You can also create different archived files for each format using if statements.
If you don't want to preserve the directory structure, you can put all the files together by using the following line:
fantasy_zip.write(os.path.join(folder, file), file, compress_type = zipfile.ZIP_DEFLATED)
The write() method accepts three parameters. The first parameter is the name of our file that we want to compress. The second parameter is optional and allows you to specify a different file name for the compressed file. If nothing is specified, the original name is used. The third parameter, which we have been using above, lets you set the compression method.

Extracting All Files

You can use the extractall() method to extract all the files and folders from a zip archive to the location you specify. If the destination folder does not exist, this method will create one for you. Here is the code that you can use to extract files:
import zipfile

fantasy_zip = zipfile.ZipFile('C:\\Stories\\Fantasy\\archive.zip')
fantasy_zip.extractall('C:\\Library\\Stories\\Fantasy')
fantasy_zip.close()
If you want to extract only some of the files, you will have to supply the names of the files that you want to extract as a list.
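A minimal sketch of that, using the members parameter of extractall(). The member names here are invented for the demo, and a throwaway archive is built in a temporary folder first so the snippet is self-contained:

```python
import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
archive_path = os.path.join(workdir, 'archive.zip')

# Build a small archive with three members to demonstrate on.
with zipfile.ZipFile(archive_path, 'w') as archive:
    for name in ('jungle.txt', 'quest.txt', 'dragon.txt'):
        archive.writestr(name, 'story text')

# Pass a list to `members` to extract only the files you want.
with zipfile.ZipFile(archive_path) as archive:
    archive.extractall(workdir, members=['jungle.txt', 'dragon.txt'])

print(sorted(os.listdir(workdir)))
# ['archive.zip', 'dragon.txt', 'jungle.txt']
```

Any members not in the list, like quest.txt above, simply stay inside the archive.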
Extracting Individual Files
This is similar to extracting multiple files. One difference is that this time you need to supply the filename first and the path to extract it to later. Also, you need to use the extract() method instead of extractall(). Here is a basic code snippet to extract individual files.
import zipfile

fantasy_zip = zipfile.ZipFile('C:\\Stories\\Fantasy\\archive.zip')
fantasy_zip.extract('Fantasy Jungle.pdf', 'C:\\Stories\\Fantasy')
fantasy_zip.close()
Reading Zip Files
Consider a scenario where you need to see if a zip archive contains a specific file. Up to this point, your only option to do so is by extracting all the files in the archive. Similarly, you may need to extract only those files which are larger than a specific size. The zipfile module allows us to inquire about the contents of an archive without ever extracting it.
Using the namelist() method of the ZipFile object will return a list of all members of an archive by name. To get information on a specific file in the archive, you can use the getinfo() method of the ZipFile object. This will give you access to information specific to that file, like the compressed and uncompressed size of the file or its last modification time. We will come back to that later.
Calling the getinfo() method one by one on all files can be a tiresome process when there are a lot of files that need to be processed. In this case, you can use the infolist() method to return a list containing a ZipInfo object for every single member in the archive. The order of these objects in the list is the same as that of the actual members in the archive.
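As a quick sketch of infolist() in action, the snippet below builds a throwaway in-memory archive (the member names and text are made up) and then prints each member's name along with its compressed and uncompressed sizes from the ZipInfo objects:

```python
import io
import zipfile

# Build a throwaway archive in memory so the sketch is self-contained.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, 'w', compression=zipfile.ZIP_DEFLATED) as archive:
    archive.writestr('crow.txt', 'The thirsty crow ' * 200)
    archive.writestr('fox.txt', 'The clever fox ' * 200)

# One ZipInfo object per member, in the same order as the archive.
with zipfile.ZipFile(buffer) as archive:
    infos = archive.infolist()
    for info in infos:
        print(info.filename, info.compress_size, info.file_size)
```

This single call replaces a loop of individual getinfo() lookups.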
You can also directly read the contents of a specific file from the archive using the read(file) method, where file is the name of the file that you intend to read. To do this, the archive must be opened in read or append mode.
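Here is a minimal sketch of read(), again using an in-memory archive with an invented member name so nothing on disk is assumed. The method returns the member's contents as bytes without extracting it:

```python
import io
import zipfile

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, 'w') as archive:
    archive.writestr('greeting.txt', 'Hello from inside the archive')

# Open in read mode and pull the member's contents as bytes.
with zipfile.ZipFile(buffer, 'r') as archive:
    contents = archive.read('greeting.txt')

print(contents.decode('utf-8'))  # Hello from inside the archive
```

Because read() returns bytes, you decode them yourself when the member is text.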
To get the compressed size of an individual file from the archive, you can use the compress_size attribute. Similarly, to know the uncompressed size, you can use the file_size attribute.
The following code uses the properties and methods we just discussed to extract only those files that have a size below 1MB.
```python
import zipfile

stories_zip = zipfile.ZipFile('C:\\Stories\\Funny\\archive.zip')
for file in stories_zip.namelist():
    if stories_zip.getinfo(file).file_size < 1024 * 1024:
        stories_zip.extract(file, 'C:\\Stories\\Short\\Funny')
stories_zip.close()
```
To know the time and date when a specific file from the archive was last modified, you can use the date_time attribute. This will return a tuple of six values: the year, month, day of the month, hours, minutes, and seconds, in that specific order. The year will always be greater than or equal to 1980, and hours, minutes, and seconds are zero-based.
```python
import zipfile

stories_zip = zipfile.ZipFile('C:\\Stories\\Funny\\archive.zip')
thirsty_crow_info = stories_zip.getinfo('The Thirsty Crow.pdf')
print(thirsty_crow_info.date_time)
print(thirsty_crow_info.compress_size)
print(thirsty_crow_info.file_size)
stories_zip.close()
```
This information about the original file size and compressed file size can help you decide whether it is worth compressing a file. I am sure it can be used in some other situations as well.
Final Thoughts
As evident from this tutorial, using the zipfile module to compress files gives you a lot of flexibility. You can compress different files in a directory to different archives based on their type, name, or size. You also get to decide whether you want to preserve the directory structure or not. Similarly, while extracting the files, you can extract them to the location you want, based on your own criteria like size, etc.
To be honest, it was also pretty exciting for me to compress and extract files by writing my own code. I hope you enjoyed the tutorial, and if you have any questions, please let me know in the comments.
In a lookup in which the constructor is an acceptable lookup result.

Change 7.3.3 namespace.udecl paragraph 1 as follows
, if the nested-name-specifier nominates a class C, and the name specified after the nested-name-specifier, when looked up in C, is the injected-class-name of C. That name is a synonym for the name of some entity declared elsewhere.

Change 7.3.3 namespace.udecl paragraph 3 as follows. ...

Change 7.3.3 namespace.udecl paragraph 18 as follows
The alias created by the using-declaration has the usual accessibility for a member-declaration.

Create a new section 12.9 class.fwd [not shown in bold here]:
12.9 Inheriting Constructors [class.fwd]
A using-declaration (7.3.3 namespace.udecl) that names a constructor implicitly declares a set of inheriting constructors. For each non-template constructor other than a default or copy constructor in the class named in the using-declaration, a constructor is implicitly declared in the class where the using-declaration appears. Similarly, for each constructor template in the class named in the using-declaration, a constructor template is implicitly declared in the class where the using-declaration appears.
A constructor so declared has the same access as the corresponding constructor in the base class. It is deleted if the corresponding base class constructor is deleted (8.4 dcl.fct.def). [Note: Default and copy constructors may be implicitly declared as specified in 12.1 class.ctor and 12.8 class.copy.]
A recent article in Wired with quotes from one of our own. Also, this is a good time to bring up Leo et al. publication draft on Socio-PLT along with their project page and a survey to view and take.
From the article:.
Edit, abstract from paper:
Could you fix the Wired link?
"Why Do Some Programming Languages Live and Others Die?"
Fixed, sorry about that! I really wish we went with some sort of wikimarkup rather than HTML.
I have seen so many people arguing that this-or-that programming language is "superior" and "will be adopted" based solely on its mathematical properties or type system, that I have ceased to give such people the respect of arguing with them; these days I tend to just laugh.
It is good to see that some people are doing real research to see why languages are really seen to be superior by engineers (as opposed to just mathematicians) and why they are really adopted.
This is long overdue. Perhaps it will give us something more interesting than (or at least something else besides) type theory to think about.
My view is more of as an engineer with a new microscope. For example, our study found that programmers anti-correlate types with code reuse. This doesn't mean we should ignore type theory! Instead, it suggests that type theorists are still missing a core piece of the puzzle and that we can start looking at developers to understand where that is. Instead of the POPLMark challenge, imagine a PeopleMark one where feature improvements have an actual sociotechnical basis rather than increasingly ungrounded mathematical ones.
Oddly, I've noticed the same thing. The more semantics the definition of a type carries, the less likely it is that code involving that type will ever be reused. Code that uses "simple" types like numbers and strings and so on (and cons cells in most Lisp dialects) is reused all over the place, readily collected into useful libraries, etc.
But with the possible exception of things like the Standard Template Library, I don't think I've seen much large-scale code reuse, or especially, much code reuse in projects unrelated to the original source, of code implementing operations on types that take more than two pages or so (of whatever programming language) to define.
Type theory is getting good at the ability to define more and more about our data; but this extensively-defined data seems to be very rarely seen to be suited for reuse. Maybe this study will tell us why.
Ray
Your observation (which I'm not sure is the same as leo's remark) is not surprising: you use complex types when you want to capture complex invariants about your domain, they are specialized, and therefore are less likely to be shared widely. It is a good thing that we are able to propose simple interface (therefore simple types) to the widely used code, and those that cannot understandably suffer from less use.
You could say the same things of all kinds of communication protocols (inside computers or outside). The most complex ones are generally not the most widely used, and they survive only where they're worth it, in specialized domains.
Without types, you could use an abstraction that almost, but not quite, fits your problem domain, and just incrementally adapt the mismatched functionality. This might be harder to do if the abstraction you want to reuse enforces all sorts of invariants. In this case, you have to understand the full semantics of the abstraction you want to adapt before you can even run your program; not so in the dynamic case. This is a good example of Sean's point that dynamic languages allow easier exploration.
Color me unconvinced. Most libraries that provide rich representations also come with convenience functions to turn them into a less-structured soup of data, because that is also useful for debugging, serialization etc.
Do you have specific examples or use cases in mind? That would be interesting.
It also needs care not to let that mathematical line of thinking continue to "The language designers of language X are doing everything right because of theoretical reason Y, the reason that the language isn't widely adopted is just because the engineers are stupid.". I've heard that kind of sentiment expressed many times...
Here is a little challenge. If it is true that programmers who reach my age get lost for their craft and begin to manage programmers ( I have still no idea what this means in terms of a job description, despite being in the industry for 14 years. Three weeks ago I set up a development process for my team. One week later I continued programming ... ), how can this brain drain be slowed down?
Unlike Ray Dillinger I have no interest in playing out engineers against mathematicians. I have also little doubt that Haskell or more generally "type driven programming" has futurity and momentum with respect to PL evolution and Haskell itself has a thriving community despite or because "avoiding success at all costs" but I doubt that PL evolution and the permanent proliferation of new technologies has any intrinsic value. Compared to experience ( including both programming skill and domain knowledge ) it might actually be quite small. However, it depends. Professionally I'm working in a niche which doesn't evolve a lot and could need some fresh air and a generational change while I look at the hyper-mutation spectacle which is web programming with both fascination and disgust.
[...]while I look at the hyper-mutation spectacle which is web programming with both fascination and disgust
Same here!
Kay Schluehr: Here is a little challenge. If it is true that programmers who reach my age get lost for their craft and begin to manage programmers (...), how can this brain drain be slowed down?
I don't quite follow, but I might be able to help. Is it an age question? I'm over 50 (might look younger), but still a guy who codes hard stuff, and I'm good at it. I no longer remember everything, and can't recall every bit of my thought process from code I wrote three years ago. I do suffer from falling enthusiasm because I don't learn as much, because I get results I expect, modulo small errors. (I have an idea, and it always works, and it's therefore boring.) That I explain what I'm doing is just as valuable as getting implausibly effective results. Both seem related to my job tasks. Explanations give others confidence in results.
I see folks self-select for management where I work. White-haired folks who want to keep coding do so, without pressure to manage. So it might be job market evolution. (Simulation: start new companies, place older applicants, then see what distribution falls into managment.) Maybe pay scales in different roles play a part. Dunno. When a company is profitable, it can pay experienced folks well.
Can't say exactly why folks want me to do complex parts -- seems related to clarity at both high and low levels. I start with an explanation and end with bit twiddling, and the relation is clear. Last week I needed to improve locality despite concurrent writes in a thundering herd scenario. I describe the problem, write a vague plan, then a highly specific one achieving the vague plan, before coding fuzzy write-wake detection in eight bytes of state including a circular window bitmap of conflicts in the last N operations, plus a bunch of stats elsewhere showing what happened. It's just tedious, rather than hard. I would have found it fun twenty years ago.
In the real world, we have wives and kids.
I also have no interest in playing engineers *against* mathematicicans. My point was that they ought to be working together. To claim human-level results like "ease of use" is meaningless unless you're getting quantitative feedback from users, in statistically meaningful sample sizes, in some form that can reasonably be interpreted as representing ease of use, like time required to perform a particular task.
On purely mathematical grounds, you can say that things have particular mathematical properties, and prove it. That's not the issue. But if you aren't working with engineers and actually measuring something, then making claims that some set of mathematical properties is essential, or more meaningful or effective than some other, in terms of engineering ease of use is making a claim without evidence.
Scientific publications have rules against making claims without evidence in most fields. Why should Programming Languages be any different?
While I generally enjoy reading Wired articles, I found this one quite disappointing. I enjoyed the draft paper on Socio-PLT: it's basically a call for participation to a new¹ research area, with a though-provoking vision and a whole methodological stance (use sociological tools to study PL). The article contained nothing of the sort; it basically reduced it to a popularity/buzz comparison between existing languages (Dart, Go, Scala and C), and platitudes on academic ivory towers.
¹: despite having been informally discussed here on LtU a few times before; this is also related to the work on "Human factors in PL" that Ehud is interested in. I don't know the field well enough to claim that the work here is scientifically novel, but that's at least what I felt, as an outsider.
I think the Socio-PLT draft really deserves a front page story (on LtU) by itself.
It is nice to have a front page article on wired even the popci writing isn't good, for us the paper is much better!
If anyone has any burning questions about programmers they'd like answered, we'd like to know! We've been doing waves of surveys whenever an opportunity presents itself (massive online courses, moments of publicity, etc.), so this is a good chance to scope out any PL hunches :). What I think you can do, especially for all the C projects on SourceForge, is analyze re-work of particular lines of code over time and look at the tokens in the language being used in that re-work. That way you don't even need to care about the language's semantics first. You're just a boy swinging a hammer looking for nails. That's the best way to get great insights.
Another way to get great insights is to weight your results via a product-sum algorithm, which I've already recommended to you.
Finally, going back to C, there are a ton of static analysis programs for C. Which projects have the most checkins where the program fails static analysis. If you do a static analysis for every checkin, and the warnings increase, then that particular person checking in the code can be thought to have a "novice to expert" scaled rating. Likewise, there are certain hallmarks of Noobism. The phrase "bug patterns" in the 1980s was coined when there was a lot of research regarding psychology of programming. I believe the researcher's last name was Johnson. He did tons of studies regarding novices ability to understand constructs like for loops. And when I say novices, I mean people like Regis Philbin learning how to program.
Furthermore, something I found fascinating was running Java static analysis tools on code contributed by major companies like Google. You would think they run all the latest and greatest static analysis tools, right? And that they would never check-in code if it failed static analysis, right? Guess again.
So I would say my biggest hunch is programmers continuously overestimate the quality of code released by big companies.
Finally, there is some good research that taps large code bases for figuring out how programmers use libraries. Patrick Lam has done some interesting studies about when do Java programmers chose to use non-standard Java collections.
Edit: I found an early article that coined the term "bug patterns". See: Misconceptions in student's understanding (1979). This phrase is uber-popular today in the software tools community, ever since Eric Allen published Bug Patterns in Java. I also just found An Analysis of Tutorial Reasoning About Programming Bugs, which I think is really cool. Why?.
There are also some old design papers I find interesting, too, but don't have much to do with popularity. Rather, Griswold's paper An Alternative to the Use of Patterns in String Processing, where he makes some very strong usability claims of patterns vs. generators.
I always come back to Sherry Turkle's software bricolage observations (especially with regards to gender and programming, but also more generally applied also). Really, she had this figured out a couple of decades ago on how many people learn how to program.
You make assumptions that the static analysis tools are actually always useful, whereas they often represent biased "best practices" that are not necessarily shared by everyone. Google has a lot of real gates in place to ensure the quality of their code, like code reviews. Java for example, is notoriously hard for static analysis tools since the strong typing gets rid of a lot of the low hanging fruit.
I went to Patrick's webpage and couldn't find anything related to what you mentioned. Can you give me a more specific reference, I'm very interested in the topic of mining how libraries are used.
DSFinder.
As for big companies like Google knowing better, hmm ;-) I will admit, certain idioms in code may seem strange outside Google, but the most common case would be where a library is optimized for a Google in-house compiler (e.g. I *know* the V8 compiler does certain optimizations, so I *know* certain open source JavaScript Google code will get a free speedup). It should be possible to rule out those static analysis issues with other more alarming issues. Besides, your reaction totally validates that my hunch is interesting if true. And Java tools like PMD and FindBugs get plenty of low-hanging fruit.
As for sherry's work, a few years back you recommended I read tinkering. It was very loaded with thoughts on gender, which although interesting is not what drives me wild. Besides, Marshall McLuhan had way deeper insights in fewer words.
Edit: Some food for thought. When Microsoft released StyleCop (a tool that checks the syntactic layout of C# against various rules), they were shocked users not only wanted to use it, but wanted to write their own style rules with it. Users liked it so much that Microsoft didn't want to support it and instead let the C# community support it. Since then, it is integrated with ReSharper, and also a more powerful rule checker, StyleCop+, has been built on top of it. We use StyleCop+ at work to enforce naming conventions.
Turkle's observations can be generalized to say that people just think/create in different ways. Some are more wired for top-down methodical thinking, some are more wired for bottom-up exploratory thinking, and of course most of us are wired somewhere in between. That there is a gender bias is interesting but not the only thing to get out of that.
That users wanted to incorporate their own styles into stylecop reinforces my point about SA tools. There is no standard here, and evaluating code written against one standard according to another isn't going to tell you much.
My biggest hunch about programming is very meta.
Thomas La Toza, who has done some really good work lately on the HCI side of things, told me that many of his peers working at big companies will do "API Usability Studies" and present the results and recommendations to the lead designers/architects. Yet, they almost always get shot down.
Rather than knowing the relationship between a programmer and his code, I want to know what suggestions programmers will always decline. Also, what kind of API design habits will a programmer make given a constrained set of allowed language features? For example, my hunch is if you don't give users functions with default parameters, then they will write many overloaded functions. I see this in a lot of Microsoft code, like ASP.NET. Function overload madness. Another interesting thing I saw in some ASP.NET code was how they started co-opting anonymous types to make it look like C# supported dictionary literals like Ruby. This encoding, using anonymous types as dictionary literals, is probably something the language designer did not intend or think of. All the programmer was looking to achieve was getting rid of passing an array of params as the last parameter to a function. And then I see this encoding mixed in with overloading. I blogged about how much I disliked this in terms of program comprehension and maintainability when I first saw it, and I still hate it, but it overall fascinates me. Further, it is where my hunch about constrained language features comes from. Users will figure out ways to do things with your language you never imagined, whether it is Todd Veldhuizen for C++ or Oleg Kiselyov for Haskell or Joe Plummer as your user.
Data mining should save us here; I think it's already heavily practiced at Google. User studies often lie in that they are just not empirical enough to be useful. But to go out and see what APIs people are actually using and how they are using them...
I'm an expert at co-opting C# to make it more interesting than it really is. I've got a pattern that allows me to write something like
new XXX() {
Extend = { A, B, C }
}
new XXX() {
Extend = A,
}
And so on...it really saves me a lot of effort when I'm encoding another language in C# without the benefit of a parser.
Isn't it that data mining ( or empiricism ) can help decision making in the short term but might be counterproductive in the long run?
Suppose you always try to optimize your position in the charts and respond to user desires immediately, you will also break expectations about the reliability of your approach and the strength of your intuition as a "lead designer". Being a Maoist doesn't suffice. Sometimes one has to be Mao himself.
In practice we work around this with some models and tricks. Instead of a "homo economicus" we have the "average programmer" and what is good for the average programmer is also good for you and me. Of course sometimes we want to beat the averages and become a "better programmer" and also offer models and solutions to achieve this e.g. by referring to a master signifier such as "science". It cannot go both ways. So it may be more important to determine the "average programmer" and the "better programmer" than to literally adapt to any of those. It is programmers who will adapt.
Two things.
First, Wired practices very fast-and-loose journalism, so take it with a grain of salt. E.g., they consistently misquoted us.
Second:.
Misquoted :) I actually said that, according to our study, ActionScript programmers (unlike JavaScript programmers) typically believe they can see underneath the hood. That means programmer understanding of their tools is flawed. In the case of ActionScript, this can be costly if the typical AS dev believes that they can do something like build a game engine or DSP lib. It's possible, but high performance requires a lot of reverse engineering.
(And yes, Patrick Lam is great. Just had dinner with him :) ).
This finally gets into the constructive side of things!
In this case, what if a programmer misunderstands the semantics (e.g., writing C in Java), the APIs, ...? What should we do about it? A vague but useful CSCW model is "actor networks", where you essentially can replace people with machines, such as a pair programmer getting replaced by an automated tutor. The model is flawed in the sense of Ackermann's sociotechnical gap: technology augments, so just because a technology (like Resharper) exists, doesn't mean it's good. How the heck do we tell if Resharper is actually good, and if not, what are good guidelines (design space axis) for doing better?
If you haven't read it, I suggest skimming our draft paper, and the last two big sections in particular. We cut many of the relevant theories and blue sky hypotheticals due to length, but I think you'll enjoy it.
1) Read the paper I linked to about how tutors use consistent patterns for identifying students weaknesses and apply associated consistent patterns to help them. It takes an expert systems approach.
2) The value in using actor networks to solve these sorts of "debugging" problems was first proposed by one of Hewitt's students, precisely for the problem of "automated tutors". Tutors was a big idea at MIT in the 70s -- Sussman did his thesis on tutors as well. Hewitt's student argued that actors could serialize all messages to an external database, and that this database could then be used to provide essentially infinite backtracking in conjunction with forward chaining, to provide rich problem solving capabilities.
3) How do I know Resharper is good?
Uhm, I suggested a way to measure feedback into the system. This isn't exactly what you want, but it is pretty darn good. Compare to that silly Visual Studio Token Economy mentioned on LtU a few months back. At least within the system I sketched out above, users are actually ascending some skill tree. It really shouldn't matter how good the skill tree is, because we're not Michel Foucault ;-)
4) I really want to read your paper, and more of your work in general. When are you going to have a better web page?
According to the abstract's opening, the focus is socio-PLT, so a fish-out-of-water perspective might be useless. If I care little about social perspective, you might get that. I devote attention to a small number of close associates, so the opinion of others is just information, often cleaving toward least common denominator I dismiss.
(A dozen years ago, a coworker told me a new rule "everyone" had to follow, which I said I planned to ignore. He smiled and asked, "What if everyone did that?" When I criticized the premise underlying that question, he laughed and asked, "What if everyone made fun of the categorical imperative?!" It seemed funny at the time.)
Hypothesis 1. Working on the closure side of the duality increases influence on programming language researchers, but decreases influence on both practitioners and software engineering researchers.
From context I infer "the duality" is closures and continuations in functional languages vs objects in the form of generators and iterators. You might insert parenthetical "(vs objects)" after the word closure, if you don't want folks to read the preceding section closely to figure out what the duality references.
Some languages (eg Smalltalk and Lisp) have collection methods taking a block/function as a first class argument, applied to every collection member. I like that style a lot. It's clear and small, and can be more efficient. But if the execution can be suspended in the middle, then later resumed, this confuses a lot of people because reasoning about the implicit state has few mental handles to manipulate. But explicitly coded objects to represent iterative state has a place to go consult for study, when reasoning. I know folks who love iterators and hate high-order methods. It's a strong bias.
If we pretend old Jean Piaget research is relevant, the object approach is more concrete and therefore easier. Piaget put a lot of effort into charting normative progress from concrete to abstract, culminating with more abstract operational thinking relevant to mathematics in a child's early teens (if they get there at all, which I doubt in the case of some folks). Maybe we should assume concrete is mentally cheaper.
In programming languages it's possible for abstractions to fail in support for inspection and verification. A mathematician would not tolerate mysterious propositions you need not prove, but it's okay to tell programmers they don't need to know? Hmm. Seems kinda patronizing, maybe asking to be ignored.
Hypothesis 2. Both programming language designers and programmers incorrectly perceive the performance of languages and features in practice.
Yes, usually. Designers of low level languages might do better. It's hard to pick a suitable cost model. Intuitive grasp of algebra is necessary to see weighting matters. (Bob drives a compact and Alice drives a hulking 60's gas guzzler; how much total gas is burned in their cross-country race?) And until you check empirically, it's easy to be wrong.
But there I talk about performance cost, and the section containing that hypothesis speaks of cost in a more abstract sense -- something like, "What am I giving up to get this suite of features?" That seems even easier to get wrong. Most people seem to underestimate the cost of dealing with anything present: more is less when each feature F added requires a programmer to think about it, perhaps more than it's worth. That's what you meant, right?
Solutions nearly always bring their own problems. There's a large class of design bugs related to problems caused by solutions. A classic one goes, "This 99% solution makes it impossible to handle the last 1%, because now it's in the way." More subtly, some solutions are all hammers, when sometimes you should really use a screwdriver. Since this is probably a fruitful place to have a long discussion, I won't add more examples. However, it's common in programming that folks want a house but build a shack on the lot first, which is in the way.
Hypothesis 3. Developer demographics influence technical analysis.
(The comment in that section on modularity bugs at human communication boundaries might explicitly reference Conway's law.)
It sounds like you're saying it's dangerous to pool results into commentary on what "average developers" do when it might only be a subset of developers involved. In short, you need a histogram. (Those are pretty important in display of patterns in statistics for stochastic systems like networking code.) Sounds good: avoid lying to yourself with statistics.
Hypothesis 4. Programmers will abandon a language if it is not updated to address use cases that are facilitated by its competitors.
There is competition, yes, so features cannot be weighed in a vaccuum. (This amounts to: context matters.)
Here's a personal example. I have used more C++ than anything else, but as C++ heads to higher level, I plan to go back to C because any new hand-holding is in my way. That's competition. More generally, your installed base competes with anything new you want to do, and your community might not follow.
Hypothesis 5. The productivity and correctness benefits of functional programming are better correlated with the current community of developers than with the languages themselves.
That is such a good hypothesis. (One that must be asked; I don't know if it's right.) It turns out you can write spaghetti code with anything. So it's worthwhile to look at features with an eye to what will happen when someone uses them in spaghetti code, because they will.
Hypothesis 6. The diffusion of innovation model better predicts language and feature adoption than both the diffusion of information model and the change function.
(This is a side note you can ignore. But here's a pattern to note. Ted tells Alice, "Bob says XYZ is really cool." But Alice replies, "Well, Bob's an idiot." Which generalizes to House snarking, "Well, everyone's an idiot." Sometimes you want to know the histogram of who thinks an idea is really stupid.)
I'm not quite sure I get the point of this hypothesis. I'm guessing it's this: "If your model has even more steps where adoption failure can occur, it might predict failure better." That sounds right. (I spend a lot of time thinking of everything that can go wrong before I think new code will likely work.)
Hypothesis 7. Many languages and features have poor simplicity, trialability, and observability. These weakeness are more likely in innovations with low adoption. [Typo: weakness.]
Hey, that's my song; I'm glad to see it here. From a social perspective, you might work up some kind of challenge-response model to predict more adoption failure when advocates cannot defend against critical challenges. Ted says, "This language sucks because X happens when I do Y!" But Alice counters, "No it doesn't, and here are clear examples showing what actually happens." From a non-social perspective, individuals want fast turn-around hypothesis testing.
Hypothesis 8. A significant percentage of professional programmers are aware of functional and parallel programming but do not use them. Knowledge is not the adoption barrier.
Plausible to likely as not an adoption barrier. I have trouble making statements of larger scope though. (To me they sound like, "But my friends and I know what we're doing!", which is suspect because it's self-serving, and there's a lot of evidence everyone enjoys flattering lies.)
I see functional and parallel programming tactics go up as folks get more experience, especially with anything managing data, but it's done in the context of imperative languages like C, rather than using a functional language. The benefit of immutable data -- when it's shared by multiple control flows -- is so high that designs must explicitly revolve around when and how data becomes immutable.
(In practice, folks either cheat, or code too casually. So it's necessary to audit that data does not change when it should not. An api can announce, "If you pass me immutable data, I promise not to change it and I'll let you know when my last reference goes away." Then the caller changes the data while the callee is using it. Bad caller, bad. :-)
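For illustration, here's a minimal Python sketch of the audit idea (all names hypothetical): the callee fingerprints data it was promised is immutable, and catches a caller that mutates it anyway.

```python
import hashlib
import pickle

class ImmutableCache:
    """Hypothetical callee: accepts data on the promise it won't change,
    and audits that promise by checksumming a snapshot."""
    def __init__(self):
        self._entries = {}

    def hold(self, key, data):
        # Record a fingerprint of the data as it was handed to us.
        digest = hashlib.sha256(pickle.dumps(data)).hexdigest()
        self._entries[key] = (data, digest)

    def audit(self, key):
        # True only if the data still matches the fingerprint taken at hold().
        data, digest = self._entries[key]
        return hashlib.sha256(pickle.dumps(data)).hexdigest() == digest

cache = ImmutableCache()
record = ["alpha", "beta"]
cache.hold("r1", record)   # caller promised not to touch `record` again
record.append("gamma")     # ...and then touches it anyway: bad caller, bad
print(cache.audit("r1"))   # False: the audit catches the broken promise
```

In C the same idea is usually a debug-build checksum over a buffer; the point is only that the promise is cheap to state and worth verifying.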
Hypothesis 9. Developers primarily care about how quickly they get feedback about mistakes, not about how long before they have an executable binary.
I compile far more often than I want to actually run a unit test, because I want feedback about whether I have started propagating bad code among several locations I edit. This would be painful if compilation were so slow it interrupted my train of thought. Library size can be used to limit max compile time when libraries can be compiled separately.
Hypothesis 10. Users are more likely to adopt an embedded DSL than a non-embedded DSL and the harmony of the DSL with the embedding environment further increases the likelihood of adoption.
Yes, because control over the top level environment is slight where there's a team and huge inertia in existing code. Since the top typically cannot be changed, the only way a new language can be used is via embedding, and only if the DSL knows it is secondary and has to behave within bounds established by the top layer.
Hypothesis 11. Particular presentations of features such as a language-as-a-library improve the likelihood of short-term and long-term feature adoption.
Almost tautological if phrased, "When a language says my-way-or-the-highway, sometimes a programmer chooses the highway." (But Neo has been there before, of course.)
Hypothesis 12. Implementing an input-output library is a good way to test the expressive power and functionality of a language.
It might also go hand-in-hand with trialability and observability when I/O is the only way to inspect.
Hypothesis 13. Open-source code bases are often representative of the complete universe of users.
Umm. I'm guessing probably not. But maybe close enough for what you want, or maybe you don't mind disservice for closed-source users.
(I must be thinking something will appear in closed-source that doesn't appear in open-source, and that maybe there's a correlation.)
Hypothesis 14. Most users are tolerant of compilers and interpreters that report back anonymized statistics about program attributes.
Sounds plausible, but will they believe it's anonymized without auditing it? Or if auditing is too much trouble and folks assume the worst? (Think about the challenge-response cycle with paranoid folks here.) Being able to turn that off would improve adoption.
Hypothesis 15. Many programming language features such as modularity mechanisms are tied to their social use.
Features correlating with human communication boundaries, like modules, should interact with social forces and could affect adoption. (Ted tells his boss, "We can't use that on our team because of how it manages modules.") Some features can have an even bigger social than technical role when they facilitate Conway's law.
Agreed on many points. A few things stand out:
We recently started polling developers on thoughts about OO vs. FP. In particular, about how interchangeable HoFs are with objects. There was a lot of confusion.
A lot of techniques pioneered in the FP community work just as well in the OO one, and vice versa. If you believe social use advances invention and innovation, the perceived distance between HoFs and objects is problematic. The OO vs. FP divide is therefore inefficient in the social sense, and particularly troublesome given that practice is on one side and research on the other.
Hypothesis 6. The diffusion of innovation model better predicts language and feature adoption than both the diffusion of information model and the change function.
...
I'm not quite sure I get the point of this hypothesis. I'm guessing it's this: "If your model has even more steps where adoption failure can occur, it might predict failure better."
Having more steps is actually surprising. A cynic has more surface area to attack, finding things that are wrong, contextual, or should at least be deemphasized.
I found it useful in that diffusion of innovation is a rich working model, which we now know to 1) examine (falsify/refine) and 2) build upon for downstream PL research. For example, I hadn't really focused on observability until I saw it.
Hypothesis 8. A significant percentage of professional programmers are aware of functional and parallel programming but do not use them. Knowledge is not the adoption barrier.
...
Plausible to likely as not an adoption barrier. I have trouble making statements of larger scope though
Ah, but this is useful. It helps shift the blame away from teaching / promoting PL concepts. Now we know to look further out.
The actual numbers get rather funny in our surveys. Many people claim to know about threads and tasks, but many fewer know about determinism. So, I'm not 100% on that hypothesis (and many others, such as hypothesis 13, which sounds very wrong). However, like the HIV prevention folks, we can actually start to analyze the situation and act on it in an informed way rather than adopting education policies, research trends, etc. that are not grounded in reality.
lmeyerov: We recently started polling developers on thoughts about OO vs. FP. ... There was a lot of confusion.
There's a concept "knowing what you know" I have heard used in the context of grasp of grammar. A person can know the rules of grammar, but not be able to cite them; so they can perform but not articulate. (It's considered slightly weird if you do it in C, though folks rarely object enough to ask you to stop doing it when it makes sense.)
The point of abstraction is the ability to swap parts on one side, when details are hidden, because you couldn't depend on them anyway. I rarely see folks swap things though, even when testing. (But I think people in test-driven environments do, a lot more, especially when mocking interfaces.) So it's possible to use abstract or high level code and just ignore that aspect. Cut and paste doesn't require knowing what you're doing. What was my point? I don't see a lot of self-conscious review in coders.
There has been research done on how quickly folks perceive patterns (in decks of cards with different probabilities of yielding good or bad results in a game), and subjects appear to anticipate unconsciously well in advance of forming a conscious opinion. So a programmer can dislike something unconsciously for reasons they can't articulate in a poll. (Wow, that sounds a lot like BS; there's almost no way to test that, sorry.)
Having more steps is actually surprising.
I'm used to more steps yielding less pleasing results, especially in schedule estimates, which become longer the more explicit detail is noted. Nothing takes zero time.
It helps shift the blame away from teaching / promoting PL concepts.
I don't know how long it takes before folks learn it in practice. There could be a shortfall in grasp of functional concepts just after basic learning. Everybody with ten years of experience who knows what they're doing says, "Yeah, yeah, I know," when you go through the immutable data parts of problems. And they actually do know, based on good planning and design. But I have a poor sample set size, and I'm not sure how the knowledge is acquired. Seems to be direct finger-burning experience with mentor debugging.
but many fewer know about determinism.
There's a problem with grasp of (non)determinism even when someone has lots of experience. It's worse in optimistic people with a natural expectation events converge, and a notion: "The world tends to be a nice place, left to its own devices." An extreme variant -- rarely in a programmer -- is what I call a "one context" person who believes only one thing is true, and it's true everywhere, and exceptions are aberrations. Once familiar with what's "supposed to happen", they have trouble perceiving faults and their causes. People who are good at processing rules might expect rule-driven systems to actually behave according to the rules. :-)
Pessimists seem better at expecting non-determinism and its consequences, especially if they expect divergence is natural, and believe in Murphy's law, etc., so things going according to plan is the exception.
From the trenches, I'd be surprised if even a quarter of professional programmers knew what's happening. And that number is as high as it is (at time of writing) only due to LINQ and jQuery.
But that's the thing; part of this research will hopefully involve objective views of what programmers know, and what they are even capable of using.
RM Schellhas: From the trenches, I'd be surprised if even a quarter of professional programmers knew what's happening.
When I said my sample set size was poor, I meant non-representative. And it's currently biased as well as small: systems programmers where several have 30 years of experience.
I was addressing this implicit hypothesis: "If programmers only knew about functional programming, they would give up their wicked ways and follow the true light." (Other than hyperbole, did I get that right for some folks here?) But I know some coders who understand the ideas really well, and the result is using related techniques in C, since we ship in C. The idea of having a revolution and talking everyone into using a functional language doesn't reach the question stage. So despite my crummy sample size, I thought it worth reporting this particular "If A then B" proposal sometimes had B false when A was true.
Have you actually read the code?
And LINQ is not all too easy to read; it just tends to be easier than for loops. Richard Hale Shaw, an extremely well-known .NET teacher, has a number of good talks on LINQ details with regard to common misunderstandings he sees on how iterators work and how they are resumed in concurrent settings.
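The deferred-execution pitfall is easy to reproduce outside .NET; here's a rough Python analogue using a generator expression, which is lazy in the same way a LINQ query is. (This is an analogue, not LINQ itself.)

```python
data = [1, 2, 3]

# Like a LINQ query, a generator expression is a description of work,
# not a result; nothing runs until the sequence is enumerated.
squares = (x * x for x in data)

data.append(4)        # mutate the source *after* building the query

print(list(squares))  # [1, 4, 9, 16] -- the late append is visible

# A second enumeration is another classic trap: the generator is spent.
print(list(squares))  # []
```

Both surprises come from the same fact: the query is re-resumed against live state, not snapshotted when it is written down.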
I am aware of the Large Hadron Collider, and to a certain extent I know what it does and how it does it. I doubt I would be able to use it, though.
I strongly suspect the same goes for the majority of programmers who are 'aware' of functional or parallel programming.
Simon.
Regarding "burning questions", I'd like someone to tackle the belief that people think sequentially or find it intuitive.
It's something I hear repeatedly but I'm skeptical about. I am inclined to believe a claim that our language centers are mostly sequential, due to our limits of one mouth and tongue. But spoken language is not how we think, rather a constraint on how we express some thoughts. (Notably, experts rarely use their language centers to facilitate their thinking compared to beginners in a skill, at least in Chess.)
I've observed my own approach to programming, planning, building arguments - and it is certainly not linear. I sort of seed a program with different thoughts or goals, then meld them together. I've been designing paradigms and languages around this style of thinking - i.e. that support what I describe as "stone soup" programming: components and agents each contribute a few small skills or features, and they are merged together through common environment, discovery, and blackboard metaphors. The cost is that expression of linear behavior is somewhat awkward to express - still easily achieved, but in terms of rules with shared state. I think this is okay, since I believe very few problems are truly linear (we can't walk through the door before we open it, but opening and walking through a door involves a dozen concurrent challenges - like maintaining balance and avoiding obstacles).
Similarly, I disfavor many structured editors because I don't believe people think in a very structured manner. I would like enough structure to support multiple views with bi-directional editing (since I believe people think from multiple views at once), so there's a balance to be achieved.
Guess what a common complaint is when I try to explain this? That people think sequentially, and therefore (well, they don't say this explicitly, but bear with my inference) people won't ever rise above imperative programming. Ugh.
How people approach problems has a huge impact on how our IDEs and programming languages should be presented. I'd love some solid empirical evidence about how people approach non-sequential programming, especially beyond toy problems.
I've brought this up before, and you made a fairly convincing argument against my perception that most people think about a solution in the form of a recipe or directions.
I would perhaps not be surprised if beginners' thinking (and thus language adoption) was starkly different than someone who is comfortable with a language. Certainly a good thing to measure.
I can't even remember that. But it came up a few times more recently for me. It's getting to be a pattern.
The question of whether people think sequentially seems to me to be a red herring.
I am inclined to believe a claim that our language centers are mostly sequential, due to our limits of one mouth and tongue.
But then why do we read sequentially? Or maybe you'll argue that at a small scale, we don't and that we pick up parts of the sentence we're reading in parallel. Rather than try to answer this, I would ask, who cares? How does the answer affect how we program? I don't think it really does.
Parallel programming is in some sense harder than sequential programming because one technique for understanding programs -- thinking through traces of programs -- becomes much more difficult in a parallel setting whereas any technique that would let you understand a parallel program would also work if it happened to be implemented sequentially.
How much more difficult it is depends much on the extent to which you rely on this technique of thinking through program traces. If you're primarily using other modes of reasoning to establish the correctness of your programs, then it might not be much harder.
Do we read sequentially? I don't know, but I do know that at a high level (e.g. browsing Wikipedia) my reading experience is clearly non-sequential, and that at a low level it is quite irritating and difficult to read through a peephole even if it kept up with my eyes. I know from observation that I don't write sequentially. I tend to hop around, adding to sentences and paragraphs, modifying them. This leads to all sorts of tense and plurality and referent errors, trying to squeeze my thought patterns into a linear medium like text. I would not be surprised to learn that many others are similar.
It seems strange, to me, that you assume how we think and read would not affect our programming experience. How could it not?
any technique that would let you understand a parallel program would also work if it happened to be implemented sequentially
I believe this untrue. Knowing that two computations are computed "independently" in parallel - i.e. that one computation does not wait on another - is valuable for certain classes of reasoning (e.g. about denial of service, progress, fairness, latency). If we sequentialize a parallel program, we have difficulty reasoning with respect to divergence.
How much more difficult [parallel programming] is depends much on the extent to which you rely on this technique of thinking through program traces.
I do observe that adding parallelism to a paradigm designed for sequential expression and reasoning (imperative) has not been a pleasant experience for most developers, whereas modeling sequential actions on languages designed for concurrent reasoning (temporal logics, FRP, etc.) is relatively easy.
I don't see why you would expect the physical process of programming to have such an effect on what we're programming. In the real world, we're accustomed to time passing and things changing, but it's still easier to reason about things that aren't changing. Similarly, I think sequential is just fundamentally easier than parallel, regardless of how familiar it is.
any technique that would let you understand a parallel program would also work if it happened to be implemented sequentially
I believe this untrue. Knowing that two parallel computations are computed "independently" - i.e. that one computation does not wait on another - is valuable for certain classes of reasoning (e.g. about denial of service, progress, fairness, real-time properties for subprograms).
But you can compute independently in sequence, which was my point. You can always sequence a parallel process, and your reasons for why it works will still be valid. The other direction is problematic.
I don't see why you would expect the physical process of programming to have such an effect on what we're programming
Because naively, it seems as though there would be more 'waste heat' (for lack of a better term) or error chance the more the programming medium differs from our mental models. That likely causes some impedance which shows itself as a number of adoption, usability, or accuracy symptoms.
The relevant mental model is how we understand parallelism, not whether parallelism is present in our thinking. If you're working on an interrupt driven architecture, would it be helpful to be interrupted every few minutes? Of course not.
Programming is humans interacting with a computer to express their interests, policies, intentions, and obtain feedback, intelligence. The issue isn't how we understand parallelism, it's whether parallelism is present in our natural form of expression and interaction - i.e. does the computer language adapt to our needs, or must we adapt to the computer?
If we think or act in terms of interrupts (perhaps to cancel or undo) and wish to control systems by that means, then our PLs and IDEs and UIs - our programming experience - should make it easier to do, to live program in that manner.
[edit] I'm guessing you got confused about the different topics of "how we think" vs. "how we mentally model parallelism". Even if we don't understand parallelism (because our languages make it painful), I believe we think non-sequentially (i.e. forwards and backwards chaining, inside out, outside in, flitting from one subject to another like butterflies).
In live programming it becomes obvious that there is no clear dividing line between UI and PL (and it becomes obvious that this is true for more than live programming). There is no clear distinction between the physical process of programming, how we express ourselves, and what we express. There is only the programming experience - how we interact with our computers to extend human reach and intelligence.
The traditional program life cycle, i.e. involving a frozen program and separate compilation, has bought us much pain and accidental complexity (primarily with regards to distribution, upgrade, persistence, security, debugging, and language integration) and a few dubious benefits. Things don't need to be that way.
I agree that it's easier to reason about things that aren't changing. It's easy to lie to ourselves and pretend things aren't changing. It would be better, IMO, to model this explicitly, i.e. via a staged interpreter, so we later can easily repair the program (live upgrade) when we find things have changed.
sequential is just fundamentally easier than parallel, regardless of how familiar it is
If the problem is such that there is a unique, predictable, sequential solution for it, this might be true. But most problems don't seem to fit the sequential mold very well.
you can compute independently in sequence, which was my point. You can always sequence a parallel process
Counter example: If you have parallel non-terminating processes, sequencing them will not result in the same system behavior as would running them in parallel.
What assumptions are you making that would make your claim valid?
You don't need live programming to adopt a view of programming languages as user interfaces. The traditional program life cycle needs to be carefully extended to support live programming. If you just jump head first into the pool, though, you'll probably end up with something like smalltalk. Your "staged interpreter" comment seems to imply you have something more sophisticated in mind, but I'm still not on board with your desire to get rid of libraries. But we're heading off on tangents, as usual.
I agree that it's easier to reason about things that aren't changing. It's easy to lie to ourselves and pretend things aren't changing.
It's not always a lie. There are plenty of applications which have no need for persistence. You run your program and if it doesn't work you kill it and start over. When that approach works, it's simpler and probably superior.
Counter example: If you have parallel non-terminating processes, sequencing them will not result in the same system behavior as would running them in parallel.
I really just had in mind that hardware parallelism can always be emulated by task-switching parallelism, but I think a suitable assumption would be that all processes terminate.
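A small Python sketch of that emulation claim (illustrative only): with terminating processes, strict sequencing and round-robin task switching produce the same work, just in different orders; a non-terminating first process would starve the rest under strict sequencing.

```python
def process(name, steps):
    # A toy "process": a generator yielding one unit of work per step.
    for i in range(steps):
        yield f"{name}{i}"

def run_sequentially(procs):
    # Run each process to completion before starting the next.
    out = []
    for p in procs:
        out.extend(p)
    return out

def run_interleaved(procs):
    # Task-switching "parallelism": round-robin, one step at a time.
    out = []
    procs = list(procs)
    while procs:
        for p in list(procs):
            try:
                out.append(next(p))
            except StopIteration:
                procs.remove(p)
    return out

print(run_sequentially([process("A", 2), process("B", 2)]))  # ['A0', 'A1', 'B0', 'B1']
print(run_interleaved([process("A", 2), process("B", 2)]))   # ['A0', 'B0', 'A1', 'B1']

# If process A never terminated, run_sequentially would never reach B,
# while run_interleaved would still make progress on both.
```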
You don't need live programming to adopt a view of programming languages as user interfaces.
Of course. Live programming just makes this obvious, deep, strips away the illusions that there is a distinction between them (other than bad practice).
There is a difference between knowing this as an intellectual curiosity and knowing it viscerally, as a fundamental truth. Sort of like the difference between knowing people play football and actually playing football. To you, the relationship between PL and UI is still an intellectual curiosity. You don't really understand it.
It's not always a lie. There are plenty of applications which have no need for persistence. You run your program and if it doesn't work you kill it and start over. When that approach works, it's simpler and probably superior.
Did you just provide an example of a killable application (state change) that can be started over (state change) from a source (associated persistence) which was possibly modified (state change) as an example that things aren't changing and don't need persistence? I don't find it very convincing.
We can model things as constant between changes. And we can model a system as "static" if it changes at a much slower rate than the observer. These are convenient abstractions. But they're lies.
Failing to address the nature of change and program life cycle formally, and adopting simplistic solutions like the one you suggested, are significant sources of accidental complexity and pain in our programming experience and HCIs. This isn't a subtle problem, but is so pervasive and systemic that it's difficult to recognize. We shouldn't need to restart our applications and operating systems to upgrade them. But we do, because that's the solution down at the lowest layers.
I really just had in mind that hardware parallelism can always be emulated by task-switching parallelism.
Sequential would mean you can give an ordering to the processes as a whole, not to each tiny atomic sub-process.
When I spoke of people not thinking or approaching problems sequentially, it hardly matters to me whether it's parallel or fast "attention" switching, just that it isn't easily ordered on any particular dimension. I expect we do both.
Live programming just makes this obvious, deep, strips away the illusions... viscerally ... fundamental truth ... heathen ... don't really understand it.
Claiming a deep connection between disconnected things seems like a great way to increase my profundity, so I'll go ahead and stake the claim that live programming, programming language design, and user interface design are actually all the same thing under the Curry Howard isomorphism.
I actually think the connection between PLs and UIs is quite important and have explored many related ideas over the years. I just think that connection is mostly orthogonal to live programming (which is also important).
Let me follow the reasoning: live programming encompasses all of the important activities we've been doing without live programming, and is thus indispensable. Hard to find a flaw.
The connection between PL and UI can be discovered without a good live programming experience. I agree. I've already said as much. However, my argument is that human ability to really grasp this relationship does not seem to be orthogonal to live programming - to experiencing it, or designing for it.
I actually think the connection between cargo shipping and the economy is quite important. I know it to be true. But my knowledge is shallow, based on History channel documentaries and armchair reasoning. I expect someone who actually understood shipping or economics would pick up on this pretty quickly, if I were to discuss the subject with them.
Let me follow the reasoning:
I'm not sure where you came up with what you wrote after the colon, but it certainly wasn't the reasoning. I'll offer you a better summary:
YOU: it's easier to reason about things that aren't changing
ME: sure, but everything changes. we should acknowledge this in PX.
YOU: untrue. we can model programs as constant between changing them when they're broken. oh, and restarts are often superior.
ME: the body of your argument contradicts your claim, and supports mine. oh, and restarts are a source of much trouble and pain.
YOU: you're just promoting live programming! here's a straw-man summary and some sarcasm. false apologies in advance.
ME: I poke holes in your argument and promote live programming. In parallel, BTW.
Live programming is probably a good setting for understanding many things better, but I think most people would have no trouble grasping a language that did a good job of UI as PL, even if it didn't support live programming.
I guess the rest of the apparent disagreement is just my misunderstanding of what you mean by Live Programming. I still don't fully grasp what you're arguing, but I'm guessing it's reasonable and I just don't get it.
Every UI is already a language - with a grammar of checkboxes, textboxes, buttons, and so on. Many UIs make bad PLs, or very domain specific, or both - the PL equivalents of HQ9+. (Some days it seems we're at the level of: "Jump up and down on your seat to honk your horn!")
To do a good job of live programming requires doing a good job with the UI experience - i.e. maintaining stability across edits, ensuring consistency between system behavior and code, not losing important work. Conversely, live programming enhances the UI experience with a good PL experience - flexible abstraction, composition, extension; good debugging and testing or error prevention; etc..
It seems to me that "good job of UI as PL" implies live programming, and vice versa. You indicate that the concepts might be separated, but perhaps we have different ideas of what "good job" means. Hypothesis: any example of `UI as PL` that is not live programming will be inferior in both roles.
There aren't three problems - UI, PL, Live Programming. They're different perspectives on the same problem.
Every UI is already a language - with a grammar of checkboxes, textboxes, buttons, and so on.
This seems confused or at least stated poorly, but maybe we have completely different things in mind as "UI as PL". Here's my take: Any particular UI can be viewed as a user control for manipulating a data value. The UI as PL point of view comes from treating that data value as a program to be interpreted. A UI with two checkboxes and a textbox doesn't correspond to a language with checkboxes and textboxes as its grammar. It corresponds to a language with a single AST node annotated with two booleans and a string. The language that has a grammar of checkboxes and textboxes corresponds to a special UI for building UIs, not any particular UI.
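The "single AST node annotated with two booleans and a string" reading can be made concrete with a tiny sketch (Python here; all names are illustrative, not from any real framework): the UI edits a plain data value, and "running the UI" means interpreting that value.

```python
from dataclasses import dataclass

@dataclass
class UIState:
    """The value a hypothetical UI (two checkboxes, one textbox) manipulates:
    a single AST node annotated with two booleans and a string."""
    uppercase: bool
    reverse: bool
    text: str

def interpret(node: UIState) -> str:
    # Treat the UI's current value as a tiny program and run it.
    result = node.text
    if node.reverse:
        result = result[::-1]
    if node.uppercase:
        result = result.upper()
    return result

print(interpret(UIState(uppercase=True, reverse=False, text="hello")))  # HELLO
print(interpret(UIState(uppercase=False, reverse=True, text="hello")))  # olleh
```

On this view the checkboxes and textbox are just an editor for `UIState` values, the same way a text editor is an editor for source strings.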
"Live programming" to me mainly consists of solutions to the schema update problem. My view seems close to the one Jules just posted about in a sibling post to yours. I still see it as orthogonal.
UI with two checkboxes and a textbox doesn't correspond to a language with checkboxes and textboxes as its grammar. It corresponds to a language with a single AST node annotated with two booleans and a string.
That's one valid perspective. Another is that the UI - a GUI in this particular case - is a system that interprets gestures from the mouse and keyboard. And the language is, therefore, the system of gestures that it interprets.
This gesture interpretation does not reveal anything more than your own, might even seem to obfuscate, if faced with two checkboxes and a textbox. But as we see buttons, canvases, drag-and-drop, animations (with associated timed inputs), I think it will seem more legit.
My dictionary describes a language as any formalized communication system considered in the abstract. There is no reason that an AST is the only valid abstraction with which to consider a language. Abstracting a language in terms of a grammar of widgets (with associated gestures) works relatively well for GUIs.
"Live programming" to me mainly consists of solutions to the schema update problem.
Well, solving the schema update problem is certainly a big part of live programming.
Another is that the UI - a GUI in this particular case - is a system that interprets gestures from the mouse and keyboard.
But anyway, that explains what you meant above, I think: you meant a grammar of textbox manipulations, checkbox manipulations, etc..
I don't see a problem. I have no difficulty understanding "the UI for a text programming language" as having a language of its own, distinct from the "text programming language". It's a valid abstraction, though perhaps not a useful one in the cases you have considered.
We can design text programming languages for streaming interpretation, for manipulating and maintaining system behaviors primarily in terms of appends. I.e. consider something like a textual command-line language, perhaps even a declarative command line. In this case, the relationship between the programming language and the UI language would not obfuscate; I expect it would actually reveal a great deal about the design and purpose of each.
I assume the "text programming languages" you are imagining are not designed with an effective, consistent UI experience in mind beyond a myopic view of notation - and hence the result is a more complex UI and UX, difficult debugging or testing, and awful UI-as-PL that only obfuscates. It's all accidental complexity, the cost of being simpler than possible. That's what we get when we don't take UI as PL seriously.
I think: you meant a grammar of textbox manipulations, checkbox manipulations, etc.
I meant a grammar of widgets where I described it thus. To consider widgets as an abstract means of communication from human-to-computer and computer-to-human - language for influencing systems in both directions - is valid. A grammar of widgets does, of course, admit a corresponding language of "manipulations".
I'm a bit new to this, but it might help to see "live programming" as a gradual rejection of modal programming, where either we're in the text editor, we're in the REPL, we're in the debugger (at a particular source location), we're running the program (navigating its own modes), etc. Live views help us without obstructing the other helpful parts of the UI.
Different language semantics and programming styles might be conducive to different programming experiences, including live views, so live programming concerns are relevant to language designs in general.
If for some reason we only care about developing a program to do a one-shot calculation, that hardly matters. We still build that program in a gradual way, so the UI we're using has plenty of opportunity to clarify and streamline our work as we go along.
This is an interesting view that I hadn't considered, Ross. I like how it tastes, but I'll need more time to chew on it.
Perhaps dmbarbour means something like this:
All programming languages are UIs, even C with a text editor. By taking the viewpoint of programming languages as UIs seriously, we can improve the UI aspect of programming languages. Live programming is a UI aspect. If you have live programming, the UI aspect of programming becomes more obvious: it's not just the code on disk that matters, but how you interact with it. Conversely, if you take the viewpoint that programming languages are UIs, then you'll inevitably be led to live programming.
But such abstract discussions are mostly discussions over the meaning of words, rather than about the meat of the matter. I think a discussion on how this live programming might work is more useful. The central problem seems to be state. Live programming implies that when you modify some code, the application keeps running with the same state. The state being retained is the difference with restarting the application.
Smalltalk keeps all state. Live programming in Smalltalk means swapping one method for another while the application is running. This can easily break invariants. From a theoretical viewpoint it's surprising that this works at all. At the other end you have languages that throw away all state and restart the application from a fresh state. That doesn't break any invariants, but it isn't live programming either.
Perhaps a model to get to a more useful place is explicit migration of state. There is an explicit procedure to extract state from the application before the edit, a procedure to compute the new state based on the old state plus the edit, and then you run the new application (with edit) with the new state. At least you have a handle on the problem rather than randomly swapping out a pointer. The question is what should these procedures be?
It seems to be a good idea to extract the state at the point where no application code is running anymore. So you first let any event handlers run to completion rather than modifying the code in the middle of an event handler. If you've done that, the only problem that remains is if the programmer changed the representation of data. Some changes can be handled fairly easily, for example deleting a property from a data type. Adding a property is more difficult, because the old state doesn't have a value for that property. Even more difficult is if the representation is changed, but when this isn't visible in the data type definitions. For example the application before the edit might be working with a list of things, and after the edit it's working with a set of things.
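As a sketch of what explicit migration could look like (hypothetical names, assuming application state is a plain dictionary): deleting a property just drops it, while adding one forces the programmer to pick a default for the old state.

```python
# Hypothetical sketch: explicit state migration across a code edit.
# Old schema: posts have 'id', 'title', 'draft'.
# New schema: 'draft' is deleted, 'tags' is added.

def migrate(old_state):
    return {
        "posts": [
            {
                "id": p["id"],
                "title": p["title"],
                "tags": [],  # new property: the old state has no value, so supply a default
            }
            for p in old_state["posts"]
        ]
    }

old = {"posts": [{"id": 1, "title": "hello", "draft": True}]}
new = migrate(old)
```

The representation-change case mentioned above (list becomes set) is exactly where such a function cannot be derived mechanically from the type definitions, which is why it stays hard.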
We could of course ask for programmer intervention in these cases, but it's difficult or impossible to detect some of them and it defeats the purpose of live programming. How to solve this?
Well said, Jules. I agree with your summary of my meaning and the description of (part of) the problem. There are some related problems, too, such as stability of the UI (especially in a concurrent edit scenario) - can't have an icon jumping out from under someone's mouse, for example.
I've found it useful to model certain things explicitly.
Rather than "freezing" the app to wait for some event handlers to run, I add temporal tags to everything - the events have tags, so does the code update. Locally, there is an atomic switch in the code at a given logical instant. I also model the propagation of code changes in a distributed system formally - i.e. in terms of dynamic behaviors and signals literally carrying code from place to place.
Stability client-side (assuming that the developer isn't the only client of his code) is achieved by using intermediate state anywhere stability is needed - i.e. to freeze icons under a client's pointer.
There are other aspects I've found it useful to avoid.
Significant changes in state representation I avoid by eliminating internal state from my model, shoving everything into external state models. Of course, that doesn't avoid the problem entirely, but it does simplify the remaining parts of the problem: (a) by designing the state models appropriately, I can make them robust to most schema changes in terms of extensions and abstractions (i.e. SQL-like views coupled to the state). (b) if schema change is necessary, the developer can add a transition path, i.e. create an agent to systematically translate old data into new data, or add extra code to the observers to interpret old data as new data.
To help with recognizing a problem and alerting the developer before it grows: I expect analysis could recognize most cases. But I also support anticipation, with the ability to observe a behavior and abort if it seems problematic (I recently blogged on the issue of avoiding commitment). Either would allow a developer and user agent to get a transition path in place before there is a problem.
Explicit schema migration is a good option for migrating a web application to its next version, and this is also an interesting problem: you don't want an edit to cause O(n) downtime, where n is the size of your state. But I don't think it's a good fit for live programming. When live programming, the data isn't that valuable, and I think it's better to have some heuristics, restart, and have a quick way to get back to the same situation in the new application than to always migrate the old state explicitly. For example, if you are developing a game and you change some aspect of the world representation in an incompatible fashion, it makes more sense to just throw away the game state and restart. In addition it might be helpful to have quick ways to get to the same state, for example by automating the UI as Selenium tests do.
The goal of live programming is to let you quickly see the results of code changes. For this it is often helpful to have more than one live program. For example if you are making a web application, and you're changing some aspect of the layout or some other cross cutting aspect, it's useful to see what the effect is on multiple screens. For example you could record a bunch of sessions of a human interacting with the application in some way. Then on a code edit it would replay those sessions and highlight the ones that your edit affected. You'd need a way to easily keep the scripts that control the UI up to date with the new code. For example you don't want to redo all your test scripts if you introduce a margin around an UI element that results in tons of the automated scripts clicking the mouse in the wrong place. So you'd need to find the right level of abstraction to express the test scripts.
If you take the FRP view then the mouse/keyboard input is the first layer, and the time-varying application state is the bottom layer. A program is something that translates the mouse/keyboard input stream into an application state stream. What I'm saying is that for live programming you don't necessarily want to migrate the bottom layer (by writing an explicit migration), but on the other hand you don't necessarily want to migrate the top layer either (by redoing the recorded session). Finding a way to express the right tests at the right layer and a way to migrate that layer after a code edit is, I think, the key to something that easily lets you see the results of an edit.
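One way to make the "right layer" point concrete: if a recorded session is stored as semantic actions rather than raw mouse coordinates, it survives a purely visual edit and can be replayed against both versions of the app. A hypothetical sketch (names are illustrative):

```python
# Hypothetical sketch: replaying one recorded session against two app versions.
# The session is recorded as semantic actions ("post this text"), not raw
# pixel coordinates, so a layout-only edit (e.g. a new margin) cannot
# invalidate it the way a coordinate-level script would be invalidated.

def run(app, session):
    state = {"posts": []}
    for action, arg in session:
        app(state, action, arg)
    return state

def app_v1(state, action, arg):
    if action == "post":
        state["posts"].append(arg)

def app_v2(state, action, arg):
    # Imagine this version renders with different layout; the semantic
    # effect of "post" is unchanged, so the replay still applies.
    if action == "post":
        state["posts"].append(arg)

session = [("post", "hello"), ("post", "world")]
```

Comparing `run(app_v1, session)` with `run(app_v2, session)` then highlights exactly the sessions a code edit affected.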
I'm uncomfortable with the general focus on "live programming" as the ultimate goal of "interfaces to PL" work displayed in this thread. I personally expect to see more productivity benefits from a systematic reduction of the edit-feedback loop (syntax checking, type checking, and tests) than funny and not-well-defined ways to "interact live" with the program.
In my view, the tests that should be run continuously along the program and reported through a good UI are an extension of static checking (enriched with checks best expressed by programs, rather than type-level specifications), rather than a dynamic affair.
"Live visualization" of the running program seems useful only for some problem domains, where correctness of a change is very hard to test automatically and easy for a human to judge (e.g. the aesthetics of a web design, the naturalness of a body animation, etc.). I believe it could be a huge win there, but am not convinced it is so useful in general -- automated testing looks easier for a good UI to handle and more conducive to productivity benefits. Maybe what you call "live programming" in this case is the way test results are reported (things turning green or red in a corner of the screen, plus a good UI to get an explanation of what was tested and what the results were).
In any case, the kinds of language features that need love for such a "continuous feedback" scenario seem to be exactly those that languages interested in interactive/live programming have focused on. When I change a function, I want only the tests that possibly involve this function to be re-run, to get feedback earlier. This requires a dependency analysis informed by my language semantics. If I understand correctly, such dependency analysis is also a very important aspect of live program modification. So I believe the two views/motivations are quite compatible and converge toward a general goal of more tool-able languages.
I don't think anybody is saying that live programming is the ultimate goal of interfaces to PL. Rather that the goal of live programming, namely reducing the time it takes to see the effect of a change, is a very important goal in interfaces to PL. I probably didn't make it sufficiently clear, but I actually tried to advocate moving *away* from live programming to continuous testing as a method to achieve that reduction in feedback time. Of course static analysis is a totally valid approach too, and it is even complementary to testing in a precise way (finding a proof vs finding a counter-example).
Visualization is very useful in almost all areas. Applications that have a user interface encompass maybe 95% of all programs. For those programs, seeing the program run is not just useful for judging aesthetics: it's also useful for seeing whether it's working correctly. For example, for an LtU-like website you want to see that if you post a reply by typing something and clicking the "Post comment" button, the reply actually appears in the thread. You could of course test this programmatically by inspecting the generated HTML, and this is sometimes done. But in my experience most people just fire up their browser and test it there. [btw, for some cool stuff see Visual Studio 11 for ASP.NET: if you hover your mouse over a line of code the browser highlights the elements that were affected by that line, and if you hover your mouse over an element in the browser the IDE highlights the code that generated it -- Google "visual studio page inspector"]
That doesn't mean that programmatic tests are useless: for less mundane things they are very useful (but keep in mind that most code is mundane). Even for those things, like sorting, function minimization and linear programming, visualizations can be incredibly helpful. Seeing the steps of the simplex algorithm visualized so that you see the algorithm jump from vertex to vertex (or not, if the program has a bug) is very useful. Conventional testing tools just have a tiny communication channel to the programmer: a list of booleans that say which tests failed/passed. Should we cram all useful information about the workings of a program through that channel? I think the approaches are complementary: first find an input that leads to incorrect output of your sorting routine with a quickcheck-like tool, and then view a visualization of the sorting process.
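The "first find a failing input, then inspect the run" workflow can be sketched in a few lines of quickcheck-style random testing (the buggy routine below is hypothetical, for illustration):

```python
import random

def buggy_sort(xs):
    # Hypothetical bug: converting to a set silently drops duplicates.
    return sorted(set(xs))

def find_counterexample(sort_fn, trials=500):
    rng = random.Random(0)  # fixed seed so the search is repeatable
    for _ in range(trials):
        xs = [rng.randrange(5) for _ in range(rng.randrange(8))]
        if sort_fn(xs) != sorted(xs):
            return xs  # hand this input to a visualization or debugger next
    return None

counterexample = find_counterexample(buggy_sort)
```

The tiny boolean channel of pass/fail finds the input; a visualization of the sort running on `counterexample` is what then shows *why* it fails.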
The goal of live programming is to let you quickly see the results of code changes.
While this sort of productivity is a nice consequence of live programming, it is not what motivates my pursuit of live programming (not my goal). I seek live programming because it leads to a better user experience for all users, not just the programmers:
Live programming is a necessary (though not sufficient) condition for my vision. A similar requirement is a gradual learning curve (no big discontinuities, not too much abstraction).
just throw away the game state and restart
What you're really saying is "let's leave this to the OS and external tools". You thus incorporate a user and programming experience of dubious quality and security, outside your control, poorly integrated, and imprecise and inconsistent with your language semantics.
I don't mind modeling restarts explicitly. Making it explicit allows me to restart specific subprograms or extensions, and results in a more consistent experience. (Many other process control patterns also benefit from explicit modeling.)
Hmm, any thoughts on how to ask these? We can get at perceptual bias -- whether programmers would agree with you without any proof. Getting at the underlying process is much trickier.
Btw, in terms of thinking, many researchers believe mental processes do have a sequential basis. Jeff Hawkins' pop-sci book clearly builds upon many neuro and cognitive science theories to discuss how the brain (often) processes in terms of stories. For example, prediction is imagining a new story, while remembering is playing an old story. There is tension with other theories such as parallel association (see Big Book of Concepts for a very readable cog sci literature survey on learning), but the discord often isn't too great. E.g., parallel association still works under his model because the model is hierarchical: stories might compete up the hierarchy for attention and down for action.
Hypothesis: people are a lot less aware of how they think than they believe they are.
For testing, it would be useful to record how people move between reading and editing different parts of a program, the mean number of statements or words read between changing tasks, the mean number of tasks per period, etc.
Note: sequential bias would suggest something stronger than "imagining a new story"; it would mean "imagining a story in sequential order". I expect that in practice we imagine many temporal points at once (i.e. know the ending of a scene before we know the middle) and then chain them together.
Ah, I should clarify: we've been doing surveys, with ~400-1500 people at a time. So anything that would fit under that model :)
I expect that in practice we imagine many temporal points at once (i.e. know the ending of a scene before we know the middle) then chain them together.
Yes, I did mean "imagining a story in sequential order" (or in jumps, with each jump to another sequential segment). The jumps themselves may also be abstracted into a sequential sequence. ("I did this, [ruminate on what that was], which led to that"). This seems related to the hierarchy notion as well (it's not "just" stories).
As an application, this is somewhat related to ideas on organizing IDEs based on working sets. The sequencing part of the hypothesis would suggest such researchers should investigate temporal working sets: one view/workset should lead to another.
researchers should investigate temporal working sets: one view/workset should lead to another
That's an interesting observation. Though I'd ask instead how we can support developers in achieving spatial and temporal working sets without thinking about it. I expect Code Bubbles would be a good place to start.
I think the problems and implications of our mind's functioning stretch way beyond a mere sequential-bias vs. parallelism-bias perspective.
Of course, to achieve a most often underspecified goal we make lists of things to perform in sequence. Be it conscious or not. Of course we're also able, at any single point in time, to keep open ten or so "ontological handles" (re: our problem solving analysis and/or strategies) all available for simultaneous/parallel use, or even sometimes overuse (cf. David's "don't forget to kill your darlings" metaphor).
But there is likely so much more to it than that alone. Whether you are well educated in academia, self-taught, or some mix in between, there's always this overwhelming tension between the amount of effort we eventually figure we had to spend in trial-and-error processes (tedious, often sequential) and those much rarer aha! moments where several things finally and simultaneously fit together and teach us how much accidental complexity has distracted us from the solution.
We apply divide and conquer all the time, on many different levels of abstraction. It's what I like to call the unique zoom-in/zoom-out computational feature of our brain. It is of course largely non-deterministic, but it is stunningly powerful:
it allows us, from one second to the next, to switch our mental mode so that we go from considering something as, say, an implementation issue to seeing it as more likely a design flaw, after reminding ourselves of the premises for the objectives we had set out to achieve.
I would put what we traditionally oppose as of a sequential nature vs. a parallel one on two axes. Let's pretend the dumb Turing machine lives in Flatland and can't see the world other than through seq-vs-par lenses.
My best guess is that our brain, permanently sliding up and down a third axis, motivated by invention or standardization or whatnot, is able to look down sometimes too closely, sometimes from too far away, and also sometimes finally decide it has the big picture clear enough for a given milestone. I think it's because our natural language is able to manipulate languages as first-class values, which we agree to list in dictionaries as references (of the least possible ambiguity).
I suspect that the "human working memory is 7±2" paper (and its criticisms) would touch on a similar sort of investigation. What people keep in working memory when coding is likely related to what information they need/use to create an implementation.
Interesting paper and nicely made survey.
I enjoyed taking the latter.
This is a great paper. Just one small point: I think you mean duality between ADTs and objects, not closures and objects? Although AFAIK objects like in Java and C# don't cleanly correspond to either side.
My hypothesis for Question 1 is that multi-shot continuations don't interact well with state. For example if you implement the classic `amb` operator and then do:
filter(xs, x -> amb(true,false))
This computes the powerset of xs and works perfectly if filter is implemented in a purely functional way. But if filter is implemented like this:
def filter(xs, p):
ys = []
for x in xs:
if p(x): ys.append(x)
return ys
Then it fails horribly. This is because calling a continuation doesn't revert imperative updates. With continuations, the implementation details of internal state leak out and become global state. Continuations are just not useful enough to justify the trade-off between being able to use internal state and being able to use multi-shot continuations. Coroutines are useful since they remove the need to turn your code inside out when you want to make it asynchronous (which is why some form of coroutines was added to C#). I do not think there is any particular social reason why we don't see continuations used a lot.
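To make the powerset claim concrete without real continuations, here is a sketch that enumerates the choices `amb(true,false)` would explore: each element is independently kept or dropped, which is exactly what a purely functional filter exposes to a multi-shot continuation.

```python
# Hypothetical sketch: enumerating the choices amb(true, false) would explore.
# Each element of xs is independently kept or dropped, so a purely
# functional filter under amb yields the powerset of xs.

def powerset_via_choices(xs):
    if not xs:
        return [[]]
    first, rest = xs[0], powerset_via_choices(xs[1:])
    # amb(True, False) applied to 'first': one branch keeps it, one drops it.
    return [[first] + r for r in rest] + rest

subsets = powerset_via_choices([1, 2, 3])
```

The imperative `filter` above breaks precisely because re-entering the continuation re-runs `ys.append(x)` against a `ys` that was never rolled back to its state at the choice point.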
In fact, backtracking with state reversal has been studied (e.g. it's at the core of the notion of "worlds" in Oz, if I remember correctly; or at least there are a lot of examples of it in Peter Van Roy's excellent CTM book).
If you make state explicit using monads for example, you would have to decide how to compose those two effects. You can have continuation as the "outer" effect and state as the "inner" one, or the other way around. In Haskell with monad transformers this is quite clear; but other notions of effects speak about this (difficult) composition as well, see for example Andrej Bauer and Matija Pretnar's introduction to eff.
In your example, you appear to live in a language where both state and backtracking are ambient effects, so indeed you are probably bound to the language designer's choice of how to interleave them. But maybe that's not inevitable: you could think of effect annotations to say "here I'm speaking of the backtracked state".
Generally state is used to make code run faster (for example filling in an array with filter instead of functionally building up a list). In that case the state layer has to be the lowest layer, because it has to be the state natively provided by the machine. In particular it has to live below any (delimited) continuation effects. Backtracking state is expensive. It would be nice if we could keep the efficiency of state in the places where it is not used along with backtracking, while also having backtracking be reasonably efficient (in particular, copying the whole heap is not good enough). Perhaps Oz worlds, and maybe work on software transactional memory in imperative languages, help here.
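One standard implementation trick in this direction (used by Prolog engines, and plausibly relevant to Oz-style worlds) is a trail: record each overwrite so that backtracking pays only for the writes made since the choice point, never a whole-heap copy. A minimal sketch with hypothetical names:

```python
# Hypothetical sketch: trail-based backtrackable state.
# Mutation stays cheap where no choice point is active; undoing a branch
# costs only the writes made since its mark, not a heap copy.

class BacktrackableStore:
    _MISSING = object()  # sentinel for "key did not exist before"

    def __init__(self):
        self.data = {}
        self.trail = []  # list of (key, previous value) records

    def set(self, key, value):
        self.trail.append((key, self.data.get(key, self._MISSING)))
        self.data[key] = value

    def mark(self):
        return len(self.trail)  # a choice point

    def undo(self, mark):
        # Roll back every write made since the mark, newest first.
        while len(self.trail) > mark:
            key, old = self.trail.pop()
            if old is self._MISSING:
                del self.data[key]
            else:
                self.data[key] = old

store = BacktrackableStore()
store.set("x", 1)
cp = store.mark()
store.set("x", 2)
store.set("y", 3)
store.undo(cp)
```

Real engines additionally avoid trailing writes to cells created after the newest choice point, since those can never need restoring.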
Eff is very interesting work on the theoretical side. Indeed if you build state on top of their handlers, then any internal state is automatically backtracked over. This is not the case for their resource based implementation of state; that has the same problems as conventional state. For eff to become practical I think two things are required: (1) Efficient implementation (probably a lot of the work on implementing delimited continuations applies fairly directly) (2) General effects have some of the problems of fexprs, namely that fewer terms become contextually equivalent. So we need idioms, library design principles, and rules for reasoning about effects, perhaps formally in the form of a type system.
This is a great paper. Just one small point: I think you mean duality between ADTs and objects, not closures and objects? Although AFAIK objects like in Java and C# don't cleanly correspond to either side.
Not so complicated, I just meant in the sense of Guy Steele etc. An object can encode functions by having only one method, apply. A pure function can encode stateful objects by having any state modification return the updated record (a la one of the early chapters of TAPL). If mutation exists, closures would make this more faithful.
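Both encodings can be sketched in a few lines (hypothetical Python, matching the thread's earlier example style):

```python
# Sketch of the two encodings above (names are illustrative).

# 1. An object with a single 'apply' method encodes a function.
class Adder:
    def __init__(self, n):
        self.n = n

    def apply(self, x):
        return x + self.n

# 2. A pure function encodes a stateful object: each "method call"
#    returns the result together with the updated record (as in TAPL).
def counter_incr(state):
    return state["count"], {"count": state["count"] + 1}
```

With mutation in the language, closures capturing a mutable variable make the second encoding more faithful, since the caller no longer has to thread the record by hand.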
There seem to be a lot of theories about the adoption of continuations :)
I'm not sure if there really is a strict duality here. The difference between what you call OOP and what you call closures seems to be support for records vs no support for records. In Cook's treatment at least, objects = records + closures. Of course you can encode a value as a record containing just that value. But perhaps I was the only one confused by the word "duality". Duality to me suggests a two-way mapping such that if you go from one side to the other side and then back you end up with the same thing.
My reason for the non-adoption of continuations is pure speculation of course. Whether you can talk about reasons in a non-repeatable experiment also raises philosophical questions. Even if you ignore that issue, there are multiple levels of reasons. For example, the reason why Java programmers don't use continuations is that Java does not have continuations. But why doesn't Java have continuations? Etc. My far-fetched theory says: although Java's original designers were familiar with Scheme/Lisp and were inspired by its semantics (and copied a lot of its semantics into Java: pass by reference, garbage collection, etc.), they didn't copy continuations because they didn't see any important applications in the Scheme programs they were familiar with, and because continuations were difficult to use in conjunction with mutation (which is especially problematic in a language like Java because mutation is used so much). So not very useful + has problems with mutation => no continuations in Java ;)
This is the point behind the title of this website. But it is further exemplified by the prototype-based approach to programming, where you essentially create closures from other closures.
As for why Java doesn't have X, there is no need for a far fetched theory. Gosling, as well as former Sun researchers who tried to shove X, Y and Z into Java, will all tell you that Gosling had a golden design rule: If more than 2 people didn't ask for a feature, he would not do it, no matter how much sense it made.
Gosling sure could have coupled that golden rule with an eye towards sample size and bias. | http://lambda-the-ultimate.org/node/4537 | CC-MAIN-2017-17 | refinedweb | 14,749 | 62.07 |
Martin Odersky made a huge impact on the Java world with his design of the Pizza language. Although Pizza itself never became popular, it demonstrated that object-oriented and functional language features, when combined with skill and taste, form a natural and powerful combination. Pizza's design became the basis for generics in Java, and Martin's GJ (Generic Java) compiler was Sun Microsystems' standard compiler starting with version 1.3 (though with generics disabled). I had the pleasure of maintaining this compiler for a number of years, so I can report from first-hand experience that Martin's skill in language design extends to language implementation.
Since that time, we at Sun tried to simplify program development by extending the language with piecemeal solutions to particular problems, like the for-each loop, enums, and autoboxing. Meanwhile, Martin continued his work on more powerful orthogonal language primitives that allow programmers to provide solutions in libraries.
Lately, there has been a backlash against statically typed languages. Experience with Java has shown that programming in a static language results in an abundance of boilerplate. The common wisdom is that one must abandon static typing to eliminate the boilerplate, and there is a rising interest in dynamic languages such as Python, Ruby, and Groovy. This common wisdom is debunked by the existence of Martin's latest brainchild, Scala.
Scala is a tastefully typed language: it is statically typed, but explicit types appear in just the right places. Scala takes powerful features from object-oriented and functional languages, and combines them with a few novel ideas in a beautifully coherent whole. The syntax is so lightweight, and its primitives so expressive, that APIs can be used with virtually no syntactic overhead at all. Examples can be found in standard libraries such as parser combinators and actors. In this sense Scala supports embedded domain-specific languages.
Will Scala be the next great language? Only time will tell. Martin Odersky's team certainly has the taste and skill for the job. One thing is sure: Scala sets a new standard against which future languages will be measured.
Neal Gafter
San Jose, California
September 3, 2008
Many people have contributed to this book and to the material it covers. We are grateful to all of them.
Scala itself has been a collective effort of many people. The design and the implementation of version 1.0 was helped by Philippe Altherr, Vincent Cremet, Gilles Dubochet, Burak Emir, Stephane Micheloud, Nikolay Mihaylov, Michel Schinz, Erik Stenman, and Matthias Zenger. Iulian Dragos, Gilles Dubochet, Philipp Haller, Sean McDirmid, Ingo Maier, and Adriaan Moors joined in the effort to develop the second and current version of the language and tools.
Gilad Bracha, Craig Chambers, Erik Ernst, Matthias Felleisen, Shriram Krishnamurti, Gary Leavens, Sebastian Maneth, Erik Meijer, David Pollak, Jon Pretty, Klaus Ostermann, Didier Remy, Vijay Saraswat, Don Syme, Mads Torgersen, Philip Wadler, Jamie Webb, and John Williams have shaped the design of the language by graciously sharing their ideas with us in lively and inspiring discussions, as well as through comments on previous versions of this document. The contributors to the Scala mailing list have also given very useful feedback that helped us improve the language and its tools.
George Berger has worked tremendously to make the build process and the web presence for the book work smoothly. As a result this project has been delightfully free of technical snafus.
Many people gave us valuable feedback on early versions of the text. Thanks goes to Eric Armstrong, George Berger, Alex Blewitt, Gilad Bracha, William Cook, Bruce Eckel, Stephane Micheloud, Todd Millstein, David Pollak, Frank Sommers, Philip Wadler, and Matthias Zenger. Thanks also to the Silicon Valley Patterns group for their very helpful review: Dave Astels, Tracy Bialik, John Brewer, Andrew Chase, Bradford Cross, Raoul Duke, John P. Eurich, Steven Ganz, Phil Goodwin, Ralph Jocham, Yan-Fa Li, Tao Ma, Jeffery Miller, Suresh Pai, Russ Rufer, Dave W. Smith, Scott Turnquest, Walter Vannini, Darlene Wallach, and Jonathan Andrew Wolter. And we'd like to thank Dewayne Johnson and Kim Leedy for their help with the cover art, and Frank Sommers for his work on the index.
We'd also like to extend a special thanks to all of our readers who contributed comments. Your comments were very helpful to us in shaping this into an even better book. We couldn't print the names of everyone who contributed comments, but here are the names of readers who submitted at least five comments during the eBook PrePrint® stage by clicking on the Suggest link, sorted first by the highest total number of comments submitted, then alphabetically. Thanks goes to: David Biesack, Donn Stephan, Mats Henricson, Rob Dickens, Blair Zajac, Tony Sloane, Nigel Harrison, Javier Diaz Soto, William Heelan, Justin Forder, Gregor Purdy, Colin Perkins, Bjarte S. Karlsen, Ervin Varga, Eric Willigers, Mark Hayes, Martin Elwin, Calum MacLean, Jonathan Wolter, Les Pruszynski, Seth Tisue, Andrei Formiga, Dmitry Grigoriev, George Berger, Howard Lovatt, John P. Eurich, Marius Scurtescu, Jeff Ervin, Jamie Webb, Kurt Zoglmann, Dean Wampler, Nikolaj Lindberg, Peter McLain, Arkadiusz Stryjski, Shanky Surana, Craig Bordelon, Alexandre Patry, Filip Moens, Fred Janon, Jeff Heon, Boris Lorbeer, Jim Menard, Tim Azzopardi, Thomas Jung, Walter Chang, Jeroen Dijkmeijer, Casey Bowman, Martin Smith, Richard Dallaway, Antony Stubbs, Lars Westergren, Maarten Hazewinkel, Matt Russell, Remigiusz Michalowski, Andrew Tolopko, Curtis Stanford, Joshua Cough, Zemian Deng, Christopher Rodrigues Macias, Juan Miguel Garcia Lopez, Michel Schinz, Peter Moore, Randolph Kahle, Vladimir Kelman, Daniel Gronau, Dirk Detering, Hiroaki Nakamura, Ole Hougaard, Bhaskar Maddala, David Bernard, Derek Mahar, George Kollias, Kristian Nordal, Normen Mueller, Rafael Ferreira, Binil Thomas, John Nilsson, Jorge Ortiz, Marcus Schulte, Vadim Gerassimov, Cameron Taggart, Jon-Anders Teigen, Silvestre Zabala, Will McQueen, and Sam Owen.
Lastly, Bill would also like to thank Gary Cornell, Greg Doench, Andy Hunt, Mike Leonard, Tyler Ortman, Bill Pollock, Dave Thomas, and Adam Wright for providing insight and advice on book publishing.
This book is a tutorial for the Scala programming language, written by people directly involved in the development of Scala. Our goal is that by reading this book, you can learn everything you need to be a productive Scala programmer. All examples in this book compile with Scala version 2.7.2.
The main target audience for this book is programmers who want to learn to program in Scala. If you want to do your next software project in Scala, then this is the book for you. In addition, the book should be interesting to programmers wishing to expand their horizons by learning new concepts. If you're a Java programmer, for example, reading this book will expose you to many concepts from functional programming as well as advanced object-oriented ideas. We believe learning about Scala, and the ideas behind it, can help you become a better programmer in general.
General programming knowledge is assumed. While Scala is a fine first programming language, this is not the book to use to learn programming.
On the other hand, no specific knowledge of programming languages is required. Even though most people use Scala on the Java platform, this book does not presume you know anything about Java. However, we expect many readers to be familiar with Java, and so we sometimes compare Scala to Java to help such readers understand the differences.
Because the main purpose of this book is to serve as a tutorial, the recommended way to read this book is in chapter order, from front to back. We have tried hard to introduce one topic at a time, and explain new topics only in terms of topics we've already introduced. Thus, if you skip to the back to get an early peek at something, you may find it explained in terms of concepts you don't quite understand. To the extent you read the chapters in order, we think you'll find it quite straightforward to gain competency in Scala, one step at a time.
If you see a term you do not know, be sure to check the glossary and the index. Many readers will skim parts of the book, and that is just fine. The glossary and index can help you backtrack whenever you skim over something too quickly.
After you have read the book once, it should also serve as a language reference. There is a formal specification of the Scala language, but the language specification tries for precision at the expense of readability. Although this book doesn't cover every detail of Scala, it is quite comprehensive and should serve as an approachable language reference as you become more adept at programming in Scala.
You will learn a lot about Scala simply by reading this book from cover to cover. You can learn Scala faster and more thoroughly, though, if you do a few extra things.
First of all, you can take advantage of the many program examples included in the book. Typing them in yourself is a way to force your mind through each line of code. Trying variations is a way to make them more fun and to make sure you really understand how they work.
Second, keep in touch with the numerous online forums. That way, you and other Scala enthusiasts can help each other. There are numerous mailing lists, discussion forums, a chat room, a wiki, and multiple Scala-specific article feeds. Take some time to find ones that fit your information needs. You will spend a lot less time stuck on little problems, so you can spend your time on deeper, more important questions.
Finally, once you have read enough, take on a programming project of your own. Work on a small program from scratch, or develop an add-in to a larger program. You can only go so far by reading.
This book is available in both paper and PDF eBook form. The eBook is not simply an electronic copy of the paper version of the book. While the content is the same as in the paper version, the eBook has been carefully designed and optimized for reading on a computer screen.
The first thing to notice is that most references within the eBook are hyperlinked. If you select a reference to a chapter, figure, or glossary entry, your PDF viewer should take you immediately to the selected item so that you do not have to flip around to find it.
Additionally, at the bottom of each page in the eBook are a number of navigation links. The "Cover," "Overview," and "Contents" links take you to the front matter of the book. The "Glossary" and "Index" links take you to reference parts of the book. Finally, the "Discuss" link takes you to an online forum where you discuss questions with other readers, the authors, and the larger Scala community. If you find a typo, or something you think could be explained better, please click on the "Suggest" link, which will take you to an online web application where you can give the authors feedback.
Although the same pages appear in the eBook as the printed book, blank pages are removed and the remaining pages renumbered. The pages are numbered differently so that it is easier for you to determine PDF page numbers when printing only a portion of the eBook. The pages in the eBook are, therefore, numbered exactly as your PDF viewer will number them.
The first time a term is used, it is italicized. Small code examples, such as x + 1, are written inline with a mono-spaced font. Larger code examples are put into mono-spaced quotation blocks like this:
def hello() { println("Hello, world!") }When interactive shells are shown, responses from the shell are shown in a lighter font.
scala> 3 + 4 res0: Int = 7
At, the main website for Scala, you'll find the latest Scala release and links to documentation and community resources. For a more condensed page of links to Scala resources, visit the website for this book:. To interact with other readers of this book, check out the Programming in Scala Forum, at:.
You can download a ZIP file containing the source code of this book, which is released under the Apache 2.0 open source license, from the book's website:.
Although this book has been heavily reviewed and checked, errors will inevitably slip through. To see a (hopefully short) list of errata for this book, visit. If you find an error, please report it at the above URL, so that we can be sure to fix it in a future printing or edition of this book. | http://www.artima.com/pins1ed/ | CC-MAIN-2017-43 | refinedweb | 2,095 | 59.84 |
After.
Disassembling the SIMD binary we noticed it does the following:
– Checks whether argv[1] is 31 characters
– If it is the argv is ran through a function called ‘frob’ which calculates a result based on the input using either an SSE or AVX function depending on what the CPU is capable of.
– After the function is finished the output is compared to a fixed value to see if it was correct.
We had a quick look at the SSE2 code, but it seemded quite complex. So we decided first to see if we might be able to ‘black-box’ reverse what the code did.
Looking at the disassembly for the main function we see the following, after the check whether argv[1] is 31 bytes:
0x000000000040070d <+61>: callq 0x4007d0 <frob> 0x0000000000400712 <+66>: mov 0x2021e7(%rip),%rdi # 0x602900 <expected> 0x0000000000400719 <+73>: mov $0x20,%ecx 0x000000000040071e <+78>: mov %rsp,%rsi 0x0000000000400721 <+81>: repz cmpsb %es:(%rdi),%ds:(%rsi) 0x0000000000400723 <+83>: je 0x400748 <main+120>
So it compares the values at [$rdi] = fixed to those at [$rsi].
We can put a breakpoint in gdb at *0x400721 to see what the calculated output is for some fixed inputs:
The desired output: gdb$ x/32bx $rdi 0x402458 <__dso_handle+48>: 0x02 0xee 0x5b 0xc5 0x9f 0x0a 0x49 0x34 0x402460 <__dso_handle+56>: 0x37 0xb0 0x1a 0x8c 0x37 0x30 0x18 0x10 0x402468 <__dso_handle+64>: 0xbc 0x9f 0xe7 0x5f 0x31 0xb4 0xa0 0xeb 0x402470 <__dso_handle+72>: 0x93 0x04 0x71 0x95 0xc5 0x8b 0xd1 0x3b Input AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: gdb$ x/32bx $rsi 0x7fffffffe490: 0x77 Input BAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA: gdb$ x/32bx $rsi 0x7fffffffe490: 0x74
We noticed that the output only chances in the first byte if change the first byte of the input. Without analyzing or thinking any further we implemented a script which manipulates the input byte for byte to find the key.
We implement a simple GDB script to log the output to a file:
b *0x400721 commands silent printf "Got\n" x/32bx $rsi c end
And run a letter for letter guesser
import string,os,re want = '02ee5bc59f0a493437b01a8c37301810bc9fe75f31b4a0eb93047195c58bd13b'.decode('hex') def getbytes(s): allb = '' s = s.split('\n') for l in s: if '0x7ffff' in l: bytes = l.split(':',2)[1] allb += bytes.replace('0x','').replace(' ','').replace('\t','') return allb.decode('hex') found = '' while True: good = False for g in string.printable: if g == '"': continue guess = found + g guess += "A"*(31-len(guess)) out = os.popen('gdb ./simd -x script <<< "r %s"' % guess).read() out = getbytes(out) if len(out) < 32: continue if out[len(found)] == want[len(found)]: print "Found %s" % guess found += g break
After keeping this running for a few minutes we get the key: 4rnt_v3ct0r_1nstruct10ns_c00l?!
Ofcourse afterwards, when writing the write-up you realize it could have been done much simpler… For example if we also look at the output for all B’s we see it is very similar to the output for all A’s. Almost looks like a simple XOR…
Let’s try that
def string_xor(a,key): return ''.join(chr(ord(a[i]) ^ ord(key[i%len(key)])) for i in range(len(a))) a = '77dd74f0813d3b1602c12992471f2a258fabc56a41c58fd98d2600e4e8f5b13b'.decode('hex') w = '02ee5bc59f0a493437b01a8c37301810bc9fe75f31b4a0eb93047195c58bd13b'.decode('hex') key = string_xor(a,'A'*31 + '\x00') print string_xor(w,key)
Yields the same result in much less code and much quicker… good reminder that thinking is faster than guessing 😀 | https://eindbazen.net/2012/05/plaid-ctf-2012-simd/ | CC-MAIN-2018-51 | refinedweb | 551 | 65.35 |
Barcode Software
Securing Network Protocols in C#
Render pdf417 in C# Securing Network Protocols
Changing Printer Availability and Priorities
generate, create barcodes resize none for office excel projects
BusinessRefinery.com/ barcodes
free barcode generator in asp.net c#
using details asp.net website to deploy barcodes on asp.net web,windows application
BusinessRefinery.com/ bar code
#FIXED "LOCATOR"
generate, create barcodes winform none on word document projects
BusinessRefinery.com/barcode
barcode rendering framework c# example
generate, create barcode high none on c sharp projects
BusinessRefinery.com/barcode
url = new URL(". xml");
crystal reports 2d barcode
use .net vs 2010 barcode generating to generate barcode with .net rotation
BusinessRefinery.com/ barcodes
using find asp.net web pages to add barcodes for asp.net web,windows application
BusinessRefinery.com/barcode
import java.net.*;
to produce quick response code and qrcode data, size, image with .net c# barcode sdk custom
BusinessRefinery.com/QR Code JIS X 0510
to print qr code and qr data, size, image with .net barcode sdk automatic
BusinessRefinery.com/qr codes
<customer cust_id="55323"> ... </customer>
free qr code font for crystal reports
generate, create qr code 2d barcode change none with .net projects
BusinessRefinery.com/QR-Code
zxing.net qr code reader
Using Barcode recognizer for item VS .NET Control to read, scan read, scan image in VS .NET applications.
BusinessRefinery.com/qrcode
C C C C C C C C C C C C C C C C C C C C C C C C C C C C C C C C C C C
free qr code font for crystal reports
using pdf .net framework crystal report to assign qr code jis x 0510 on asp.net web,windows application
BusinessRefinery.com/QR Code
qr code reader java app
using barcode implement for jsp control to generate, create qr code image in jsp applications. code
BusinessRefinery.com/Denso QR Bar Code
Resource
pdf417 javascript library
generate, create pdf417 2d barcode html none in java projects
BusinessRefinery.com/PDF417
using barcode integrating for word control to generate, create pdf 417 image in word applications. apply
BusinessRefinery.com/PDF417
VIRTUAL PRESENTATIONS THAT WORK
rdlc pdf 417
use rdlc reports pdf417 writer to insert pdf-417 2d barcode for .net lowercase
BusinessRefinery.com/PDF417
crystal reports code 39 barcode
use vs .net crystal report code 39 full ascii encoder to develop barcode 39 for .net action
BusinessRefinery.com/USS Code 39
The Future Is Bright
use web form barcode 39 maker to make code-39 with .net environment
BusinessRefinery.com/Code39
crystal reports pdf 417
generate, create pdf-417 2d barcode complete none in .net projects
BusinessRefinery.com/barcode pdf417
Throughout the content audit, we recommend you place your repositories into one of three categories: strategic, tactical, or replaceable. But what qualifies a system as being replaceable How should you decide which
.net code 39 reader
Using Barcode decoder for telephone .net framework Control to read, scan read, scan image in .net framework applications.
BusinessRefinery.com/Code 39 Extended
generate code 128 barcode in c#
use .net vs 2010 ansi/aim code 128 maker to deploy uss code 128 in visual c# downloading
BusinessRefinery.com/ANSI/AIM Code 128
if(elem.numElements() > 1){
System Troubleshooting
Does DQL support right joins No. DQL currently only supports left joins and inner joins. But this shouldn t slow you down, because left and right joins are interchangeable, depending on which side of the join you re standing on. To illustrate, consider the following two queries, which are equivalent:
WAP Wireless Application Protocol. A specification (not a standard) that enables the wireless transmission of data between a wireless phone or PDA and a content server. The specification is monitored and developed by the WAP Forum. WAP Gap A situation in WAP 1.x specifications in which encrypted data sent from a mobile device is decrypted and reencrypted at the WAP gateway, leading to data being exposed in cleartext for a brief millisecond. War driving The process of searching for open wireless access points by driving around. WBFH Wideband frequency hopping. Approved by the FCC in August 2000, WBFH permits channel bandwidths as wide as 3 and 5 MHz instead of the prior 1 MHz in the 2.4 GHz band. This increased bandwidth allows data rates as high as 10 Mbps per channel as compared to the original 2 Mbps maximum per channel (roughly 2 Mbps per 1 MHz of channel bandwidth). HomeRF 2.0 products and other FHSS products benefit from this. WECA Wireless Ethernet Compatibility Alliance. The organization that certifies the interoperability of IEEE 802.11 products and promotes WiFi and WiFi5 as global wireless standards. WEP Wired Equivalent Privacy. A security protocol defined in 802.11 that is designed to provide the same level of security as is found in a wired network. However, several security issues have been found with this protocol. WIM Wireless Identity Module. The secure storage location for information used in a WAP subscriber s transactions. WIM can either be a separate hardware module or a software component on an existing SIM card. WLANA Wireless LAN Association. A trade organization designed to foster market growth through increased awareness and understanding of networking technologies. WML Wireless Markup Language. Web programming language used to format web pages for viewing from wireless devices. WPAN Wireless personal area network. A logical grouping of wireless devices in a small area. WPKI Wireless public key infrastructure. 
The architecture that adopts existing PKI methods for use in wireless environments to enable secure mobile commerce. See PKI.
9: Building Your Own Weather Tracker Application
Pay attention to the arguments passed to the url() view helper in the previous listing. The second argument, as you already know, specifies the route name to use for URL generation. If this argument is null, the view helper will use the route that generated the current URL as the basis for the new URL. Since this can produce odd errors when working with routes that reset the URL request, it's a good idea to explicitly tell the framework to use its standard, or default, route, corresponding to a URL of the form /module/controller/action, in these cases.
You can also set special permissions for the printer standard permissions if necessary. Click the Advanced button on the Security tab, select an account, and click Edit to access its special permissions.
ANSI-41 handoff-forward processes include the serving MSC handoffforward process and the target MSC handoff-forward process, illustrated in Figure 9.6.
Figure 11-7 Continued
A catalog file, for example Driver.cat, might not be required.
24 U Step 2. Determine Your Test Readiness
Mobility and Intelligent Buildings
98
17:
Figure 10-2
More PDF417 on C#
vb.net barcode reader tutorial: Protecting Windows XP from Viruses in .net C# Embed pdf417 2d barcode in .net C# Protecting Windows XP from Viruses
vb.net barcode reader tutorial: 5: Advanced Networking in visual C# Attach barcode pdf417 in visual C# 5: Advanced Networking
vb.net qr barcode: 5: Advanced Networking in c sharp Access PDF-417 2d barcode in c sharp 5: Advanced Networking
bulk barcode generator excel: Glossary in c sharp Include PDF417 in c sharp Glossary
how to connect barcode scanner to visual basic 2010: Network . . . . . . . . 281 Connectivity in .net C# Draw PDF 417 in .net C# Network . . . . . . . . 281 Connectivity
vb.net barcode reader sdk: Internet Networking in C# Make PDF417 in C# Internet Networking
vb.net barcode reader free: viii in C#.net Access PDF417 in C#.net viii
how to connect barcode scanner to visual basic 2010: of Contents 12 in .net C# Encoding pdf417 2d barcode in .net C# of Contents 12
vb.net barcode scanner programming: of Contents Part 5 in visual C#.net Access PDF 417 in visual C#.net of Contents Part 5
vb.net barcode reader source code: Monitoring Windows XP Network Performance in C#.net Develop PDF 417 in C#.net Monitoring Windows XP Network Performance
vb.net barcode scanner source code: 1: Windows XP Networking in C# Generating PDF 417 in C# 1: Windows XP Networking
vb.net barcode reader tutorial: 1: Windows XP Networking in C#.net Render PDF-417 2d barcode in C#.net 1: Windows XP Networking
vb.net barcode reader tutorial: 1: Windows XP Networking in C# Generating PDF-417 2d barcode in C# 1: Windows XP Networking
vb.net barcode reader sdk: Windows Evolution from WINS to DNS in C# Generation PDF 417 in C# Windows Evolution from WINS to DNS
vb.net qr barcode: Internet Protocol Addressing in c sharp Create PDF-417 2d barcode in c sharp Internet Protocol Addressing
vb.net barcode reader sdk: The Automatic Metric setting is typically best when multiple IP gateways are available. in C#.net Use pdf417 2d barcode in C#.net The Automatic Metric setting is typically best when multiple IP gateways are available.
how to generate barcode in excel 2010: G Nodes. The AppleTalk term node has the same meaning as the generic in c sharp Add barcode pdf417 in c sharp G Nodes. The AppleTalk term node has the same meaning as the generic
vb.net barcode reader sdk: Adding Routers and Residential Gateways in C# Development barcode pdf417 in C# Adding Routers and Residential Gateways
vb.net barcode reader sdk: 1: Windows XP Networking in .net C# Include barcode pdf417 in .net C# 1: Windows XP Networking
Articles you may be interested
barcode generator in vb.net free download: Part I in C#.net Display UPC Symbol in C#.net Part I
com.google.zxing.qrcode.qrcodewriter c#: Checklist All devices are correctly terminated. in visual C# Development QR-Code in visual C# Checklist All devices are correctly terminated.
qrcoder c# example: Pathogens in C#.net Development QR Code 2d barcode in C#.net Pathogens
barcode reader in asp.net c#: Finance Strategies for Wireless Mobility in visual C#.net Generation code 128 barcode in visual C#.net Finance Strategies for Wireless Mobility
qrcoder c# example: xxviii in .net C# Creation qr codes in .net C# xxviii
free barcode inventory software for excel: Create the Application Directory in C# Build ECC200 in C# Create the Application Directory
download barcode scanner for java mobile: HALF-LIFE, t1/2 PERCENT RADIOACTIVE ISOTOPE REMAINING in visual C#.net Build barcode data matrix in visual C#.net HALF-LIFE, t1/2 PERCENT RADIOACTIVE ISOTOPE REMAINING
generate barcode image in c#: Sodium Reabsorption in visual C#.net Development qr barcode in visual C#.net Sodium Reabsorption
ssrs 2016 barcode: Three in C# Build QRCode in C# Three
barcode label printing in vb.net: Common R sum Dilemmas in Java Build QR Code ISO/IEC18004 in Java Common R sum Dilemmas
generate barcode image in c#: r1 r2 Q D Q in .net C# Embed qr bidimensional barcode in .net C# r1 r2 Q D Q
qr code generator c# codeproject: Installing and Running the Group Policy Management Console in .net C# Generating QR-Code in .net C# Installing and Running the Group Policy Management Console
generate barcode image in c#: plasma in visual C# Add Denso QR Bar Code in visual C# plasma
javascript barcode scanner input: Close the Address Book, and then close Outlook Express. in visual C#.net Print data matrix barcodes in visual C#.net Close the Address Book, and then close Outlook Express.
qrcode dll c#: Acronyms 277 in .net C# Creator QR Code ISO/IEC18004 in .net C# Acronyms 277
barcode scanner asp.net c#: Effective Diversification in visual C# Compose code-128c in visual C# Effective Diversification
open source qr code library c#: The software installs. in .net C# Printing qrcode in .net C# The software installs.
how to create barcode in ssrs report: Features of MPLS in C#.net Make PDF417 in C#.net Features of MPLS
qr code c# tutorial: Accessing Shared Printers on Windows 2000 or Later in visual C#.net Integration qrcode in visual C#.net Accessing Shared Printers on Windows 2000 or Later
vb.net barcode scanner tutorial: Seven in .net C# Display PDF-417 2d barcode in .net C# Seven | http://www.businessrefinery.com/yc2/365/259/ | CC-MAIN-2021-49 | refinedweb | 2,019 | 51.55 |
Perl Embedding
This article describes my experience embedding Perl into an existing application. The application chosen, sc, is a public-domain, character-based spreadsheet that often comes as part of a Linux distribution. The reason for choosing sc was twofold. First, I use sc for any spreadsheet-type tasks I have. Second, I was somewhat familiar with the source, because I once added code in order to format dates the way I wanted. Besides, I always thought it would be nice if sc had some sort of macro language—everything should be programmable in some way.
The first thing I did was to get the sc 6.21 source and compile it on my machine. This ensured that everything worked from the start, before I started making modifications to the code.
The next thing was to add the necessary code to sc.c to embed the Perl interpreter. The basics of this were:
Add the following include files
#include <EXTERN.h> #include <perl.h>
Add the variable for the Perl interpreter
static PerlInterpreter *MyPerl;
Put the code shown in Listing 1 in main in the file sc.c.
Update Makefiles to use the correct parameters. This consisted of adding CC options and linker options derived from the following commands:
perl -MExtUtils::Embed -e ccopts perl -MExtUtils::Embed -e ldoptsThese commands give you the compiler and linker options that your version of Perl was compiled with.
Nothing else needs to be done; the Perl interpreter is now in the code. Obviously you can't do anything yet, but you can work out any compilation problems. Right away, I had a few problems with some #define statements and a prototype for main. EXT and IF were the two offending #defines. I fixed these by appending “sc” to the end of them wherever they occurred in the original sc code, to make them unique. If you were writing an application from scratch, it would not be a bad idea to prepend a common prefix to each #define.
Perl, on the other hand, expected main to have a third argument, env, so I added it. I am still not sure where this argument comes from, but it doesn't seem to create any problems.
Once the base interpreter compiled successfully, I needed a way to call the functions. I looked at the sc source and found that one of the keystrokes, ctrl-k, was free for my use. I used this as my “call Perl” key-command macro, with macros from 0 to 9 defined. This combination calls predefined Perl subroutines called sc_macro_[0-9], when defined. The code in Listing 2 adds this functionality.
The function call_sc_perl_macro checks first to see if the subroutine exists with perl_get_cv. If null is not returned, it calls the function which has the name sc_macro_# where # is a digit from 0 to 9.
The perl_call_va function comes from Sriram Srinivasan's book Advanced Perl Programming, published by O'Reilly. This code was used to expedite my ultimate goal of embedding Perl into sc. The code for perl_call_va can be found in the file ezembed.c.
With sc compiled, I proceeded to test the interpreter by creating dummy macros in the file sc.pl to write some data to temporary files. Everything worked fine, which told me the Perl interpreter was working inside of sc.
With a working Perl interpreter embedded into sc and the ability to call Perl “macros”, the interfaces to the C functions in sc needed to be created to do useful work. Fortunately, sc is laid out nicely enough that, for the most part, all one has to do is wrap an already existing function and interface with its internal command parser.
The first thing I thought might be useful is to move the current cell around. Without that ability, I would be able to operate only on a single cell, which is not very useful. Besides, it was one of the least complicated sections of code and provided a good start.
The code for sc_forward_row is shown in Listing 3 and found in sc_perl.c. Before I describe this code, let me give you a quick overview of how Perl treats scalars. Each scalar has many pieces of information, including a double numeric value, a string value and flags to indicate which parts are valid. For our purposes, the scalar can hold three types of values: an integer value (IV), a double value (NV) and a string value (PV). For any scalar value (SV), you can get to their respective values with the macros SvIV, SvNV and SvPV.
Now, in the Listing 3 code, XS is a Perl macro that defines the function. dXSARGS sets up some stuff for the rest of XSub, such as the variable items that contains the number of items passed to Xsub on the Perl argument stack. If the argument count does not equal 1, XS_RETURN_IV returns 1 to Perl to indicate an error. Otherwise, the top element of the Perl argument stack, ST(0), is converted to an integer value and passed to the forwrow function.
Note that all of the XSub code was generated by hand. Some of this work can be done with Perl's xsubpp or with a tool called swig, but in this case, I felt it was simpler to code it myself.
Finally, tell the Perl interpreter about this Xsub with the statement:
newXS("sc_forward_row",sc_forward_row,"sc_perl.c");
The first argument is the name of the subroutine in Perl. The next argument is the actual C routine (in this case they are the same, but they don't have to be). The last argument is the file in which the subroutine is defined, and is used for error messages. I chose to create all of the newXS functions by parsing my sc_perl.c file with a Perl script, so that I would not have to do two things every time I added a new XSub.
> | http://www.linuxjournal.com/article/2901?quicktabs_1=0 | CC-MAIN-2015-18 | refinedweb | 993 | 71.85 |
Can any help me with this code, im trying to get the program to return to a function. I have no idea how this is done. Give a lil explanation so I know for next time. Thanks here the code :
// The "Calculator" class. import java.awt.*; import hsa.Console; public class Test { static Console c; static public void main (String [] args) { c = new Console (); int choice; c.println ("choose one of the following"); c.println ("add ...1"); choice = c.readInt (); if (choice == 1) { add (); // trying to go down to add function } static void int add (int a1, int a2, int total) // add function { c.println ("enter a number "); a1 = c.readInt (); c.println ("enter another number "); a2 = c.readInt (); total = a1 + a2; c.println (total); main (); // trying to go back to main } } | https://www.daniweb.com/programming/software-development/threads/36201/declaring-and-identifying-methods | CC-MAIN-2017-34 | refinedweb | 132 | 86.5 |
#include <MIteratorType.h>
The MIteratorType class is used on iterators where more than one type of filters can be specified. It also provides functionalities to set and get the filter list or individual types of filter. This class should be used in conjunction with DAG/DG/DependencyNodes iterators for using filter list (list of MFn::Type objects) on them, thus enabling faster traversal thro' iterators.
Also, the class has functionalities for specifying the type of object the iterator will be reset to. This could be an MObject, an MPlug or an MDagPath.
Type of object the iterator deals with.
Class Constructor
Initializes the filter type and object type.
Copy Constructor
Constructs the object from an object of the same type.
Destructor.
Sets the filter type to be used in the iterator. See MFn::Type for the list of filter types that can be set.
Sets the filter list. The types of filters to be traversed in the iterator is added to an array and then passed in to this function. This also enables filter list usage on iterators, as opposed to single filter.
Sets the object type. This function should be used only when we want to reset the iterator root. Iterator root can be reset to either an MObject, an MDagPath or to an MPlug. For each of this, there is a corresponding enum value, which has to be used as an argument to this function.
During creation of the iterator, this function has no effect.
Returns the type of filter.
Gets the list of filters.
Returns the object type.
Returns whether the we are using a single filter on the iterator or a filter list. | http://download.autodesk.com/us/maya/2009help/api/class_m_iterator_type.html | CC-MAIN-2015-40 | refinedweb | 277 | 67.35 |
Malware Threat Reports Are "Apples and Oranges" 191
Ant writes "Not only do the various security companies use different names for the threats they identify; they don't even identify the same threats."
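As a rough illustration of why the monthly tables rarely line up, consider the sketch below. Every vendor name and alias in it is invented for the example (no real report uses exactly these labels): comparing raw name strings suggests the vendors barely see the same threats, while mapping the names onto canonical families, the kind of normalization the vendors themselves don't do for you, shows they largely do.

```python
# Hypothetical top-threat lists from two AV vendors for the same month.
# All names and the alias table are invented for illustration.
vendor_a = {"Win32/Conficker.B", "Trojan.FakeAV", "Win32/Virut", "W32/Zbot"}
vendor_b = {"W32.Downadup.B", "FakeAlert-XY", "W32/Virut", "Trojan-Spy.Zbot"}

def jaccard(a, b):
    """Overlap between two sets of threat names (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Comparing raw names: even "Win32/Virut" vs. "W32/Virut" fails to match.
raw_overlap = jaccard(vendor_a, vendor_b)

# A hand-made, illustrative alias table mapping vendor names to families.
aliases = {
    "Win32/Conficker.B": "conficker", "W32.Downadup.B": "conficker",
    "Trojan.FakeAV": "fakeav", "FakeAlert-XY": "fakeav",
    "Win32/Virut": "virut", "W32/Virut": "virut",
    "W32/Zbot": "zbot", "Trojan-Spy.Zbot": "zbot",
}
norm_a = {aliases.get(n, n) for n in vendor_a}
norm_b = {aliases.get(n, n) for n in vendor_b}
norm_overlap = jaccard(norm_a, norm_b)

print(raw_overlap)   # 0.0 -- no raw name matches verbatim
print(norm_overlap)  # 1.0 -- the underlying families coincide exactly
```

The point of the sketch is only that the comparison is meaningless without an alias table, and no such industry-wide table exists.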
Do any of them mention Linux or OS X? (Score:2, Insightful)
This will answer your question, symbolset - (Score:5, Insightful)
No they haven't.
That's why.
Most definitely not. Windows users have no idea about 'threat tables' or what the hell's going on, except that their antivirus program is blinking red and making noises and they have to keep clicking "yes" or "OK" to make it better..
Hence the inconsistent naming conventions and detection profiles across vendors. +5 informative.
Re: (Score:2)
"'Comparing the monthly statistics from different anti-virus companies is truly comparing apples and oranges,' said Tom Kelchner, Sunbelt Research Center manager. 'What one company detects and identifies as a specific, named piece of malcode, another may detect generically.'"
Good to know. Now I know which AV vendor I'll be choosing in the future.
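The generic-vs-specific point is easy to demonstrate. In the sketch below, two hypothetical vendors detect the same six files, but one names each variant while the other folds most of them into a single generic signature. The sample names and labeling functions are made up, not taken from any real product:

```python
from collections import Counter

# Illustrative sample feed: the same six malicious files seen by both vendors.
samples = ["zbot_v1", "zbot_v2", "zbot_v3", "fakeav_a", "fakeav_b", "virut_x"]

def vendor_a_label(s):
    """Vendor A: one distinct, specific name per variant."""
    return s.upper()

def vendor_b_label(s):
    """Vendor B: a single generic signature covers most of the feed."""
    return "Trojan.Generic" if s.startswith(("zbot", "fakeav")) else s.upper()

report_a = Counter(vendor_a_label(s) for s in samples)
report_b = Counter(vendor_b_label(s) for s in samples)

# Identical detections, very different "top threats" tables:
print(len(report_a))               # 6 distinct named threats
print(len(report_b))               # 2 entries; one generic bucket dominates
print(report_b["Trojan.Generic"])  # 5
```

Both vendors caught everything, yet their monthly charts would look nothing alike, which is exactly the apples-and-oranges problem the article describes.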
Re: (Score:2, Insightful)
Straight from the Subscription FAQ. Fail troll is fail.
Re: (Score:3, Insightful)
There can only be one way out.
SEPPUKU.
Re: (Score:2, Funny)
Re: (Score:2)
He's a script.
Or he is you.
Re: (Score:2)
Either you have been drinking too much Ethanol or I have not been taking enough Tegretol.
Re: (Score:2, Insightful)
Alternatively they might have actually read the article, an
Re: (Score:3, Insightful)
September 29, 2009 11:51 AM PDT
Malware worldwide grows 15 percent in September
A rise in malware has caused the number of infected PCs worldwide to increase 15 percent just from August to September, says a report released Tuesday
Phew, I'm glad they're so much smarter - imagine how much more clickfraud and spam the botnets would be perpetrating if they hadn't wised up.
Close to 60% of all US Windows computers are hosting malware already, and that's not likely to change any time soon. The anti-malware indust
Re: (Score:3, Interesting)
And how much of that is caused by the bad practices of places like Worst Buy? As a PC repairman I get a lot of Best Buy and Staples machines across my desk, and the default settings these bunches use are just terrible. They ALL have Automatic Updates for Windows turned off, most haven't had so much as a single patch since they came from the factory, the only "protection" they have is a shitty 30 day crapware AV install, and some even have the firewall DISABLED by default! WTF?
I have to wonder with so many se
Re: (Score:3, Insightful)
This is why education is so important and the idea that a computer is simple is bad. People buy devices that are as powerful as supercomputers were 15 years ago and expect them to be as simple as a toaster. So they end up giving vast amounts of computing power and network bandwidth to criminals.
As for Best Buy -- just an example of how easy are a fool and his money parted. I recall reading an article about how many people just buy a new cheap PC after theirs is infected. Of course, current security practice
Re: (Score:2)
No, this is why the current monopoly general-purpose OS is such a bad idea.
If formats, protocols, APIs etc are open, then simple computers can be used for simple tasks. The hardware industry is trending in that direction with products that are cheap, functional and simple, like the Freescale Tablet.
A device like that could be made safe, reliable and uncomplicated given the right software selection. People who don't wan
Re: (Score:2)
Yeah, because Best Buy would harden Linux if they sold it in any numbers.
I don't know if you are the same guy, but I've seen the call for open OS a crapload in the comments on this article. Yet, I've seen nothing that indicates this wouldn't happen as bad (or worse) with Linux or some other currently existing OS that is "open". The only saving them now is the fact that the number one OS is such an easy target. Whether it's easy
Re: (Score:2)
It's to keep the big wheel turning and give you job security, without it, there would be no need for you, or the AV vendors. Didn't you know...
Re: (Score:2)
Might as well preinstall botnet clients at the factory.
No, that would be HP.
Re: (Score:2)
No, they don't.
Hairyfeet is a Microsoft apologist. He's always on hand to invent excuses for Microsoft's failings.
As any shopper will tell you, your computer comes from the reseller in a box from the manufacturer, and generally has a standard pre-install image ready to run. I've never seen any modification of settings, just the usual crapware installed.
Nope, this isn't a reseller problem - that's just blameshifting.
Re: (Score:2)
Oh give me a fucking break! Lord save us from paranoid Linux users!! For your info I have said on here about a bazillion times that Steve Ballmer is probably the shittiest fortune 500 company CEO ever, and have been more than happy to list their many failings (RRoD, Zune, no DX10/11 for XP, Vista) but quit trying to be paranoid and blame everything on 'teh evils M$!" okay?
And no shit they come with a default image, so do all the off lease office equipment I sell. You know what? I take a whole 2 minutes to r
Re: (Score:2)
Oh believe me pal, I can share some Worst Buy horror stories. The last shop I worked for (Now I do mostly SOHO and SMBs and the only home users are brought to me by word of mouth) was the "go to" place for those poor souls that went to Worst Buy.
Here are just a few that I can remember off the top of my head: One guy went in with a nearly $500 graphics card, came out with a $50 one, which of course when I told him and he went screaming to Worst Buy said "You can't prove you had a decent card in there". Folk
I'm just bragging (Score:2, Funny)
28 years of computing on networks, zero instances of malware. I feel special.
Re:I'm just bragging (Score:4, Insightful)
You mean "zero detected instances".
Re: (Score:2)
Can you point me to some malware that does so little, that it can remain undetected by a fairly savvy computer user?
I'm serious here - there's always a troll in these threads that makes the comment you just made. However, in my experience, I've never run into malware which was "stealth". Its entire purpose is to send mail, pop up ads, and propagate. All of that is damn easy to spot if you're reasonably well versed in how your computer normally runs.
I tend to believe a competent person when
Re: (Score:2)
Spurious network activity can be damn hard to spot. I'll admit that popup ads and so on are a bit of a give-away, but would you notice 1 kB/s of extra network traffic?
Most people who claim to have no malware don't even know what all the processes they currently have running are. They just don't have popup ads or other obvious symptoms.
Re: (Score:2)
Can you point me to malware that engages in only spurious network activity? All that I've seen are either mass mailers, which is pretty easy to spot, or ad-based, which by definition need to be visible. I've never seen malware that sent out an email an hour, only when the network was active.
(I've also never heard of one which modifies the blinkenlights on my router and modem. If I'm not using the internet, and they are flickering away, that'd be a problem.)
Re: (Score:2)
Keyloggers? Backdoors?
Both are malware, both will do nothing most of the time, and avoid detection as much as possible. Good luck finding out you have one.
Re: (Score:2)
is it even malware? What would it be doing?
The real risk does not come from pop-up ads, a changed browser or porn links on a desktop. Nor does it come from formatting harddisks or constantly rebooting. The dangerous thing would be rootkits that hide, remain unseen, log your keystrokes, log your internet traffic etc. and send them to a business rival. They could be buried deep in network traffic, for instance in DNS requests. In contrast to the usual "open some ad windows on the users screen" malware, in this case remaining unseen is crucial.
Re: (Score:2)
That's not malware. That's a targeted attack. We're talking about garden-variety, drive-by download, infected porn site malware here. We're talking about flies, you're talking about a unicorn.
Re: (Score:2)
Hidden software that logs keystrokes and sends the results off to a remote system has a lot of value. It doesn't need to only hit a targeted system. When they see results like:
mail.yahoo.com apoc@yahoo.com 123jass8
In the log file they know they have a new account to search
Re: (Score:2)
Example of competition gone wrong (Score:5, Insightful)
Everyone's always touting the benefits of competition, but here's a clear example of competition serving to confuse the market. There are a number of problems:.
2) Antivirus vendors are now trying to police what you can and can't do. Look at the numerous reports of false positives for programs that are legally grey (or black) but aren't viruses. I've personally had network tools come up as false positives and it's a pain to unquarantine and exclude them so they don't quarantine themselves again.
3) The main form of collusion between vendors seems to be fitting into Microsoft frameworks so they show up as antivirus software in the appropriate control panel and so you don't get warnings about invalid or out of date antivirus. But this in itself makes them more vulnerable to attack
4) The products are often so badly written that they cause as many problems as they solve. A bad update here or there can (and has in the past) caused irrevocable system damage that has required a reinstall or restore from backup for users. What's the point of an antivirus that does this. Worse I've seen much subtler performance problems from minor antivirus updates - in one case it brought a company I worked for's client's machines to their knees and initially they blamed us. Turns out a change in the engine meant very big files were being opened and re-scanned for every write. Needless to say it wasn't out fault.
5) Every vendor seems to have their own names for a virus. For pity sake can we have some kind of standard naming mechanism?
Isn't competition suppose to improve such things and open up the market? In this case it just hasn't happened. There has been implicit collusion but not of the right sort to improve or provide a diverse range of products. There's not one product that will protect you well.
Re: (Score:3, Interesting)
5) Every vendor seems to have their own names for a virus. For pity sake can we have some kind of standard naming mechanism?
A number or a hash?
Re: (Score:2, Insightful)
Re:Example of competition gone wrong (Score:4, Insightful)
Re: (Score:3, Interesting)
I'm guessing the reason you can't use multiple resident scanners is that just one will bring your system to a crawl.
I wrote: and not just the resident portion
I think the need for constantly running virus scanners is seriously overstated, at least for people who know not to run HorseSex.exe.
I got drive by downloaded 2 days ago. My antivirus didn't pick it up, but fortunately my firewall did (which prevented further virus downloads). I was looking for books on photography (reguarly non-sexual photography) and
Re:Example of competition gone wrong (Score:5, Insightful)
No, this is a clear example of a monopoly creating a market repairing broken Windows. That's why it seems confusing.
Consumers shouldn't be facing a choice of ineffective bandaids to patch over their computers' inability to keep malware out. They should be able to choose a computer/OS that is inherently resistant.
For computer users, this is a Red Queen's race, and Windows users have to keep paying and stay vigilant just to retain a semblance of control of their own machines. The real solution is to mandate open formats, APIs, and protocols, then let any OS vendor compete on level terms. When consumers can select an OS that suits them, including the level of security they wish to pay for, we will have competition. Only then will OS vendors have to improve their products to retain customers.
Re: (Score:2)
Re: (Score:3, Insightful)
Because, as I stated, we don't have open formats, APIs, and protocols.
That makes it difficult for computer users to move freely between OSs and prevents competition on real merits.
Re: (Score:3, Insightful)
It's not that they can't run on Linux, it's that they don't.
Re: (Score:3, Insightful)
It's a self-sustaining monopoly out there. How can you tell about some abstract choice if for a majority of people PC=Windows? And you can't really blame people here: all they see is Windows, on every shell in every computer store. Exclusive per-CPU deals led to a situation where OEM's pay the same to Microsoft no matter how many OS's they offer, so they usually offer one because it's cheaper that way.
What choice do consumers really have if they don't know about Linux? Windows vs. overpriced Apple computers
Re: (Score:2)
Irrelevant. That there's a monopoly on the OS doesn't have anything to do with the software that runs on it. We had a monopoly of petrol cars in the US for the longest time. Sure, that meant that the diesel Mercedes didn't sell here, but the competition between the petrol car makers was real. And that competition worked the way it was supposed to.
But antivirus makers not naming things t
Re: (Score:3, Insightful)
Please tell me how a virus can infect a Live CD?
Re: Live CD (Score:3, Interesting)
Re: (Score:3, Interesting)
Purely theoretical:
- User boots live-cd
- Some malware gets executed and stays in RAM (by user interaction or not)
- Malware reflashes the EEPROM holding the BIOS with some malicious code
- On next boot BIOS will store some malicious code in memory and does something very clever that makes the OS on the liveCD execute that code
It would be a very targeted attack, but not entirely impossible
Re: (Score:2)
Re: (Score:2)
If you were an OS developer, how would you prevent such an attack?
The game console makers prevent the attack just by requiring all executables to have been signed by the console maker and putting a policy in place that software from a one-man outfit won't get signed.
Re: (Score:2)
Which can still be defeated by exploitings bugs in approved software. The effect is more to restrict who can write for the platform. Even to attempt to control what the owner can do with their machine.
Re: (Score:2)
Some live CDs have extra writable area to save files, but it's stretching it to say a virus would be at all likely to make use of that.
Re: (Score:2)
You mean before or after the image is burned?
Re: (Score:2)
"Inherently"
You use that word a lot, but I don't think it means what you think it means.
Re: (Score:2)
No, there certainly is such a thing. I hate to be one to preach how great mac and Linux are, but they are 'Inherently resistant'(Combination of obscurity and the lack of the porosity leading weak points to be mainly the user, and even then defending him/her from his/herself). There is a huge difference between that and immunity though.
You are aware that the great majority of Windows malware in the last 5 or 10 years has been taking advantage of either the weak point between the keyboard and the chair or unpatched client software to install and spread?
Neither of which are exactly unknown on other platforms.
Re: (Score:3, Insightful)
The vast majority of said windows malware actually takes advantage of the user combined with the fact that user typically runs all his code as an admin.. Unix/Mac don't give you elevated privileges by default, and provide a well understood mechanism by which you can elevate your privileges which *should* make you think...
There is also worm type malware which attacks open network services, windows ships with several services on by default, even on a workstation install, which cannot easily be turned off and
Software sources (Score:2)
As an extension to the above, the windows mentality of downloading and executing binary installers from websites lends itself to malware
It's not just the Windows mentality. Mac OS X has the same mentality of downloading a disk image from a site and dragging the
.app bundle to the Applications folder. Likewise, if Linux ever gets widespread, it will likely have the mentality of adding a software publisher's repository to a machine's software sources and installing software that way.
Re: (Score:2)
The point is that [well-known companies' software repositories] are at least crptographically signed.
If a malware publisher can buy an Authenticode certificate for $200 per year, what makes you think these repos won't get signed in a way that the less-trained user is likely to trust?
And even if Linux was very popular, most people's everyday requirements would be preinstalled as part of the distro defaults or met from the distro's repos, or the signed trusted repos of large companies like Adobe.
So in other words, developers have to get their software published by either a distro (if free) or a large company (if non-free). But independent video games, for instance, can't go in the distro's repos because making the program and its data free or freely distributable, as required by the distro, would compromise the busines
Re: (Score:2)
windows ships with several services on by default
... [snip] ... Linux/Mac ships with virtually nothing listening by default
So they are the same then, right? You would have have not qualified "nothing" with "virtually" if you knew that you could get away with it (like if it was true)
.. so we have you using liberal language on one side and conservative language on the other, to say the exact same thing. Why is that?
Moving on:
... which cannot easily be turned off and are usually just hidden behind a software firewall... [snip]
... anything that is listening can be turned off and a software firewall (if you choose to enable one) provides an extra level of security on top of that
Oh look, you did it again.
Why are you so disingenuous?
The fact of the matter is that it is Windows users who are the big problem and if 2010 was the year of Linux, you can damn welkl expect 2011 to
Re: (Score:2)
I say "virtually" because i did not have any straight default installs at my disposal to verify..
Also there are too many different linux distributions to say with absolute certainty... A default install of Gentoo (having followed the standard installguide) has nothing listening on the network by default for instance...
Also the Ubuntu machine i have here, only seems to have sshd and cupsd listening on the network, and i explicitly enabled those services.
A tailored linux distro designed to perform a specific
Re: (Score:2)
Such elevation can also be applied on a per program basis. If there is an equivalent of setuid/sudo/etc in Windows it dosn't appear to be that well understood. To the point where "give th
Re: (Score:2)
My understanding is that it is automatic. That is, if the program is written right, you are logged in as user, and when something needs root, it pops up and states it's needed and asks for that permission. And for things that aren't smart enough to ask (older programs), you can right-click and run-as admin. I'm not set up right now to test this, but hopefully someone out there can check this on Vista or
Re: (Score:2, Informative)
Re: (Score:2)
I agree, security is a process not a product..
Unfortunately, our voices are nowhere near as loud as those of the vendors telling people that security is a product.
Re: (Score:3, Informative)
6) Vendors appear to put more effort into making their user interface "pop" rather trying to minimize resource usage and system impact. For example, Microsoft antivirus creates a system restore point every time the signatures are updated (once a day). Every time a system restore point is created my system become barely unusable for a couple of minutes. You can't control when it updates the signatures (currently for me it's around 23:20). Which brings me to:
7) Vendors want to use their own resistant schedule
How about latin names (Score:5, Interesting)
5) Every vendor seems to have their own names for a virus. For pity sake can we have some kind of standard naming mechanism?
How about a (latin/greek) Biological-like [wikipedia.org] naming system. After all, it works for biology and many (computer)viruses are derived from earlier versions of those viruses, so we could have actual hierarchies.
So you could have a name such as: "userus.dumbus.clicktus.pornolinkus.diabolicus"
Of course after the latin name we could come up with a "common" name - based on the name of the unfortunate tech who had the displeasure to remove it first.
Re: (Score:2)
The trouble is, everything would be under userus.dumbus.clicktus.pornolinkus so it would just be a common namespace and wasted characters.
Re: (Score:2)
"Why can't I run 3 different products side by side and decide which one's resident scanner I want switched on? I'm sure there are technical issue but I'm also sure they're not insurmountable."
Tried running different products using Thinapp thin installs? That would be one way to experiment.
Re: (Score:2)
They don't even have to be questionable. VNC manages to generate plenty of false positives, IME.
4) The products are often so badly written that they cause as many problems as they solve. A bad update here or there can (and has in the past) caused irrevocable system damage that has required a reinstall or restore from bac
Running multiple products (Score:3, Funny)
This is why I have to run 6 different scanners: because there isn't one that detects all the threats. I currently run 2 antivirus programs along with SpyBot, SuperAntiSpyware, Windows Defender, and Malwarebyte's Anti-Malware.
Re: (Score:2)
Re: (Score:3, Insightful)
... and then you complain Windows runs like a snail.
Re: (Score:3, Insightful)
``This is why I have to run 6 different scanners: because there isn't one that detects all the threats. I currently run 2 antivirus programs along with SpyBot, SuperAntiSpyware, Windows Defender, and Malwarebyte's Anti-Malware.''
And yet, people insist that Windows is user friendly. More so than other operating systems, even.
Re: (Score:2)
If you don't engage in risky behavior you don't have to worry so much. For example, paying for all your software should be enough to get you down to one virus scanner and two anti-malware programs
:)
Re: (Score:3, Informative)
Really?
Researchers Hijack a Drive-By Botnet.
They found more than 6,500 websites hosting malicious code that redirected nearly 340,000 visitors to malicious sites. Drive-by downloading involves hacking into a legitimate site to covertly install malicious software on visitors' machines
"Once upon a time, you thought that if you did not browse porn, you would be safe," says Giovanni Vigna, a UCSB professor of computer science and one of the paper's authors. "But staying away from the seedy places on the Internet is no longer an assurance of staying safe."
Re: (Score:2)
Warez doesn't typically come with malware, if anything pirate copies of various things often have malicious (defined as doing something detrimental to the user or his machine) code such as drm schemes removed.
I have done many incident response jobs, where one or more machines inside a company becomes infected with something that the av they subscribe to fails to detect, and it falls upon me to investigate the infection. Very few of these machines have any warez on them, or evidence of trying to view things
Drive-by downloads of fake antivirus software (Score:2)
If you don't engage in risky behavior you don't have to worry so much. For example, paying for all your software should be enough
Whom should I pay for Firefox and GNU Image Manipulation Program? But seriously, my aunt got drive-by-downloaded twice, both times by fake antivirus software, and she spends most of her time in Facebook. I didn't know Facebook had mandatory fees. The first time it happened ("System Security"), I was able to boot into safe mode and run MalwareBytes Anti-Malware, but this time ("Advanced Virus Remover", apparently a newer version of the same threat), safe mode just causes the computer to restart during boot.
Re: (Score:2)
a suggestion for you
1 grab a USB >PATA|SATA cable and a good screwdriver
2 pop the case on her computer and pull out the hard drive
3 use the cable to mount her hard drive on your computer
4 scan her drive on "NSA Paranoid" level (you may of course want to do a scandisk on it first)
5 backup her hard drive after it has been cleaned
6 replace her harddrive boot it and pray
Cleaning Windows with Ubuntu laptop? (Score:2)
grab a USB >PATA|SATA cable
For personal reasons that I would prefer not to disclose on Slashdot, she wants to pinch every penny from this fix; otherwise, she would have already taken the computer into a local repair shop. At this minute, without access to ask her, I'll assume that she'll tell me that she can't afford to buy a USB enclosure for this fix.
4 scan her drive on "NSA Paranoid" level (you may of course want to do a scandisk on it first)
My primary computer is a laptop that runs Ubuntu 9.10; her computer runs Windows XP Professional. Ubuntu won't mount an uncleanly unmounted NTFS without a special flag; even then, I ha
Re: (Score:2)
OK. But you can mount and read her files. So get some USB sticks and copy her files over to them. Then reformat the disk.
Yeah, it's a pain, and a lot of work. But it's a way forwards. Then, if the computer has enough power, install ubuntu and INSIDE it a virtual machine into which you install MSWind and any applications that she needs. Don't allow the virtual machine access to the internet.
I'm sure there are other ways forwards, and I don't know all the details, but this should work, though it would b
Re: (Score:2)
Double-click on the icon on your desktop named mbam-setup.exe.
I tried that, but the AVR-infected Windows Explorer said mbam-setup.exe was infected and refused to run it.
Re: (Score:2)
Get an iPhone. Seriously. Requiring signed and approved applications along with a mechanism to withdraw applications is the only feasible way I can see to somewhat secure a computer. Plus, http and smtp must die, instead requiring https and some better mail protocol with encryption and signatures.
Certificates should be issued by government, by the way. Preferably at a cost that will cover a reasonable identification procedure for the certificate holder. And I realize this raises a lot of issues with regards
Re: (Score:2)
Let me get this straight -- you're saying that the way to avoid to losing any control over our computers is... to give up all control over our computers?
Re: (Score:2)
Is the problem that bad, or is this just the latest version of Chicken Little? I use Avast! Antivirus, Malwarebytes, Spybot and Comodo's firewall. They update and scan each night when I'm not at the computer (which is on 24-7, by the way, and has been for more than five years). I've never had a virus or any serious malware infestation. Never. A few tracking cookies, the occasional inactive trojan and the like are invariably sacrificed at the nightly slaughter.
And yet you believe I should give up what
Re: (Score:2)
Six scanners?! You can't be serious...
If that's true you either REALLY need Windows or are plain masochistic. I don't use Windows for years now, but I still remember how a scanner trashes the hard disk and slow the whole system beyond acceptable for some hours. With six scanners it would take a whole day to run them through your disk once.
Thanks but no, thanks.
Me too. (Score:2)
I pay $24.95 a month in antivirus updates for my $449.98 netbook. I do a deep scan one day a month just to be on the safe side and I manage to keep infections down in the double digits. But what else can I do? Macs are too expensive and Linux just requires too much time.
Apples and Oranges - A Comparison (Score:5, Funny)
Who reads them anyway? (Score:2)
Missing threath (Score:2)
I not English write much good (Score:2)
Doesn't make sense to me. I mean, if Schemester Antivirus wants to identify a threat that is "not the same" as the one Flybynight Computer Security wants to identify, wouldn't one expect them to use different names?
That's like saying Ford calls its car Fiesta, while Toyota calls its car Tazz, but they are not the same car. (To include the obligatory car analogy.)
Point of interest (Score:2)
Just wanted to make a comment regarding anti-virus/malware vendors and how they co-operate with each other. Recently I took on some Sophos training for work - Sophos makes security software which includes (among other things) anti-virus.
From what I was told, they DO work with other AV vendors in one particular situation: samples. If a new virus/trojan/nasty is detected by any vendor in a partnership of vendors, they will provide a sample to others, but won't tell them their detection algorithms. That way th
They all identified the same top virus (Score:2)
Windows. The sample of reports listed had W32, Win32, or a virus targeting Windows (e.g., Conficker).
I think the results and the solution is pretty clear, and it's the same that it has been for more than 25 years.
Face facts (Score:2)
They all want you to be afraid of the maleware THEY sponsored the develpoment of so they KNOW they can cure your ills easily.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
I'm going to reply to your comments in "".
"I use Linux. Its true that there are some viruses for Linux, its just that I haven't ever had one."
Do you understand the difference between a Virus, and Spyware, Malware, Worms, and Root Kits? This idea you have is a mirage. Linux boxes have multiple serious security flaws, as all our systems do today, The idea peddled by some is that one side is immune, while the other is an open door way. I'd really rather people talked sensibly with a realisation that our curren
Re: (Score:2)
Linux has a significantly higher proportion of the server market however, and is dominant in the supercomputer market... The areas where Linux is strong are generally more useful to a hacker, as the systems are more likely to be running 24/7 and have access to far more bandwidth. So yes, Linux is very much a target and has plenty of people working to find ways onto Linux machines.
Re: (Score:2)
Supercomputers are yesterdays news. These botnets put them to shame on nearly every metric. The idea that you mentioned them as an important target in laughable, because even if hackers got in.. they would get noticed rather quickly even if nobody is watching for it when that 7 hour job instead takes 14.
The key to the success of botnets is that very few eve
Re: (Score:3, Interesting)
Linux is too fragmented. Get 20 million Ubuntu Karmic users (or whatever) and you'll see some malware. Of course, if you see much Linux malware crop up, then you'll see some userspace tools for SElinux... or such is my hope.
Re:I think we can kiss this meme good night now. (Score:4, Insightful)
Re: (Score:2)
That's a pretty unfair comparison for this discussion. If you run Windows with just a service like a firewall then it too is pretty secure. It is only when you start installing more complicated programs to read emails, browse the web and load office documents that it starts to become vulnerable to viruses.
Re: (Score:2)
Out of that 2 billion Linux machines, how many are used as interactive user workstations (ie desktop & notebook clients)?
It matters.
Servers are usually administered by someone who knows something about what they're doing. Consumer appliances are often not administered at all - but that's fine, because their software loadout comes with everything they will ever need and any updates come as a "whole system software replacement" from the manufacturer. An appliance's small functional set compared to a gener
Re: (Score:2)
Me thinks your estimates are far more than a little inaccurate.
I know of no one outside of a Google employee that runs Linux on any device they own.
None of the non-tech savvy have a Linux based router, and the tech-savvy people I know that use something custom use a BSD.
I've yet to come across a Linux based WAP or router in the real world.
Sure all your linux friends may use one, but that isn't exactly an unbiased comparison.
Re: (Score:2)
Many consumer network appliances do run Linux, but don't advertise it. What operating system a device uses is meaningless to most end users, and many devices don't have a published method of changing the software running on the devic
Re: (Score:2)
Many might run Linux, but many use closed source embedded operating systems. The vendors would easily switch to something else. For example, I believe Linksys switched from Linux to VxWorks in one model because they could get away with including use less memory.
To most end users, the fact that the device uses Linux simply do | http://tech.slashdot.org/story/10/01/11/033238/malware-threat-reports-are-apples-and-oranges | CC-MAIN-2015-11 | refinedweb | 5,958 | 69.72 |
Introduction to Shuffle() in Java
Java provides many built-in methods for operating on collections and other data types, and one of them is shuffle. A shuffle function is also available in many other languages, such as Python.
- The shuffle method is used to shuffle the elements of a collection.
- It randomly permutes the list passed in as a parameter.
- There are two ways to shuffle in Java: using the Collections.shuffle method, or using the Random class directly.
- The Collections.shuffle method itself can be called in two ways: one with a Random parameter that specifies the source of randomness, and one without.
- shuffle(<list>)
- shuffle(<list>, <random function>)
- Because the method accepts a list, to shuffle an array we convert the array to a list, pass it to Collections.shuffle, and convert the result back to an array.
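That array-to-list round trip can be sketched as follows (the class name ArrayShuffleDemo is our own, not part of the JDK). Note that for an array of objects, Arrays.asList returns a list view backed by the array, so shuffling the view reorders the array itself and no copy back is needed; a primitive int[] would have to be boxed first.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ArrayShuffleDemo {
    public static void main(String[] args) {
        Integer[] numbers = {1, 2, 3, 4, 5};
        // Wrap the array in a fixed-size list view; shuffle permutes it
        // in place, which also reorders the backing array.
        List<Integer> view = Arrays.asList(numbers);
        Collections.shuffle(view);
        System.out.println(Arrays.toString(numbers));
    }
}
```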
Syntax for Shuffle() in Java
Declaration for shuffle method:
public static void shuffle(List<?> list)
public static void shuffle(List<?> list, Random random)
Parameters:
- List: the list to be shuffled.
- Random: a random number generator (optionally seeded) that serves as the source of randomness.
Returns:
- The shuffle method returns nothing; it shuffles the list in place.
Examples of Shuffle() in Java
In the example below, we create a list from an array of letters and use the shuffle method to shuffle it. Every run produces a differently shuffled list.
Example #1
Code:
import java.util.*;
public class CollectionsShuffleExampleWithoutRandom {
public static void main(String[] args) {
List<String> list = Arrays.asList("R", "A", "H", "U", "L");
System.out.println("Before Shuffle : "+list);
Collections.shuffle(list);
System.out.println("After shuffle : "+list);
}
}
Output (the shuffled order differs between runs; the second line shows one possible result):

Before Shuffle : [R, A, H, U, L]
After shuffle : [H, L, A, R, U]
Example #2
In the example below, we create a linked list of integers and add some values to it. This time we pass a second argument: a Random instance that serves as the source of randomness. The second call passes a Random seeded with 5. This is the other form of the shuffle method, with an explicit source of randomness.
Code:
import java.util.*;
public class CollectionsShuffleWithRandom {
public static void main(String[] args) {
//Create linked list object
LinkedList<Integer> list = new LinkedList<Integer>();
//Add values
list.add(90);
list.add(100);
list.add(1);
list.add(10);
list.add(20);
System.out.println("Before Shuffle = "+list);
//Random() to shuffle the given list.
Collections.shuffle(list, new Random());
System.out.println("Shuffled with Random() = "+list);
//Random(5) to shuffle the given list.
Collections.shuffle(list, new Random(5));
System.out.println("Shuffled with Random(5) = "+list);
}
}
Output:
Shuffling without Shuffle Method
If you want more control over shuffle, then you could write your own method to shuffle the list with the random method and another approach to shuffle the list. This method is more flexible and easy to fit in any application. You can actually understand how shuffle works inside Java’s built-in method.
Input: An int array
Output: Shuffled array(in a randomized order)
Example:
public static int[] ShuffleArray(int[] array){
Random rand = new Random(); // Random value generator
for (int i=0; i<array.length; i++) {
int randomIndex = rand.nextInt(array.length);
int temp = array[i];
array[i] = array[randomIndex];
array[randomIndex] = temp;
}
return array;
}
The above function where you just need to pass an array integer, and it will return a shuffled array. Inside the function, you can see we are iterating the array till its length and generating a random number, and it will be treated as an array index, which will be swapped with another array. This is how the elements will be swapped inside an array. The resulting array will be a swapped one.
From the above function, we can get a basic concept of the shuffle function where a list of values will be sent, and a random number will be generated each time while iterating the elements in the array. The element will be swapped with another element in the same list with the index randomly generated from a random function.
Important Points for Shuffle Function
- This method works on randomly permuting the list elements and shuffling them.
- Each time executed, the result can be different.
- The function doesn’t take much time and runs in linear time.
- If you provide a list that doesn’t implement the RandomAccess interface, then shuffle will first copy the list into an array, shuffle the array copy, and then copies it into a list of the result and return it.
- Shuffle traverses the list backwards – the last element up to the second element repeatedly.
- While traversing, it randomly selects elements and swaps them with the current position.
- The randomly selected element is from the portion of the list from the first element to the current element.
Exceptions:
- UnsupportedOperationException: if the passed list or list-iterator does not support a set operation.
Applications of Shuffle
There could be many situations where the shuffle function below is some applications:
- Shuffling a list of questions in a QA application.
- Shuffling list of quotes that you want to show to your app users every day.
- Lottery application where you want to assign a ticket number to the user.
- Generating unique transaction numbers for a payment field.
- Generation unique user id for different users can be prefixed to user id.
- It can also be used in cryptographic applications.
Conclusion
In the above article, we understood how shuffle works and how you can use it. There can be multiple use cases somewhere you would be using shuffle function with random parameter else without random parameter, and some applications might need a different flexible implementation where you can write your own shuffle function using Java’s Random function.
Recommended Articles
This is a guide to Shuffle() in Java. Here we discuss the Introduction and Important Points for Shuffle Function along with different examples and its code implementation. You may also have a look at the following articles to learn more – | https://www.educba.com/shuffle-in-java/ | CC-MAIN-2021-49 | refinedweb | 993 | 55.54 |
Enters the lock associated with a specified monitor.
Syntax
#include <prmon.h> void PR_EnterMonitor(PRMonitor *mon);
Parameter
The function has the following parameter:
Description
When the calling thread returns, it will have acquired the monitor's lock. Attempts to acquire the lock for a monitor that is held by some other thread will result in the caller blocking. The operation is neither timed nor interruptible.
If the monitor's entry count is greater than zero and the calling thread is recognized as the holder of the lock,
PR_EnterMonitor increments the entry count by one and returns. If the entry count is greater than zero and the calling thread is not recognized as the holder of the lock, the thread is blocked until the entry count reaches zero. When the entry count reaches zero (or if it is already zero), the entry count is incremented by one and the calling thread is recorded as the lock's holder. | https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_EnterMonitor | CC-MAIN-2018-34 | refinedweb | 157 | 52.19 |
Anaconda - Managing python environments and 3rd-party libraries in TouchDesigner
It has been quite a few times that I see on the Derivative forum or on social networks, cases where users are struggling with third party Python libraries / packages integration in TouchDesigner. While you should not consider the following example the ultimate solution, it saved me quite a few times and Anaconda is a nice tool to use even outside of the TouchDesigner context.
Important: This article was not tested on Mac and is written from a Windows user perspective.
What is Conda / Anaconda?
Conda is a cross-platform, language.
As per
What is the difference between Anaconda and Miniconda?
Miniconda is a free minimal installer for conda. It is a small, bootstrap version of Anaconda that includes only conda, Python, the packages they depend on, and a small number of other useful packages, including pip, zlib and a few others.
As per
Setup
For the sake of this tutorial, at the time of writing (06 16 2021), the version used are:
TouchDesigner 2021.13610, the latest stable release
Anaconda install is Anaconda3-2021.05-Windows-x86_64 Python 3.8 Win 10 64-bit
Windows 10 64-bit
Install Anaconda
The install is fairly straightforward. For the sake of that example, we will do a vanilla install of the latest version of Anaconda (Windows 64-bit) in a fairly vanilla environment, meaning: no local Python installation, no changes, no previous installation of Anaconda or other things. It’s a pretty clean environment.
First, head to the Miniconda documentation page or the Anaconda (full installer) page.
What you’ll want is to download the installer for Windows / Mac, preferably in 64-bit and (important) w/ Python 3.x. The latest builds of either Anaconda or Miniconda 3 should all be coming with Python 3.8 by default.
Once the installer is downloaded, start the installation, and go through the install steps. You can keep all the default recommended choices.
You are done with the installation, let’s check that everything is installed and running correctly.
Confirm installation
Let’s go over this screenshot and a basic conda command.
In the terminal, you see
(base) which is here to remind you which environment of conda is currently the active environment. Followed by your user path.
The first command we’ll learn is
conda env list which is pretty self-explanatory, it will list your environments.
You should get a result similar to the screenshot above, where the star
* before the path is the currently active environment.
If you get a similar result, we are good to go.
Create an environment
First, launch TouchDesigner.
You want to know which version of Python TouchDesigner is currently using so that you can match this version in your conda environment.
To do so, use Alt+T in TouchDesigner, which will open your textport.
You’ll see the following window.
You can read on the second line that the Python version currently used by TouchDesigner is Python 3.7.2.
We will match this version.
Now, go back to the conda command prompt and type:
conda create -n td-demo python=3.7.2
Where:
conda - the shortcut/context for Anaconda
create - self-explanatory, to create an environment
-n - for the name of the environment followed by your new environment name, with no space, here “td-demo”
python= - to force a python install in the version matching TouchDesigner, 3.7.2 in our case
I like to name my TouchDesigner environments with “td-” since I work a lot with Anaconda, sometimes outside of the TouchDesigner context. It helps me avoid using an environment for, let’s say, Tensorflow, and using that same environment in the td- context w/ a bunch of other packages that might cause issues with a side project or just be confusing.
This is quite important because environments/environment management is one of the great features of Anaconda. And you want to avoid messing with your TouchDesigner local python installation or using a conda environment with a Python version that doesn’t match TouchDesigner.
Once you type the command, press enter, it might take a little while.
You’ll see the following lines (or similar) printing in the command prompt.
It is the default wheels and binaries for the conda environment to run in Python 3.7.2. You can press
y to proceed.
Conda will download and install all the packages in your new environment.
Once the downloads and installation are done, you’ll see the following (or similar).
Activate (move to) an environment
As it is stated in the screenshot above, you can now use the command
conda activate td-demo to move to this newly created environment.
Where
activate is the action and
td-demo the environment we want to move to. You should now see at the front of the line
(td-demo) which means you are in your new environment.
Congrats!
Install additional packages
For the sake of this tutorial, we are going to install 2 main packages.
The first one is sckikit-learn. A recent question in the forum made me go through these exact same steps to debug a user issue, and it was extremely easy to get scikit up and running in TouchDesigner with conda. The second one is open3d, which was introduced by Darien Brito recently, in his Kinect Azure Point Cloud merger.
Let’s go through the steps on how to install those packages.
Most of the time, typing
conda install packagename will work. But a good source to find packages, package names and versions is to go to the central repository on
Note: pip does come with Anaconda, if a package is missing from the Anaconda package repository, you can still use pip with the more traditional
pip install packagename. It will still install the package, using pip this time, but as part of your Anaconda environment.
Let’s look for Scikit, and we can easily find out that the command we are looking for is
conda install scikit-learn
You can now type the following in your conda command prompt, and conda will look for all the required wheels and binaries for scikit-learn to run properly, in your current environment and on Python 3.7.2.
You should see the following
You can press
y to install the extra packages.
Now, let’s do the same with open3d. On anaconda.org, we can find that the command to use is
conda install -c open3d-admin open3d
Let’s type that and press enter in our conda command prompt.
You should see quite the list of additional packages and dependencies. If you don’t need open3d, you can press
n, else, process with the install and press
y.
Additionally, you can use the command
conda list within your environment to list all the packages installed.
Link environment to TouchDesigner
Welcome to the TouchDesigner side, it might sound tricky, but it’s actually fairly straightforward here.
Open your TouchDesigner project.
In this project, you will create an Execute DAT and toggle on the “Start” toggle parameter.
What we want to do next, on startup, ensure that the site-packages of our conda environment are added to the path of TD and packages binaries / libs / dlls.
First, let’s type in the Anaconda command prompt the command we used earlier
conda env list
Remember? It’s listing the environments and paths of each environment.
Look for your newly created environment, or the active (the one with a star *) environment you want to add to TouchDesigner.
Write down the path, if your install is using default settings, it should be something like
C:/Users/YOUR_USERNAME/miniconda3/envs/YOUR_ENVIRONMENT_NAME
In our case, if you are following this tutorial, the environment name should be
td-demo
Back to TouchDesigner, at the top of the Execute DAT, replace the
onStart() method with the following code
Note: Double check all the paths used in the code snippet, it could be that your anaconda 'envs' folder is not in your User folder depending on your conda install settings and conda version, as well as your OS.
import sys import os import platform def onStart(): user = 'YOUR_USERNAME ' # Update accordingly condaEnv = 'YOUR_ENVIRONMENT_NAME' # Update accordingly if platform.system() == 'Windows': if sys.version_info.major >= 3 and sys.version_info.minor >= 8: """ Double check all the following paths, it could be that your anaconda 'envs' folder is not in your User folder depending on your conda install settings and conda version. """ os.add_dll_directory('C:/Users/'+user+'/anaconda3/envs/'+condaEnv+'/DLLs') os.add_dll_directory('C:/Users/'+user+'/anaconda3/envs/'+condaEnv+'/Library/bin') else: """ Double check all the following paths, it could be that your anaconda 'envs' folder is not in your User folder depending on your conda install settings and conda version. """ # Not the most elegant solution, but we need to control load order os.environ['PATH'] = 'C:/Users/'+user+'/anaconda3/envs/'+condaEnv+'/DLLs' + os.pathsep + os.environ['PATH'] os.environ['PATH'] = 'C:/Users/'+user+'/anaconda3/envs/'+condaEnv+'/Library/bin' + os.pathsep + os.environ['PATH'] sys.path = ['C:/Users/'+user+'/anaconda3/envs/'+condaEnv+'/Lib/site-packages'] + sys.path else: """ MacOS users should include path to .dlybs / MacOS binaries, site-packages """ os.environ['PATH'] = '/Users/'+user+'/opt/anaconda3/envs/'+condaEnv+'/lib' + os.pathsep + os.environ['PATH'] os.environ['PATH'] = '/Users/'+user+'/opt/anaconda3/envs/'+condaEnv+'/bin' + os.pathsep + os.environ['PATH'] sys.path = ['/Users/'+user+'/opt/anaconda3/envs/'+condaEnv+'/lib/python3.9/site-packages'] + sys.path return
Where:
YOUR_WINDOWS_USERNAME should be the name of your user folder, found in
C:/Users/
YOUR_ENVIRONMENT_NAME should be
td-demoif you are following this tutorial.
You can now either save your project and restart or simply pulse the pulse button of the Start parameter of the Execute DAT.
os.add_dll_directory()which is quite handy in our use case, but not yet available in Python 3.7.2 at the time of writing of this article.
A note for MacOS users:
To know where is your Anaconda environment, as per Anaconda’s official doc:
- Open a terminal window.
- If you want the location of a Python interpreter for a conda environment other than the root conda environment, run
conda activate environment-name.
- Run
which python.
In the same terminal, you can type
python, and import a package such as
import numpy if you installed numpy in that environment. Typing
numpy, it will print out where was numpy imported from and show the site-packages folder location, within your Anaconda environment folder.
IMPORTANT: You might encounter issues if you are on an M1 Mac, and run different architectures. If you are running the native ARM build of TouchDesigner, your Anaconda install, Anaconda environment and libraries should all be the native ARM versions as well. More details are available with documentation for Homebrew here: Category:Python - Derivative
Use your packages (and environment) in TouchDesigner
You should now be able to import the packages you installed using conda. Let’s make sure of it. In TouchDesigner, press Alt+t to open the textport.
Starting with scikit-learn, type
import sklearn and press enter. If the command goes through without error, it means sklearn was imported with success.
You can make sure by typing
sklearn.__file__ which should point to the
__init__.py of sklearn or
sklearn.__version__ which should print the same version that the Anaconda command prompt was listing.
If you installed open3d as well, you can proceed with the same command
import open3d. If the command goes through without error.
You can now use those packages in TouchDesigner, congratulations!
If an issue occurs, it can be due to a large number of factors. I would recommend first to go back to your environment in the conda command prompt and type
python -c “import sklearn”, if the command is working without issues, then an issue occurred while linking your conda environment to TouchDesigner.
Conclusion
You now have a basic introduction to Anaconda and a simple integration in TouchDesigner. It should help unlock access to a few interesting projects with all the great libraries that come out from the Python community. I hope that this tutorial helped and that you were able to go through it without issues. If you have any questions, comments, points to add, please leave a comment and I will try to answer accurately and update this tutorial if necessary! | https://derivative.ca/community-post/tutorial/anaconda-managing-python-environments-and-3rd-party-libraries-touchdesigner | CC-MAIN-2022-40 | refinedweb | 2,057 | 54.52 |
Type: Posts; User: Diamo
Bloody hell, thanks man, that was really nice of you to post that full explanation. I'm re-writing as we speak and as you suggested I'm making a weapon a class. A lot of the code I wrote was just...
Hi All,
It has become clear to me after designing a short test program that I am doing something fundamentally wrong in the design of my project. I tried making a short test project, stripped to...
Hi Guys,
Firstly, sorry about the non-use of the code tag, I've updated both my posts. I hope they are a bit more readable now.
Further points:
1. Removed "using namespace std" from the...
Yeah, I have been 'using namespace std;' but you are of course correct.
Enemy.h :
#ifndef ENEMY_H_INCLUDED
#define ENEMY_H_INCLUDED
#include "globals.h"
#include <iostream>
#include...
Hi All,
I'm still very new to C++ so apologies in advance if I am not too clear in my description of my problem.
I am making a text based drug dealing game which has an Class called Enemy which... | http://forums.codeguru.com/search.php?s=561331adab58456a3a659602efabeab4&searchid=7386171 | CC-MAIN-2015-32 | refinedweb | 184 | 84.07 |
Ao Resists the Forces of Darkness (pbrt meets Nim)
Table of contents
- Overview
- Notes on the book
- Notes on Nim
- Conclusion
- Further links of interest
Overview
I started reading the awesome Physically Based Rendering: From Theory to Implementation book a few weeks ago, which made me realise that it’s probably for the best if I rewrote my ray tracer from the ground up based on the ideas presented in the book. After all, good coders borrow, great coders steal, and at the very least we can say that I’m proficient at stealing—the rest will hopefully follow!
I also got a bit tired with the long titles of my previous ray tracing related posts, so from now on I will call the project just Ao. Why on Earth that particular name? Well, first I wanted to use the name Ra after the ancient Egyptian sun god, but it looks like some French guy had already beaten me to it. I liked the idea of using the name of some ancient solar deity (it looks like I’m not alone with this), but then Sol was kind of taken, and Huitzilopochtli doesn’t quite roll off the tongue either… So in the end, I chose Ao, which I think is quite cool and could also stand for ambient occlusion as well. Moreover, I live in Australia, so that’s another good reason for choosing a Polynesian god in this geographical vicinity.
.” (source)
<ominous sound effects>
From henceforth, Ao shall resist the Forces of Darkness!
</ominous sound effects>
Okay, now that we got that out of the way, here’s some words about my experience with the book so far. The general idea is that I will read the book from start to end and (re)implement everything in Nim as I go. I am not going to follow it to the letter though; sometime I might use a different convention, approach or algorithm either for performance reasons or simply due to personal preference.
Notes on the book
I have only read the first two chapters so far, but I can already say that I am extremely impressed by the book; it’s a work of art and very obviously a labour of love. The topics are well presented, the explanatory texts are very well written in a somewhat terse but interesting style, and the authors generally do a good to excellent job at explaining the theory behind the algorithms. I say generally because a few times I found myself wanting to do further research on a given proof, but this is probably more due to me not exactly being a math genius than the authors’ fault…
For example, their less than one page derivation of the rotation transforms wasn’t quite clear to me, so I went googling and finally found this paper that made everything crystal clear. But then, the book is already 1100+ pages long and giving more detailed proofs could easily have doubled that I guess, so I’m okay with having to do some extra reading from time to time. Doing your own research helps internalising knowledge better anyway.
One good source of computer graphics related information where the proofs are explained in a bit more detail is Scratchapixel which I wholeheartedly recommend. For the math stuff I found a very good online resource, Paul’s Online Math Notes, that seems very promising (I just prefer reading to watching videos and he provides downloadable PDF versions of most of his materials).
Coordinate system
pbrt uses a left-handed coordinate system, which is the default coordinate system of DirectX, POV-Ray, RenderMan and Unity, among many others. Right-handed coordinate systems, on the other hand (no pun intended), are the standard in mathematics, physics and engineering. OpenGL also uses a right-handed coordinate system by default (although that’s been the source of a perpetual debate for quite some time now, just have a look here or here).
In practical terms, most graphics environments allow to switch their default handedness (OpenGL and DirectX certainly do), but as in the world of science right-handed is the standard and I’m also interested in OpenGL programming (plus I have zero interest in DirectX), I am just going to stick with right-handed. One consequence of this is that occasionally I’ll have to work a bit harder to correctly implement the algorithms presented in the book. Well, if nothing else, this will require me to have a really solid understanding of what I’m doing!
Vectors, Normals, Points
The book introduces separate vector, normal and point templates, which contain an awful lot of code duplication, and in my opinion just complicate things for little gain. Overall, I don’t think the better type safety is worth the added code complexity and the potential performance penalty (because you’d need to convert data back and forth between different types a lot). Because of this, many systems just don’t bother with making these distinctions (GLSL and OpenEXR spring to mind) and just define a single universal vector type instead to keep things simple. Then it’s up to the actual code to interpret the data in the right context. That’s what I’m doing here too; all vectors, normals and points are represented by a single vector type:
```nim
type
  Vec2*[T] = object
    x*, y*: T

  Vec3*[T] = object
    x*, y*, z*: T

  Vec2f* = Vec2[FloatT]
  Vec2i* = Vec2[int]
  Vec3f* = Vec3[FloatT]
  Vec3i* = Vec3[int]
```
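To make the context-dependent interpretation concrete, here is a minimal, self-contained sketch. The non-generic `Vec3f` and the `+`, `dot` and `cross` helpers are simplifications invented for this example, not Ao's actual API:

```nim
# Non-generic stand-in for Vec3f; the `+`, `dot` and `cross` helpers
# are illustrative only, not Ao's actual API.
type Vec3f = object
  x, y, z: float

proc vec3f(x, y, z: float): Vec3f = Vec3f(x: x, y: y, z: z)

proc `+`(a, b: Vec3f): Vec3f = vec3f(a.x + b.x, a.y + b.y, a.z + b.z)

proc dot(a, b: Vec3f): float = a.x * b.x + a.y * b.y + a.z * b.z

proc cross(a, b: Vec3f): Vec3f =
  vec3f(a.y * b.z - a.z * b.y,
        a.z * b.x - a.x * b.z,
        a.x * b.y - a.y * b.x)

# One type, three interpretations, chosen by the calling code:
let p = vec3f(1, 2, 3)                          # a "point"
let d = vec3f(0, 0, 1)                          # a "direction"
let q = p + d                                   # offsetting a point by a vector
let n = cross(vec3f(1, 0, 0), vec3f(0, 1, 0))   # a "normal"
echo q.z, " ", dot(n, d)
```

It's then up to the surrounding code to treat `p` as a point and `d` as a direction, which is exactly the trade-off discussed above.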
Matrix inverse
I have introduced a special fast version of the 4x4 matrix inverse operation called `rigidInverse` that can be used to quickly invert affine transforms that don't have a scaling component. The optimised version only costs 24 FLOPs instead of the 152 FLOPs of the general version (6.3x speedup!). I was able to make good use of this in the `lookAt` procedure for some internal calculations.
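The idea behind such a fast inverse can be sketched as follows: for a rigid transform M = [R | t] (3x3 rotation R plus translation t, no scaling), the inverse is simply [R^T | -R^T * t], so no general 4x4 inversion is needed. The row-major `Mat4` layout and the procedure below are an illustrative sketch, not Ao's actual implementation:

```nim
# Sketch of the idea behind a rigid inverse; the Mat4 layout
# (row-major, last row [0,0,0,1]) is an assumption for this example.
type Mat4 = array[4, array[4, float]]

proc rigidInverse(m: Mat4): Mat4 =
  # The inverse rotation is just the transpose of the 3x3 block.
  for i in 0..2:
    for j in 0..2:
      result[i][j] = m[j][i]
  # The inverse translation is -R^T * t.
  for i in 0..2:
    result[i][3] = -(result[i][0] * m[0][3] +
                     result[i][1] * m[1][3] +
                     result[i][2] * m[2][3])
  result[3] = [0.0, 0.0, 0.0, 1.0]
```

Only the transpose (free, just swapped reads) and a single 3x3 matrix-vector product remain, which is where the large FLOP saving over a general Gaussian-elimination style inverse comes from.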
Ray-box intersection tests
The book presents a mostly straightforward implementation of the slab method invented by Kay and Kayjia for calculating ray-box intersections (where by box we mean axis-aligned bounding boxes, or AABBs). The problem with their algorithm is that it contains a lot of conditional statements which hurt performance. AABB tests must be as fast as possible because a large percentage of the total run time of the renderer will be spent performing these intersection tests. Luckily, there’s an optimised branchless version out there which I ended up adopting. This version reports false hits for degenerate cases where any of the ray origin’s coordinates lay exactly on the slab boundaries, but this is negligible if the actual ray-object intersection routines are correct, and well worth the added 15-20% performance boost compared to the 100% correct version.
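For illustration, a self-contained sketch of the branchless slab test could look like the following. The `Ray`, `Box3` and `intersect` names are made up for this example, and `invDir` is assumed to hold the precomputed per-component reciprocal of the ray direction (IEEE infinities take care of axis-parallel rays):

```nim
# Illustrative branchless slab test; type and proc names are made up.
type
  Vec3 = object
    x, y, z: float
  Ray = object
    o, invDir: Vec3        # origin and 1/direction (may contain +-Inf)
  Box3 = object
    pMin, pMax: Vec3

proc intersect(b: Box3, r: Ray): bool =
  let
    tx1 = (b.pMin.x - r.o.x) * r.invDir.x
    tx2 = (b.pMax.x - r.o.x) * r.invDir.x
    ty1 = (b.pMin.y - r.o.y) * r.invDir.y
    ty2 = (b.pMax.y - r.o.y) * r.invDir.y
    tz1 = (b.pMin.z - r.o.z) * r.invDir.z
    tz2 = (b.pMax.z - r.o.z) * r.invDir.z
    # No conditionals per slab: min/max do all the work.
    tNear = max(max(min(tx1, tx2), min(ty1, ty2)), min(tz1, tz2))
    tFar  = min(min(max(tx1, tx2), max(ty1, ty2)), max(tz1, tz2))
  tFar >= max(tNear, 0.0)
```

Note the degenerate case mentioned above: if a ray origin coordinate lies exactly on a slab boundary, `0 * Inf` produces a NaN and the test can report a false hit, which is the accepted price for the speedup.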
Notes on Nim
The best way to learn the intricacies of any programming language is to write some non-trivial piece of software in it, and pbrt certainly falls into this category. Implementing the code from the first two chapters in Nim has taught me several useful lessons which I am going to summarise below.
Project structure
Nim doesn't have the concept of access modifiers and packages like Java and Scala, or namespaces like C++. The only available organisational unit is the module, which can export some of its symbols; otherwise they are private and inaccessible to the outside world. One file can contain only one module, the filename minus the extension being the name of the module (although a module can be split up into several files with the use of `include`). All module names have to be unique within the same compilation unit.
After much contemplation and experimentation I came up with the following project structure that mirrors that of pbrt. For brevity, only two main modules are presented here, `core` and `filters`.
```
src/
  core/
    common.nim
    geometry.nim
    shape.nim
    types.nim
  filters/
    box.nim
    gaussian.nim
    types.nim
  main.nim
test/
  core/
    allCoreTests.nim
    commonTests.nim
    geometryTests.nim
    shapeTests.nim
  filters/
    boxTests.nim
    gaussianTests.nim
  allTests.nim
nim.cfg
README.md
```
As mentioned above, module names must be unique per compilation unit; that's the reason why I had to call the unit test modules `<modulename>Test`, otherwise I wouldn't be able to import `<modulename>` into them. This also means that public submodules that are imported by other modules must have unique names; for example, `core/common` and `filters/common` could not be imported by the `main` module (remember, the filesystem path is not part of the module name, just the filename).
`nim.cfg` contains the following:

```
path="src/core"
path="src/filters"
# add a new entry for every module
```
This way we can conveniently just import submodules by the name of the submodule, as they are all unique. This is much cleaner and easier to maintain than using relative paths, especially in the unit tests. For instance:
```nim
import geometry   # imports src/core/geometry.nim
import gaussian   # imports src/filters/gaussian.nim
```
The `types.nim` file inside each main module is a special thing that I am going to explain a bit later.
Inlining
The Nim compiler doesn't do automatic inlining of small functions across module boundaries; it is the programmer's responsibility to annotate such functions with the `{.inline.}` pragma like this:
```nim
proc vec3f*(x, y, z: FloatT): Vec3f {.inline.} =
  result = Vec3f(x: x, y: y, z: z)
  assert(not hasNaNs(result))
```
This is a small thing, but forgetting about it can result in severe performance penalties in numerical code that needs to be as fast as possible.
Calling parent methods
Nim doesn't have a convenient `super()` pseudo-method that would allow the calling of parent methods in a straightforward manner. This left me scratching my head for a while until I found the answer in the Nim forums. There are two problems here that require slightly different solutions, namely calling parent constructors and calling ordinary parent methods.
Calling parent constructors
Constructor chaining is most easily accomplished by introducing internal `init` helper procedures for every subclass, which can then be called with the subclass type converted to the parent class type. It's much easier to understand this by looking at a concrete example:
```nim
type
  Shape* = object of RootObj
    x*, y*: float
    visible*: bool

  Circle* = object of Shape
    radius*: float

proc init(self: var Shape, x, y: float, visible: bool) =
  self.x = x
  self.y = y
  self.visible = visible

proc initShape*(x, y: float, visible: bool): Shape =
  init(result, x, y, visible)

proc init(self: var Circle, x, y, radius: float, visible: bool) =
  init(self.Shape, x, y, visible)  # this is the trick, call init on a Shape
  self.radius = radius

proc initCircle*(x, y, radius: float, visible: bool): Circle =
  init(result, x, y, radius, visible)

# Test
var c = initCircle(5, 8, 10.2, true)
echo c
# Prints: (radius: 10.2, x: 5.0, y: 8.0, visible: true)
```
The trick is happening in the `init` procedure of `Circle`, where we first convert the `Circle` to a `Shape` and run the parent `init` procedure on it.
Calling ordinary parent methods
For ordinary methods, using `procCall`, which disables dynamic binding for a given call, is the solution:
```nim
method draw(self: Shape) {.base.} =
  echo "Shape.draw enter"
  echo "Shape.draw exit"

method draw(self: Circle) =
  echo "Circle.draw enter"
  procCall self.Shape.draw   # or Shape(self).draw
  echo "Circle.draw exit"

c.draw

# Prints:
# Circle.draw enter
# Shape.draw enter
# Shape.draw exit
# Circle.draw exit
```
Managing circular dependencies
Nim allows recursive module dependencies, as described in the manual. They are a bit tricky to work with in more complex scenarios, and different techniques are involved when dealing with circular procedure calls versus circular type dependencies. (Perhaps there are even more cases when dealing with more complex language features like macros, but I haven’t got so far yet with my use of Nim.)
Circular procedure calls
Not sure if this is the proper name for this pattern, but the example below should make it clear what I’m referring to. Let’s try to define two functions in two separate modules that call each other co-routine style (blowing up the stack, eventually). It turns out that we need to use forward proc declarations to be able to accomplish this:
```nim
# bar.nim
proc barProc*()    # (1) forward declaration (there's no proc body)

import foo         # (2) stop parsing bar.nim & continue with foo.nim

proc barProc*() =  # (5) parsing foo.nim completed, continue from here
  echo "bar"
  fooProc()

when isMainModule:
  barProc()
```
```nim
# foo.nim
import bar   # (3) only the already known symbols in bar.nim are imported,
             #     which is only the forward declaration of barProc

proc fooProc*() =
  echo "foo"
  barProc()  # (4) this works because of the forward declaration
```
Running the code with `nim c -r bar` will print out `bar` and `foo` on alternating lines until we hit a stack overflow. If we wanted to be able to compile `foo.nim` separately as well, we'd need to put a forward declaration at the top of the `foo` module too (it should be obvious why after following the path of execution in the above listings):
```nim
proc fooProc*()

import bar

proc fooProc*() =
  echo "foo"
  barProc()
```
Circular type dependencies
Nim only allows the forward declaration of procedures; for types, we’ll need a different approach. Moreover, there’s a further limitation that mutually recursive types need to be declared within a single type section (this is so to keep compilation times low).
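A minimal illustration of that single-type-section rule (the `Material`/`Shape` names here are made up for this example, not Ao's types):

```nim
# Both types live in ONE type section, which is what lets them
# refer to each other; splitting them into two sections won't compile.
type
  Material = ref object
    name: string
    owner: Shape        # forward reference to Shape, declared below...
  Shape = ref object
    material: Material  # ...which refers back to Material

let m = Material(name: "matte")
let s = Shape(material: m)
m.owner = s
echo s.material.name
```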
Sufficiently complex applications usually have quite complex type graphs where certain types reference each other. Initially, I had a number of "submodules" inside my `core` module, each of them defining a number of types. Many of these types have references to types defined in other submodules. Attempting to tackle these type dependencies on a case by case basis is just a lot of extra mental overhead and boring work, so the generic solution I ended up with was moving all my top-level types into a new `core/types.nim` submodule (using the `core` module as an example) which would then be imported by all `core` submodules. All the types in `core/types.nim` are defined in a single type section; this way I don't even need to think about circular type dependencies anymore.
As a concrete example, core/geometry.nim would start like this:
import common, types
import math

export types.Vec2f, types.Vec2i, types.Vec3f, types.Vec3i
export types.Box2f, types.Box2i, types.Box3f, types.Box3i
export types.Ray
export types.RayDifferential
The export statements ensure that the public types of this submodule will be available to the importing module. Private types that are used only internally between the submodules simply don't get exported anywhere.
At first I was averse to the idea of moving all the types into a single file, away from the actual method implementations, but then I grew to like it. It's not a bad thing to see all types from all submodules in one place, especially when there are lots of complex interdependencies between them. As an interesting note, Haskell, F# and OCaml have the same limitation regarding circular type dependencies.
One drawback with this approach is that all properties defined in types.nim must be public (exported with *), otherwise the submodules themselves wouldn't be able to access them. This breaks encapsulation and can be a problem for bigger projects with many developers working on the code. In reality, I don't think this is a big deal though for people who know what they are doing. Even the original pbrt authors made a good point about exposing the internal data structures of most of their objects; doing "proper encapsulation" by the book would just add lots of extra cruft that is kind of unnecessary for small to medium sized projects developed by a single person or a handful of people.1
Conclusion
So long for today folks, hope you have enjoyed today's session. You can check out the current state of Ao in the GitHub repository. The takeaway message is that

- Nim is great; if you're interested in a cute language with C-like performance characteristics that is a joy to use, you should definitely check it out, and
- pbrt is not just one of the best books on computer graphics that I ever had the pleasure of reading, but also one of the best technical books overall! If you are interested in computer graphics and don't have it yet, it deserves a place on your bookshelf! It's a steal for the asking price.
In this post, I will demonstrate several Examples of Query Operations using LINQ in C#. In order to get an overview of LINQ, you can read this article on Language-Integrated Query (LINQ) in C#.
Examples of Query Operations using LINQ in C#
Since LINQ helps us in creating queries that work with various kinds of data sources, we will use arrays for representing the data sources. In order to learn queries on SQL Server database, read this article on LINQ To SQL Examples.
Selection
Basically, the Selection operation returns data. Further, it can return the whole data set as well as a subset of it. Even more, it can also return computed values. As an illustration, see the following examples.
Example 1
The following example shows how a selection operation is used to fetch values from an array of integers. Additionally, the query also returns the computed values.
using System;
using System.Linq;

namespace QueryOperations
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Data Source: ");
            int[] arr = { 1, 2, 3, 4, 5, 6, 7, 8 };
            foreach (int i in arr)
                Console.Write(i + " ");
            Console.WriteLine();

            Console.WriteLine("Examples of Selection...");
            var q1 = from p in arr
                     select (p, p + 1, p * p, p * p * p);
            foreach (var ob in q1)
            {
                Console.WriteLine($"Item1: {ob.p}, Item2: {ob.Item2}, Item3: {ob.Item3}, Item4: {ob.Item4}");
            }
        }
    }
}
Output
Example 2
In a similar manner, we can also use an array of objects. The following example shows how an array of objects of the Employee class acts as the data source for the query. Furthermore, the query also returns a computed value called Increment.
class Employee
{
    public String ename { get; set; }
    public int salary { get; set; }

    public override string ToString()
    {
        return $"Employee Name: {ename}, Salary: {salary}";
    }
}

class Program
{
    static void Main(string[] args)
    {
        Employee[] arr1 = new Employee[]
        {
            new Employee{ename="A", salary=24000},
            new Employee{ename="B", salary=59000},
            new Employee{ename="C", salary=44000},
            new Employee{ename="D", salary=73000}
        };
        Console.WriteLine("Data Source: ");
        foreach (Employee ob in arr1)
            Console.WriteLine(ob + " ");
        Console.WriteLine();

        Console.WriteLine("After Executing Selection Query: ");
        var q2 = from p in arr1
                 select new { p.ename, p.salary, Increment = p.salary * 0.2 };
        foreach (var ob in q2)
        {
            Console.WriteLine($"Name: {ob.ename}, Salary: {ob.salary}, Increment: {ob.Increment}");
        }
    }
}
Output
Sorting
The following examples illustrate sorting in LINQ.
Example 3
As shown in the next query, LINQ performs sorting using the orderby clause. Also, you can sort in descending order with the orderby … descending clause.
Console.WriteLine("Examples of Sorting...");
var q3 = from p in arr
         orderby p descending
         select (p, p * p);
foreach (var ob in q3)
{
    Console.WriteLine($"Item1: {ob.p}, Item2: {ob.Item2}");
}

var q4 = from p in arr1
         orderby p.salary
         select new { p.ename, p.salary, Increment = p.salary * 0.2 };
foreach (var ob in q4)
{
    Console.WriteLine($"Name: {ob.ename}, Salary: {ob.salary}, Increment: {ob.Increment}");
}
Output
Filtering
Certainly, many times we need to retrieve data that satisfies certain criteria. Therefore, we need filtering. The following examples demonstrate filtering using the where clause.
Example 4
The following examples show the usage of where clause for filtering.
Console.WriteLine("Examples of Filtering...");
Console.WriteLine("Elements with values > 4");
var q5 = from p in arr
         where p > 4
         select (p, p * p);
foreach (var ob in q5)
{
    Console.WriteLine($"Item1: {ob.p}, Item2: {ob.Item2}");
}

Console.WriteLine("Employee Records where increment > 10000");
var q6 = from p in arr1
         where p.salary * 0.2 > 10000
         select new { p.ename, p.salary, Increment = p.salary * 0.2 };
foreach (var ob in q6)
{
    Console.WriteLine($"Name: {ob.ename}, Salary: {ob.salary}, Increment: {ob.Increment}");
}
Output
Summary
In this article on Examples of Query Operations using LINQ in C#, the query operations on Selection, Sorting, and Filtering are covered. Furthermore, the examples also demonstrate the use of LINQ to Objects. In other words, the queries mentioned here operate on IEnumerable<T> collection.
Further examples on Join and Grouping queries are given here.
Hi,
Today we are glad to announce availability of the new CLion EAP build. Download it from our confluence page right away, and share your feedback with us. A variety of new and long-awaited features and many important bug fixes were introduced in this build. Let’s have a look at the most valuable of them.
Create new C++ class, source file or header
When pressing Alt+Insert (on Windows/Linux) or Cmd+N (on OS X) in the Project view or selecting New in the context menu there, you'll find several new options:
- C++ Class generates a pair of a source file and a header, including header in the source file and creating class stub in the header file:
- C/C++ Source File generates a simple source file, you can also select to create an associated header with it.
- C/C++ Header File generates a simple header file.
In all three cases you can also select the targets in which the files need to be included, and CLion will automatically update the appropriate CMakeLists.txt in your project.
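For reference, the CMakeLists.txt update just amounts to appending the generated files to the chosen target's source list; a rough sketch (the target and file names here are invented for illustration):

```cmake
# Before "New > C++ Class":
add_executable(my_app main.cpp)

# After generating MyClass with the my_app target ticked, the
# source list is extended to include the new pair:
add_executable(my_app main.cpp MyClass.cpp MyClass.h)
```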
Give this new feature a try and provide your comments and feedback to us. Any related issues are very welcome to our tracker.
Make all
The default "all" target for CMake projects is now supported, which means you can find it in the configurations list, edit it and select it for build and run. To run this configuration, CLion asks you to select an executable. In general, the IDE now allows you to change the executable for any configuration of your choice, or even make a configuration non-runnable by changing this value to "Not selected".
CMake actions in main menu
There are a couple of useful CMake actions that we've placed into the CMake tool window. And now we've decided to add a CMake section to the Tools menu:
We've also placed the Reload CMake Project action in the File and Project View context menus for your convenience.
Other important changes include:
- A while ago we’ve updated CLion to use PTY as an I/O unit for running your targets. Now Linux and OS X users will get the same behaviour while debugging.
- A problem on OS X with debugger not stopping on breakpoints in class and template methods is fixed now.
- Latest CLion EAP build has CMake/GCC/Clang output coloring enabled so that you can find your way through the resulted output easier. If you still prefer the non-colored output use CMAKE_COLOR_MAKEFILE=OFF for CMake output, -fno-color-diagnostics flag for Clang and -fno-diagnostics-color for GCC.
- An issue with no flush to the console without \n was fixed.
And last but not least, biicode, a C/C++ dependency manager, can now be used easily together with CLion! Get the details in our separate blog announcement.
The full list of fixed issues can be found in our tracker.
Develop with pleasure,
The CLion Team
I like the new “make all”, but there is still something wrong with the configuration handling.
I still cannot depend on configurations that do not have an executable. For example, I have a CMake setup in which a “deploy” target is used to copy files to a correct location, this “deploy” target does not create an executable or whatever.
So to be able to run my deployed application I created a new configuration which uses the correct directory and runs some executable.
This configuration depends on the “deploy” target, running this fails with “Executable is not defined”. I simply want to be able to depend on another make target, whatever that target is. Often this should not result in executing anything…
Now you can just set executable to “Not selected” to tell CLion that configuration could not be run. And then you can run the configuration that has an executable set (you can set it yourself manually in the edit configuration menu). Will it solve a problem for you?
Thanks for the reply, that is what I tried. My “deploy” target has no executable set. The “run” target is only a CLion configuration, not a CMake target.
To trigger a deploy before running, the “run” target depends on “deploy” in the configuration.
When I run my “run” target it starts building the “deploy” target (as expected), when it is done, it gives an error: “Error running deploy: Incorrect run configuration Executable is not specified”.
I’ve got it now, thanks for the explanation. Could you please create a ticket with description here:. Will consider such case in the implementation.
See
Thanks.
Excellent.
Is there any special reason why you now default to awt.useSystemAAFontSettings=lcd ? On my desktop (Ubuntu 14.04, i3 + gnome-session, LCD screen) this looks much worse than older awt.useSystemAAFontSettings=on, as in all characters suddenly get a rainbowy glow around them, but with awt.useSystemAAFontSettings=on everything looks great. Not a huge deal, but now I always have to remember to patch clion64.vmoptions upon every update…
Could you please describe here () and attach some screenshots with the description? We’ll check if we use it intentionally.
Sorry, I don’t have access to my private account on YouTrack from work, so I can only post the screenshots here:
1) With CLion defaults (awt.useSystemAAFontSettings=lcd), notice the rainbows:
2) With the my own settings (awt.useSystemAAFontSettings=on, swing.aatext=true, sun.java2d.xrender=True), no rainbows:
Hope that helps!
Just noticed that the rainbowy image looks more or less acceptable on Dell Latitude E5450 laptop screen, but very annoying on a stand-alone DELL P2214H and similar screens. Still, a more conservative (“on”) anti-aliasing looks better in all cases to my taste.
I think the P2214H is a BGR monitor. That is, the subpixels are in a different order than the more common RGB order. The font renderer needs to be informed about this in some way, though I’m not sure how that works under Java + Ubuntu.
Could you please point JDK version and OS details?
Ubuntu 14.04 x86_64, JRE: 1.7.0_75-b13 amd64, JVM: OpenJDK 64-Bit Server VM by Oracle Corporation.
Regarding the in-place refactoring: where did the nice contrast color boxes around the variable being renamed and other uses go? Is this a new style, or something is wrong and I should report it as a bug? If it’s a new style, is it possible to get them back by changing some setting?
By the way, unfortunately, rename refactoring at the point of definition is still not working in most cases, and variables with the same name in adjacent blocks are sometimes thought of being valid in the current block.
With Rename looks like you mean smth related to and. We know about it and going to fix asap.
Usages should be highlighted and the renamed symbol under the caret should have a red box, still this can be the problem pointed above. Leave you sample in the issue, please.
You are right, the rename refactoring problems that I’ve noticed generally fall in one of those 2 categories. Okay, looking forward to getting them fixed!
Regarding the boxes, please see this screenshot of what happens when I press Shift+F6 in the latest EAP: . As you can see, the renamed symbol isn't red (actually it is, but it's selected, so the selection color overrides the red background color), and moreover neither the renamed symbol, nor the usages have the red / green boxes around them as it used to be the case in the previous EAP. I want my boxes back!
Usages never have boxes, they were highlighted, but not boxed. Are you sure you’ve seen it?
Still it should appear on the selected symbol. What jdk version is it? And what os also?
Okay, I’ve just tried with the previous EAP (default settings, no changes to the *.vmoptions) and the red/green boxes aren’t there either
I’m not sure about green boxes, it could be that they never existed (or maybe that could happen when I search, and then refactor? search should have boxes, right?), or that I confused it with PyCharm, but I think they would be cool to have…
Shall I try installing latest JVM from Oracle, instead of using the one from the distribution repositories?
Yes, you can try. Please, share the results then.
Still the red box around selected symbol should appear (so that you see both – selection and a red box), and it does for us here on Ubuntu with open JDK 1.7.0_65.
Could you please also try
1) install jdk 8
2) with jdk 8 check some options from like
And share with us if it makes better/worse look.
I’ve tried the latest version from the Oracle website [JRE: 1.8.0_40-b26 (Oracle Corporation) JVM: 25.40-b25 (Java HotSpot(TM) 64-Bit Server VM)] and the refactoring still looks the same like I posted earlier ( ), that is no red box or any other boxes for that matter.
Search, however, does have nice green boxes: <— it would be great to have red one for refactored symbol and green ones for usages.
I've also tried all other antialiasing settings, but the results are the same as for OpenJDK, and "on" still looks best (see comparison above).
One small question to get the full picture – does this ‘fields’ actually resolved correctly? I mean if you place a caret at the usage and go to declaration (with Ctrl+B), will it navigate correctly to the declaration?
And considering the antialiasing problem – I’ve placed your comments and description to the issue tracker –. Please follow and provide more details if we need some.
> One small question to get the full picture – does this ‘fields’ actually resolved correctly?
I’ve tried, yes, it is; with Ctrl+B I can correctly jump to the declaration in the function parameter list without any problem.
> And considering the antialiasing problem – I’ve placed your comments and description to the issue tracker –.
Thanks, I will subscribe from home!
Clion EAP: (no red box); PyCharm Professional 4.0.5: (red box). Neither have green boxes when renaming, only in search. Suggest to 1) bring back the red box to Clion 2) add green boxes to both
I’m afraid nothing was removed on purpose and we’ve failed to reproduce here in CLion. We’ll discuss the problem with the team and come back to you. Sorry for inconvenience.
Looks like we’ve found the problem and it will be fixed with the next build. Please, check when it’s available and provide feedback.
Anyways, the new EAP is really hot. In the last few days I’ve been checking the blog a couple of times per day to see if it’s finally there or not. Overall, it’s much more stable than the previous one, and also a lot more performant / responsive on my project. Keep up the good work!
Thanks! We’ll do our best.
Great work! One minor issue though. When displaying a large (many lines) console output, the now default behaviour is not to scroll automatically to the end, but keep only the initial output lines. To go to the end, I have to press the scroll-to-end button. Before this release, the default behaviour was to automatically scroll to the end. Was this change intentional? I preferred the automatic scroll, is there any way of achieving it?
Thanks!
Looks like it was broken occasionally in the latest EAP (). Should be fixed in the next build. Sorry for inconvenience.
Hello!

The IDE looks extremely promising. I have one small question (I couldn't find such a parameter in Code style). Is it possible to change the position of * and & when they denote a pointer or a reference? That is, I want method signatures of the form const T* foo() and const T& foo(). The same goes for declarations of pointers and references: * and & should sit right next to the type (i.e. int* p and int& r).

In the Spaces tab, in the Other section, look at Before/After '*'/'&' in declarations.
Hello!
CLion is a great IDE, but I have one little trouble with code style.
I want to change position of ‘*’ and ‘&’ operators close to type (not to names).
For example I want int* p instead int *p (int& r instead int &r). And also I need another methods signature: const T& method() instead T const& method.
Can I change something in code style for such a result?
Thank you!
As I’ve already answered check Spaces tab, Other section, Before/After ‘*’/’&’ in declarations.
Hi! Thanks for making the world better!
Do you plan to add more features to new class window?
It would be awesome if when creating new class to indicate namespace.
And also a good feature that I think is useful is to have the possibility to split header and sources in different folders. I saw that a lot CMake projects use to have a include and a src folder, something like this
include/LIBNAME/Code.hpp
src/LIBNAME/Code.cpp
it will be something like this added in future updates?
Thanks!
Have a good day!
Thanks. We haven’t thought about it, but definitely could consider. Please feel free to add you ideas (with use cases description) to our tracker: | https://blog.jetbrains.com/clion/2015/03/new-clion-eap-create-new-class-make-all-biicode/ | CC-MAIN-2016-40 | refinedweb | 2,213 | 73.37 |
I'm transitioning my classes from the "Desire to Learn" (D2L) CMS to the ANGEL CMS. Unfortunately, I didn't export my classes before I lost access to D2L. Fortunately, I know a great person who was able to export the D2L files and send them to me.
Unfortunately, ANGEL doesn't recognize D2L files. Bummer.
All of my "Reading Quiz" questions are stored in a question bank which is an xml file. If I knew how to parse XML, I could probably cut and paste questions in to ANGEL.
I don't know how to parse XML, but I do know how to use Google. I found a bunch of tutorials, but ultimately settled on the Dive into Python tutorial.
My question bank had 2345 entries in it. The entries were made up of three things: a) the actual questions, b) multiple choice answers and distractors, and c) empty entries between questions. All the entries were identified with the 'mattext' tag, although there didn't seem to be an easy way to separate the various types of entries.
Here is some code I wrote. I'm putting it here so I don't lose it.
#!/usr/bin/env python
from xml.dom import minidom

xmldoc = minidom.parse('./questiondb.xml')
mattextlist = xmldoc.getElementsByTagName('mattext')

for i in range(2345):
    #print i
    try:
        print mattextlist[i].firstChild.data
        print "\n"
    except AttributeError:
        print "\n"
An AttributeError was triggered by every one of the empty entries. Printing the blank lines every time an empty entry was reached made the output of the script easier to interpret visually.
This was a fun little bit of python. I'm glad I was able to salvage stuff from the D2L class files.
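If this ever needs to run again under a modern Python 3, the same extraction can be written without the hard-coded entry count; a minimal sketch (the extract_mattext helper name is mine, and the sample XML is invented):

```python
from xml.dom import minidom


def extract_mattext(xml_text):
    """Return the text of every non-empty <mattext> entry."""
    doc = minidom.parseString(xml_text)
    entries = []
    for node in doc.getElementsByTagName('mattext'):
        # Empty entries have no firstChild; skip them instead of
        # catching AttributeError like the original script did.
        if node.firstChild is not None:
            entries.append(node.firstChild.data)
    return entries


if __name__ == '__main__':
    sample = "<bank><mattext>Q1</mattext><mattext/><mattext>A1</mattext></bank>"
    print(extract_mattext(sample))  # -> ['Q1', 'A1']
```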
1 comment:
Sounds like you know _exactly_ how to parse XML.
Google is my first step in most programming projects. | https://blog.drewsday.com/2011/08/parsing-xml-with-python.html | CC-MAIN-2022-27 | refinedweb | 307 | 75.71 |
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: None
- Component/s: core/query/scoring
- Lucene Fields: New
First things first: I am not an IR guy. The goal of this issue is to make 'surgical' tweaks to lucene's formula to bring its performance up to that of more modern algorithms such as BM25.
In my opinion, the concept of having some 'flexible' scoring with good speed across the board is an interesting goal, but not practical in the short term.
Instead here I propose incorporating some work similar to lnu.ltc and friends, but slightly different. I noticed this seems to be in line with that paper published before about the trec million queries track...
Here is what I propose in pseudocode (overriding DefaultSimilarity):
@Override
public float tf(float freq) {
    return 1 + (float) Math.log(freq);
}

@Override
public float lengthNorm(String fieldName, int numTerms) {
    return (float) (1 / ((1 - slope) * pivot + slope * numTerms));
}
Where slope is a constant (I used 0.25 for all relevance evaluations: the goal is to have a better default), and pivot is the average field length. Obviously we shouldnt make the user provide this but instead have the system provide it.
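To make the slope/pivot interaction concrete, here is a small stand-alone sketch of the proposed lengthNorm (slope = 0.25 as in the evaluations above; pivot stands for the average field length; this is illustrative code, not the actual Similarity subclass):

```java
public class PivotNormSketch {
    static final float SLOPE = 0.25f;

    // pivot = average field length across the collection
    static float lengthNorm(int numTerms, float pivot) {
        return (float) (1 / ((1 - SLOPE) * pivot + SLOPE * numTerms));
    }

    public static void main(String[] args) {
        float pivot = 100f;
        // A field of exactly average length gets 1/pivot; shorter
        // fields are boosted and longer fields are damped, with the
        // strength of the effect controlled by SLOPE.
        System.out.println(lengthNorm(100, pivot));  // 0.01
        System.out.println(lengthNorm(10, pivot));   // ~0.0129
        System.out.println(lengthNorm(1000, pivot)); // ~0.0031
    }
}
```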
These two pieces do not improve lucene much independently, but together they are competitive with BM25 scoring with the test collections I have run so far.
The idea here is that this logarithmic tf normalization is independent of the tf / mean TF that you see in some of these algorithms, in fact I implemented lnu.ltc with cosine pivoted length normalization and log(tf)/log(mean TF) stuff and it did not fare as well as this method, and this is simpler, we do not need to calculate this mean TF at all.
The BM25-like "binary" pivot here works better on the test collections I have run, but of course only with the tf modification.
I am uploading a document with results from 3 test collections (Persian, Hindi, and Indonesian). I will test at least 3 more languages... yes including English... across more collections and upload those results also, but i need to process these corpora to run the tests with the benchmark package, so this will take some time (maybe weeks)
so, please rip it apart with scoring theory etc, but keep in mind 2 of these 3 test collections are in the openrelevance svn, so if you think you have a great idea, don't hesitate to test it and upload results, this is what it is for.
also keep in mind again I am not a scoring or IR guy, the only thing I can really bring to the table here is the willingness to do a lot of relevance testing!
Attachments
- is related to LUCENE-2186: First cut at column-stride fields (index values storage) (Closed)
Collections.sort() not sorting with compareTo Override
As the title states - here's my Packet class - I'm trying to sort into ascending order of packetNum:
public class Packet implements Comparable<Packet> {

    private short packetNum;
    private short authKey;
    private byte[] audio;

    public Packet() {
        packetNum = 0;
        authKey = 0;
        audio = null;
    }

    public Packet(short packetNum, short authKey, byte[] audio) {
        this.packetNum = packetNum;
        this.authKey = authKey;
        this.audio = audio;
    }

    @Override
    public int compareTo(Packet other) {
        int cmp = 0;
        if (this.packetNum < other.packetNum) {
            cmp = -1;
        }
        if (this.packetNum == other.packetNum) {
            cmp = 0;
        } else {
            cmp = 1;
        }
        return cmp;
    }
}
And here's my sorting code in another class' main (inside a while loop):
//Packet constructed
Packet received = new Packet(packetNumReceived, authKeyReceived, encryptedAudio);
//Get packet num
short packetNum = received.getPacketNum();

//Hold block for reordering (16 packets)
ArrayList<Packet> block = new ArrayList<Packet>();

while (running) {
    //Add packet to ArrayList
    block.add(received);
    System.out.println(packetNum);

    //Re-order packets
    if (block.size() == 16) {
        Collections.sort(block);
        for (int i = 0; i < block.size(); i++) {
            //print out the sorted packet numbers
            System.out.println(block.get(i).getPacketNum());
            player.playBlock(block.get(i).getAudio());
        }
        block.clear();
    }
}
The packet numbers printed are in the same (incorrect) order, before and after the sort. I've also check the array elements directly, and the order is not changed at all. These sections of code are the only time the Packet class is touched/referenced at all, not sure what I'm doing wrong. These are my only 2 classes, and there are no reused variable names across them.
@Override
public int compareTo(Packet other) {
    int cmp = 0;
    if (this.packetNum < other.packetNum) {
        cmp = -1;
    }
    if (this.packetNum == other.packetNum) {
        cmp = 0;
    } else {
        cmp = 1;
    }
    return cmp;
}
In this code, you are returning 1 if this.packetNum == other.packetNum gives you false, even if you wanted to return -1.
You forgot an else:
(...)
else if (this.packetNum == other.packetNum) {
    cmp = 0;
}
(...)
You're always returning 1 when the packetNum doesn't match because you're missing an else if.
@Override
public int compareTo(Packet other) {
    int cmp = 0;                              // default to zero
    if (this.packetNum < other.packetNum) {
        cmp = -1;                             // set to -1 in one case
    }
    // MISSING ELSE!
    if (this.packetNum == other.packetNum) {
        cmp = 0;                              // set to zero if equal
    } else {
        cmp = 1;                              // set to 1 if NOT EQUAL!
    }
    return cmp;
}
It's also true, as others have pointed out, that subtracting them or using Short.compare would make this code more terse and readable.
Java ArrayList of Object Sort Example (Comparable And Comparator), We generally use Collections.sort() method to sort a simple array list. Bound mismatch: The generic method sort(List) of type Collections is not applicable for the arguments We are overriding compare method of Comparator for sorting. If compareTo() returns a negative integer, the object is less than the specified object. If compareTo() returns a zero, the object is equal to the specified object. If compareTo() returns a positive integer, the object is greater than the specified object. Here’s how we sort a List of Employee object using Comparable interface in Java:
Don't code what's already coded for you, use Short.compare(this.packetNum, other.packetNum);
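For completeness, a minimal self-contained sketch of the fixed class using Short.compare (fields other than packetNum are dropped for brevity; the demo in main is mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Packet implements Comparable<Packet> {
    private final short packetNum;

    Packet(short packetNum) {
        this.packetNum = packetNum;
    }

    short getPacketNum() {
        return packetNum;
    }

    @Override
    public int compareTo(Packet other) {
        // One call replaces the whole if/else ladder and cannot
        // get the equal case wrong.
        return Short.compare(this.packetNum, other.packetNum);
    }
}

public class Main {
    public static void main(String[] args) {
        List<Packet> block = new ArrayList<>();
        for (short n : new short[] {5, 1, 3}) {
            block.add(new Packet(n));
        }
        Collections.sort(block);
        for (Packet p : block) {
            System.out.print(p.getPacketNum() + " "); // prints: 1 3 5
        }
        System.out.println();
    }
}
```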
Java Collections sort() - Sort List of Objects by Field, Learn to use Collections.sort() method to sort arraylist of custom objects in java All elements in the list must be mutually comparable and should not throw We may need to sort list of custom objects which have their own sorting logic. @ Override. public int compareTo(Employee o) {. return this .getId(). 1. Pass Comparator as argument to sort() method. Comparators, if passed to a sort method (such as Collections.sort (and Arrays.sort), allow precise control over the sort order. In the following example, we obtain a Comparator that compares Person objects by their age.
Comparable and Comparator in Java Example, Comparable and Comparator in Java are very useful for sorting the collection of objects. is overridden to print the user-friendly information about the Employee public String toString() Employee cannot be cast to java.lang. @Override public int compareTo(Employee emp) { //let's sort the employee based on an id in � We encountered an Exception. Look at the exception -- it says: "System.Invalid.Operation". That means Sort() method is only used to sort normal array list with "integers" or "string" types. But for sorting the complex types, it will not know which type to sort (Ex: EmpAge or EmpSalary). So we get an error, "invalid operation".
Java Collections sort(), Java Collections sort method, Java Collections.sort(), Java Collections sort example Hence, we can see that Collections.sort() has sorted the list of String in Lexical order. And it does not return anything. this.name=name; this.taste= taste; } @Override public int compareTo(Object o) { Fruit f = (Fruit) o; return this.id - f.id ; } }. Programmers frequently need to sort elements from a database into a collection, array, or map. In Java, we can implement whatever sorting algorithm we want with any type.
How to sort a List of objects using Comparator in Java, This class's implementor needs to override the abstract method compare() defined in Comparators, if passed to a sort method (such as Collections.sort ( and If two people have same age, their relative ordering in the sorted list is not fixed. 1. Sort ArrayList of Objects – Collections.sort( List ) It sorts the specified List items into their natural order. All elements in the list must implement the Comparable interface. All elements in the list must be mutually comparable and should not throw ClassCastException. This sort is guaranteed to be stable.
- other.packetNum shouldn't work, since packetNum is a private field. I would imagine this to not compile at all.
- @Gendarme This isn't Smalltalk. The Packet class can look at private fields, even of other instances of itself.
- As an aside: if you compare two object of same type
Typeby an
int-field (say
x), it is common to implement the
compareTo(Type that)as
return this.x - that.x;
- @Turing85 that is exactly what Short.compare does
- @Turing85 common, but wrong. It fails with extreme values. Use Integer.compare() or comparingInt().
- OK, confused by
gives you false[:-)
- This doesn't really answer the question.
- This answer the question by pointing OP to not reinvent the wheel and use what is already done. As you can see also other answers are now pointing to this
- You can add comments to your answers (as David Conrad did), but you can't post comments as answers.
- It can't be a valid answer if it doesn't answer the stated question. Can it? It's a fantastically helpful comment, but simply not an answer.
- @rustyx understandable. However I think OP knows very well how to apply my one-liner inside his compareTo method. Anyway, I'll delete the answer shortly, I just wanted to express my opinion. | https://thetopsites.net/article/54992370.shtml | CC-MAIN-2021-25 | refinedweb | 1,303 | 58.38 |
23 September 2008 13:05 [Source: ICIS news]
MUMBAI (ICIS news)--Industrial new orders in the 27-member EU grew 2.8% in July year on year, and orders for chemicals and man-made products rose 5.8%, the official statistics agency Eurostat said on Tuesday.
In the 15-member eurozone, industrial orders grew 1.6% for the period compared with July 2007, and orders for chemicals and chemical products increased 7%.
Month on month, industrial orders rose 2.3% in the EU and 1% in the eurozone. Orders for chemicals and chemical products were up 0.4% in the EU and 1.1% in the eurozone month on month.
[Chart: Industrial new orders]
4, 2006, at 8:20 PM, Michael Tsai wrote:
> On Oct 28, 2006, at 1:46 PM, Michael Tsai wrote:
>
>> Are there any known problems using the subprocess module with PyObjC?
>> My app has some child NSThreads that use subprocess to run helper
>> tools. Sometimes (on several different Macs), the helper tool never
>> completes. If I open Activity Monitor, I see that there's a new
>> process with the name and memory footprint of my app
>
> I determined that the new process is blocked during an import:
>
> #0 0x90024427 in semaphore_wait_signal_trap ()
> #1 0x90028414 in pthread_cond_wait ()
> #2 0x004c77bf in PyThread_acquire_lock (lock=0x3189a0, waitflag=1)
> at Python/thread_pthread.h:452
> #3 0x004ae2a6 in lock_import () at Python/import.c:266
> #4 0x004b24be in PyImport_ImportModuleLevel (name=0xaad74 "errno",
> globals=0xbaed0, locals=0x502aa0, fromlist=0xc1378, level=-1) at
> Python/import.c:2054
> #5 0x0048d2e2 in builtin___import__ (self=0x0, args=0x53724c90,
> kwds=0x0) at Python/bltinmodule.c:47
> #6 0x0040decb in PyObject_Call (func=0xa94b8, arg=0x53724c90,
> kw=0x0) at Objects/abstract.c:1860
>
> and that the code in question is in os.py:
>
> def _execvpe(file, args, env=None):
> from errno import ENOENT, ENOTDIR
>
> If I change os.py so that it imports the constants outside of
> _execvpe, the new process no longer blocks in this way. This message
> from Chris Kane:
>
> <>
>
> implies that one should not use kernel resources (which I assume
> includes locks) on Mac OS X between calling fork() and exec(). So I'm
> wondering if this is the reason that my fix seems to work and, if so,
> if this is a bug in Python.
I'm pretty sure I've seen this description ("subprocess hangs in the
import of errno") before, you might want to check the python-dev
archives and/or SF bugtracker for Python. Now that I write this, I
vaguely recollect that this is a bug in Python: some thread other
than the one calling fork is holding the import lock (a C-level lock),
which means the import lock is held after the fork as well, but the
single thread in the new process isn't the one holding it -> instant
deadlock.
Ronald
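The rule implied by Chris Kane's advice and by the os.py fix — perform no imports (and take no other interpreter locks) between fork() and exec() — can be sketched like this. The function name is illustrative and the example assumes a POSIX system:

```python
import errno  # hoisted to module scope, so the child never needs the import lock
import os

def run_child():
    """Fork, then do nothing in the child that could take a lock.

    If another thread held the interpreter's import lock when fork()
    ran, only the forking thread exists in the child, so any import
    attempted there would deadlock -- the bug discussed above.
    """
    pid = os.fork()
    if pid == 0:
        os._exit(0)  # child: exit (or exec) without importing anything
    _, status = os.waitpid(pid, 0)
    return status

print(run_child())  # 0 on a clean child exit
```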
I'd like to map a port number to a user (linux user that is running a process that is binding to the port).
How can I do it in java?
I know I can go out to the shell and run bash commands that map a port to a PID, and then PID to user, but I'd like to keep it inside java if I can.
The more general question is: I have a webapp application that receives requests from localhost, and I'd like to know which local user performed the HttpServletRequest, so I can attach proper authorities to it.
I'm using spring security for all remote connections. However, I have a small part of the application (separated from the webapp) that is running locally alongside the application server, and that application is authenticated using the linux user mechanism. So for that reason, I bypass the server authentication rules for localhost (assuming all localhost access is permitted). The problem is with authorization - I need the identify the user running the localhost requests. Any idea how can I achieve this?
This is Linux-dependent code, but not difficult to port to Windows.
This is not Servlet code, but it would work in that case as well:
Let's say I've a ServerSocket waiting on an accept() call. When it receives a client request, it creates a Socket on another port to deal with that 'remote' request.
ServerSocket ss = new ServerSocket(2000);
System.out.println("Listening on local port : " + ss.getLocalPort());
while (...) {
    Socket s = ss.accept();
    System.out.println("accepted client request, opened local port : " + s.getPort());
    ...
}
So, you need to feed the output of s.getPort() from above snippet to the following program's main() method.
public class FindUserByPort {
    public static void main(String[] args) throws Exception {
        int port = Integer.valueOf(args[0]);
        // Runtime.exec() does not understand shell pipes, so run the
        // pipeline through a shell.
        String cmd = "netstat -anp | grep " + port;
        Process pr = Runtime.getRuntime().exec(new String[] { "/bin/sh", "-c", cmd });
        InputStream is = pr.getInputStream();
        BufferedReader br = new BufferedReader(new InputStreamReader(is));
        String line = null;
        List<Integer> pIDs = new ArrayList<Integer>();
        while ((line = br.readLine()) != null) {
            if (line.contains("127.0.0.1:" + port)) {
                String pidPname = line.substring(line.indexOf("ESTABLISHED") + "ESTABLISHED".length());
                pidPname = pidPname.trim();
                String pid = pidPname.split("/")[0];
                pIDs.add(Integer.valueOf(pid));
            }
        }
        for (int pid : pIDs) {
            String command = "top -n1 -b -p " + pid;
            Process p = Runtime.getRuntime().exec(command);
            InputStream _is = p.getInputStream();
            BufferedReader _br = new BufferedReader(new InputStreamReader(_is));
            String _line = null;
            while ((_line = _br.readLine()) != null) {
                _line = _line.trim();
                if (_line.startsWith(String.valueOf(pid))) {
                    // Split on runs of whitespace; top pads its columns with
                    // several spaces, so split(" ") would yield empty tokens.
                    String[] values = _line.split("\\s+");
                    System.out.println("pid : " + pid + ", user : " + values[1]);
                }
            }
            _is.close();
            _br.close();
        }
        is.close();
        br.close();
    }
}
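As an aside, once you have the PID, an alternative to parsing top output is to ask the filesystem who owns /proc/&lt;pid&gt;. A sketch — it assumes a Linux /proc filesystem and Java 9+ for ProcessHandle:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ProcOwner {
    // Given a pid (e.g. parsed from netstat as above), return the owning
    // user by reading the owner of the /proc/<pid> directory.
    static String userOf(long pid) throws Exception {
        Path proc = Paths.get("/proc", Long.toString(pid));
        return Files.getOwner(proc).getName();
    }

    public static void main(String[] args) throws Exception {
        long self = ProcessHandle.current().pid(); // this JVM's own pid
        System.out.println(userOf(self));          // prints the current user
    }
}
```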
The most common functionality required in any MCMS project is access to the content contained in a placeholder. MCMS has different kinds of placeholder definitions, such as HtmlPlaceholder, ImagePlaceholder, XmlPlaceholder and AttachmentPlaceholder. It is desirable to get the content according to the type of placeholder; for example, in the case of an ImagePlaceholder you want the URL of the image or some other property rather than just a blob of string or HTML. You can get that blob via pl.Datasource.RawContent for any type of placeholder.
Here is how to access the content of different types of placeholder.
First make sure to have the following using statement at the top of your utility class
using Microsoft.ContentManagement.Publishing.Extensions.Placeholders;
then
private string ProcessPlaceholderContent(Placeholder pl)
{
    string content = string.Empty;

    if (pl is HtmlPlaceholder)
    {
        content = ((HtmlPlaceholder)pl).Html;
    }
    else if (pl is ImagePlaceholder)
    {
        content = ((ImagePlaceholder)pl).Src;
        if (content == string.Empty)
            content = "~/images/Common/FFFFFF.gif";
    }
    else if (pl is AttachmentPlaceholder)
    {
        content = ((AttachmentPlaceholder)pl).Url;
    }
    else if (pl is XmlPlaceholder)
    {
        content = ((XmlPlaceholder)pl).XmlAsString;
    }
    else
    {
        content = pl.Datasource.RawContent;
    }
    return content;
}
Also note that in the case of an empty image placeholder, I don't want to display a broken image, so I am replacing it with FFFFFF.gif, a 1-pixel transparent image.
The Placeholder pl here is identified by the name defined in the MCMS Template Explorer — not by the ID of the placeholder control that is dragged into the HTML view, but by the name used in that control's BindToPlaceholder.
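A hypothetical call site — iterating the placeholders of the current posting via the MCMS Publishing API and rendering whatever each one yields (litOutput is an assumed Literal control):

```csharp
foreach (Placeholder pl in CmsHttpContext.Current.Posting.Placeholders)
{
    string content = ProcessPlaceholderContent(pl);
    litOutput.Text += content; // render the extracted content into the page
}
```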
posted on Sunday, May 15, 2005 5:04 PM — Filed Under: Microsoft Content Management Server
ASP.NET 2.0 brought with it a new compilation model for Web Applications (together with a new Project Model for them, the Web Site Project (WSP) model).
The WSP project model for Web Applications is extremely productive for program development due to its "Edit & Continue" features. However, the older ASP.NET 1.x WAP project model had deployment advantages, including better modularity and defined dependencies in your code, and the delivery of only the created assembly to the server, thereby protecting your intellectual property rights.
This article discusses the conversion of WAP projects to WSP and vice versa so that you can get the best of both worlds during development and deployment. The article would also discuss this approach specifically for DotNetNuke modules as that is where I invented (or discovered) this easy conversion (assisted by Visual Studio), and would discuss specific tricks as regards to DNN modules.
Compilation and debugging in ASP.NET 1.x was always a tricky task. You had to create a Web Application (WAP) project inside your website directory, ensuring that your bin directory is updated with the new DLL file as soon as you did a re-build. This WAP project contained both your business logic as well as the UI layer (the .aspx, .ascx files etc.). The major problem with this approach was debugging. This blog post by Mitchel Sellers describes in detail the steps involved in debugging a WAP project. As you would notice, the steps including Manual Attach are a pain in the neck to be performed over-and-over again. Not to mention that all the WAP project files were locked for editing by Visual Studio during debugging. That meant that you needed to detach, make your changes, compile the whole solution again, and reattach for every small change you made to your code. Also notice, as Mitchel mentions in the above blog, this process is pretty slow (taking up to 45 seconds on DotNetNuke sites, which is a long time from a developer's perspective).
ASP.NET 2.0 introduced a new Dynamic Compilation model. Here, you place your business logic that needs to be available across the Web Application in the App_Code directory, with all the rest of the UI and other things going into any directory as named by you (you, however, cannot choose 7-8 directory names reserved as special directories with specific purposes by ASP.NET 2.0, including App_Code, App_Data, App_Themes etc.)
The major benefit of this approach from a development perspective is dynamic compilation of all code outside any ASP.NET reserved directories. What this means is that you can change the code in a running Web Application, and when you request the page containing the code again, that page alone is recompiled without any other recompilations. This offers an extremely RAD opportunity. However, remember that changes made to any file in any of ASP.NET's reserved directories, global.asax, and the web.config lead to the entire App Domain being recycled (effectively meaning that the entire Web Application is recompiled and rebuilt, in simpler terms).
(One more point worth noting is that this "Edit & Continue" like feature for web pages is a bit different than its Desktop counterpart. In Desktop apps, you can modify the code while the control is in the debugger. However, in Web apps, you need to make changes to your code and request the page from the browser again. You cannot make changes while in the debugger, and if you do, those would be visible only on the next request to that page. Also, don't forget to save your page after making the changes. Saving is not required in desktop apps, but required in Web apps, i.e., in WSP projects.)
ASP.NET 1.x WAP projects, however, definitely made deployment simpler than ASP.NET 2.0 WSP projects. In particular, you provided compiled assemblies for deployment (together with just the .aspx or .ascx files, without any code-behind). The code-behind was compiled into the project assembly.
I would be biased if I don't mention that you have similar functionality for WSP projects. Here also, you can pre-compile your project and deliver only the compiled code for deployment. But this compilation produces tens, hundreds, or even thousands of assemblies (one each for each project folder) depending upon your project size and structure. The naming and number of these assemblies was again something that made it difficult to manage. Michael Washington explains in this blog post about WSP compilation (remember, his explanation is targeted specifically for DNN modules, but you can easily pick the higher level concepts from it).
Also, the WSP model makes it hard to manage inter-dependencies in your site modules. If you are designing a modular and extensible approach for your site, where you have clearly defined dependencies of one module on another, you might face situations where Module1 code uses Module2 code, and vice versa. There is nothing preventing these circular dependencies in WSP projects as technically they are part of the same WSP project. The modules are only there in your design. In implementation, they are part of the same project.
Refactoring the above modules as two WAP projects would lead you to prevent such circular dependencies as only one assembly can refer to another without creating a circular reference (which obviously is disallowed).
That should be enough of a background. Let's start converting code from a WSP project to a WAP one. Much of this process is automated using Visual Studio 2005 or greater.
Well, Microsoft relied so heavily on the pros of the WSP project model while releasing ASP.NET 2.0 and VS2005 that no support was provided for WAP projects in the initial release of VS2005.
So, if you are using VS2005 without SP1 (support for WAP projects has been integrated from VS2005 SP1 onwards), you need to download the WAP add-in for VS2005 from Microsoft, from here. But before doing so, you also need to install an update to VS2005 for supporting WAP projects. The update is available here. Download and install the update first, before installing the WAP add-in. Please note that you don't need to perform either of these two installs if you are using VS2005 SP1 or VS2008, as they come with WAP project support integrated. More information on WAP projects is available here.
Now, create a new "Web Application Project" (this should be an installed template when you select "New Project" after you have your VS ready for WAP projects).
Create a new folder named "App_Code" in your project. VS will ask you to rename it to Old_App_Code. Agree to it. Now, copy all files of this project from your WSP project's App_Code folder to this folder in your WAP project.
Next, copy all other files and folders that belong to this project from your WSP project folder to this WAP project folder. Now, select "Show All Files" from Solution Explorer in your WAP project, and right-click and include all files you copied to your WAP project into your project. (You have an "Include In Project" option for each item you right-click, if that is not included in your project but is inside the project folder. You don't need to do this for each individual file. You can select folders and include them and their files all at once.)
Finally, right-click the project node in "Solution Explorer", the name of your project. There should be an option (probably fourth one, but depends upon your settings) that says "Convert to Web Application". Select it. That's it. VS would go over all your files and make the necessary changes (including generating new .designer files) to convert the code for WAP.
There might be errors in this process (mostly due to missing Assembly References in your WAP project). If so, add references to the desired assemblies, right-click the project node, and select "Convert to Web Application" again. Other common reasons for errors include missing namespace imports. Also note that you can reference one WAP project from another. WAP projects behave like class libraries.
The deployment of WAP projects includes publishing the project. Right-click the project node again and select "Publish Project". On the window that appears, select the Publish Path and some other mostly self-explanatory options. Click Publish, and VS would copy all the necessary files required for deployment of that project to the folder you specified.
I have noticed some differences between VS2005 and VS2008 in the Publish process. Specifically, VS2005 also copied .sql files to the Publish directory, whereas VS2008 does not. So, you might need to copy some files manually to the Publish folder. You can also create a build script to automate this task. I will present some part of the build script below.
You can zip up and provide the Publish folder you specified for deployment. After a couple of conversions of WSP projects to WAP, it now takes me less than 2-3 minutes to perform this conversion (discounting the occasions when I have to perform some refactoring due to circular references).
All software products undergo revision. So, you have published your WAP project, it is successful, and the client is happy. Now, he/she needs enhancements. And, you want to go back to the WSP model for development for reasons that should be obvious now.
Unfortunately, VS would not help you go back to WSP in any way. Not to worry. I came up with my own solution. I analyzed the project files after conversion to WAP and compared them with the original WSP files (using KDiff - I cannot explain how useful I have found this free utility in many situations, while comparing different sets of files for differences).
And, I noticed that VS performs a comparatively simple process for WAP conversion. (It basically generates a designer file for each of your markup files and converts the CodeFile attribute in each markup file to Codebehind — that's it. It leaves all other files untouched.)
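Concretely, for a hypothetical MyControl.ascx, the directive changes like this (WSP form first, WAP form second):

```aspx
<%@ Control Language="vb" AutoEventWireup="false"
    CodeFile="MyControl.ascx.vb" Inherits="MyModule.MyControl" %>

<%@ Control Language="vb" AutoEventWireup="false"
    Codebehind="MyControl.ascx.vb" Inherits="MyModule.MyControl" %>
```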
And, I simulated the reverse process to go back to WSP. I have created myself helper projects for this task, that do such things for me on a click. However, those projects are extremely dependent upon my development environment and the directory structures which I use to organize my projects. I am reproducing the core code of a couple of these helper projects here so you can use them for yourself as you desire.
The code below accepts a DirectoryInfo object for a directory path, and converts all the files in that directory suitable for use in a WSP project. It would preserve the sub-directories of the DirectoryInfo object path.
DirectoryInfo
Private Shared sourceDir As String = "C:\Source"
Private Shared destinationDir As String = "C:\Destination"

Public Shared Sub Main()
    processDirectory(New DirectoryInfo(sourceDir))
End Sub

Private Shared Sub processDirectory(ByVal dirInfo As DirectoryInfo)
    'Create all directories in the destination.
    createSubDirectories(dirInfo)
    For Each file As FileInfo In dirInfo.GetFiles("*", SearchOption.AllDirectories)
        processFile(file)
    Next
End Sub

Private Shared Sub createSubDirectories(ByVal dirInfo As DirectoryInfo)
    For Each dir As DirectoryInfo In dirInfo.GetDirectories
        createSubDirectories(dir)
    Next
    Dim destination As String = dirInfo.FullName.Replace(sourceDir, destinationDir)
    If (Not Directory.Exists(destination)) Then
        Directory.CreateDirectory(destination)
    End If
End Sub

Private Shared Sub processFile(ByVal file As FileInfo)
    Dim destination As String = file.FullName.Replace(sourceDir, destinationDir)
    If (file.Name.EndsWith("designer.vb", StringComparison.CurrentCultureIgnoreCase) _
        Or file.Name.EndsWith("designer.cs", StringComparison.CurrentCultureIgnoreCase)) Then
        'Ignore & do not copy to destination directory.
        Exit Sub
    ElseIf (file.Name.EndsWith("ascx", StringComparison.CurrentCultureIgnoreCase)) Then
        'Replace Codebehind with CodeFile
        Dim text As String
        text = IO.File.ReadAllText(file.FullName)
        text = text.Replace("Codebehind", "CodeFile")
        'Write the file to destination in proper directory.
        IO.File.WriteAllText(destination, text)
    Else
        'Copy the file as it is.
        file.CopyTo(destination, True)
    End If
End Sub
As I said above, I have tailored this process for DNN module development because that is where I use it most of the time.
Specifically, I am presenting below some code and parts of a successful build script that would allow you to package your Module for installation into DNN on successful build.
Private Sub parseCompleteDefinitions()
    Dim files As String() = IO.Directory.GetFiles(basePath & "Scripts\(1) Original", "*.sql")
    For Each fileName As String In files
        If (Not fileName.StartsWith("Old", StringComparison.CurrentCultureIgnoreCase)) Then
            'DO NOT generate a SqlDataProvider for Sql files starting with Old
            Dim file As New IO.FileInfo(fileName)
            Dim objDef As String = IO.File.ReadAllText(fileName)
            createScript(file.Name.Remove(file.Name.LastIndexOf("."c)), objDef, basePath & "Scripts\", False)
        End If
    Next
End Sub
Private Sub createScript(ByVal objName As String, ByVal objDef As String, _
                         ByVal basePath As String, ByVal writeOriginal As Boolean)
    Dim modified As String
    modified = objDef.Replace("[dbo].", "{databaseOwner}{objectQualifier}")

    'Adjust for GO
    Dim ownerVar As String = "DECLARE @owner nvarchar(MAX);" & vbCrLf & _
        "set @owner = SUBSTRING(N'{databaseOwner}', 1, LEN(N'{databaseOwner}')-1);" & vbCrLf & _
        "EXEC sys.sp_addextendedproperty"
    modified = modified.Replace("EXEC sys.sp_addextendedproperty", ownerVar)
    modified = modified.Replace("EXECUTE sp_addextendedproperty", ownerVar)
    modified = modified.Replace("N'dbo'", "@owner")

    'IMPORTANT: Perform this step only AFTER the above 2 steps.
    modified = modified.Replace("dbo.", "{databaseOwner}")
    If (modified.Contains("dbo")) Then
        Throw New Exception("Still contains dbo:- " & objName)
    End If

    'Remember, DNN needs DataProvider files in UTF-8 encoding.
    Dim stream As New IO.StreamWriter(basePath & "(2) DNN\" & objName & ".SqlDataProvider", _
                                      False, Text.Encoding.UTF8)
    Using stream
        stream.Write(modified)
    End Using

    If (writeOriginal) Then
        stream = New IO.StreamWriter(basePath & "(1) Original\" & objName & ".sql")
        Using stream
            stream.Write(objDef)
        End Using
    End If
End Sub
You would notice from the code it depends upon my project directory structure. You can easily tweak it for yours.
First, set this in your project's Post-build:
set solDir=$(SolutionDir)
set proDir=$(ProjectDir)
set targetAssemblyName=$(TargetName)
if $(ConfigurationName)==Release "$(ProjectDir)build.bat"
Now, create a build.bat in your project's root directory. Here are some parts I have put in it:
set winRar="D:\WinRAR\winrar"
cd "%solDir%"
cd..
echo "Preparing Sql Script"
../../SqlScriptGenerator "%proDir%"
set source=%proDir%DesktopModules\
set publishDir=%solDir%Publish\%targetAssemblyName%\DesktopModules\
echo "Copying DNN Manifest to Publish Directory"
md "%publishDir%"
cd "%source%"
copy "%baseDir%%targetAssemblyName%.dnn" "%publishDir%%baseDir%"
echo "Copying Sql & SqlDataProvider files to Publish Directory"
md "%publishDir%%scriptDir%"
copy "%scriptDir%*.Sql" "%publishDir%%scriptDir%"
echo "Copying Common Dependencies"
set source=%proDir%bin
set target=%publishDir%%baseDir%Dependencies
md "%target%"
cd "%source%"
copy "%source%\AjaxControlToolkit.dll" "%target%\"
copy "%source%\%targetAssemblyName%.dll" "%target%\"
echo "Creating zip package"
cd "%publishDir%%baseDir%"
del %targetAssemblyName%.zip
%winRar% a -r -ibck -afzip %targetAssemblyName%
You most probably are referencing DNN's Label, SectionHead controls etc., from ~/controls. Remember to also copy the controls folder from your WSP solution to your WAP project, if your WAP project is outside the WSP solution. Else, it would generate errors when you select "Convert to Web Application" in VS.
There is no doubt to me about WSP's RAD and debugging advantages and WAP's deployment ones. I have found the WSP approach better for development and the WAP approach better for deployment. With WSP, I can modify the code-behinds without having the entire installation recompile (of course, changes to App_Code classes need recompilation).
I always develop my modules as WSP. Then, when they are complete, I create a new WAP project, and add the existing files using the 'Add Existing Item' option from the VS context-menu appearing on right-clicking a project node in Solution Explorer. This further avoids copying effort. Conversion to WAP is a breeze after that, with VS supporting it.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/Articles/36690/WSP-to-WAP-and-vice-versa-in-under-5-minutes | crawl-003 | refinedweb | 2,637 | 57.16 |
Given n boxes containing some chocolates arranged in a row, and k students. The problem is to distribute the maximum number of chocolates equally among the k students by selecting a consecutive sequence of boxes from the given lot. Consider that the boxes are arranged in a row, numbered from 1 to n from left to right. We have to select a group of boxes in consecutive order that provides the maximum number of chocolates distributable equally to all k students. An array arr[] is given representing the row arrangement of the boxes, where arr[i] represents the number of chocolates in the box at position i.
Examples:
Input : arr[] = {2, 7, 6, 1, 4, 5}, k = 3 Output : 6 The subarray is {7, 6, 1, 4} with sum 18. Equal distribution of 18 chocolates among 3 students is 6. Note that the selected boxes are in consecutive order with indexes {1, 2, 3, 4}.
Source: Asked in an Amazon interview.
The problem is to find maximum sum sub-array divisible by k and then return (sum / k).
Method 1 (Naive Approach): Consider the sum of every sub-array and select the maximum sum that is divisible by k. Let it be maxSum. Return (maxSum / k). Time complexity is O(n^2).
Method 2 (Efficient Approach): Create an array sum[] where sum[i] stores sum(arr[0]+..arr[i]). Create a hash table having tuple as (ele, idx), where ele represents an element of (sum[i] % k) and idx represents the element’s index of first occurrence when array sum[] is being traversed from left to right. Now traverse sum[] from i = 0 to n and follow the steps given below.
- Calculate current remainder as curr_rem = sum[i] % k.
- If curr_rem == 0, then check if maxSum < sum[i], update maxSum = sum[i].
- Else if curr_rem is not present in the hash table, then create tuple (curr_rem, i) in the hash table.
- Else, get the value associated with curr_rem in the hash table. Let this be idx. Now, if maxSum < (sum[i] – sum[idx]) then update maxSum = sum[i] – sum[idx].
Finally, return (maxSum / k).
Explanation:
If (sum[i] % k) == (sum[j] % k), where sum[i] = sum(arr[0]+..+arr[i]) and sum[j] = sum(arr[0]+..+arr[j]) and ‘i’ is less than ‘j’, then sum(arr[i+1]+..+arr[j]) must be divisible by ‘k’.
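The steps of Method 2 can be sketched in Python (the function name is illustrative):

```python
def max_equal_chocolates(arr, k):
    # first_sum[r] = prefix sum at the earliest index whose (sum % k) == r
    first_sum = {}
    curr_sum = 0
    max_sum = 0
    for chocolates in arr:
        curr_sum += chocolates
        rem = curr_sum % k
        if rem == 0:
            max_sum = max(max_sum, curr_sum)   # the whole prefix is divisible by k
        elif rem not in first_sum:
            first_sum[rem] = curr_sum          # remember the first occurrence
        else:
            # Same remainder seen before: the sub-array between the two
            # positions has a sum divisible by k.
            max_sum = max(max_sum, curr_sum - first_sum[rem])
    return max_sum // k

print(max_equal_chocolates([2, 7, 6, 1, 4, 5], 3))  # 6
```

For the example array, the sub-array {7, 6, 1, 4} sums to 18, giving 6 chocolates per student.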
Output :
Maximum number of chocolates: 6
Time Complexity: O(n).
Auxiliary Space: O(n). | https://tutorialspoint.dev/data-structure/hashing-data-structure/maximum-number-chocolates-distributed-equally-among-k-students | CC-MAIN-2021-17 | refinedweb | 409 | 68.16 |
There’s a lot of new material in the latest Python version, and much of it is pretty simple. For example, str, bytes and bytearray now have an isascii() method that returns True if the value contains only ASCII characters.
In this article, we list seven essential features of the latest version that Python aspirants should get up to date with.
Postponed Evaluation Of Annotations
The appearance of type hints in Python revealed two glaring usability concerns with the functionality of annotations, which the latest Python 3.7.x releases address.
- annotations could only use names which were already available in the current scope; in other words, they didn't support forward references of any kind, and
- annotating source code had an adverse impact on the startup time of Python programs.
Both of these problems are fixed by postponing the evaluation of annotations. Instead of compiling code which evaluates expressions in annotations at definition time, the compiler stores the annotation in a string form equivalent to the AST of the expression in question. If needed, annotations can be resolved at runtime using typing.get_type_hints(). In the common case where this is not needed, the annotations are cheap to store and make startup time quicker.
Usability-wise, annotations now support forward references, making the following syntax valid:
Code:

from __future__ import annotations

class D:
    @classmethod
    def from_string(cls, source: str) -> D:
        ...

    def validate_b(self, obj: C) -> bool:
        ...

class C:
    ...
Since this change breaks backward compatibility, the new behaviour must be enabled on a per-module basis in Python 3.7 using a __future__ import: from __future__ import annotations.
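A small sketch of both halves of the change — annotations stored as strings, with typing.get_type_hints() resolving the forward reference on demand (names are illustrative; requires Python 3.7+):

```python
from __future__ import annotations  # per-module opt-in to postponed evaluation

import typing

def greet(name: str) -> Greeting:   # Greeting is defined *after* this function
    return Greeting("hi " + name)

class Greeting:
    def __init__(self, text: str) -> None:
        self.text = text

# Annotations are stored as plain strings...
print(greet.__annotations__)         # {'name': 'str', 'return': 'Greeting'}
# ...and resolved only when asked for:
print(typing.get_type_hints(greet))  # values are the real str and Greeting classes
```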
Forced UTF-8 Mode
On platforms other than Windows, there has always been a difficulty with whether the locale supports UTF-8 or only ASCII (7-bit) characters. The standard C locale is typically ASCII. This influences how text files are read (for instance). UTF-8 comprises all the 7-bit ASCII characters as well as multi-byte characters 2–4 bytes long. This change coerces a legacy locale to one of the supported UTF-8 locales. Users can alter what happens through a new environment variable, PYTHONCOERCECLOCALE; to get ASCII behaviour, both this setting and PYTHONUTF8 must be disabled. Setting PYTHONUTF8 to 1 forces the Python interpreter to use UTF-8; if it isn't defined, the interpreter defaults to the locale setting, and only if PYTHONCOERCECLOCALE is disabled does it use ASCII mode. In short, UTF-8 is effectively the default on Unix systems unless the user explicitly disables these two settings.
Built-in breakpoint()
With previous versions of Python, users could get a breakpoint by using the pdb debugger and this code, which breaks after function1() runs and before function2():
function1()
import pdb; pdb.set_trace()
function2()
The creator of PEP 553 found this a bit fiddly, and two statements on a line bothered some Python developers. So now there’s a unique breakpoint() function:
function1()
breakpoint()
function2()
This works in combination with the Python environment variable PYTHONBREAKPOINT. When set to 0, the breakpoint does nothing. Set it to a module and function value and it will import that module and call that function. Users can modify this programmatically at runtime; that's very handy for conditional breakpoints.
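Beyond pdb, breakpoint() simply calls sys.breakpointhook(), so you can also install your own hook programmatically — a sketch:

```python
import sys

hits = []

def record_hook(*args, **kwargs):
    # Stands in for pdb.set_trace(); breakpoint() routes here.
    hits.append("breakpoint reached")

sys.breakpointhook = record_hook
breakpoint()
print(hits)  # ['breakpoint reached']
```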
Data Classes

If you are storing data in classes, this new feature simplifies things by generating the boilerplate code for you. A data class is a normal Python class with the addition of a @dataclass decorator. It makes heavy use of type hints, a feature since Python 3.5 whereby users annotate variables to give a reference as to their type; you have to use them for the fields of a data class. In the example below, we can see the : str after the variable name — that's a type annotation. If you want your data class to be immutable, just add (frozen=True) to the decorator. Here's a simple example declaring an immutable data class. There's not a lot of code in the class declaration; all the usual methods are generated for you. The last two lines use the class and print out the instances:
Code:
from dataclasses import dataclass

@dataclass(frozen=True)
class ReadOnlyStrings(object):
    field_name : str
    field_address1 : str

test = {ReadOnlyStrings(1, 'Damodar'), ReadOnlyStrings(2, 'Bhagat')}
print(test)
Output:
{ReadOnlyStrings(field_name=2, field_address1='Bhagat'),
ReadOnlyStrings(field_name=1, field_address1='Damodar')}
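And because a frozen data class rejects rebinding of its fields, assignment raises dataclasses.FrozenInstanceError — a quick sketch with a fresh class:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
try:
    p.x = 99                      # frozen=True turns assignment into an error
except FrozenInstanceError:
    print("Point is immutable")   # this branch runs
```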
Core Support for typing module and Generic Types

The typing module was originally designed so that it would not require any changes to the core CPython interpreter. Now that type hints and the typing module are widely used by the community, this restriction has been removed. The release introduces two special methods, __class_getitem__() and __mro_entries__(), which are now used by most classes and special constructs in typing. As a consequence, the speed of various operations with types improved by up to 7 times, generic types can be used without metaclass conflicts, and several long-standing bugs in the typing module are fixed.
Development Runtime Mode
The -X option of CPython (the standard implementation of Python), which lets users set implementation-specific options, has been extended to accept the word 'dev.' This activates new runtime checks, such as debug hooks on the memory allocators, lets the faulthandler module dump the Python traceback on crashes, and enables asyncio debug mode, a mode which adds extra logging and warnings but reduces asyncio performance. A program can detect whether it's running in dev mode by checking sys.flags.dev_mode.
Code:
python -X dev
Python 3.7.0 (default, Jan 21 2019, 16:40:07)
[GCC 7.3.0] on (any operating system)
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.flags.dev_mode)
True
Hash-Based .pyc Files
Python has traditionally determined whether bytecode cache files (i.e., .pyc files) are up to date by comparing the source file's metadata with metadata saved in the cache file header when it was generated. While effective, this invalidation method has its drawbacks. When filesystem timestamps are too coarse, Python can miss source updates, leading to user confusion. Additionally, having a timestamp in the cache file is problematic for build reproducibility and content-based build systems. The new feature extends the .pyc format so that the hash of the source file can be used for invalidation instead of the source timestamp; such .pyc files are called hash-based. By default, Python still uses timestamp-based invalidation and does not generate hash-based .pyc files at runtime; they may be created with py_compile or compileall. Hash-based .pyc files come in two variants: checked and unchecked. Python validates checked hash-based .pyc files against the corresponding source files at runtime, but doesn't do so for unchecked hash-based .pycs. Unchecked hash-based .pyc files are a useful performance optimization for environments where a system external to Python is responsible for keeping .pyc files up to date.
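For example, a checked hash-based .pyc can be produced explicitly with py_compile. This small sketch compiles a throwaway module; the resulting cache file carries the hash-based flag bit defined by PEP 552 (Python 3.7+):

```python
import os
import py_compile
import tempfile

# Write a throwaway module to compile.
src = os.path.join(tempfile.mkdtemp(), "demo.py")
with open(src, "w") as f:
    f.write("VALUE = 42\n")

# invalidation_mode selects hash-based validation instead of the
# default timestamp check.
pyc = py_compile.compile(
    src,
    invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
)
print(pyc)  # path of the generated .pyc inside __pycache__
```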
In this article we will discuss and implement API versioning in ASP.NET Core 5
You can watch the full video on YouTube
You can get the source code from GitHub using the following link:
So what we will cover today:
- What's the problem?
- What is versioning?
- Implementations
- Different types of versioning
- Ingredients
- Coding
As always you will find the source code in the description down below. Please like, share and subscribe if you like the video. It will really help the channel
Problem
How are we going to deal with changes over time? When we are building our application, we focus on completing the project without giving much consideration to the future maintainability and support of the API.
Once we publish our API we cannot simply change it, as systems and users will already be relying on it, and we don't want to break their work.
What is API Versioning?
It is evolving the API without breaking the current version or the client applications that are using it.
This is not product versioning; rather, it is about how the API functions and behaves.
API versioning is complex, since we need to support the old version alongside the new version and make sure our changes don't break existing functionality. We could have new clients who want to use the latest and greatest functionality, while others just want the old version with the basic implementation.
Implementations
There are a lot of ways we can do API versioning, and there is no single right answer for everyone. Every case is different based on your use case and your users' scenarios.
We always need to think about our clients, how they are using this API, and how we can achieve the best results for them.
Versioning Types
URI
Query String
Media Type and Header
Ingredients
- Visual Studio Code ()
- .Net 5 SDK ()
Code Time
We will start by creating a sample API, this API will basically return some information that we need.
We will start by creating our project:
dotnet new webapi -n "SampleAPI"
Now let us open the project and do some cleanup. We will be deleting the following:
- WeatherForecast.cs
- Controllers/WeatherForecastController.cs
Once these 2 files are removed, it's time to create a new folder in the root directory called Models, and inside the Models folder we will add a new class called User.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}
Now we will need to create a controller, so inside the controllers folder let us create a new class called UsersController
[ApiController]
[Route("api/users")]
public class UsersController : ControllerBase
{
    [HttpGet("{userId}")]
    public IActionResult GetUser(int userId)
    {
        var user = new User
        {
            Id = userId,
            Name = "Mohamad"
        };

        return Ok(user);
    }

    [HttpGet()]
    public IActionResult AllUsers()
    {
        List<User> _users = new List<User>()
        {
            new User { Id = 1, Name = "Mohamad" },
            new User { Id = 2, Name = "Richard" },
            new User { Id = 3, Name = "Neil" }
        };

        return Ok(_users);
    }
}
Now what we need to do is version this API. Let's imagine that after 6 months we need to introduce some breaking changes into our API to serve new functionality, but we don't want to break any existing integrations built against our current APIs.
We want a way to inform the customers who are integrating with our API that it is deprecated, without breaking the functionality.
So how can we accomplish this? There are multiple ways to achieve this result.
Versioning nuget package
The first way to achieve this is via the Microsoft.AspNetCore.Mvc.Versioning package. Let us add the package to our application and see how we can utilise it:
dotnet add package Microsoft.AspNetCore.Mvc.Versioning --version 5.0.0
Once the package is installed we need to go to StartUp.cs and update the ConfigureServices method with the following
services.AddApiVersioning();
So now let's run our application again and see what happens: we get the following error.
Why? Because once we added this line we added the versioning functionality to our application, but we still didn't configure it, and we still didn't tell ASP.NET Core how it should handle the versioning of the API.
As a workaround to get access to the API, we can add a query string to our URL to inform ASP.NET Core which version we want.
We might think that as long as we have api-version attached to every URL we should be fine. Well, not exactly, as we would still break current integrations with our clients. We need a better approach, which is the following:
We need ASP.NET Core to automatically default to the first version of the API. To do that, we need to update the ConfigureServices method in the Startup class to the following:
services.AddApiVersioning(opt =>
{
    // Report the API versions available to the client
    opt.ReportApiVersions = true;
    // Automatically assume api-version=1.0 when none is specified
    opt.AssumeDefaultVersionWhenUnspecified = true;
    // Give the API a default version of 1.0
    opt.DefaultApiVersion = ApiVersion.Default; // new ApiVersion(1, 0);
});
Now the API will return the supported API versions in the response headers, so the client will know all of the available versions.
Now that we have configured the versioning in our Startup class, it's time for us to add the new API endpoints that will be replacing the old ones.
The way to do that is to organise our controllers into version folders, which means we need to create directories under the Controllers folder called v1, v2, and so on, and inside each folder we add the controllers based on the changes they contain. So inside the Controllers folder let's add 2 folders, v1 and v2, and move the UsersController inside v1. It will look something like this:
And now let us update the UsersController to the following
[ApiVersion("1.0")]
and we will add a .v1 suffix to the namespace.
Now let us introduce our breaking change, which will be switching the UserId from int to Guid
Inside the models folder we will add a new class called UserV2
public class UserV2
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}
And inside the Controllers/v2 folder let's add a new controller called UsersController:
[ApiController]
[Route("api/users")]
[ApiVersion("2.0")]
public class UsersController : ControllerBase
{
    [HttpGet("{userId}")]
    public IActionResult GetUser(Guid userId)
    {
        var user = new UserV2
        {
            Id = userId,
            Name = "Mohamad"
        };

        return Ok(user);
    }

    [HttpGet()]
    public IActionResult AllUsers()
    {
        List<UserV2> _users = new List<UserV2>()
        {
            new UserV2 { Id = Guid.NewGuid(), Name = "Mohamad" },
            new UserV2 { Id = Guid.NewGuid(), Name = "Richard" },
            new UserV2 { Id = Guid.NewGuid(), Name = "Neil" }
        };

        return Ok(_users);
    }
}
Now, taking the API version number directly from the query string is not the ideal approach in real production projects. There are better ways to accomplish this, such as the following:
- header
- url segment
- query string
- media type
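To make the four options concrete before walking through them, here is a small illustrative sketch of how a client would attach the version in each style. It is written in Python purely to show the request shapes; the host, port, and version value are placeholder assumptions, and nothing is actually sent over the network:

```python
base = "https://localhost:5001/api/users"
version = "2.0"

# 1. Query string: ?api-version=2.0
query_url = f"{base}?api-version={version}"

# 2. URL segment: /api/2.0/users
segment_url = f"https://localhost:5001/api/{version}/users"

# 3. Custom header: x-api-version: 2.0
headers = {"x-api-version": version}

# 4. Media type: version carried as a parameter of the Accept header
media_type = {"Accept": f"application/json;x-api-version={version}"}

print(query_url)
print(segment_url)
print(headers, media_type)
```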
Url Segment
We have already covered the query string, so we will start with the URL segment. What we want to accomplish is something like this:
V1: v2:
This is a much cleaner way to see which API version we are targeting, and it makes it easier for our clients to implement. So how do we accomplish this? We need to update our v2 UsersController to the following:
[ApiController]
[Route("api/{version:apiVersion}/users")]
[ApiVersion("2.0")]
We have updated the route of our controller to automatically take the API version number. For the v1 UsersController we will not change anything, as we don't want to break our existing integrations.
The next one is media type versioning.
We will need to update the Startup class to implement it. Inside the Startup class we need to add the following:
opt.ApiVersionReader = new MediaTypeApiVersionReader("x-api-version");
So now we can send x-api-version as a parameter of the Accept header to specify which version of the API we want to use.
Header
What if I want to have my own custom header and not use the Accept header? That's also very easy to do, by updating the Startup class to the following:
opt.ApiVersionReader = new HeaderApiVersionReader("x-api-version");
And now we can send the x-api-version as its own header and our api will accept it
Combine multiple ways to accept api version
To enable multiple ways of accepting the API version, for example the Accept header and the custom header at the same time, we need to update the Startup class to the following:
opt.ApiVersionReader = ApiVersionReader.Combine(
    new MediaTypeApiVersionReader("x-api-version"),
    new HeaderApiVersionReader("x-api-version")
);
Now let's say we want to start deprecating our APIs. The way to do it is to update the ApiVersion attribute as follows:
[ApiVersion("1.0", Deprecated = true)]
And now our API will return the deprecated version to the client in the header like this
Thank you for reading, please ask your questions in the comments down below.
Discussion (0) | https://dev.to/moe23/asp-net-core-5-api-versioning-3jnp | CC-MAIN-2021-21 | refinedweb | 1,511 | 53.65 |
If you are using the most popular frontend library, React, to build your UI, you have definitely heard about props and state, which trigger a UI update whenever either of them changes; the state could be local state (synchronous) or network state (asynchronous).
Managing state in React has always been the talk of the town, with so many libraries at hand, such as
redux,
mobx,
recoil, and the list goes on. Let me explain how we can manage state without adding any additional dependency to the app, reducing the app's bundle size.
If you have been using React for quite a while (at least since React 16.3), you will have heard about Redux, one of the most popular libraries for managing complex UI state, thanks to its predictable state container and its support for async state management through the redux-thunk and redux-saga libraries.
There are plenty of libraries you can plug into redux as middleware to extend its capabilities. If you are setting up redux from scratch, you need boilerplate code in place before you can start working on it. Recent versions of redux offer a hooks-based API that reduces some of the boilerplate, but you still need to know about actions, reducers, middlewares, and so on.
If you are using the latest React (16.8 or above), you might already be using one of its most popular features: hooks. Hooks help you write components without writing classes and manage the state of a React app with ease.
In this post I'll explain the usage of useReducer hook with the help of other hooks, such as useEffect, useMemo, useRef, and useState to manage the complex UI state without using the redux. This post assumes that you all know the basics of hooks and how to use them. If you have not used it before I recommend you all to read the official documentation for getting started.
Let's assume that we are building a simple book library CRUD app, where you can add, delete, and manage your library based on your interests. I'm using one of the widely used React UI patterns, the container and presentational components pattern, to demonstrate this example; it can fit any pattern you are already using.
books-container.js
import React, {useReducer, useMemo, useEffect, useRef} from 'react'
import _ from 'lodash'
import BooksLayout from './books-layout'

// Extract this to a utils file; it can be reused in many places.
// Same as redux's bindActionCreators method
const bindActionCreators = (reducerMap, dispatch) =>
  _.reduce(
    reducerMap,
    (result, val, type) => ({
      ...result,
      [type]: payload => dispatch({type, payload}),
    }),
    {}
  )

// Initial state of the app
const initialState = {
  books: {},
  // To keep track of the progress of an API call and to show the
  // progress in the UI
  bookReadState: null,
  bookDeleteState: null,
  bookUpdateState: null,
}

const reducerMap = {
  setBooks: (state, books) => ({
    ...state,
    books,
  }),
  updateBook: (state, book) => ({
    ...state,
    // merge state.books with the updated book's details
    books: {...state.books, [book.id]: book},
  }),
  deleteBook: (state, book) => ({
    ...state,
    // update state.books with the deleted book removed
    books: _.omit(state.books, book.id),
  }),
  setBookReadState: (state, bookReadState) => ({
    ...state,
    bookReadState,
  }),
  setBookUpdateState: (state, bookUpdateState) => ({
    ...state,
    bookUpdateState,
  }),
  setBookDeleteState: (state, bookDeleteState) => ({
    ...state,
    bookDeleteState,
  }),
}

// readBooks, updateBook, and deleteBook below are your network-layer
// helpers (not shown here)
const useService = ({id, actions}) => {
  // abortController can be used to abort one or more requests when
  // required; it can also abort when multiple requests are made within
  // a short period, so that you don't fire duplicate requests
  const abortController = useRef(new global.AbortController())

  actions = useMemo(
    () => ({
      ...actions,
      readBooks: async () => {
        try {
          const data = await readBooks({fetchCallback: actions.setBookReadState})
          actions.setBooks(data)
        } catch (error) {
          // error handling
        }
      },
      updateBook: async book => {
        try {
          const data = await updateBook({book, fetchCallback: actions.setBookUpdateState})
          actions.updateBook(data)
        } catch (error) {
          // error handling
        }
      },
      deleteBook: async id => {
        try {
          const data = await deleteBook({id, fetchCallback: actions.setBookDeleteState})
          actions.deleteBook(data)
        } catch {
          // error handling
        }
      },
    }),
    [actions]
  )

  useEffect(() => {
    const controller = abortController.current
    // Invoke the actions required for the initial app load in useEffect.
    // Here I'm reading the books on first render
    actions.readBooks()

    return () => {
      controller.abort()
    }
  }, [actions])

  return {actions}
}

const reducer = (state, {type, payload}) => reducerMap[type](state, payload)

const BooksContainer = props => {
  const [state, dispatch] = useReducer(reducer, initialState)
  const actions = useMemo(() => bindActionCreators(reducerMap, dispatch), [])
  const service = useService({...props, state, actions})

  return (
    <BooksLayout
      {...state}
      {...service}
      {...props}
    />
  )
}

export default BooksContainer
books-layout.js
import React from 'react'

const BooksLayout = ({books, actions, bookReadState, ...props}) => {
  return (
    <>
      {bookReadState === 'loading'
        ? <div>Loading...</div>
        : Object.values(books).map(book => (
            // UI logic to display each book goes here, including a
            // delete button that calls actions.deleteBook(book.id)
            <div key={book.id}>{book.name}</div>
          ))
      }
    </>
  )
}

export default BooksLayout
As you can see in the above example, you can control the state of your app in the container, and you don't have to worry about connecting the state to each component separately as you need to do in redux.
In the above example I kept all the code in a single file for demonstration purposes, and parts of the code are incomplete; replace them with your own abstractions for network calls, business logic, and UI logic based on your needs. You can improve this code by separating the logic for more reusability across the app, as the DRY (Don't Repeat Yourself) principle suggests.
Redux shines and scales well for complex apps with a global store. In this article, I have tried to explain how you can use useReducer in place of redux to achieve global state management with less code, with no need to add new packages to the app, significantly reducing the app's bundle size.
Please leave a comment and follow me for more articles.
Discussion | https://dev.to/kpunith8/how-to-manage-a-complex-ui-state-with-usereducer-hook-instead-of-redux-32ik | CC-MAIN-2020-45 | refinedweb | 940 | 52.39 |
Jon Watte wrote:
> + The problem we're wanting to solve is that it takes 10 minutes to
> crawl through 20,000 files, trying to figure out which ones have
> changed when you want to commit (or similar).
H.
Further, most checkins don't need to check the entire tree for
changed files, nor even a substantial fraction of the entire tree.
Usually you're working in a directory somewhere near the leaves,
with (if you're unlucky) 1000 files under it. No?
I don't have any objection to the proposal, but I'm puzzled
by the apparent need for it. If it takes 10 minutes to find
which files, out of 20k, have changed, then I think something
is being done wrong in Subversion. And if you often have to
do a checkin for which Subversion needs to look at 20k files,
then I think something's wrong in the organization of your
project. I am open to correction on both issues.
*
My experiment was admittedly a very simple-minded one,
and represents a best case in a few ways. I created a
tree of directories and empty files as follows:
import os

def build(dir, n_files, n_dirs, depth):
os.mkdir(dir)
for i in range(n_files):
open(os.path.join(dir, str(i)), "w").close()
depth -= 1
if depth <= 0: return
for j in range(n_files, n_files+n_dirs):
build(os.path.join(dir, str(j)), n_files, n_dirs, depth)
build("foo", 10, 3, 8)
and I crawled it doing a stat on each node as follows:
time find foo -ls > /dev/null
This was on a local filesystem on a FreeBSD box.
Hardware: Athlon/1GHz, 256Mb. FreeBSD's filesystem
is quite fast, and the hardware -- though nowhere near
today's bleeding edge -- is quite decent. But, still,
NFS on a 300MHz Ultrasparc (say) surely can't be
more than (say) 25 times slower, which would be
less than 2 minutes.
If statting 2*20k files takes 10 minutes then you're statting
about 70 files a second. That's very, very, very slow.
--
g
This is an archived mail posted to the Subversion Dev
mailing list. | https://svn.haxx.se/dev/archive-2002-09/1311.shtml | CC-MAIN-2017-22 | refinedweb | 374 | 72.46 |
Using LDAP to search attribute bit flags using attribute OID values
Hello everyone,
My question stems from trying to understand the OID and syntax behind this classic LDAP search to find disabled users:
"(useraccountcontrol:1.2.840.113556.1.4.803:=2)"
What I am interested in is the value 1.2.840.113556.1.4.803, specifically how it differs from the value 1.2.840.113556.1.4.8, which is the OID of the useraccountcontrol attribute:
Now, this website below says that the 03 and 04 are designators of the AND and OR operations, respectively, and are added on to the end of the OID:
However, using this logic, I can't get these 03 and 04 operators to work with other attributes whose values are bit flags, such as the "searchflags" attribute; e.g., an LDAP search of "(searchflags:1.2.840.113556.1.2.33404:=0)" returns nothing, using the OR (04) operation appended to the "searchflags" OID of 1.2.840.113556.1.2.334.
So back to my original question: is the useraccountcontrol OID of 1.2.840.113556.1.4.8 at all related to the bitwise AND extensible match OID of 1.2.840.113556.1.4.803 (as if a 03 were simply appended to designate an AND operation), or is the extensible match value of 1.2.840.113556.1.4.803 completely separate from the useraccountcontrol OID of 1.2.840.113556.1.4.8?
If I have my terms mixed up, please feel free to correct me on what the proper terms are.
Thanks!
- Edited by KentYeabower Monday, June 9, 2014 8:53 PM adding hyperlink
Question
Answers
All replies
Hmm yeah I posted that link above in my OP as well, and I was hoping that the OID values of these bitwise filters were somehow related to the shorter OID of the "useraccountcontrol" attribute, but it looks like it's just a coincidence.
So I wonder if the "useraccountcontrol" section of the article from my OP is a little misleading in what it says there.
Following this logic, I should be able to use the "03" and "04" in other bitwise operations with different OIDs to search with "AND" or "OR". But as I pointed out in my OP above, I can't seem to make this work by adding the "03" and "04" onto the end of other OIDs. So I will go with Christoffer that these bitwise OIDs (1.2.840.113556.1.4.803 and 1.2.840.113556.1.4.804) are unique in themselves, and the fact that they are 2 characters away from the OID of the "useraccountcontrol" attribute (1.2.840.113556.1.4.8) is just coincidence.
This does seem strange however, and it seems like there should be some correlation here....
If anyone has any more info, I would love to hear it!
OIDs form a hierarchical namespace. 1.2.840.113556 is Microsoft's OID, so anything defined by Microsoft will be under that. 1.2.840.113556.1.4 is an OID allocated to the Active Directory group inside Microsoft, so things that group defined will often end up there, as did useraccountcontrol(8) and the matching rules (803 and 804). No deep mystery.
DonH | https://social.technet.microsoft.com/Forums/windowsserver/en-US/4b79e2a3-d5c6-4b9e-bc67-63993e42f068/using-ldap-to-search-attribute-bit-flags-using-attribute-oid-values?forum=winserverDS | CC-MAIN-2019-18 | refinedweb | 556 | 68.91 |
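For what it's worth, the semantics of the two matching rules are plain bit arithmetic. A quick Python sketch of what the server evaluates (independent of any directory server):

```python
# LDAP_MATCHING_RULE_BIT_AND (1.2.840.113556.1.4.803): matches when
# ALL bits of the asserted value are set in the attribute.
def bit_and_match(attribute_value, asserted):
    return (attribute_value & asserted) == asserted

# LDAP_MATCHING_RULE_BIT_OR (1.2.840.113556.1.4.804): matches when
# ANY bit of the asserted value is set in the attribute.
def bit_or_match(attribute_value, asserted):
    return (attribute_value & asserted) != 0

ACCOUNTDISABLE = 0x0002

# A normal, enabled user: NORMAL_ACCOUNT (0x0200)
print(bit_and_match(0x0200, ACCOUNTDISABLE))           # False
# A disabled user: NORMAL_ACCOUNT | ACCOUNTDISABLE
print(bit_and_match(0x0200 | 0x0002, ACCOUNTDISABLE))  # True
```

This is why "(useraccountcontrol:1.2.840.113556.1.4.803:=2)" finds disabled accounts: it asks whether the 0x2 bit is set.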
In this post, I’ll take a look at using R.NET with the creation of an application written in C#. As this is a first, basic step, I’ll provide the code to make a calculator. Not exciting I know, but it demonstrates the basics!
I provide full code for the C# solution at the end: through the post here, I’ll highlight the bits that I assume readers will be most interested in – specifically, bits that actually get C# and R to talk to one another.
Anyway, here is what the calculator ended up looking like:
Background
I’ve recently been tinkering with making GUI interface-type-things for R. I previously posted a guide on working with RApache - which is a webserver that can run R scripts. This time around, I’m using C# and .NET.
I began my attempts with making a GUI that can run R in some shape or form by taking a look at the various options on offer. Aside from RApache on the web front, you have Rook as well. Then there is this, but it doesn’t have an open source license (and appears to not work with the latest version of R), so I haven’t tried it. There is also Rcpp and RInside, which I believe can be used for this sort of purpose as well, but I haven’t got around to trying them out yet.
Getting Started
You’ll need to download the necessary file from the R.NET page.
You’ll also need to add a reference to the R.NET assembly into your solution.
Connecting to R
Here’s the basic code you’ll need:
using RDotNet;

...

REngine.SetDllDirectory(@"C:\Program Files\R\R-2.13.0\bin\i386");
REngine.CreateInstance("RDotNet");
Note that here it’s assumed that you’ll have R version 2.13.0 installed in the default location, just like me. The file being searched for here is R.dll.
With that setup, to get connected to R in any of your methods, all you need is this:
REngine engine = REngine.GetInstanceFromID("RDotNet");
Simple! engine can now send and receive information, just like an R console would normally. It also persists state between commands. For example, if you send the command x <- 1 and then ask for x, it will remember that it's 1. Great!
Details of the Calculator
It’s a very basic calculator. All I did was bind each button press to add the text from that button (e.g., “1″, “2″, “3″ and so on) to the text box at the top (called textBox_input). When the user hits the = sign, the text in textBox_input gets sent to R for evaluation. Here’s the function that adds button presses to the textBox_input box:
private void add_input(string input)
{
    if (textBox_output.Text != "")
    {
        textBox_input.Text = "";
        textBox_output.Text = "";
    }

    textBox_input.Text += input;
}
The if statement at the start of add_input checks to see if a calculation has already been run (i.e., the output textBox has text in it). If that is the case, then it wipes the proverbial slate clean, allowing us to put in something new to evaluate.
Interacting with R from R.NET
There are a number of ways to interact with R from R.NET. R.NET gives you access to various data types from R (e.g., numeric vectors, data frames, lists, and so on). For the purposes of the little calculator being demonstrated here, it’s a case of evaluating the input text of the calculator as follows:
string input = textBox_input.Text;
NumericVector x = engine.EagerEvaluate(input).AsNumeric();
EagerEvaluate sends input (the text in the input textbox) to our R instance for evaluation. Here, the information we get back from R is converted to a numeric vector.
Next, we turn to what we know from R. If we ask R to evaluate the following:
40 + 2
It returns us a nice and simple 42.
All that’s being done with the NumericVector x is that the output of that 42 is being added to one of R.NET’s NumericVector data types. If you want R.NET to grab a whole load of numbers in this fashion, it’s very easy.
Finally, it’s a case of setting up the output from the calculator to be equal to the first value in x. This is because we’re expecting x to be only one value anyway.
textBox_output.Text += x[0];
This then gives the output of 42 and adds it to the output textbox.
Next Steps
Here I’ve only shown a very simple example of how to use R.NET. I don’t think the world needs any new calculators, but in the future I’ll have a go at some more complex and interesting ways to get C# and R being friendly with one another.
Full C# Code
Here is all of my C# code. Note that I’ve included some bits and pieces to handle errors and exception, as well as different versions of R. Specifically, there’s a while loop in there to help the user hunt down the R.dll file if they don’t have the latest version of R, or it’s installed in something other than the default directory.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using RDotNet;

namespace RNet_Calculator
{
    public partial class Form1 : Form
    {
        // set up basics and create RDotNet instance
        // if anticipated install of R is not found, ask the user to find it.
        public Form1()
        {
            InitializeComponent();
            string dlldir = @"C:\Program Files\R\R-2.13.0\bin\i386";
            bool r_located = false;

            while (r_located == false)
            {
                try
                {
                    REngine.SetDllDirectory(dlldir);
                    REngine.CreateInstance("RDotNet");
                    r_located = true;
                }
                catch
                {
                    MessageBox.Show(@"Unable to find R installation's \bin\i386 folder. Press OK to attempt to locate it.");
                    if (folderBrowserDialog1.ShowDialog() == DialogResult.OK)
                    {
                        dlldir = @folderBrowserDialog1.SelectedPath;
                    }
                }
            }
        }

        // This adds the input into the text box, and resets if necessary
        private void add_input(string input)
        {
            if (textBox_output.Text != "")
            {
                textBox_input.Text = "";
                textBox_output.Text = "";
            }
            textBox_input.Text += input;
        }

        // the equals button, which evaluates the text
        private void button_equals_Click(object sender, EventArgs e)
        {
            textBox_output.Text = "";
            REngine engine = REngine.GetInstanceFromID("RDotNet");
            String input = textBox_input.Text;
            try
            {
                NumericVector x = engine.EagerEvaluate(input).AsNumeric();
                textBox_output.Text += x[0];
            }
            catch
            {
                textBox_output.Text = "Equation Error";
            }
        }

        // Begin the button function calls - long list and not exciting
        private void button_1_Click(object sender, EventArgs e)
        {
            add_input(button_1.Text);
        }

        private void button_2_Click(object sender, EventArgs e)
        {
            add_input(button_2.Text);
        }

        private void button_3_Click(object sender, EventArgs e)
        {
            add_input(button_3.Text);
        }

        private void button_4_Click(object sender, EventArgs e)
        {
            add_input(button_4.Text);
        }

        private void button_5_Click(object sender, EventArgs e)
        {
            add_input(button_5.Text);
        }

        private void button_6_Click(object sender, EventArgs e)
        {
            add_input(button_6.Text);
        }

        private void button_7_Click(object sender, EventArgs e)
        {
            add_input(button_7.Text);
        }

        private void button_8_Click(object sender, EventArgs e)
        {
            add_input(button_8.Text);
        }

        private void button_9_Click(object sender, EventArgs e)
        {
            add_input(button_9.Text);
        }

        private void button_point_Click(object sender, EventArgs e)
        {
            add_input(button_point.Text);
        }

        private void button_0_Click(object sender, EventArgs e)
        {
            add_input(button_0.Text);
        }

        private void button_plus_Click(object sender, EventArgs e)
        {
            add_input(button_plus.Text);
        }

        private void button_minus_Click(object sender, EventArgs e)
        {
            add_input(button_minus.Text);
        }

        private void button_multiply_Click(object sender, EventArgs e)
        {
            add_input(button_multiply.Text);
        }

        private void button_divide_Click(object sender, EventArgs e)
        {
            add_input(button_divide.Text);
        }

        private void button_left_bracket_Click(object sender, EventArgs e)
        {
            add_input(button_left_bracket.Text);
        }

        private void button_right_bracket_Click(object sender, EventArgs e)
        {
            add_input(button_right_bracket.Text);
        }

        private void button_ce_Click(object sender, EventArgs e)
        {
            textBox_input.Text = "";
            textBox_output...
Ruby, if you've never heard of it, is an object-oriented scripting language,
similar in many ways to Perl and Python. It originates from Japan and is
young, as far as programming languages go. There are many really good
reasons you might want to use the Ruby language; I'm not going to go into
all of them here, but the one at the core of this article is the ease with
which you can write Ruby extensions in C.
I'm a big fan of the so-called agile programming languages. I think they
have a huge advantage over more traditional languages like C and C++. They also
have some drawbacks, among the largest being that there's an awful lot of
existing code written in C and C++. It's hard to sell people on moving to
something new if they have to leave all their old toys behind.
The standard response to these sort of arguments is that you can easily
write an extension that bridges the gap between your old C code and your new
Perl or Python, or whatever agile language is hot this week, code. Unfortunately,. All you need to know to start is in the
README.EXT file in the top level of the Ruby source tree. If you
need help with something that isn't documented there, you can't ask for a
clearer example than the Ruby source code itself. In short, I was just aching
for a test case, some C code I could wrap up in a Ruby extension to prove how
simple it is to make something that's easy to use. For my test case, I chose the
GenX library.
README.EXT
Related Reading
Programming Ruby
The Pragmatic Programmer's Guide, Second Edition
By Dave Thomas
GenX is a simple C library for generating correct, canonical XML. It
verifies that its data is valid UTF-8 and the structure of the XML document
being generated is valid, and it forces you to use canonical XML--that may not
mean much to you right now, but it can be significant if you need to compare two
XML documents to determine their equivalence. Tim Bray wrote GenX and hosts it
at GenxStatus.
GenX originally attracted me because it provides a way to avoid problems related
to invalid XML, which I've encountered in my own work. In addition to its
usefulness, it's also perfect for an example of how to embed a C library in
Ruby because it's very small, self contained, and has a well-defined API we can
wrap up in Ruby without too much trouble.
At this point it's worth asking whether a Ruby extension is really the best way
to make this kind of functionality available.
Using an extension means that users need to install a binary distribution
precompiled for their versions of Ruby and their operating systems, or to build
it themselves, which means they need access to a C compiler. Additionally,
extending Ruby via C has its own set of dangers. If you screw something up in a
standard Ruby module, pretty much the worst you can do is cause an exception to
be thrown. It's possible for users to recover from this if they're paranoid enough
about catching exceptions and structure their code correctly. In a C-based
extension, an error can corrupt memory or cause a segmentation fault or any
number of other problems from which recovery is difficult, and all of which have the
chance to crash the underlying Ruby interpreter.
That said, in this particular case I think providing direct access to the
underlying GenX library via a C extension is the way to go. The GenX library is
available right now, it works, and it does its job in a very efficient manner.
There's no reason to duplicate functionality unnecessarily. Even if I did
rewrite this in pure Ruby, all I am likely to accomplish is slowing things down.
Plus, GenX is exceptionally self contained; while using the library does
require that users either use a precompiled extension or possess a C
compiler, it at least doesn't bring in any other third-party requirements.
Finally, the GenX API is quite straightforward. It's reasonable to assume that
we'll be able to implement this extension without undue risk of crashing our
Ruby interpreter due to bugs in our code.
The first step in writing a Ruby extension is to create something that
compiles and runs. That means writing an extconf.rb file that tells
Ruby how to compile and link your extension, and then writing the bare-bones C
file that makes up the extension. With these two steps completed, you'll have a
Ruby module that you can require and a new class you can
instantiate, albeit not a very useful one because it won't actually have any
methods.
require
The extconf.rb file is a short Ruby program that
makes use of the mkmf module to build a simple makefile,
which you use to build your extension. There's a fair amount of
specialized functionality you can put in your extconf.rb
file, but for our purposes the bare minimum will do. Here's the
entirety of my extconf.rb file:
mkmf
require 'mkmf'
dir_config("genx4r")
create_makefile("genx4r")
This tells Ruby to use all of the .c files in the current working
directory to build a extension named genx4r, and that it should write
out a makefile to compile and link it. If you copy all the .c and
.h files from the GenX tarball into the current directory, you can run
ruby extconf.rb && make and then have a Ruby extension
sitting there just waiting for you to require it in our script.
Here's the process:
ruby extconf.rb && make
$ ruby extconf.rb
creating Makefile
$ make
gcc -fno-common -g -Os -pipe -no-cpp-precomp -fno-common -DHAVE_INTTYPES_H
-pipe -pipe -I. -I/usr/lib/ruby/1.6/powerpc-darwin7.0 -I. -c -o charProps.o
charProps.c
gcc -fno-common -g -Os -pipe -no-cpp-precomp -fno-common -DHAVE_INTTYPES_H
-pipe -pipe -I. -I/usr/lib/ruby/1.6/powerpc-darwin7.0 -I. -c -o genx.o
genx.c
cc -fno-common -g -Os -pipe -no-cpp-precomp -fno-common -DHAVE_INTTYPES_H
-pipe -pipe -dynamic -bundle -undefined suppress -flat_namespace
-L/usr/lib/ruby/1.6/powerpc-darwin7.0 -L/usr/lib -o genx4r.bundle charProps.o
genx.o -ldl -lobjc
$ ls
Makefile charProps.o genx.c genx.o
charProps.c extconf.rb genx.h genx4r.bundle*
$ irb
irb(main):001:0> require 'genx4r'
LoadError: Failed to lookup Init function ./genx4r.bundle
from (irb):1:in `require'
from (irb):1
irb(main):002:0>
OK, so that sort of works.... There's a Ruby extension, but trying to
require it from inside Ruby only produces an error. That's because
none of the .c files defined an Init function. When Ruby
tries to load an extension, the first thing it does is look for a function named
Init_extname, where extname is the name of the
extension. Because that function doesn't exist, Ruby obviously can't find it
and throws a LoadError exception.
Init
Init_extname
extname
LoadError
The next step is to implement Init_genx4r to allow the
extension to load successfully. The bare minimum necessary is simply an empty
function named Init_genx4r that takes no arguments and returns
nothing. I like that. Here are the current contents of the genx4r.c
file:
Init_genx4r
#include "ruby.h"
void
Init_genx4r()
{
/* nothing here yet */
}
Rerun extconf.rb and make. When you try to load
the genx4r module with require, you should have
better results:
extconf.rb
make
genx4r
$ irb
irb(main):001:0> require 'genx4r'
=> true
irb(main):002:0>
The extension loads, but it still doesn't actually do anything. It needs
definitions for the classes that make up the interface to the GenX library.
For now, I'll define one top-level Ruby module named GenX and a
single class, Writer, that lives in it. That class is simply a
thin wrapper around the C-level genxWriter type. Here's the next
iteration of genx4r.c:
GenX
Writer
genxWriter
#include "ruby.h"
#include "genx.h"
static VALUE rb_mGenX;
static VALUE rb_cGenXWriter;
static void
writer_mark (genxWriter w)
{}
static void
writer_free (genxWriter w)
{
genxDispose (w);
}
static VALUE
writer_allocate (VALUE klass)
{
genxWriter writer = genxNew (NULL, NULL, NULL);
return Data_Wrap_Struct (klass, writer_mark, writer_free, writer);
}
void
Init_genx4r ()
{
rb_mGenX = rb_define_module ("GenX");
rb_cGenXWriter = rb_define_class_under (rb_mGenX, "Writer", rb_cObject);
/* NOTE: this only works in ruby 1.8.x. for ruby 1.6.x you instead define
* a 'new' method, which does much the same thing as this. */
rb_define_alloc_func (rb_cGenXWriter, writer_allocate);
}
That's a lot of new code. First comes the #include of
genx.h, because it needs to use functions defined by GenX. The two
VALUE variables represent the module and the class. Each object in
Ruby (and remember, everything in Ruby is an object) has a
VALUE; think of it as a reference to the object. The beginning of
Init_genx4r initializes these variables by calling
rb_define_module to create the GenX module and
rb_define_class_under to define the Writer class.
#include
genx.h
VALUE
rb_define_module
rb_define_class_under
Next, Ruby needs to know how to allocate the guts of the
GenX::Writer object. That's where allocate comes in,
using rb_define_alloc_func to associate the
writer_allocate function with the allocate method.
writer_allocate creates a genxWriter object with
genxNew and turns it into a Ruby object via the
Data_Wrap_Struct macro. Data_Wrap_Struct simply
takes a VALUE representing the class of the new object (passed as
an argument to writer_allocate), two function pointers used for
Ruby's mark-and-sweep garbage collection, and a pointer to the underlying C-level data structure--in this case the genxWriter itself--and
returns a new VALUE that refers to the object, which is simply a
thin wrapper around the C-level pointer. Finally, the code has the mark
function for the object, writer_mark, which actually does nothing,
and the destructor, writer_free, which calls
genxDispose to clean up the genxWriter allocated in
writer_allocate. When wrapping a more complicated structure that
includes references to other Ruby-level objects, the mark function must call
rb_gc_mark on each of them to tell Ruby when nothing references them any longer and they are ready for garbage collection.
GenX::Writer
allocate
rb_define_alloc_func
writer_allocate
genxNew
Data_Wrap_Struct
mark
writer_mark
writer_free
genxDispose
rb_gc_mark
One thing to note about genx4r.c is that the only function
accessible outside that file is Init_genx4r. Everything else is
static, which means that it won't leak out into the global
namespace and cause linkage errors if some other part of the program happens to
use the same function or variable name.
static
It's time to take a quick jaunt through irb to confirm that
it's possible to create an instance of the new class:
irb
$ irb
irb(main):001:0> require 'genx4r'
=> true
irb(main):002:0> w = GenX::Writer.new
=> #<GenX::Writer:0x321f84>
irb(main):003:0>
Sure enough, it's an instance of our new GenX::Writer class.
Adding a few methods will make it actually useful!
Pages: 1, 2, 3
Next Page
Sponsored by:
© 2017, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://www.onlamp.com/pub/a/onlamp/2004/11/18/extending_ruby.html?page=3&x-order=date | CC-MAIN-2017-04 | refinedweb | 1,920 | 62.38 |
Hi!
I hope I'm not missing the point here, but you are talking about e.g. binary (or text) file
attachments?
If the server-side is controlled by you, this should work: I have created my own datatype
for sending binary attachments like this in java (I am not using custom headers at all):
//----------------------START CODE------------------------------------
package no.geomatikk.soap.datatypes;
public class Base64File {
private String name;
private byte[] data;
public String getName() { return name; }
public void setName(String name) { this.name = name; }
public byte[] getData() { return data; }
public void setData(byte[] data) { this.data = data; }
}
//----------------------END CODE--------------------------------------
I run this through Java2WSDL and get:
...
<complexType name="Base64File">
<sequence>
<element name="name" nillable="true" type="soapenc:string"/>
<element name="data" type="soapenc:base64"/>
</sequence>
</complexType>
...
You can generate client-side classes with WSDL2Java from this, and get client code you don't
need to edit manually.
This works with Axis client (also when sending an array of Base64File), and it should work
with .NET also (not tested yet).
The reason I use this approach is that I have heard that attachments are not very interoperable
for the time being. Byte arrays are automatically sent/received BASE64 coded, so it is a OK
way to do things.
Any comments?
Eirik Wahl
Bravida Geomatikk AS
-----Original Message-----
From: butek@us.ibm.com [mailto:butek@us.ibm.com]
Sent: 16. august 2002 14:55
To: axis-user@xml.apache.org
Subject: Re: attachements and headers
I'm in the process of implementing JAX-RPC attachments in WSDL2Java/Java2WSDL. I'm not done
yet so don't expect too much, but if you grab the latest nightly build and just use DataHandler
you should be OK.
For example, if you started with the Java method:
public DataHandler echoDataHandler(DataHandler in0) throws RemoteException;
Java2WSDL would give you the following WSDL:
<wsdl:definitions ... xmlns:apachesoap="" ...
<wsdl:message
<wsdl:part
</wsdl:message>
<wsdl:message
<wsdl:part
</wsdl:message>
<wsdl:operation
<wsdl:input
<wsdl:output
</wsdl:operation>
And WSDL2Java would take this and give you the appropriate mappings.
Note that this is NOT interoperable WSDL. We've introduced an AXIS-specific type: apachesoap:DataHandler.
The reason we did this is that WSDL 1.1 MIME attachments is more-or-less broken. For instance,
you cannot represent an array of attachments with WSDL 1.1. WSDL2Java WILL eventually digest
WSDL 1.1 WSDL in its limited usability, but Java2WSDL will never generate it.
Russell Butek
butek@us.ibm.com
Please respond to axis-user@xml.apache.org
To: <axis-user@xml.apache.org>
cc:
Subject: attachements and headers
Hi,
I have the foolowing problem: I'm using WSDL2Java to generate
client code for a SOAP application. I would like to send attachments
and custom SOAP headers, but without touching the generated code.
Client code generation and usage is automatized so there is no way for me
to alter the generated code before using it. I know attechments can
be added to the Call object, but the Call objects are not available to
me any more, since they are created at every method call.
Is there any way currently that I can solve this problem ?
If there is no way, please read forward, and see what I figured out.
Looking at the Call source is that maybe I can
solve this by adding attachements and headers (or other info) to the
client stub using the generated API, as properties. These will
finally be put into the MessageContext by the Call object. This way
I might write a client-side handler that will extract those properties
and convert them into proper attachments, inserting them into the
Message object.
What do you think about this ? Will it work ?
Thanx in advance for your answers, Geza | http://mail-archives.apache.org/mod_mbox/axis-java-user/200208.mbox/%3C89F2D7244FA78A48B8AD71BBC76E6734877F8F@bra-no-24-001.corp.bravida.com%3E | CC-MAIN-2017-47 | refinedweb | 624 | 57.77 |
I am following the docs on how to compile and get the examples to compile. I am not having any luck.
I am using mac book pro OS X El Capitan 10.11.1
g++ -o cyisowrite_test cyisowrite_test.c -l cyusb -l usb-1.0
clang: warning: treating 'c' input as 'c++' when in C++ mode, this behavior is deprecated
In file included from cyisowrite_test.c:7:
./../include/cyusb.h:18:10: fatal error: 'libusb-1.0/libusb.h' file not found
#include <libusb-1.0/libusb.h>
^
1 error generated.
make: *** [all] Error 1
Hi,
can you please confirm that the folder structure is intact as provided in the package or if any inlcdues are missing?
Regards,
-Madhu Sudhan
Please see screen shot. All i did was unzip and cd to the examples directory like the README.txt explains and did make and you will see the error i get.
Please create a Tech Support Case with Cypress. | https://community.cypress.com/thread/15511 | CC-MAIN-2017-51 | refinedweb | 159 | 69.89 |
Cropyble is a module that allows a user to easily perform crops on an image containing recognizable text. This module utilizes optical character recognition (OCR) from Google by way of pytesseract.
Project description
Cropyble
Author: Skyler Burger Version: 1.2.1
Overview
Cropyble is a class that allows a user to easily perform crops on an image containing recognizable text. This class utilizes optical character recognition (OCR) with the assitance of Tesseract OCR and Pytesseract. Images containing clear, printed, non-decorative text work best with the OCR capabilities.
This is :sparkles: my first package on PyPI :sparkles: and I welcome feedback. Feel free to submit issues if you spot an area that could use improvement.
Architecture
Packages
- pillow: a Python package for manipulating images
- pytesseract: Python bindings for Tesseract
- tesseract: a command-line program and OCR engine
Getting Started
Linux & Mac OS
- This class requires an additional piece of software that is not available through PyPI. Install tesseract on your machine with
sudo apt-get install tesseract-ocr
- Install Cropyble with either
pip3 install cropybleor preferably with a environment manager such as
pipenv
- Place the following import statement at the top of your file:
from cropyble import Cropyble
- Create Cropyble instances and get to cropping!
Example:
# example.py from cropyble import Cropyble my_img = Cropyble('demo.jpg') my_img.crop('world', 'output.jpg')
In the above example, imagine that
demo.jpg is an image that contains the words 'hello world' and is located in the same directory as
example.py. An instance of Cropyble is created with a path to the input image. Cropyble then performs OCR on the image and stores information regarding the characters and words recognized, as well as their bounding boxes, within the instance of the class. By calling the
.crop() method of the instance with a word contained in the image and a path to an output file, a cropped image of the word is created. The output file is created if it does not exist, or is overwritten if it already exists.
API
- Cropyble(input_path): Takes in a string representing the input image location. Cropyble runs OCR on the image using
pytesseractand stores the bounding boxes for recognized words and characters for future crops.
- .crop(word, output_path): Takes in a string representing the word or character you'd like cropped from the image and a second string representing the output image path. Generates a cropped copy of the query text from the original image and saves it at the specified location.
- .get_box(word): Takes in a string representing a word that was recognized in the image. Returns a tuple representing the bounding box of the word in the format (x1, y1, x2, y2). The origin (0, 0) for images is located in the top-left corner of the image.
- .get_words(): Returns a list of words that were recognized within the input image.
Change Log
07/22/2019 - 0.1.0
- Corrected bounding box math. Images are being properly cropped.
07/27/2019 - 0.2.0
- Refactored cropping functions into a class to minimize work needed to perform multiple crops on a single image.
07/30/2019 - 0.3.0
- Cropyble can now accept a path for the input image and crop() accepts a path for the output image.
08/02/2019 - 1.1.0
- Cropyble can now crop words and characters recognized within an image using the same crop() method.
10/08/19 - 1.1.4
- Refactored for packaging
- Uploaded to PyPI, bumpy ride
01/06/20 - 1.2.0
- Added
__repr__and
__str__magic methods to Cropyble class.
- Added
.get_box()and
.get_words()methods to Cropyble class
01/07/20 - 1.2.1
- Re-released to PyPI
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/cropyble/ | CC-MAIN-2021-04 | refinedweb | 632 | 55.54 |
I am not sure, if this is possible or not. For the following query :-
var query: Query = new Query(TopPlayers, "wonDate == ?", "06292017");
query.limit = 3;
query.orderBy = "highScore DESC";
Query returned - 1 Top player = [Entity type="TopPlayers" id="uGUb73UHE1r3mekf" createdAt="2017-06-29T10:52:38.787Z" updatedAt="2017-06-29T10:55:59.504Z" ownerId="uGUb73UHE1r3mekf"]
Its not returning the highScore of the player.
Am I missing something or used the wrong query?
What should be the approach for such a requirement.
Thanks
Shaun
Does the "TopPlayers" entity — that is, the respective AS3 class — contain a "highscore" property? And is that property R/W? In that case, the instance that's returned should contain the appropriate value!
package {
import com.gamua.flox.Entity;
public class TopPlayers extends Entity {
private var _highScore: Number = 0;
private var _wonDate: String;
public function TopPlayers() {
// constructor code
}
public function get highScore(): Number {
return _highScore;
}
public function set highScore(value: Number): void {
_highScore = value;
}
public function get wonDate(): String {
return _wonDate;
}
public function set wonDate(value: String): void {
_wonDate = value;
}
}
}
Above is the class! & Below is the function that saves value!
public function saveScore(val: Number, time: Date): void {
myScore.id = currentPlayer.id ;
myScore.highScore = val;
myScore.wonDate = getTheDate(time);
//save it to Flox using the instance method
myScore.saveQueued();
}
hw to change the variables R/W property ?
Hmmmm — that does look correct to my eyes; the highscore *should* be stored correctly. Maybe the code you use as the query callback is wrong? Could you post the code you use for "query.find()"?
here is it!
query.find(
function onComplete(topPlayers: Array): void {
if (topPlayers.length > 0) {
trace("Query returned - Top players = ", topPlayers);
} else if (topPlayers.length == 0) {
trace("Query returned Empty Array");
}
},
function onError(error: String): void {
//Something went wrong during the execution of the query.
//The player's device may be offline.
trace("The player's device may be offline", error);
});
Also i wanted know can flox do queries like sql ???
eg :-
Select SUM(ammo) FROM Player WHERE team = "WildFoxes";
Player will be an entity with a variables 'ammo' and 'team'.
ammo will be a integer variable and team will string. | https://forum.starling-framework.org/d/19322-query-for-top-players-their-high-score | CC-MAIN-2019-22 | refinedweb | 356 | 60.21 |
Hello I have a project for school and my teacher never did a great job at explaining how to do certain aspects of my code. I've done a lot of research on Java and how to get this to work but none worked and i've been working on this code for over 48 hours trying to get this math to work.
Basically I have to make a program that can do monthly payment's.
If I enter a loan amount of :100,000
with the rate of :6
and the years being :30
I should get back about :$599.55
Also if I enter1: 10,000
with the rate being 4.5
years being 3
I should get back about $ 297.47
Here's my code.
Code Java:
package monthypayment2; import java.util.Scanner; public class Monthypayment2 { private static double monthlypay; private static double monthlypay1; private static String choice; /** * @param args the command line arguments */ public static void main(String[] args) { Scanner in = new Scanner(System.in); double months = 0; double numyears; double loanamount; double rate; //prompt loan amount System.out.print("Enter Loan Amount:"); //promp the user loanamount = in.nextDouble(); //prompt rate System.out.print("Enter Rate:"); rate = in.nextDouble(); //prompt years System.out.print("Enter Number of Years:"); numyears = in.nextDouble(); //calculate monthlypay1 = loanamount * rate / (1 - 1 / Math.pow(1 + rate, months)); System.out.println("The Monthly Payment is: $" + monthlypay1); System.out.print("Would you like to calculate again?:(y/n)"); choice = in.next(); System.out.println(); } }
Any Help would be appreciated.
also when I run my code I get the monthyly payment is :$Infinity.
When i put months as a value of 12 i will get something like $600000488.65 | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/32945-monthly-payment-calculation-errors-please-help-printingthethread.html | CC-MAIN-2014-10 | refinedweb | 285 | 69.07 |
Disk Management
The Disk Management snap-in to the Microsoft Management Console (MMC) is a tool for managing disk storage systems. Wizards guide you through creating partitions or volumes and initializing or upgrading disks. New key features of Windows 2000 Server Disk Management include the following:
Online Disk Management: You can perform most administrative tasks without shutting down the system or interrupting users. For example, you can create various partition layouts and choose protection strategies, such as mirroring and striping, without restarting the system. You can also add disks without restarting. Most configuration changes take effect immediately.
Remote Disk Management: As an administrator, you can manage any remote (or local) computer that runs Windows 2000.
Figure 19.2 shows some of the View menu options you can select in Disk Management.
Figure 19.2 Disk Management MMC Snap-In
Basic and Dynamic Storage
There are two types of disk storage available with Windows 2000: basic and dynamic. Basic storage supports partition-oriented disks. A basic disk can hold primary partitions, extended partitions, and logical drives. Basic disks might also contain spanned volumes (volume sets), mirrored volumes (mirror sets), striped volumes (stripe sets), and redundant array of independent disks (RAID-5) volumes.
Dynamic storage supports new volume-oriented disks and is new with Windows 2000. It overcomes the restrictions of partition-oriented disk organization and facilitates multidisk, fault-tolerant disk systems. With dynamic storage, you can perform disk and volume management without restarting the operating system. On a dynamic disk, storage is divided into volumes instead of partitions. A volume consists of a portion or portions of one or more physical disks in any of the following layouts: simple, spanned, mirrored, striped, and RAID-5 volumes. Dynamic disks cannot contain partitions or logical drives, and cannot be accessed by MS-DOS or Microsoft® Windows® 98 and earlier versions. You can use dynamic storage to set up a fault-tolerant system by using multiple disks.
When you attach a new disk to your computer, you need to initialize the disk before you can create volumes or partitions. When you initialize the disk, select dynamic storage if you want to create simple volumes on the disk or if you plan to share the disk with other disks to create a spanned, striped, mirrored, or RAID-5 volume. Select basic storage if you want to create partitions and logical drives on the disk.
Table 19.2 shows tasks that you can perform on basic and dynamic disks by using Disk Management.
Table 19.2 Tasks for Basic and Dynamic Disks
Volume Management
Windows 2000 includes significant improvements in the architecture of volume management. Volume management includes the processes that create, delete, alter, and maintain storage volumes in a system. The new architecture improves the manageability and recoverability of volumes in an enterprise environment.
A Logical Disk Manager (LDM) has been introduced to the architecture to extend fault tolerance functionality, to improve system recovery, to encapsulate volume information so that disks can be easily moved, and to provide improved management functionality. This service is responsible for volume creation and deletion, fault tolerance features (RAID), and volume tracking. You use the Disk Management snap-in to manage local and remote volumes.
Volume management has the following features:
You can create any number of volumes in the free space on a physical hard disk or create volumes that span two or more disks.
Each volume on a disk can have a different file system, such as the file allocation table (FAT) file system or the NTFS file system.
Most changes that you make to your disk are immediately available. You do not need to quit Disk Management to save them or restart your computer to implement them.
Volume Mount Points
As part of Disk Management, you can create volume mount points. Volume mount points provide you with a quick way to bring data online and offline. They are file system objects in the Windows 2000 internal namespace that represent storage volumes. When you place a volume mount point in an empty NTFS directory, you can graft new volumes into the namespace without requiring additional drive letters. An example of how you might use volume mount points is to have a computer with a single drive and volume formatted as C and to mount a disk as C:\Games.
Some possible uses for volume mount points include:
To provide additional space for programs: For example, you mount a disk as C:\Program Files. Then, when you need additional disk space, you add a disk to the system and span it with the disk at C:\Program Files.
To create different classes of storage: For example, create a striped volume for performance and mount it as C:\Scratch, and create a mirrored volume for robustness and mount it as C:\Projects. Users will see the directories normally, but their scratch directory will be fast, and their projects directory will be protected by the mirror.
To create multiple mount points for a volume: For example, a volume is mounted as both C:\Games and C:\Projects. Be aware that nothing prevents cycles in the namespace. If you mount a volume as D and also as D:\Docs, the volume is mounted underneath itself, which creates a cycle in the namespace. Applications that enumerate the directory tree get into an endless loop on this volume.
Volume mount points are robust against system changes that occur when hardware devices are added to or removed from a computer. You are no longer limited to the number of volumes you can create based on the number of drive letters.
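Mount points can also be created and removed from the command line with the Mountvol tool included with Windows 2000. The following is a sketch of typical usage, not output from a specific system; the volume GUID shown is a placeholder, so substitute one of the volume names that mountvol prints when you run it with no arguments:

```cmd
REM List each volume GUID name and where (if anywhere) it is mounted.
mountvol

REM Graft the volume into the namespace at the empty NTFS folder C:\Games.
REM The GUID below is a placeholder for a real volume name from the listing.
mountvol C:\Games \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

REM Remove the mount point again; the volume itself and its data are not affected.
mountvol C:\Games /D
```

Passing the same volume name with several different folders creates multiple mount points for one volume, which is how the C:\Games and C:\Projects example above would be set up.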
Disk Defragmentation
Another Disk Management feature is the Disk Defragmenter. You can use this tool to locate files and folders that have become fragmented and to reorganize clusters on a local disk volume. Disk Defragmenter organizes clusters so that files, directories, and free space are physically more contiguous. As a result, your system can gain access to your files and folders and can save new ones more efficiently. If you have considerable fragmentation, Disk Defragmenter can improve your overall system performance significantly in relation to disk input/output (I/O).
The Disk Defragmenter decides where files should be located on the disk; the NTFS and FAT file systems perform the actual movement of clusters.
You can use this tool with disk volumes that are formatted for FAT16, FAT32, or NTFS.
Considerations for Using Dynamic Storage
Consider the following when creating volumes:
Dynamic storage uses a volume-oriented scheme for disk organization. Windows NT Server is not compatible with dynamic disks.
You can use the Windows 2000 Setup program to configure disk space while upgrading to Windows 2000 Server.
Note
You can create new volumes and partitions on unallocated portions of the disk without losing data on existing volumes. However, if you plan to change your volume topology, you have to back up your data first, because making changes to existing volumes erases all existing data.
You can configure the internal hard disk on a new computer during initial setup when you load the Windows 2000 Server operating system software. You can use Disk Management to make changes to the disk after installation.
For more information about managing disks, see "Disk Concepts and Troubleshooting" and "Data Storage and Management" in the Microsoft® Windows® 2000 Server Resource Kit Server Operations Guide.
Debug Expression Blend applications
If your Microsoft Expression Blend application does not behave the way that you expect, or if errors occur when you try to test your application, there may be a bug in your application. It can be difficult to understand what is causing a bug or where in your application it exists, but it helps to understand the types of bugs that you might encounter.
Syntax errors
When you build your application, any syntax errors are displayed in the Errors tab of the Results panel of Expression Blend, or in the Error List panel in Microsoft Visual Studio 2010.
Syntax errors occur if your Extensible Application Markup Language (XAML) or code does not follow the formatting rules of the language. The description of the error can help you understand how to fix it. The description also specifies the name of the file and the line number where the error occurs. Some common causes of syntax errors are as follows:
A keyword has been misspelled or the capitalization is wrong.
Quotation marks are missing around strings of text.
A XAML element is missing a closing tag.
A XAML element exists in a location where it is not allowed. You can avoid these errors by editing your documents in Design view in Expression Blend, or in Visual Studio 2010.
In a code file, a function or method call does not include the required parameters. For example, the MessageBox.Show() method must have at least one parameter, such as the string in MessageBox.Show("Hello").
In a code file, a variable of one type is being assigned to a different type. For example, the MessageBox.Show() method can have a string argument, but it cannot have an integer argument.
In C#, a call to a method that does not need arguments might be missing the parentheses at the end. For example, this.InitializeComponent; will cause a syntax error because the correct line is this.InitializeComponent();.
For information about XAML syntax, see the overview topics for individual controls listed in the Windows Presentation Foundation Control Library and the Silverlight Control Gallery on MSDN. For information about programming syntax, you can search on MSDN for keywords in your code.
Compilation errors
When you build your application, any compilation errors are displayed in the Errors tab of the Results panel of Expression Blend, or in the Error List panel in Visual Studio 2010.
Compilation errors occur when the compilation system of Expression Blend or Visual Studio 2010 cannot find something that your project requires. For example, if your Windows Presentation Foundation (WPF) project is missing a reference to the WPF assemblies, you might get an error such as "The name 'Window' does not exist in namespace ''". If you get this error, you can click Add Reference on the Project menu to add references to the following WPF assemblies in the "C:\Program Files\Reference Assemblies\Microsoft\Framework\" folder:
PresentationCore.dll
PresentationFramework.dll
WindowsBase.dll
If you still receive errors such as "The name '<member>' does not exist in the current context," there might be another assembly reference missing, or you might need to add a using (C#) or Imports (Visual Basic .NET) statement to your code for the missing namespace. To find out which assembly or namespace is required, see the MSDN reference topic for the member that is causing the error.
Some other common causes for compilation errors are as follows:
A keyword has been misspelled or the capitalization is wrong.
A class is not referenced properly in your application. For example, if your application uses a custom class which is implemented in a separate .cs or .vb code file with its own namespace, any document in your application that uses the custom class needs to include a line like the following, where FullyQualifiedNamespace is the namespace in the code file:
xmlns:SampleNamespace="clr-namespace:FullyQualifiedNamespace"
The compiler options are not set properly, or your system is not capable of building Microsoft .NET Framework–based applications. If you have the Microsoft .NET Framework, and you are building your application by using Expression Blend or Visual Studio 2010, this should not be an issue.
A file has not been saved before you try to build the project. For example, if you use the Events panel of Expression Blend to generate a new event handler method in the code-behind file (thus opening the code-behind file in Visual Studio), and then try to build the project in Expression Blend without first saving the code-behind file, you will get an error saying that your project does not contain a definition for the event handler.
Run-time errors
You have a run-time error if your application builds but it behaves in an unexpected way when you run it (by pressing F5 in Expression Blend). Run-time errors are the most difficult to identify because they involve errors in logic. Sometimes, you can fix run-time errors by trying out different changes in your XAML or code until you understand what is going on behind the scenes. However, it is faster to actually watch what is going on behind the scenes by stepping through your code line by line as the application is running.
For more information, see Debug Expression Blend applications in Visual Studio 2010.
Some common causes of run-time errors are as follows:
XAML elements are not laid out properly, or the wrong panel object is being used to contain other objects.
To learn about layout, see Arranging objects, or see The Layout System and Alignment, Margins, and Padding Overview in the WPF section on MSDN.
A XAML element is not hooked up to the correct event handler. This can happen if you create many event handler methods and then assign the wrong one to the XAML element. To see which event handlers are assigned to a XAML element in a WPF project open in Expression Blend, select the object in the Objects and Timeline panel, and then, in the Properties panel, click the Events button.
For more information, see Writing code that will respond to events.
An animation trigger in Expression Blend is not set properly. For example, animation storyboards must be started in any trigger if you want to be able to stop or pause them after the application is loaded. (All animation storyboards are started in the Window.Loaded trigger by default, but you can change that.)
For more information, see Debug Expression Blend applications in Visual Studio 2010.
In a code-behind file, user interface (UI) updates are executed on the same thread as other programming logic that should be performed on a separate thread. For example, if you create an event handler method that updates the text that is displayed in a Label, performs some other calculations, and then updates the text in the same Label again before the event handler method completes, you will see only the last update. This is because the rendering of your UI occurs at the end of your event handler method and all processing is done on the same thread, so your application cannot take time out during the execution of your method to update the UI.
For information about how to write WPF applications that have multiple UI updates and calculations, see Threading Model in the WPF section on MSDN.
In an event handler method in a code-behind file, UI elements or their properties are referenced before they are available. For example, in a Window1() constructor method, you will not be able to access UI elements yet. In an OnInitialized() event handler method, you can access UI elements, but you cannot examine properties like ActualWidth because the UI elements have not been laid out yet. In an OnLoaded() event handler method, you can do what you want to do with UI elements that exist in your XAML document.
For more information, see Object Lifetime Events in the WPF section on MSDN.
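To make the lifecycle rules above concrete, here is a hypothetical code-behind sketch (the myLabel control and the Loaded="Window_Loaded" wiring in the XAML are assumptions for illustration, not from the original documentation):

```csharp
public partial class Window1 : Window
{
    public Window1()
    {
        // Constructor: UI elements declared in XAML are not available yet.
        InitializeComponent();
    }

    protected override void OnInitialized(EventArgs e)
    {
        base.OnInitialized(e);
        // UI elements exist now, but layout has not run yet, so
        // layout-derived values such as myLabel.ActualWidth are still 0.
    }

    private void Window_Loaded(object sender, RoutedEventArgs e)
    {
        // Loaded: the element tree is built and laid out -- safe to use everything.
        double width = myLabel.ActualWidth;
    }
}
```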
Debugging in Visual Studio 2010
Expression Blend is a design tool for creating rich user interfaces for WPF-based applications and Microsoft Silverlight applications. You can use Visual Studio 2010 to open, build, and debug Expression Blend projects. If you are having trouble debugging your application by using the Run Project (F5) feature of Expression Blend, you can use Visual Studio 2010 to obtain detailed error messages about run-time errors.
For more information, see Debug Expression Blend applications in Visual Studio 2010.
Debugging performance issues
WPF provides a suite of performance assessment tools that allow you to analyze the run-time behavior of your application and determine how you can improve performance.
For more information, see Performance Profiling Tools for WPF and Optimizing WPF Application Performance in the WPF section on MSDN.
Event tracing
Experienced .NET programmers can add code to their WPF applications to trigger custom debugging events that help them to debug more complicated bugs. This feature is called Event Tracing for Windows (ETW). The WPF Event Trace profiling tool uses ETW for event logging.
For more information, see "Event Tracing" and "PresentationTraceSources" in Performance Profiling Tools for WPF on MSDN.
Debugging hybrid applications
If you have an application that uses both WPF and another technology like Windows Forms programming, you may experience problems such as unexpected overlapping behavior, scaling behavior, control focus issues, and so on.
For information that can help you debug hybrid applications, see Troubleshooting Hybrid Applications in the WPF section on MSDN.
Security
While being debugged, your application has the same security permissions that it has when another person uses it.
For more information, see Deploy and publish Expression Blend applications.
For more information about WPF application security, see Security in the WPF section on MSDN.
Getting help
If you need more help debugging your Expression Blend application, you can search the Windows Presentation Foundation Forum or the Silverlight learning center for posts related to your issue, or post a question.
© 2011 Microsoft Corporation. All rights reserved.
Blocks.
The Climate Control gem allows a test to be run with its own set of temporary environment variables (usually accessed through ENV). It uses blocks to establish a contained scope for the test and its modified environment without polluting the general environment for other tests.
RSpec.describe Environment do
  it "modifies the ENV only when the block is run" do
    updated_env = nil

    Environment.set FOO: "bar" do
      updated_env = ENV["FOO"]
    end

    expect(updated_env).to eq "bar"
    expect(ENV["FOO"]).to be_nil
  end
end
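The Environment class exercised by that spec isn't shown in the excerpt; here is a minimal sketch of how such a helper could work (the real Climate Control gem is more robust -- this just follows the spec above, and the class and method names are inferred from it):

```ruby
# Minimal sketch of a Climate Control-style helper (assumed API,
# modeled on the spec above -- not the gem's actual implementation).
class Environment
  def self.set(vars)
    old = {}
    vars.each do |key, value|
      key = key.to_s
      old[key] = ENV[key]   # remember the previous value (or nil)
      ENV[key] = value
    end
    yield
  ensure
    # Restore even if the block raises; assigning nil deletes the key.
    old.each { |key, value| ENV[key] = value }
  end
end

captured = nil
Environment.set FOO: "bar" do
  captured = ENV["FOO"]
end
captured     # => "bar"
ENV["FOO"]   # => nil
```

The ensure clause is what makes the scope "contained": the environment is restored even when the block raises.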
Blocks can be delineated with the familiar do and end or with curly braces. (Remember that not all curly braces represent blocks! Sometimes they signify a hash.) The block allows the establishment of a lexical scope for an anonymous method.
# a block using do-end
within ".body" do
  has_css(".nav")
end

# a block using {}
2.times { puts "hello" }
& preceding an argument (such as &block) designates a block argument in a method's argument list. In this method, &block means the argument will be a block:
def run(&block)
  begin
    cache_old_values
    assign_env
    block.call
  ensure
    reset_env
  end
end
Within a method, a block argument can be executed by sending it #call. Keep in mind that the block can be called multiple times!
Block variables and #yield
When you define a block, you can specify "block variables" (declared between pipe characters, such as |memo|) that are only accessible within the block.
result = Calculator.run(start: 5) do |c|
  c.add 5
end
Within a method, #yield provides a separate means of invoking a block (in addition to the previously mentioned Proc#call), with the ability to pass arguments to a block (which then become block arguments). You can see #yield in Rails layouts, when the layout yields to the specified view.
def self.run(start:)
  yield Operations.new(start)
end
The block's execution will return the value of the last evaluation of the block. This behavior allows the building of chainable methods.
result = Calculator.run(start: 5) do |c|
  c.add 5
  c.multiply_by 2
  c.subtract 15
end

expect(result).to eq 5
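Calculator's Operations class is not defined in the excerpt; here is one hypothetical implementation that would make the snippet above pass (method names are inferred from the calls, not taken from the original source). Note how each operation returns the running value, so the last evaluation of the block becomes run's return value:

```ruby
# Hypothetical Calculator matching the usage above (assumed details).
class Calculator
  class Operations
    def initialize(start)
      @value = start
    end

    # Each operation returns the updated running value, so the
    # block's last evaluation is the final answer.
    def add(n)
      @value += n
    end

    def multiply_by(n)
      @value *= n
    end

    def subtract(n)
      @value -= n
    end
  end

  # run simply returns whatever the block's last expression evaluates to.
  def self.run(start:)
    yield Operations.new(start)
  end
end

result = Calculator.run(start: 5) do |c|
  c.add 5
  c.multiply_by 2
  c.subtract 15
end
result  # => 5, i.e. ((5 + 5) * 2) - 15
```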
Object#instance_exec
We discussed #call and #yield as ways to invoke a block. There is a separate third way to invoke a block. Object#instance_exec allows you to run a block in the context of the calling object, meaning that references in the block (including self) refer to the calling object. Here's the example from Ruby's documentation:
class KlassWithSecret
  def initialize
    @secret = 99
  end
end

k = KlassWithSecret.new
k.instance_exec(5) {|x| @secret+x }   #=> 104
Within the block, @secret references the instance variable in the object that received #instance_exec with the block.
FactoryGirl makes use of #instance_exec to set attributes in a factory. In this simplified factory example, #instance_exec is used with #method_missing to dynamically build a hash of key-value pairs out of method-argument pairs. The methods declared in the block will be executed within the runner's context, where they will not be defined. The runner will then delegate those method calls (with their arguments) to #method_missing, where the implementation will build up the hash's keys and values. (Note that #method_missing is not part of Ruby's block functionality, but it is often used with it.)
def method_missing(name, *args, &block)
  @result[name.to_sym] = args.first
end
The #block_given? method can be called within a method to check if a block argument was supplied. Here Josh uses #block_given? to allow the HashDSL class to build a hash with an arbitrary level of nested values using recursion:
def method_missing(name, *args, &block)
  @result[name.to_sym] = if block_given?
    Runner.new.instance_exec(&block).__result__
  else
    args.first
  end

  self
end
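Put together as a runnable toy (the HashDSL class name comes from the prose above; the surrounding details are assumed), the recursive builder looks like this:

```ruby
# Toy, self-contained version of the HashDSL pattern described above.
class HashDSL
  def initialize
    @result = {}
  end

  def __result__
    @result
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end

  def method_missing(name, *args, &block)
    @result[name.to_sym] = if block_given?
      # Recurse: evaluate the nested block in a fresh builder's context.
      HashDSL.new.instance_exec(&block).__result__
    else
      args.first
    end

    self  # returning self keeps instance_exec's return value usable
  end
end

hash = HashDSL.new.instance_exec do
  title "Blocks"
  author do
    name "Josh"
  end
end.__result__
hash  # => {:title=>"Blocks", :author=>{:name=>"Josh"}}
```

Because every method_missing call returns self, the block's last evaluation is the builder itself, which is why .__result__ can be chained straight off instance_exec.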
- 19 Jul 2019 12:36:51 UTC
- Distribution: Class-Inspector
- Module version: 1.36
- License: perl_5
- Perl: v5.8.0
NAME
Class::Inspector - Get information about a class and its structure
VERSION
version 1.36
METHODS

installed

my $bool = Class::Inspector->installed($class);

The installed static method tries to determine if a class is installed on the machine, or at least available to Perl. It does this by wrapping around resolved_filename.

Returns true if installed/available, false if the class is not installed, or undef if the class name is invalid.
loaded
my $bool = Class::Inspector->loaded($class);
The loaded static method tries to determine if a class is loaded by looking for symbol table entries.

Returns true if the class is loaded, false if not, or undef if the class name is invalid.
filename
my $filename = Class::Inspector->filename($class);
For a given class, returns the base filename for the class. This will NOT be a fully resolved filename, just the part of the filename BELOW the @INC entry.

print Class->filename( 'Foo::Bar' );
> Foo/Bar.pm

This filename will be returned with the right separator for the local platform, and should work on all platforms.

Returns the filename on success or undef if the class name is invalid.
resolved_filename
my $filename = Class::Inspector->resolved_filename($class); my $filename = Class::Inspector->resolved_filename($class, @try_first);
For a given class, the resolved_filename static method returns the fully resolved filename for a class. That is, the file that the class would be loaded from.

This is not necessarily the file that the class WAS loaded from, as the value returned is determined each time it runs, and the @INC include path may change.

To get the actual file for a loaded class, see the loaded_filename method.

Returns the filename for the class, or undef if the class name is invalid.
loaded_filename
my $filename = Class::Inspector->loaded_filename($class);
For a given loaded class, the loaded_filename static method determines (via the %INC hash) the name of the file that it was originally loaded from.

Returns a resolved file path, or false if the class did not have its own file.
functions
my $arrayref = Class::Inspector->functions($class);
For a loaded class, the functions static method returns a list of the names of all the functions in the class's immediate namespace.

Note that this is not the METHODS of the class, just the functions.

Returns a reference to an array of the function names on success, or undef if the class name is invalid or the class is not loaded.
function_refs
my $arrayref = Class::Inspector->function_refs($class);
For a loaded class, the function_refs static method returns references to all the functions in the class's immediate namespace.

Note that this is not the METHODS of the class, just the functions.

Returns a reference to an array of CODE refs of the functions on success, or undef if the class is not loaded.
function_exists
my $bool = Class::Inspector->function_exists($class, $function);
Given a class and function name, the function_exists static method will check to see if the function exists in the class.

Note that this is as a function, not as a method. To see if a method exists for a class, use the can method for any class or object.

Returns true if the function exists, false if not, or undef if the class or function name are invalid, or the class is not loaded.
methods
my $arrayref = Class::Inspector->methods($class, @options);
For a given class name, the methods static method will return ALL the methods available to that class. This includes all methods available from every class up the class's @ISA tree.

Returns a reference to an array of the names of all the available methods on success, or undef if the class name is invalid or the class is not loaded.
A number of options are available to the methods method. For example, with the 'expanded' option, each entry in the returned list is an array reference containing the fully resolved name, the separate class and method, and a CODE ref to the actual function (if available). Please note that the function reference is not guaranteed to be available, as Class::Inspector is intended, at some later time, to work with modules that have some kind of common run-time loader in place.
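A short, hypothetical usage sketch tying several of these methods together (File::Spec is just a convenient core module to inspect; output will vary by installation):

```perl
use strict;
use warnings;
use Class::Inspector;

# Installed on disk, but not necessarily loaded yet:
print "File::Spec is installed\n"
    if Class::Inspector->installed('File::Spec');

require File::Spec;

print "File::Spec is loaded\n"
    if Class::Inspector->loaded('File::Spec');

# All public methods, as fully resolved 'Class::method' names:
my $methods = Class::Inspector->methods('File::Spec', 'full', 'public');
print "$_\n" for @{ $methods };
```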
subclasses

my $arrayref = Class::Inspector->subclasses($class);

The subclasses static method will search the entire namespace (and thus all currently loaded classes) to find all classes that are subclasses of the class provided as the parameter.

The actual test will be done by calling isa on the class as a static method (i.e. My::Class->isa($class)).

Returns a reference to a list of the loaded classes that match the class provided, or false if none match, or undef if the class name provided is invalid.
SEE ALSO

Class::Handle, Class::Inspector::Functions
AUTHOR

Adam Kennedy
Module Install Instructions
To install Class::Inspector, copy and paste the appropriate command into your terminal.
cpanm Class::Inspector
perl -MCPAN -e shell
install Class::Inspector
For more information on module installation, please visit the detailed CPAN module installation guide. | https://metacpan.org/pod/Class::Inspector | CC-MAIN-2021-49 | refinedweb | 845 | 52.6 |
Hello,
I have a custom python script and tool which is designed to allow a user to select and zoom to a locality. Param[0] is a feature layer (containing locality boundaries) and param[1] is a multi value list which is populated from a field in the selected feature layer in param[0]. If I open the MXD and open the tool the localities in param[1] populate perfectly. If I use the tool to select a particular locality and zoom to it the script will run perfectly. However, if I then reopen the tool to use it again the only value which will be populated in param[1] is the locality value I used when I previously ran the tool and not all the locality values. I cannot get this to reset until I close the MXD and open it fresh again.
The validation code is attached. Can anyone please tell me what I am doing wrong? I am sure it is something terribly simple which I am not seeing...
Cheers,
Gino
FYI, I decided to hard-code the feature layer's path into the validation code and it works fine. I still cannot explain the behaviour in my previous post.
strange... if a hard-coded path works then perhaps the full path to the file isn't being grabbed.

One useful tip to exploit if your data are in a location relative to the running script/tool is

path = sys.argv[0]
data = path + "/DataFolder/some.gdb/something"

so data now refers to the location of the something you are looking for, since it is in a gdb in a folder relative to the location of the running script
Thanks Dan. Yep, I don't know why it does this. I would think that if it works the first time it should have the reference to the layer set correctly; not sure why it would lose it after the first run?
This tool is designed to work with layers in a particular MXD so in this instance I just grabbed the layer from the TOC and used its datasource property as the input. Will keep your suggestion in mind when I need to reference the script folder however 🙂
Cheers,
Gino
I'm wondering if using {} instead of [] for the list is an issue or whether in the end the set is being converted to a list by arcpy so it's no issue. ({} delineates a set [] delineates a list.)
As for Dan's comment about the local path. There are two options:
os.path.dirname(sys.argv[0]) or os.path.dirname(__file__). Which is best depends on what you need -- usually __file__.

See: Difference between __file__ and sys.argv[0] (StackOverflow)
(In .tbx script tool validation code, __file__ will be the path to the function - packed in the tbx file using a relative link syntax: u'C:\\test\\Toolbox.tbx#Script_z.InitializeParameters.py' - while sys.argv[0] will just give you an empty string, as I would guess the validation code is being run with execfile().)
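Outside of ArcGIS, the difference between the two is easy to demonstrate with plain Python (the NameError fallback is just for interactive/embedded sessions where __file__ is undefined):

```python
import os
import sys

# Directory of the script that was launched (may be '' or relative,
# so normalize with abspath before taking dirname):
script_dir = os.path.dirname(os.path.abspath(sys.argv[0]))

# Directory of *this* source file, even when it is imported as a
# module rather than run directly:
try:
    module_dir = os.path.dirname(os.path.abspath(__file__))
except NameError:  # __file__ is undefined in some embedded/interactive cases
    module_dir = os.getcwd()

# Build a data path relative to the code's own location:
data = os.path.join(module_dir, "DataFolder", "some.gdb", "something")

print(script_dir)
print(data)
```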
One more thing about path - maybe you instead want the path to the mxd? (Don't forget to import os at the top of your validation code.)
here = os.path.dirname(arcpy.mapping.MapDocument("CURRENT").filePath)
Back to your original question: I would remove the if statement in the initialize code to make sure it always at least attempts to run.
def initializeParameters(self):
    """Refine the properties of a tool's parameters. This method is
    called when the tool is opened."""
    try:
        self.params[1].filter.list = sorted(
            [row[0] for row in arcpy.da.SearchCursor(
                self.params[0].value, "ADMINAREANAME") if row[0]])
    except:
        pass
Gino's attached .py snippet is pretty short so I am including it here:

def initializeParameters(self):
    """Refine the properties of a tool's parameters. This method is
    called when the tool is opened."""
    if self.params[0].value:
        self.params[1].filter.list = sorted(
            {row[0] for row in arcpy.da.SearchCursor(
                self.params[0].value, "ADMINAREANAME") if row[0]})

def updateParameters(self):
    """Modify the values and properties of parameters before internal
    validation is performed. This method is called whenever a parameter
    has been changed."""
    if self.params[0].altered:
        if self.params[0].value:
            self.params[1].filter.list = sorted(
                {row[0] for row in arcpy.da.SearchCursor(
                    self.params[0].value, "ADMINAREANAME") if row[0]})
    return

def updateMessages(self):
    """Modify the messages created by internal validation for each tool
    parameter. This method is called after internal validation."""
    return
My efforts to rebase the various branches in Dart Comics onto a base updated for the M1 release proceed apace. After tonight, I think all that remains is the websocket branch. But first, an issue found by my new favorite tool, the dart_analyzer.

At one point, I had played with exceptions in Dart and it is this that is breaking for me now. Specifically, according to dart_analyzer, exceptions now follow a new format:

➜  scripts git:(mvc-futures) dart_analyzer main.dart
file:/home/chris/repos/dart-comics/public/scripts/Views.AddComicForm.dart:98: This style of catch clause has been deprecated. Please use the 'on' <type> 'catch' '(' <identifier> (',' <identifier>)? ')' form.
    97:     }
    98:     catch (Exception e) {
            ~
Compilation failed with 1 problem.
Exceptionclass, I can catch with no type:
That solves my problems as far asThat solves my problems as far as
try { collection.create({ 'title':title.value, 'author':author.value }); } catch (e) { print("Exception handled: ${e.type}"); }
dart_analyzeris concerned, but this seems like a good time to try to test exceptions. Looking through unittest, there seems to be only
expectThrowfor working with expectations. It is marked as deprecated. But, since there is no indication of what it is deprecated in favor of, I will give it a go anyhow.
Way back in the day, I had tried out some very early Dart unit testing in this sample application. I still have the remnants in the tests sub-directory, so I start building new tests right alongside the old (maybe I will update the old next). But when I try to run the empty test file as a sanity test, I get errors about not being able to find the unittest library:

➜  dart-comics git:(mvc-futures) ✗ dart tests/Views.AddComicFormTest.dart
Unable to open file: /home/chris/repos/dart-comics/tests/packages/unittest/unittest.dart
'': Error: line 1 pos 1: library handler failed
import 'package:../unittest/unittest.dart';
^

The pubspec.yaml file in my application's root directory includes unittest.dart and it is definitely installed:

➜  dart-comics git:(mvc-futures) ✗ cat pubspec.yaml
name: Dart Comics
dependencies:
  dirty: any
  uuid: any
  unittest:
    sdk: unittest
➜  dart-comics git:(mvc-futures) ✗ ls packages
dirty  unittest  uuid

Eventually, I realize that, if I name my testing directory as test instead of tests, then Dart Pub will create a link to the packages directory:
➜  dart-comics git:(mvc-futures) ✗ rm -rf test
➜  dart-comics git:(mvc-futures) ✗ mkdir test
➜  dart-comics git:(mvc-futures) ✗ pub install
Resolving dependencies...
Dependencies installed!
➜  dart-comics git:(mvc-futures) ✗ ls -l test
total 0
lrwxrwxrwx 1 chris chris 38 Dec  9 23:16 packages -> /home/chris/repos/dart-comics/packages

So it seems that test is the official sub-directory for Dart testing.
Next up is the question of how to test client-side code. Back in the day, I had to set up a web page so that the test suite could run in Dartium. It seems as though something like this is still going to be required. I had expected trouble when some of my code tried to access document in the DOM, but my empty unit test crashes before it even reaches that. It crashes when the library being tested tries to import dart:html:

➜  dart-comics git:(mvc-futures) ✗ dart test/Views.AddComicForm_test.dart
Do not know how to load 'dart:html'
'': Error: line 3 pos 1: library handler failed
import 'dart:html';
^
'': Error: line 3 pos 1: library handler failed
import '../public/scripts/Views.AddComicForm.dart';
^

Bummer.
It seems that I still cannot escape the need for a browser when testing client-side Dart code. To get started with that, I create a sanity-check test:
import 'package:unittest/unittest.dart';
import '../public/scripts/Views.AddComicForm.dart';

main() {
  test('Sanity check', (){
    Expect.equals(1+1, 2);
  });
}

Then I load my test suite via a script src tag:

<html>
  <head>
    <title>Hipster Test Suite</title>
    <script type="application/dart" src="Views.AddComicForm_test.dart"></script>
    <script type="text/javascript">
      // start dart
      navigator.webkitStartDart();
    </script>
  </head>
  <body>
    <h1>Test!</h1>
  </body>
</html>

And yup, in the Dart console, that test passes.
That's a roundabout way to make progress, but I call it a day there. I ultimately need a way to run tests under continuous integration and this seems lacking right now. I may explore the browser testing a bit more tomorrow.
Day #594 | https://japhr.blogspot.com/2012/12/trying-to-test-client-side-dart.html | CC-MAIN-2018-30 | refinedweb | 759 | 55.44 |
Send a message to the system logger
#include <stdio.h>
#include <sys/slog.h>

int slogf( int opcode,
           int severity,
           const char * fmt,
           ... );
The major and minor codes are defined in <sys/slogcodes.h>.
The formatting characters that you use in the message determine any additional arguments.
The vslogf() function is an alternate form in which the arguments have already been captured using the variable-length argument facilities of <stdarg.h>.
Severity levels
There are eight levels of severity defined. The lowest severity is 7 and the highest is 0. The default is 7.
If you want the data in the log message to be interpreted as text, use a bitwise OR to add _SLOG_TEXTBIT to the severity. If this bit is set, slogf() and vslogf() also write the log message on stderr.
Returns:

The size of the message sent to slogger, or -1 if an error occurs.
Errors:

Any value from the Errors section in MsgSend().
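A minimal usage sketch (QNX-only, so it won't build on other platforms; _SLOGC_TEST is a stock major code from <sys/slogcodes.h>, and the minor code 1 here is an arbitrary choice for illustration):

```c
#include <unistd.h>
#include <sys/slog.h>
#include <sys/slogcodes.h>

int main( void )
{
    /* Combine a major and minor code into an opcode. */
    int code = _SLOG_SETCODE( _SLOGC_TEST, 1 );

    /* OR _SLOG_TEXTBIT into the severity so the data is interpreted
       as text (and echoed to stderr), per the note above. */
    if( slogf( code, _SLOG_INFO | _SLOG_TEXTBIT,
               "starting up, pid %d", getpid() ) == -1 )
    {
        /* Couldn't deliver the message to slogger. */
        return 1;
    }

    return 0;
}
```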