text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
\(\pm \(t\) is on the interval \([1, \infty]\) and the outer integral with respect to \(x\) is on the interval \([0, \infty]\).
Non-constant integration bounds can be treated in a similar manner; the example from above \(2^{k}+1\) for some integer \(k\),.
>>> low-level callback functions¶
A user desiring reduced integration times may pass a C function pointer through scipy.LowLevelCallable to quad, dblquad, tplquad or nquad.
/* testlib.c:
import and the two diagonals immediately above and the two immediately below the main diagonal are the nonzero diagonals. This is important, because the inputs mu and ml of odeint are the upper and lower bandwidths of the Jacobian matrix. When the variables are interleaved, mu and ml are 2. | https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html | CC-MAIN-2017-22 | refinedweb | 121 | 54.83 |
LINEAR ALGEBRA IN NUMPY
In this tutorial, we are going to learn about the linear algebra in numpy and explore methods like dot, vdot, inner, matmul etc. We will be looking at some handy examples to understand these methods in more depth.
What is Linear Algebra?
Linear Algebra is the study of linear equations and their properties in mathematics. It is widely used in engineering mainly covering subjects like Physics and Mathematics. You can view the full reference of Linear Algebra here
What is linear algebra in Numpy?
Numpy has a built in linear algebra module which is used for doing linear algebra. Let’s look at some of the functions of linear algebra.
Dot function in Numpy
This function returns the scalar dot product of two arrays. This function is similar to the matrix multiplication Let’s look at a quick example to understand more in detail:
In order to understand dot product multiplication, view this tutorial here. Let’s look at a numpy example here:
import numpy as np x = np.array([[10,20],[30,40]]) y = np.array([[10,20],[30,40]]) print(x) print(y) print(np.dot(x,y))
Output:
[[10 20] [30 40]] [[10 20] [30 40]] [[ 700 1000] [1500 2200]]
V-Dot function in Numpy
This function returns the vector dot product of two arrays. This function is similar to the matrix multiplication. Let’s look at a quick example to understand more in detail:
import numpy as np x = np.array([[10,20],[30,40]]) y = np.array([[10,20],[30,40]]) print(x) print(y) print(np.vdot(x,y))
Output:
[[10 20] [30 40]] [[10 20] [30 40]] 3000
Inner function in Numpy
This function gives the product of single dimensional arrays. For more dimensions, this function returns the sum of product of elements over the last axis. For example:
import numpy as np print(np.inner(np.array([10,20,30]),np.array([0,10,0])))
Output:
200
Matmul Function in Numpy
This function is used for multiplying two matrices. If two shapes of the matrices aren’t the same then an error will erupt. Let’s have an example:
import numpy as np x = [[2,4],[2,3]] y = [[5,6],[4,1]] print(np.matmul(x,y))
Output:
[[26 16] [22 15]]
Determinant function in Numpy
The determinant function is used to perform calculations diagonally in a matrix. The linalg.set() is used for calculating the determinant of a matrix. Let’s look at an example:
import numpy as np arr = np.array([[10,20],[30,40]]) print(np.linalg.det(arr))
Output:
-200.0000000000001
Linear Algebra Solve in Numpy
This method is used for solving an algebraic quadratic equation where we provide values in the form of a matrix. For example:
| https://python-tricks.com/linear-algebra-in-numpy/ | CC-MAIN-2021-39 | refinedweb | 465 | 57.77 |
They also serve who only stand and wait.
John Milton 16081674 On His Blindness [1652]
More often than not, a parent process needs to synchronize its actions by waiting until a child process has either stopped or terminated its actions. The wait system call allows the parent process to suspend its activity until one of these actions has occurred (Table 3.9).
Table 3.9. Summary of the wait System Call.
The activities of wait are summarized in Figure 3.11.
Figure 3.11. Summary of wait activities.
The wait system call accepts a single argument, which is a pointer to an integer, and returns a value defined as type pid_t . Data type pid_t is found in the header file and is most commonly a long int. If the calling process does not have any child processes associated with it, wait will return immediately with a value of -1 and errno will be set to ECHILD (10) . However, if any child processes are still active, the calling process will block (suspend its activity) until a child process terminates. When a waited-for child process terminates, the status information for the child and its process ID (PID) are returned to the parent. The status information is stored as an integer value at the location referenced by the pointer status . The low-order 16 bits of the location contain the actual status information, and the high-order bits ( assuming a 32-bit machine) are set to zero. The low-order bit information can be further subdivided into a low- and high-order byte. This information is interpreted in one of two ways:
In this second situation, if a core file has been produced, the leftmost bit of byte 0 will be a 1. If a NULL argument is specified for wait , the child status information is not returned to the parent process, the parent is only notified of the child's termination.
Here are two programs, a parent (Program 3.9) and child (Program 3.10), that demonstrate the use of wait .
Program 3.9 The parent process.
File : p3.9.cxx /* A parent process that waits for a child to finish */ #include + #include #include #include #include #include 10 using namespace std; int main(int argc, char *argv[] ){ pid_t pid, w; int status; + if ( argc < 4 ) { cerr << "Usage " << *argv << " value_1 value_2 value_3 "; return 1; } for (int i = 1; i < 4; ++i) // generate 3 child processes 20 if ((pid = fork( )) == 0) execl("./child", "child", argv[i], (char *) 0); else // assuming no failures here cout << "Forked child " << pid << endl; /* + Wait for the children */ while ((w=wait(&status)) && w != -1) cout << "Wait on PID: " << dec << w << " returns status of " << setw(4) << setfill('0') << hex 30 << setiosflags(ios::uppercase) << status << endl; return 0; }
The parent program forks three child processes. Each child process is overlaid with the executable code for the child (found in Program 3.10). The parent process passes to each child, from the parent's command line, a numeric value. As each child process is produced, the parent process displays the child process ID. After all three processes have been generated; the parent process initiates a loop to wait for the child processes to finish their execution. As each child process terminates, the value returned to the parent process is displayed.
Program 3.10 The child process.
File : p3.10.cxx /* The child process */ #define _GNU_SOURCE + #include #include #include #include #include 10 #include using namespace std; int main(int argc, char *argv[ ]){ pid_t pid = getpid( ); + int ret_value; srand((unsigned) pid); ret_value = int(rand( ) % 256); // generate a return value sleep(rand( ) % 3); // sleep a bit if (atoi(*(argv + 1)) % 2) { // assuming argv[1] exists! 20 cout << "Child " << pid << " is terminating with signal 0009" << endl; kill(pid, 9); // commit hara-kiri } else { cout << "Child " << pid << " is terminating with exit(" << setw(4) << setfill('0') << setiosflags(ios::uppercase) + << hex << ret_value << ")" << endl; exit(ret_value); } }
In the child program, the child process obtains its own PID using the getpid call. The PID value is used as a seed value to initialize the srand function. A call to rand is used to generate a unique value to be returned when the process exits. The child process then sleeps a random number of seconds (03). After sleeping, if the argument passed to the child process on the command line is odd (i.e., not evenly divisible by 2), the child process kills itself by sending a signal 9 (SIGKILL) to its own PID. If the argument on the command line is even, the child process exits normally, returning the previously calculated return value. In both cases, the child process displays a message indicating what it will do before it actually executes the statements.
The source programs are compiled and the executables named parent and child respectively. They are run by calling the parent program. Two sample output sequences are shown in Figure 3.12.
Figure 3.12 Two runs of Programs 3.9 and 3.10.
linux$ parent 2 1 2 <-- 1 Forked child 8975 Forked child 8976 Child 8976 is terminating with signal 0009 Forked child 8977 Wait on PID: 8976 returns status of 0009 Child 8977 is terminating with exit(008F) Wait on PID: 8977 returns status of 8F00 Child 8975 is terminating with exit(0062) Wait on PID: 8975 returns status of 6200 linux$ parent 2 2 1 <-- 2 Forked child 8980 Forked child 8981 Forked child 8982 Child 8982 is terminating with signal 0009 Wait on PID: 8982 returns status of 0009 Child 8980 is terminating with exit(00B0) Wait on PID: 8980 returns status of B000 Child 8981 is terminating with exit(00D3) Wait on PID: 8981 returns status of D300
(1) Two even values and one odd
(2) Two even values and one odd but in a different order.
There are several things of interest to note in this output. In the first output sequence, one child processes (PID 8976) has terminated before the parent has finished its process generation. Processes that have terminated but have not been wait ed upon by their parent process are called zombie processes. Zombie processes occupy a slot in the process table, consume no other system resources, and will be marked with the letter Z when a process status command is issued (e.g., ps -alx or ps -el ). A zombie process cannot be killed [11] even with the standard Teflon bullet (e.g., at a system level: kill -9 process_id_number ). Zombies are put to rest when their parent process performs a wait to obtain their process status information. When this occurs, any remaining system resources allocated for the process are recovered by the kernel. Should the child process become an orphan before its parent issues the wait , the process will be inherited by init , which, by design, will issue a wait for the process. On some very rare occasions, even this will not cause the zombie process to "die." In these cases, a system reboot may be needed to clear the process table of the entry.
[11] This miraculous ability is the source of the name zombie .
Both sets of output clearly show that when the child process terminates normally, the exit value returned by the child is stored in the second byte of the integer value referenced by argument to the wait call in the parent process. Likewise, if the child terminates due to an uncaught signal, the signal value is stored in the first byte of the same referenced location. It is also apparent that wait will return with the information for the first child process that terminates, which may or may not be the first child process generated.
It is easy to see that the interpretation of the status information can be cumbersome, to say the least. At one time, programmers wrote their own macros to interrogate the contents of status. Now most use one of the predefined status macros. These macros are shown in Table 3.10.
Table 3.10. The wstat Macros.
The argument to each of these macros is the integer status value (not the pointer to the value) that is returned to the wait call. The macros are most often used in pairs. The WIF macros are used as a test for a given condition. If the condition is true, the second macro of the pair is used to return the specified value. As shown below, these macros could be incorporated in the wait loop in the parent Program 3.9 to obtain the child status information:
... while ((w = wait(&status)) && w != -1) if (WIFEXITED(status)) // test with macro cout << "Wait on PID: " << dec << w << " returns a value of " << hex << WEXITSTATUS(status) << endl; // obtain value else if (WIFSIGNALED(status)) // test with macro cout << "Wait on PID: " << dec << w << " returns a signal of " << hex << WTERMSIG(status) << endl; // obtain value ...
While the wait system call is helpful, it does have some limitations. It will always return the status of the first child process that terminates or stops. Thus, if the status information returned by wait is not from the child process we want, the information may need to be stored on a temporary basis for possible future reference and additional calls to wait made. Another limitation of wait is that it will always block if status information is not available. Fortunately, another system call, waitpid , which is more flexible (and thus more complex), addresses these shortcomings. In most invocations, the waitpid call will block the calling process until one of the specified child processes changes state. The waitpid system call summary is shown in Table 3.11.
Table 3.11. Summary of the waitpid System Call.
The first argument of the waitpid system call, pid , is used to stipulate the set of child process identification numbers that should be waited for (Table 3.12).
Table 3.12. Interpretation of pid Values by waitpid .
The second argument, *status , as with the wait call, references an integer status location where the status information of the child process will be stored if the waitpid call is successful. This location can be examined directly or with the previously presented wstat macros.
The third argument, options , may be 0 (don't care), or it can be formed by a bitwise OR of one or more of the flags listed in Table 3.13 (these flags are usually defined in the header file). The flags are applicable to the specified child process set discussed previously.
Table 3.13. Flag Values for waitpid .
If the value given for pid is -1 and the option flag is set to 0, the waitpid and wait system call act in a similar fashion. If waitpid fails, it returns a value of 1 and sets errno to indicate the source of the error (Table 3.14).
Table 3.14. waitpid Error Messages.
We can modify a few lines in our current version of the parent process (Program 3.9) to save the generated child PIDs in an array. This information can be used with the waitpid system call to coerce the parent process into displaying status information from child processes in the order of child process generation instead of their termination order. Program 3.11 shows how this can be done.
Program 3.11 A parent program using waitpid .
File : p3.11.cxx #include #include #include #include + #include #include using namespace std; int main(int argc, char *argv[] ){ 10 pid_t pid[3], w; int status; if ( argc < 4 ) { cerr << "Usage " << *argv << " value_1 value_2 value_3 "; return 1; + } for (int i=1; i < 4; ++i) // generate 3 child processes if ((pid[i-1] = fork( )) == 0) execl("./child", "child", argv[i], (char *) 0); else // assuming no failures here 20 cout << "Forked child " << pid[i-1] << endl; /* Wait for the children */ for (int i=0;(w=waitpid(pid[i], &status,0)) && w != -1; ++i){ + cout << "Wait on PID " << dec << w << " returns "; if (WIFEXITED(status)) // test with macro cout << " a value of " << setw(4) << setfill('0') << hex << setiosflags(ios::uppercase) << WEXITSTATUS(status) << endl; else if (WIFSIGNALED(status)) // test with macro 30 cout << " a signal of " << setw(4) << setfill('0') << hex << setiosflags(ios::uppercase) << WTERMSIG(status) << endl; else cout << " unexpectedly!" << endl; } + return 0; }
A run of this program (using the same child processProgram 3.10) confirms that the status information returned to the parent is indeed ordered based on the sequence of child processes generation, not the order in which the processes terminated. Also, note that the status macros are used to evaluate the return from waitpid system call (Figure 3.13).
Figure 3.13 Output of Program 3.11.
linux$ p3.11 2 2 1 Forked child 9772 Forked child 9773 <-- 1 Child 9773 is terminating with exit(008B) <-- 2 Forked child 9774 Child 9772 is terminating with exit(00CD) Wait on PID 9772 returns a value of 00CD <-- 3 Wait on PID 9773 returns a value of 008B Child 9774 is terminating with signal 0009 Wait on PID 9774 returns a signal of 0009
(1) Order of creation :
(2) Order of termination :
(3) Order of wait :
On some occasions, the information returned from wait or waitpid may be insufficient. Additional information on resource usage by a child process may be sought. There are two BSD compatibility library functions, wait3 and wait4 , [12] that can be used to provide this information (Table 3.15).
[12] It is not clear if these functions will be supported in subsequent versions of the GNU compiler, and they may limit the portability of programs that incorporate them. As these are BSD-based functions, _USE_BSD must be defined in the program code or defined on the command line when the source code is compiled.
Table 3.15. Summary of the wait3/wait4 Library Functions.
The wait3 and wait4 functions parallel the wait and waitpid functions respectively. The wait3 function waits for the first child process to terminate or stop. The wait4 function waits for the specified PID ( pid ). In addition, should the pid value passed to the wait4 function be set to 0, wait4 will wait on the first child process in a manner similar to wait3 . Both functions accept option flags to indicate whether or not they should block and/or report on stopped child processes. These option flags are shown in Table 3.16.
Table 3.16. Option Flag Values for wait3 / wait4 .
Both functions contain an argument that is a reference to a rusage structure. This structure is defined in the header file . [13]
[13] On some systems, you may need the header file instead of , and you may need to explicitly link in the BSD library that contains the object code for the wait3/wait4 functions. */ };
If the rusage argument is non-null, the system populates the rusage structure with the current information from the specified child process. See the getrusage system call in Section 2 of the manual pages for additional information. The status macros (see previous section on wait and waitpid ) can be used with the status information returned by wait3 and wait4 . See Table 3.17.
Table 3.17. wait3 / wait4 Error Messages.
Programs and Processes
Processing Environment
Using Processes
Primitive Communications
Pipes
Message Queues
Semaphores
Shared Memory
Remote Procedure Calls
Sockets
Threads
Appendix A. Using Linux Manual Pages
Appendix B. UNIX Error Messages
Appendix C. RPC Syntax Diagrams
Appendix D. Profiling Programs | https://flylib.com/books/en/1.23.1/waiting_on_processes.html | CC-MAIN-2018-22 | refinedweb | 2,546 | 60.85 |
Build a Decentralized Chat App with Knockout and IPFS
How to build a fully decentralized chat app in under an hour with <100 lines of code
Decentralized applications (Dapps) are apps that run on a decentralized network (of peers) via (preferably trust-less) peer-to-peer (p2p) protocols. One of their biggest strengths is that they avoid any single point of failure. Unlike traditional apps, there is no single entity who can completely control their operation. Dapps are a relatively new concept (so a standard definition is still a bit elusive), but the most prolific set of examples are operating as Smart Contracts on the Ethereum blockchain. Dapps have become increasingly popular as new decentralized technologies such as blockchains and projects like the Interplanetary File System (IPFS) have gained more attention and momentum.
There are many good reasons why developers should start seriously looking at developing decentralized apps, including — but certainly not limited to — scalability (in general, the network of peers participates in hosting the app, limiting pressure on your own infrastructure) and trust (by definition, Dapp code is open source and often content addressed, so your code can be independently verified). And there are now plenty of examples out there, from basic voting apps to advanced p2p collaboration tools, that can help paint of picture of the power of Dapp.
In today’s post, we’re going to develop a simple decentralized chat Dapp that runs on IPFS’s publish-subscribe mechanism, a p2p messaging pattern that allows peers to communicate on the open, decentralized web. While we’re at it, we’re going to develop our Dapp using the Model–view–viewmodel (MVVM) software design pattern, to give you a sense of using decentralized tools in a real-world development scenario. You’ll see that building fully-working decentralized apps that take advantage of IPFS is becoming increasingly easy thanks to the amazing work of the IPFS community of developers. But before we get started, here’s a quick overview of the primary decentralized messaging pattern we’re going to use to make our Dapp shine.
Pubsub
Pubsub (or publish-subscribe) is a pretty standard messaging pattern where the publishers don’t know who, if anyone’ will subscribe to a given topic. Basically, we have publishers that send messages on a given topic or category, and subscribers who receive only messages on a give topic they are subscribed to. Pretty easy concept. The key feature here is that there is no direct connection between publishers and subscribers required… which makes for a pretty powerful communication system. Ok, so why am I talking about this here? Because pubsub allows for dynamic communication between peers that is fast, scalable, and open… which is pretty much what we need to build a decentralized chat app… perfect!
Right now, IPFS uses something called floodsub, which is an implementation of pubsub that essentially just floods the network with messages, and peers are required to listen to the right messages based on their subscriptions, and ignore the rest. This probably isn’t ideal, but it is an excellent first pass, and works pretty well already. Soonish, IPFS will take advantage of gossipsub, which is more like a proximity-aware epidemic pubsub, where peers will communicate with proximal peers, and messages will be routed more efficiently this way. Watch this space… because this is going to be an important part of how IPFS scales and speeds things like IPNS up over the medium term.
Getting started
So to start, let’s clone the Textile dapp template repo, which is really just a simple scaffolding to help us accelerate our development process. We’ve used this template in previous examples (here and here). Feel free to use your own development setup if you prefer, but I’m going to assume you’re working off our template for the remainder of this tutorial.
git clone chat-dapp
cd chat-dapp
yarn remove queue window.ipfs-fallback
yarn add ipfs ipfs-pubsub-room knockout query-string
If you want to follow along, but from a fully-baked working version then instead of lines 3 and 4 above…
git checkout build/profile-chat
yarn install
Ok, so first things first. What do those packages above get us? Let’s start with
ipfs and
ipfs-pubsub-room. Obviously, we’re going to use
ipfs for interacting with the IPFS network… but what about
ipfs-pubsub-room? This is a really nice package from the IPFS shipyard (a GitHub repo for incubated projects by the IPFS community) that simplifies interacting with the IPFS pubsub facilities. Basically, it allows developers to easily create a room based on an IPFS pubsub channel, and then emits membership events, listens for messages, broadcasts and direct messages to peers. Nice.
Model–view–viewmodel
Next we have the
knockout and
query-string packages. The latter of these two is just a simple package for parsing a url query string, and really just simplifies our lives a bit when developing dapps with url parameters (which we’ll do here) — nothing fancy here. But the
knockout package actually is pretty fancy, and we’ll use it to develop our app using a real software architectural pattern: Model–view–viewmodel (MVVM).
MVVM facilitates a separation of development of the graphical user interface — be it via a markup language or GUI code — from development of the business logic or back-end logic (the data model). For the uninitiated, this pattern introduces the concept of a ‘view model’ in the middle of your application, which is responsible for exposing (converting) data objects from your underlying model so that it is easily managed and presented. Essentially, the view model of MVVM is a value converter, meaning the view model is responsible for exposing (converting) the data objects from the model in such a way that objects are easily managed and presented. The new role of your view is then to simply ‘bind’ model data exposed by the view model to your view. In this respect, the view model is more model than view, and handles most, if not all of the view’s display logic.
Ok, so what does this ‘get us’? Well, it allows us to develop dynamic apps using a simpler declarative programing style, we get automatic UI/data model updates and dependency tracking between UI elements ‘for free’, plus it facilitates a clearer separation of concerns. But more than anything, its a way to explore a common design pattern with decentralized software components. And why Knockout? Because it is really easy to get started building single-page applications with minimal markup/code, their interactive tutorials are super helpful, and they provide useful docs covering various MVVM concepts and features.
Next steps
If you want to see where we’re headed, you can check out this diff of our target
build/profile-chat branch with the default
dapp-template (
master) branch. In the mean time, let’s setup our new imports. Start by editing your
src/main.js file using your favorite text editor/IDE, and replace line 2 (
import getIpfs from 'window.ipfs-fallback') with:
import Room from 'ipfs-pubsub-room'
import IPFS from 'ipfs'
import ko from 'knockout'
import queryString from 'query-string'
// Global references for demo purposes
let ipfs
let viewModel
Now that we’ve adjusted our imports (and added some global variables for later), you can run
yarn watch from the terminal to get our local build server running. We’ll have to make some changes before our code will run properly, but it’s useful to have our code ‘browserfied’ for us as we work.
IPFS peer
Next, we’ll create a new
IPFS object to use to interact with the decentralized web. Unlike in previous tutorials, we’ll create an IPFS object directly, rather than relying on something like
window.ipfs-fallback. This is primarily because we want more control over how we setup our IPFS peer. In particular, we want to be able to enable some experimental features (i.e.,
pubsub), and control which swarm addresses we announce on (see this post for details). So our async
setup function now becomes:)
viewModel.error(err) // Log error...
}
}
View model
Now that we have our imports in order, and our IPFS peer initializing the way we want, let’s start editing/creating our view model. You can do this inside our modified
setup function, so that the first few lines of that function would now look something like this:
const setup = async () => {
// Create view model with properties to control chat
function ViewModel() {
let self = this
// Stores username
self.name = ko.observable('')
// Stores current message
self.message = ko.observable('')
// Stores array of messages
self.messages = ko.observableArray([])
// Stores local peer id
self.id = ko.observable(null)
// Stores whether we've successfully subscribed to the room
self.subscribed = ko.observable(false)
// Logs latest error (just there in case we want it)
self.error = ko.observable(null)
// We compute the ipns link on the fly from the peer id
self.url = ko.pureComputed(() => {
return `{self.id()}`
})
}
// Create default view model used for binding ui elements etc.
viewModel = new ViewModel()
// Apply default bindings
ko.applyBindings(viewModel)
window.viewModel = viewModel // Just for demo purposes later!
...
Basically, we are creating a relatively simple Javascript object with properties that control the username (
name), the current message (
message), the local IPFS Peer Id (
id), as well as application state information such as an array of past messages (
messages), and whether we’ve successfully subscribed to the given chat topic (
subscribed). We also have a convenience computed property for representing a user’s IPNS link, just for fun. You’ll notice for each of these properties, that we’re using knockout’s Observable objects. From the Knockout docs:
[…].
So in order to be able to react to changes in a given property, we need to make it an observable property. This is kind of the whole point of observables in Knockout: other code can be notified of changes. As we’ll see shortly, this means we can ‘bind’ HTML element properties to our view model’s properties, so that, for example, if we have a
<div> element with a
data-bind="text: name" attribute, the text binding will register itself to be notified when the
name property of our view model changes. As always, these concepts are much easier to understand when you have some code to play with, so let’s start modifying our
src/index.html to take advantage of Knockouts observable binding features.
Binding properties
To take advantage of the view model that we have just set up, we’ll need to specify how our various HTML elements ‘bind’ to our view model properties. We do this using Knockout’s
data-bind property. There aren’t a lot of changes to make here, but your
<body> div should now look something like this (we’ll go over the various components one by one to make sure we’re all on the same page):
<body>
<div id="main">
<div class="controls">
<input id="name" type="text" data-
</div>
<div class="output"
data-
<div>
<a data-bind="text: msg.name,
css: { local: msg.from === $root.id() },
attr: { href: `ipfs.io/ipns/${msg.from}` }">
</a>
<div data-</div>
</div>
</div>
<div class="input">
<input id="text" type="text" placeholder="Type a message"
data-
</div>
</div>
<script src="bundle.js"></script>
</body>
Since an image is worth 1000 words, here’s what your web-app should now look like if you refresh
localhost:8000/ (you might also want to copy the minimal CSS from here, so it looks a bit nicer):
As you can see (I’ve added a blue background to our output div for visual reference), we have three main elements: i) a
'name'
input element for controlling our username, ii) an
'output' div for displaying our chat history (this will hold series of message divs with username, IPNS links, and the message), and iii) a
'text' message
input for typing our messages. The only ‘new’ syntax you’ll likely be unfamiliar with are the
data-bind attributes, so we’ll go through those one at a time:
<input id="name" type="text" data-: bind the
nameproperty of our
viewModelto the
valueof this
inputelement.
<input id="text" type="text" data-: bind the
messageproperty of our
viewModelto the
valueof this
inputelement, and only
enablethe element if
subscribedis
true.
<div class="output" data-: for each item in the
messagesarray, label item as
msg, and…
<a data-: bind the
textof the hyperlink element to the
nameproperty of the
msg, set the
hrefattribute to string (template literal) containing the
fromproperty of the
msg, and finally, set the CSS class of the element to
'local'if the
fromproperty of the
msgis equal to the root
viewModel’s
id(i.e., if this was your own message).
Whew that was a lot of new ideas! But the markup is actually pretty simple, and it greatly simplified our overall code, because Knockout handles all of the interactions between app state and view elements. And just so we’re all on the same page before moving forward, here’s the current state of the web-app as we have it now...
Pubsub interactions
Alright, now it’s time to actually add some chat capabilities. For this, we’re going to rely on the very awesome
ipfs-pubsub-room library. We’re going to start by modifying our
src/main.js file again, this time, by creating a new
try/catch block containing all the interaction callbacks we’ll need. We’ll go through each section of this code separately, but you can also follow along by checking out the file diff or the secondary state in this gist.
try {
ipfs.on('ready', async () => {
const id = await ipfs.id()
// Update view model
viewModel.id(id.id)
// Can also use query string to specify, see github example
const roomID = "test-room-" + Math.random()
// Create basic room for given room id
const room = Room(ipfs, roomID)
// Once the peer has subscribed to the room, we enable chat,
// which is bound to the view model's subscribe
room.on('subscribed', () => {
// Update view model
viewModel.subscribed(true)
})
...
Once our IPFS peer is ready, we
await the peer
id, update our
viewModel’s
id property, setup the pubsub
Room (here we use a fixed room id, but in practice you’ll likely want to use a query string to specify this… see the example on GitHub), and then subscribe to the
subscribed event on the
room and link it to our
viewModel’s
subscribed property. This will automatically enable the chat input box once we have successfully subscribed to the room. So far, so good.
...
// When we receive a message...
room.on('message', (msg) => {
const data = JSON.parse(msg.data) // Parse data
// Update msg name (default to anonymous)
msg.name = data.name ? data.name : "anonymous"
// Update msg text (just for simplicity later)
msg.text = data.text
// Add this to _front_ of array to keep at bottom
viewModel.messages.unshift(msg)
})
...
Now we subscribe to the
Room’s
message event, where we specify a callback that parses the
msg data (as JSON), updates the username (or uses
'anonymous'), updates the
msg text, and then adds the
msg object to our
viewModel’s
messages observable array. Again, pretty straightforward.
...
viewModel.message.subscribe(async (text) => {
// If not actually subscribed or no text, skip out
if (!viewModel.subscribed() || !text) return
try {
// Get current name
const name = viewModel.name()
// Get current message (one that initiated this update)
const msg = viewModel.message()
// Broadcast message to entire room as JSON string
room.broadcast(Buffer.from(JSON.stringify({ name, text })))
} catch(err) {
console.error('Failed to publish message', err)
viewModel.error(err)
}
// Empty message in view model
viewModel.message('')
})
// Leave the room when we unload
window.addEventListener('unload',
async () => await room.leave())
})
...
Finally, we subscribe to
message changes on our view model (likely as a result of user interaction), and specify a callback that will get the current message (
msg), the username (
name), and
broadcast the
msg to the entire
room as a JSON-encoded string. The makes it possible to type in the input text box and submit the message when the user submits the text. The rest of the code is cleanup and error handling…
Test it
If you go ahead and refresh your app in your browser, and open another window or browser, you should now be able to communicate between windows, similarly to the session depicted here.
You can add even more windows (users) to the chat session (as many as you like), and add additional features like announcing when someone joins or leaves the room, etc. which I’ll leave as an exercise for the reader. In the mean time bask in the glory of knowing you’ve just created a fully-working chat app using a little bit of Javascript, some minimal HTML markup, a tiny bit of CSS, and a whole new appreciation for the decentralized web. The best part about all of this is there are no centralized points of control involved. Of course, we haven’t added in any security measures, encryption, or privacy controls, so think twice before using this to hold real-life conversions over the decentralized web.
Deploy it
And speaking of easy, because this particular dapp doesn’t rely on any external code or locally running peers, we can quite easily deploy it over IPFS. It’s as easy as building the code, adding the
dist/ folder to IPFS, and opening it in a browser via a public gateway:
yarn build
hash=$(ipfs add -rq dist/ | tail -n 1)
open
That’s all
And there you have it! In this tutorial, we’ve managed to build a fully-working decentralized chat app with minimal code and effort. We’ve kept things pretty simple, but managed to take advantage of real-world programming patterns that make app development a breeze. All in an effort to demonstrate how surprisingly easy it is to develop real-world apps on top of IPFS and its underlying libraries. But before you go decentralizing all the things, make sure you evaluate all the possible downsides as well. If you like what you’ve read here, why not check out some of our other stories and tutorials, or sign up for our Textile Photos wait-list to see what we’re building with IPFS. While you’re at it, drop us a line and tell us what cool distributed web projects you’re working on— we’d love to hear about them! | https://medium.com/textileio/build-a-decentralized-chat-app-with-knockout-and-ipfs-fccf11e8ce7b?source=collection_home---4------1--------------------- | CC-MAIN-2019-09 | refinedweb | 3,092 | 50.36 |
To limit the storage of a site collection you can create and apply Quota Templates.
Some important things to know about quotas:
- Quotas can only be applied to Site Collections, not to single sites or entire applications
- Quota space includes library files, list contents (announcements, etc), master pages, etc (ie. Everything)
- Files in the Recycle Bin are part of the quota calculation
- The warning e-mail is only sent once!
- Once the quota has been reached, no more uploads are permitted
- Once the quota has been reached, many site edits are prohibited! For example adding an announcement or even modifying an existing view displays this: "Your changes could not be saved because this SharePoint Web site has exceeded the storage quota limit."
- When a site is limited by a quota there is a new option in Site Actions -> Site Settings: "Storage space allocation". Here the site admin can see where quota is being used.
- The last item uploaded that exceeded the quota will be uploaded successfully, even if it takes the site way over quota.
To create quotas:
Go to Central Administration and drill down to Application Management and Quota Templates. Create as many quota templates as needed.
When a user exceeds their quota
During a Multiple Upload they may see:
During a single Upload or other edit such as adding an announcement they will see:
To track quotas
Site administrators can go to Site Actions -> Site Settings and click "Storage space allocation" to review space usage. Here they can display lists of libraries, documents, lists and Recycle Bin contents. These lists are by default sorted on size, descending. They can be resorted on Date Modified or Size.
9 comments:
Mike, I'm seeing a 12 hive log file entry for this sort of error repeatedly for the past few days. However, the sharepoint site does not have a quota template defined in central admin.
The 12hive error doesn't indicate which SharePoint site on this instance has exceeded the storage quota. There are many sharepoint sites on the instance, so it would be non-trivial to check the hundreds of sites to figure it out from the site settings option you mention. Also, we're using SharePoint / MOSS 2007 and I don't see the storage space allocation link on the site settings page. Do you have any additional things that could be tried?
Larry,
You say you have many sites... only site collections have quotas, so you may not have too many to check.
To find site collections with quotas:
Do you have PowerShell on your server? If so try this to:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$uri = new-object uri("")
$app = [Microsoft.SharePoint.Administration.SPWebApplication]::LookUp($uri)
$sites | where {$_.quota.StorageMaximumLevel -gt 0} | % {$_.url}
Or here's a console app:
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
namespace AllSitesWithQuotas
{
class Program
{
static void Main(string[] args)
{
Uri appUri = new Uri("");
SPWebApplication app = SPWebApplication.Lookup(appUri);
foreach (SPSite site in app.Sites)
{
if (site.Quota.StorageMaximumLevel > 0)
Console.WriteLine(site.Url);
}
Console.ReadLine();
}
}
}
Mike
In the PowerShell example above I left out one line, just after the $apps = line. It should be:
$sites = $apps.sites
I never get the warning mail. Who is suppose to get this mail?
Edgar Corona,
The Site Collection Administrator setup in Central Administration for that site collection should get it. (The admin also must have checkmarked the "Send warning e-mail" option.)
Check with your SharePoint server administrators to see who is listed as the first (or second) site collection administrator.
Mike
Mike, I apologize for the long delay. I just recently got PowerShell going on my server and am trying out your suggestion. In my case, the results of the code, with the adjusted line, is that nothing is displayed - no error, no sites, etc. I presume that would imply no quotas. Which is what Central Admin appears to indicate. And yet users still get this error...
Larry,
Your farm may have multiple applications. Here's a shorter form of the PowerShell script that will search all SharePoint Applications and all Site Collections: (assuming you are logged in with proper permissions)
Get-SPWebApplication | Get-SPSite | Where {$_.Quota.StorageMaximumLevel -gt 0} | Select Url, {$_.Quota.StorageMaximumLevel}
Here's the same thing with a little nicer formatting of the output:
Get-SPWebApplication | Get-SPSite | Where {$_.Quota.StorageMaximumLevel -gt 0} | Select Url, @{Label="Quota (MB)"; Expression{$_.Quota.StorageMaximumLevel / 1MB}} | Format-Table -AutoSize
Mike
Larry,
I just looked back at your original question and it looks like you have SP 2007. The script I just added above is for 2010.
Check back for a 2007 version...
Mike
Larry,
Here's the 2007 version:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$websvcs = $farm.Services | where -FilterScript {$_.GetType() -eq [Microsoft.SharePoint.Administration.SPWebService]}
$websvcs | Select -ExpandProperty WebApplications | Select -ExpandProperty Sites | Where {$_.Quota.StorageMaximumLevel -gt 0} | Select Url, {$_.Quota.StorageMaximumLevel}
Here's an alternate last line with better formatting of the output:
$websvcs | Select -ExpandProperty WebApplications | Select -ExpandProperty Sites | Where {$_.Quota.StorageMaximumLevel -gt 0} | Select Url, @{Label="Quota (MB)"; Expression{$_.Quota.StorageMaximumLevel / 1MB}} | Format-Table -AutoSize
Mike | http://techtrainingnotes.blogspot.com/2008/02/sharepoint-site-collection-quotas.html | CC-MAIN-2017-04 | refinedweb | 866 | 50.53 |
perltutorial japhy sorting <!-- INDEX BEGIN --> <UL> <LI><A HREF="#name">NAME</A></LI> <LI><A HREF="#synopsis">SYNOPSIS</A></LI> <LI><A HREF="#description">DESCRIPTION</A></LI> <LI><A HREF="#content">CONTENT</A></LI> <UL> <LI><A HREF="#table of contents">Table of Contents</A></LI> <LI><A HREF="#nave sorting">Naïve Sorting</A></LI> <UL> <LI><A HREF="#exercises">Exercises</A></LI> </UL> <LI><A HREF="#the orcish maneuver">The Orcish Maneuver</A></LI> <UL> <LI><A HREF="#exercises">Exercises</A></LI> </UL> <LI><A HREF="#radix sort">Radix Sort</A></LI> <UL> <LI><A HREF="#exercises">Exercises</A></LI> </UL> <LI><A HREF="#sorting by index">Sorting by Index</A></LI> <LI><A HREF="#schwartzian transforms">Schwartzian Transforms</A></LI> <LI><A HREF="#guttmanrosler transforms">Guttman-Rosler Transforms</A></LI> <UL> <LI><A HREF="#exercises">Exercises</A></LI> </UL> <LI><A HREF="#portability">Portability</A></LI> </UL> <LI><A HREF="#author">AUTHOR</A></LI> </UL> <!-- INDEX END --> <HR> <P> <H1><A NAME="name">NAME</A></H1> <P>Resorting to Sorting</P> <P> <HR> <H1><A NAME="synopsis">SYNOPSIS</A></H1> <P>A guide to using Perl's <TT>sort()</TT> function to sort data in numerous ways. Topics covered include the Orcish maneuver, the Schwartzian Transform, the Guttman-Rosler Transform, radix sort, and sort-by-index.</P> <P> <HR> <H1><A NAME="description">DESCRIPTION</A></H1> <P>Sorting data is a common procedure in programming -- there are efficient and inefficient ways to do this. Luckily, in Perl, the <TT>sort()</TT> function does the dirty work; Perl's sorting is handled internally by a combination of merge-sort and quick-sort. However, sorting is done, by default, on strings. In order to change the way this is done, you can supply a piece of code to the <TT>sort()</TT> function that describes the machinations to take place.</P> <P>We'll examine all differents sorts of sorts; some have been named after programmers you may have heard of, and some have more descriptive names.</P> <P> <HR> <H1><A NAME="content">CONTENT</A></H1> <P> <H2><A NAME="table of contents">Table of Contents</A></H2> <OL> <LI><STRONG><A NAME="item_Nave_Sorting">Naïve Sorting</A></STRONG><BR> Poor practices that cause Perl to do a lot more work than necessary. <P></P> <LI><STRONG><A NAME="item_The_Orcish_Maneuver">The Orcish Maneuver</A></STRONG><BR> Joseph Hall's implementation of "memoization" in sorting. <P></P> <LI><STRONG><A NAME="item_Radix_Sort">Radix Sort</A></STRONG><BR> A multiple-pass method of sorting; the time it takes to run is linearly proportional to the size of the largest element. <P></P> <LI><STRONG><A NAME="item_Sorting_by_Index">Sorting by Index</A></STRONG><BR> When multiple arrays must be sorted in parallel, save yourself trouble and sort the indices. <P></P> <LI><STRONG><A NAME="item_Schwartzian_Transforms">Schwartzian Transforms</A></STRONG><BR> Wrapping a <TT>sort()</TT> in between two <TT>map()</TT>s -- one to set up a data structure, and the other to extract the original information -- is a nifty way of sorting data quickly, when expensive function calls need to be kept to a minimum. <P></P> <LI><STRONG><A NAME="item_Guttman%2DRosler_Transforms">Guttman-Rosler Transforms</A></STRONG><BR> It's far simpler to let <TT>sort()</TT> sort as it will, and to format your data as something meaningful to the string comparisons <TT>sort()</TT> makes. <P></P> <LI><STRONG><A NAME="item_Portability">Portability</A></STRONG><BR> By giving sorting functions a prototype, you can make sure they work from anywhere! <P></P></OL> <P> <H2><A NAME="nave sorting">Naïve Sorting</A></H2> <P>Ordinarily, it's not a difficult task to sort things. You merely pass the list to <TT>sort()</TT>, and out comes a sorted list. Perl defaults to using a string comparison, offered by the <TT>cmp</TT> operator. This operator compares two scalars in ASCIIbetical order -- that means "1" comes before "A", which comes before "^", which comes before "a". For a detailed list of the order, see your nearest <EM>ascii</EM>(1) man page.</P> <P>To sort numerically, you need to supply <TT>sort()</TT> that uses the numerical comparison operator (dubbed the "spaceship" operator), <TT><=></TT>:</P> <C> @sorted = sort { $a <=> $b } @numbers; # ascending order @sorted = sort { $b <=> $a } @numbers; # descending order</C> <P>There are two special variables used in sorting -- <TT>$a</TT> and <TT>$b</TT>. These represent the two elements being compared at the moment. The sorting routine can take a block (or a function name) to use in deciding which order the list is to be sorted in. The block or function should return -1 if <TT>$a</TT> is to come before <TT>$b</TT>, 0 if they are the same (or, more correctly, if their position in the sorted list could be the same), and 1 if <TT>$a</TT> is to come after <TT>$b</TT>.</P> <P>Sorting, by default, is like:</P> <C> @sorted = sort { $a cmp $b } @unsorted;</C> <P>That is, ascending ASCIIbetical sorting. You can leave out the block in that case:</P> <C> @sorted = sort @unsorted;</C> <P>Now, if we had a list of strings, and we wanted to sort them, <EM>in a case-insensitive manner</EM>. That means, we want to treat the strings as if they were all lower-case or upper-case. We could do something like:</P> <C> @sorted = sort { lc($a) cmp lc($b) } @unsorted; # or @sorted = sort { uc($a) cmp uc($b) } @unsorted;</C> <DL> <DT><DD> <STRONG>Note:</STRONG> There is a difference between these two sortings. There are some punctuation characters that come after upper-case letters and before lower-case characters. Thus, strings that <EM>start</EM> with such characters would be placed differently in the sorted list, depending on whether we use <TT>lc()</TT> or <TT>uc()</TT>. <P></P></DL> <P>Now, this method of sorting is fine for small lists, but the <TT>lc()</TT> (or <TT>uc()</TT>) function is called twice for <EM>each</EM> comparison. This might not seem bad, but think about the consequences of performing massive calculations on your data:</P> <C> )</C> <P>This gets to be tedious. There's obviously too much work being done. We should only have to split the strings once.</P> <P> <H3><A NAME="exercises">Exercises</A></H3> <OL> <LI> Create a sorting subroutine to sort by the length of a string, or, if needed, by its first five characters. <C> @sorted = sort { ... } @strings;</C> <P></P> <LI> Sort the following data structure by the value of the key specified by the "cmp" key: <C> @nodes = ( { id => 17, size => 300, keys => 2, cmp => 'keys' }, { id => 14, size => 104, keys => 9, cmp => 'size' }, { id => 31, size => 2045, keys => 43, cmp => 'keys' }, { id => 28, size => 6, keys => 0, cmp => 'id' }, );</C> <P></P></OL> <P> <H2><A NAME="the orcish maneuver">The Orcish Maneuver</A></H2> <P>This method of speeding up sorting comparisons was named by Joseph Hall. It uses a hash to cache values of the complex calculations you need to make:</P> <C> {;</C> <P>This procedure here uses a hash of array references to store the name and age for each person. The first time a string is used in the sorting subroutine, it doesn't have an entry in the <TT>%cache</TT> hash, so the <TT>||</TT> part is used.</P> <P>That is where this gets its name -- it is the OR-cache manuever, which can be lovingly pronounced "orcish".</P> <P>The main structure of Orcish sorting is:</P> <C> { my %cache; sub function { my $data_a = ($cache{$a} ||= mangle($a)); my $data_b = ($cache{$b} ||= mangle($b)); # compare as needed } }</C> <P>where <TT>mangle()</TT> is some function that does the necessary calculations on the data.</P> <P> <H3><A NAME="exercises">Exercises</A></H3> <OL> <LI> Why should you make the caching hash viewable only by the sorting function? And how is this accomplished? <P></P> <LI> Use the Orcish Manuever to sort a list of strings in the same way as described in the first exercise from "Naïve Sorting". <P></P></OL> <P> <H2><A NAME="radix sort">Radix Sort</A></H2> <P>If you have a set of strings of constant width (or that can easily be made in constant width), you can employ <EM>radix sort</EM>. This method gets around calling Perl's <TT>sort()</TT> function altogether.</P> <P>The concept of radix sort is as follows. Assume you have <EM>N</EM> strings of <EM>k</EM> characters in length, and each character can have one of <EM>x</EM> values (for ASCII, <EM>x</EM> is 256). We then create <EM>x</EM> "buckets", and each bucket can hold at most <EM>N</EM> strings.</P> <P>Here is a sample list of data for <EM>N</EM> = 7, <EM>k</EM> = 4, and <EM>x</EM> = 256: john, bear, roar, boat, vain, vane, zany.</P> <P>We then proceed to place each string into the bucket corresponding to the ASCII value of its rightmost character. If we were then to print the contents of the buckets after this first placement, our sample list would look like: van<STRONG>e</STRONG>, joh<STRONG>n</STRONG> vai<STRONG>n</STRONG> bea<STRONG>r</STRONG> roa<STRONG>r</STRONG> boa<STRONG>t</STRONG> zan<STRONG>y</STRONG>.</P> <P>Then, we use the character immediately to the left of the one just used, and put the strings in the buckets accordingly. This is done in the order in which they are found in the buckets. The new list is: be<STRONG>a</STRONG>r, ro<STRONG>a</STRONG>r, bo<STRONG>a</STRONG>t, jo<STRONG>h</STRONG>n, va<STRONG>i</STRONG>n, va<STRONG>n</STRONG>e, za<STRONG>n</STRONG>y.</P> <P>On the next round, the list becomes: v<STRONG>a</STRONG>in, v<STRONG>a</STRONG>ne, z<STRONG>a</STRONG>ny, b<STRONG>e</STRONG>ar, r<STRONG>o</STRONG>ar, b<STRONG>o</STRONG>at, j<STRONG>o</STRONG>hn.</P> <P>On the final round, the list is: <STRONG>b</STRONG>ear, <STRONG>b</STRONG>oat, <STRONG>j</STRONG>ohn, <STRONG>r</STRONG>oar, <STRONG>v</STRONG>ain, <STRONG>v</STRONG>ane, <STRONG>z</STRONG>any.</P> <P>This amount of time this sorting takes is constant, and easily calculated. If we assume that all the data is the same length, then we take <EM>N</EM> strings, and multiply that by <EM>k</EM> characters. The algorithm also uses some extra space for storing the strings -- it needs an extra <EM>Nk</EM> bytes. If the data needs to be padded, there is some extra time involved (if a character is undefined, it is set as a NUL ("\0")).</P> <P>Here is a radix implementation. It returns the list it is given in ASCIIbetical order, like <TT>sort @list</TT> would.</P> <C> # sorts in-place (meaning @list gets changed) # set $unknown to true to indicate variable length radix_sort(\@list, $unknown);</C> <C> sub radix_sort { my ($data, $k) = @_; $k = !!$k; # turn any true value into 1 if ($k) { $k < length and $k = length for @$data } else { $k = length $data->[0] } while ($k--) { my @buckets; for (@$data) { my $c = substr $_, $k, 1; # get char $Exercises</A></H3> <OL> <LI> Why does radix sort start with the right-most character in a string? <P></P> <LI> Does the order of the elements in the input list effect the run-time of this sorting algorithm? What happens if the elements are already sorted? Or in the reverse sorted order? <P></P></OL> <P> <H2><A NAME="sorting by index">Sorting by Index</A></H2> <P>Given the choice between sorting three lists and sorting one list, you'd choose sorting one list, right? Good. This, then, is the strategy employed when you sort by <EM>index</EM>. If you have three arrays that hold different information, yet for a given index, the elements are all related -- we say these arrays hold data in <EM>parallel</EM> -- then it seems far too much work to sort all three arrays.</P> <C> @names = qw( Jeff Jon Ray Tim Joan Greg ); @ages = qw( 19 14 18 14 20 19 ); @gender = qw( m m m m f m );</C> <P:</P> <C> @names = qw( Jon Tim Ray Greg Jeff Joan ); @ages = qw( 14 14 18 19 19 20 ); @gender = qw( m m m m m f );</C> <P>But to actually sort <EM>these</EM> lists requires 3 times the effort. Instead, we will sort the <EM>indices</EM> of the arrays (from 0 to 5). This is the function we will use:</P> <C> sub age_or_name { return ( $ages[$a] <=> $ages[$b] or $names[$a] cmp $names[$b] ) }</C> <P>And here it is in action:</P> <C> @idx = sort age_or_name 0 .. $#ages; print "@ages\n"; # 19 14 18 14 20 19 print "@idx\n"; # 1 3 2 5 0 4 print "@ages[@idx]\n"; # 14 14 18 19 19 20</C> <P>As you can see, the array isn't touched, but the indices are given in such an order that fetching the elements of the array in that order yields sorted data.</P> <DL> <DT><DD> <STRONG>Note:</STRONG> the <TT>$#ages</TT> variable is related to the <TT>@ages</TT> array -- it holds the highest index used in the array, so for an array of 6 elements, <TT>$#array</TT> is 5. <P></P></DL> <P> <H2><A NAME="schwartzian transforms">Schwartzian Transforms</A></H2> <P>A common (and rather popular) idiom in Perl programming is the Schwartzian Transform, an approach which is like "you set 'em up, I'll knock 'em down!" It uses the <TT>map()</TT> function to transform the incoming data into a list of simple data structures. This way, the machinations done to the data set are only done once (as in the Orcish Manuever).</P> <P>The general appearance of the transform is like so:</P> <C> @sorted = map { get_original_data($_) } sort { ... } map { transform_data($_) } @original;</C> <P>They are to be read in reverse order, since the first thing done is the <TT>map()</TT> that transforms the data, then the sorting, and then the <TT>map()</TT> to get the original data back.</P> <P>Let's say you had lines of a password file that were formatted as:</P> <C> username:password:shell:name:dir</C> <P>and you wanted to sort first by shell, then by name, and then by username. A Schwartzian Transform could be used like this:</P> <C> @sorted = map { $_->[0] } sort { $a->[3] cmp $b->[3] or $a->[4] cmp $b->[4] or $a->[1] cmp $b->[1] } map { [ $_, split /:/ ] } @entries;</C> <P>We'll break this down into the individual parts.</P> <DL> <DT><STRONG><A NAME="item_Step_1%2E_Transform_your_data%2E">Step 1. Transform your data.</A></STRONG><BR> <DD> We create a list of array references; each reference holds the original record, and then each of the fields (as separated by colons). <C> @transformed = map { [ $_, split /:/ ] } @entries;</C> <P>That could be written in a <TT>for</TT>-loop, but understanding <TT>map()</TT> is a powerful tool in Perl.</P> <C> for (@entries) { push @transformed, [ $_, split /:/ ]; }</C> <P></P> <DT><STRONG><A NAME="item_Step_2%2E_Sort_your_data%2E">Step 2. Sort your data.</A></STRONG><BR> <DD> Now, we sort on the needed fields. Since the first element of our references is the original string, the username is element 1, the name is element 4, and the shell is element 3. <C> @transformed = sort { $a->[3] cmp $b->[3] or $a->[4] cmp $b->[4] or $a->[1] cmp $b->[1] } @transformed;</C> <P></P> <DT><STRONG><A NAME="item_Step_3%2E_Restore_your_original_data%2E">Step 3. Restore your original data.</A></STRONG><BR> <DD> Finally, get the original data back from the structure: <C> @sorted = map { $_->[0] } @transformed;</C> <P></P></DL> <P>And that's all there is to it. It may look like a daunting structure, but it is really just three Perl statements strung together.</P> <P> <H2><A NAME="guttmanrosler transforms">Guttman-Rosler Transforms</A></H2> <P>Perl's regular sorting is very fast. It's optimized. So it'd be nice to be able to use it whenever possible. That is the foundation of the Guttman-Rosler Transform, called the GRT, for short.</P> <P>The frame of a GRT is:</P> <C> @sorted = map { restore($_) } sort map { normalize($_) } @original;</C> <P>An interesting application of the GRT is to sort strings in a case-insensitive manner. First, we have to find the longest run of <TT>NUL</TT>s in all the strings (for a reason you'll soon see).</P> <C> my $nulls = 0; # find length of longest run of NULs for (@original) { for (/(\0+)/g) { $nulls = length($1) if length($1) > $nulls; } }</C> <C> $Exercises</A></H3> <OL> <LI> Write the <TT>maxlen()</TT> function for the previous chunk of code. <P></P></OL> <P> <H2><A NAME="portability">Portability</A></H2> <P>You can make a function to be used by <TT>sort()</TT> to avoid writing potentially messy sorting code inline. For example, our Schwartzian Transform:</P> <C> @sorted = { $a->[3] cmp $b->[3] or $a->[4] cmp $b->[4] or $a->[1] cmp $b->[1] }</C> <P>However, if you want to declare that function in one package, and use it in another, you run into problems.</P> <C> #!/usr/bin/perl -w</C> <C> package Sorting;</C> <C> sub passwd_cmp { $a->[3] cmp $b->[3] or $a->[4] cmp $b->[4] or $a->[1] cmp $b->[1] }</C> <C> sub case_insensitive_cmp { lc($a) cmp lc($b) }</C> <C> package main;</C> <C> @strings = sort Sorting::case_insensitive_cmp qw( this Mine yours Those THESE nevER );</C> <C> print "<@strings>\n";</C> <C> __END__ <this Mine yours Those THESE nevER></C> <P>This code doesn't change the order of the strings. The reason is because <TT>$a</TT> and <TT>$b</TT> in the sorting subroutine belong to <TT>Sorting::</TT>, but the <TT>$a</TT> and <TT>$b</TT> that <TT>sort()</TT> is making belong to <TT>main::</TT>.</P> <P>To get around this, you can give the function a prototype, and then it will be passed the two arguments.</P> <C> #!/usr/bin/perl -w</C> <C> package Sorting;</C> <C> sub passwd_cmp ($$) { local ($a, $b) = @_; $a->[3] cmp $b->[3] or $a->[4] cmp $b->[4] or $a->[1] cmp $b->[1] }</C> <C> sub case_insensitive_cmp ($$) { local ($a, $b) = @_; lc($a) cmp lc($b) }</C> <C> package main;</C> <C> @strings = sort Sorting::case_insensitive_cmp qw( this Mine yours Those THESE nevER );</C> <C> print "<@strings>\n";</C> <C> __END__ <Mine nevER THESE this Those yours></C> <P> <HR> <H1><A NAME="author">AUTHOR</A></H1> <P>Jeff <TT>japhy</TT> Pinyan, <EM><A HREF="mailto:japhy@perlmonk.org">japhy@perlmonk.org</A></EM></P> <P><EM><A HREF=""></A></EM></P> | http://www.perlmonks.org/?displaytype=xml;node_id=128722 | CC-MAIN-2014-52 | refinedweb | 3,209 | 57 |
I am helping to code a stop-motion program that is to be cross platform, and on windows it works great. For those who do not know, stop motion is just a fancy term for animation. This program allows users to plug in Nikons, Canons, and Webcams into the computer, and have the program display a live view of the scene, and then have the ability to manually control the camera from there. Included is a framework file from canon for the camera, with a path defined as shown
import com.sun.jna.Native;
initialization and such
public static EdSdkLibrary EDSDK = (EdSdkLibrary) Native.loadLibrary("Macintosh/EDSDK.framework/EDSDK",EdSdkLibrary.class, options);
JNA will successively attempt to load frameworks from ~/Library/Frameworks, /Library/Frameworks, and /System/Library/Frameworks, based on the core framework name (EDSDK in this case).
If the
loadLibrary call succeeds, then the library was found. If the library was not found, you'll get an
UnsatisfiedLinkError.
Frameworks are basically bundles of a shared library with other resources; ESDK.framework/ESDK is the actual shared library (for frameworks, OSX omits the "dyld" suffix normally found on a shared library on OSX).
EDIT
Here's how to make a symlink so that the paths look more like what JNA is expecting. From a terminal (run Terminal.app):
% ln -s /your/complete/path/to/Macintosh/EDSDK.framework ~/Library/Frameworks/EDSDK.framework
When this is done successfully, you should see the following when listing (ls) the symlink:
% ls -l ~/Library/Frameworks/EDSDK.framework lrwxrwxr-x 1 YOU YOU 50 Mar 31 01:13 /Users/YOU/Library/Frameworks/EDSDK.framework -> /your/complete/path/to/Macintosh/EDSDK/Framework/EDSDK.framework
You should see the symlink path (where JNA will look) on the left, with the path to the real file on the right. If not, delete the symlink file and try again. Note that you may need to create the directory
~/Library/Frameworks first; it may not yet exist.
Finally, make sure that the library you're trying to load matches the VM you're trying to load with; 64-bit with 64-bit, 32-bit with 32-bit. Canon does not provide a universal binary of their library, so you'll need to point to one or the other or merge the two using
lipo. | https://codedump.io/share/tNY6oRcqRugB/1/how-to-define-paths-to-frameworks-on-a-mac-in-java | CC-MAIN-2016-50 | refinedweb | 382 | 54.83 |
Hey everyone. First post on these boards! :cool: I recently got into very basic C-programming because I'm taking a mandatory course in college for C-programming. The professor has assigned a few simple programs for us to create with the limited knowledge of C-programming that we have so far. Anyways, on to what brought me here...
I've had a couple minor problems with a couple of the programs he's assigned. The first one here, all the code looks perfect. It looks like it should run perfectly. But then when I run it, it asks for the first input, waits to take it in (and takes it in), waits for the second input, then skips right over that and moves on to asking the third user input and waiting for that. What's up with that? Please look at this code and tell me what's wrong with it, because I just can't see it.
Code:
// Description: Calculates how many pictures you can store on a CD.
#include <stdio.h>
#define MB 1048576
#define PX 3
int main()
{
// Delcares necessary variables
int cd_space, pic_length, pic_width;
int pic_size, number_pics;
// Asks user for input
printf("How many megabytes of free space is on the CD? ");
scanf("&d", &cd_space);
printf("What is the length of each picture in pixels? ");
scanf("%d", &pic_length);
printf("What is the width of each picture in pixels? ");
scanf("%d", &pic_width);
// Calculates how many whole pictures can be placed on the CD
pic_size = pic_width * pic_length * PX;
number_pics = (cd_space * MB) / pic_size;
printf("You can store %d pictures on your CD.", number_pics);
return 0;
}
Second program here, I'm getting different numbers in the results than my professor's practice run of the program. Can you please look at this one and tell me if all the math and everything looks correct, and it might be my professor's program that was wrong?
Code:
// Description: Calculates the number of jelly beans in a container.
#include <stdio.h>
#define PI 3.14159265
int main()
{
// Declare necessary variables
double jb_radius, conta_radius, conta_height, percent_air;
double jb_volume, conta_volume, conta_volume_noair, jb_number;
// Request user input
printf("What is the radius of a single jelly bean? ");
scanf("%lf", &jb_radius);
printf("What is the radius of the container? ");
scanf("%lf", &conta_radius);
printf("What is the height of the container? ");
scanf("%lf", &conta_height);
printf("What percentage of the container is filled with air? ");
scanf("%lf", &percent_air);
// Calculate volumes and number of jelly beans
conta_volume = PI * conta_radius * conta_radius * conta_height;
conta_volume_noair = conta_volume - (conta_volume * (percent_air * .01));
jb_volume = 4 / 3 * PI * jb_radius * jb_radius * jb_radius;
jb_number = conta_volume_noair / jb_volume;
// Tell user how many jelly beans are in the jar
printf("There are %.0lf jelly beans in the jar.", jb_number);
return 0;
}
Thanks to any and everyone who helps! :D
- Randy | http://cboard.cprogramming.com/c-programming/69319-new-c-programmer-need-help-here-printable-thread.html | CC-MAIN-2014-52 | refinedweb | 459 | 72.26 |
Source: Deep Learning on Medium
I recently completed my third project where I have to Build and traine a deep neural network to classify traffic signs, using TensorFlow. Experiment with different network architectures. Performe image pre-processing and validation to guard against overfitting.
What I did to Build a Traffic Sign Recognition
Steps of this project are the following:
- Load the data set (see below for links to the project data set)
- Explore, summarize and visualize the data set
- Design, train and test a model architecture
- Use the model to make predictions on new images
- Analyze the softmax probabilities of the new images
- Summarize the results with a written report
Data Set Summary & Exploration
I used the numpy library to calculate summary statistics of the traffic signs data set:
The size of training set is 34799 The size of test set is 12630 The shape of a traffic sign image is (32, 32, 3) The number of unique classes/labels in the data set is 43
Here is an exploratory visualization of the data set. It pulls in a random set of 8 images and labels them with the correct names in reference with the csv file to their respective id’s.
def plot_figures(figures, nrows = 1, ncols=1, labels=None):
fig, axs = plt.subplots(ncols=ncols, nrows=nrows, figsize=(12, 14))
axs = axs.ravel()
for index, title in zip(range(len(figures)), figures):
axs[index].imshow(figures[title], plt.gray())
if(labels != None):
axs[index].set_title(labels[index])
else:
axs[index].set_title(title)
axs[index].set_axis_off()
I also detail the dataset structure by plotting the occurrence of each image class to see how the data is distributed. This can help understand where potential pitfalls could occur if the dataset isn’t uniform in terms of a baseline occurrence.
unique_train, counts_train = np.unique(y_train, return_counts=True)
plt.bar(unique_train, counts_train)
plt.grid()
plt.title("Train Dataset Sign Counts")
plt.show()
Design and Test a Model Architecture
I decided to convert the images to grayscale. I assume this works better because the excess information might add extra confusion into the learning process. After the grayscale I also normalized the image data because I’ve read it helps in speed of training and performance because of things like resources. Also added additional images to the datasets through randomized modifications.
import tensorflow as tf
from tensorflow.contrib.layers import flatten
from math import ceil
from sklearn.utils import shuffle
# Convert to grayscale
X_train_rgb = X_train
X_train_gray = np.sum(X_train/3, axis=3, keepdims=True)
X_test_rgb = X_test
X_test_gray = np.sum(X_test/3, axis=3, keepdims=True)
X_valid_rgb = X_valid
X_valid_gray = np.sum(X_valid/3, axis=3, keepdims=True)
Here is an example of a traffic sign images that were randomly selected.
Here is a look at the normalized images. Which should look identical, but for some small random alterations such as opencv affine and rotation.
I did a few random alterations to the images and saved multiple copies of them depending on the total images in the dataset class type.
Here is an example of 1 image I changed at random. More can be seen further in the document, but the original is on the right and the randomized opencv affine change is on the left. Small rotations are also visible further along as stated.
I increased the train dataset size to 89860 and also merged and then remade another validation dataset. Now no image class in the train set has less then 1000 images.
Validation set gained 20% of the original total mentioned above. I did this using scikit learns train test split method.
To train the model, I used an ….
| Layer. |. Description |
|: — — — — — — — — — — -:|: — — — — — — — — — — — — — — — — — — — — — — -:|
| Input | 32x32x1 RGB image |
| Convolution 3×3 | 2×2 stride, valid padding, outputs 28x28x6 |
| RELU | |
| Max pooling | 2×2 stride, outputs 14x14x64 |
| Convolution 5×5 | 2×2 stride, valid padding output 10x10x6 |
| RELU | |
| Max Pooling | 2×2 stride, outputs 5x5x16 |
| Convolution 1×1 | 2×2 stride, valid padding, outputs 1x1x412 |
| RELU | |
| Fully connected | input 412, output 122 |
| RELU | |
| Dropout | 50% Keep |
| Fully connected | input 122, output 84 |
| RELU | |
| Dropout | 50% Keep |
| Fully connected | input 84, output 43 |
why you think the architecture is suitable for the current problem.
To train the model, I used an LeNet for the most part that was given, but I did add an additional convolution without a max pooling layer after it like in the udacity lesson. I used the AdamOptimizer with a learning rate of 0.00097. The epochs used was 27 while the batch size was 156. Other important parameters I learned were important was the number and distribution of additional data generated. I played around with various different distributions of image class counts and it had a dramatic effect on the training set accuracy. It didn’t really have much of an effect on the test set accuracy, or real world image accuracy. Even just using the default settings from the Udacity lesson leading up to this point I was able to get 94% accuracy with virtually no changes on the test set. When I finally stopped testing I got 94–95.2% accuracy on the test set though so I think the extra data improved training accuracy, but not a huge help for test set accuracy. Although this did help later on with the images from the internet.
My final model results were:
Training set accuracy of 100.0% validation set accuracy of 99.3% test set accuracy of 95.1% If an iterative approach was chosen:
What were some problems with the initial architecture?
The first issue was lack of data for some images and the last was lack of knowledge of all the parameters. After I fixed those issues the LeNet model given worked pretty well with the defaults. I still couldn’t break 98% very easily until I added another convolution. After that it was much faster at reaching higher accuracy scores.
How was the architecture adjusted and why was it adjusted?
Adding a couple dropouts with a 50% probability.
Which parameters were tuned? How were they adjusted and why?
Epoch, learning rate, batch size, and drop out probability were all parameters tuned along with the number of random modifications to generate more image data was tuned. For Epoch the main reason I tuned this was after I started to get better accuracy early on I lowered the number once I had confidence I could reach my accuracy goals. The batch size I increased only slightly since starting once I increased the dataset size. The learning rate I think could of been left at .001 which is as I am told a normal starting point, but I just wanted to try something different so .00097 was used. I think it mattered little. The dropout probability mattered a lot early on, but after awhile I set it to 50% and just left it. The biggest thing that effected my accuracy was the data images generated with random modifications. This would turn my accuracy from 1–10 epochs from 40% to 60% max to 70% to 90% within the first few evaluations. Increasing the dataset in the correct places really improved the max accuracy as well.
What are some of the important design choices and why were they chosen?
I think the most important thing I learned was having a more uniform dataset along with enough convolutions to capture features will greatly improve speed of training and accuracy.
Test a Model on New Images
Here are five German traffic signs that I found on the web:
I used semi-easy images to classify and even modified them slightly. I made them all uniform in size and only had one partially cut off.
Predict the Sign Type for Each Image
Analyze Performance
Here are the results of the prediction:
The model was able to correctly guess 5 of the 5 traffic signs, which gives an accuracy of 100%. This compares favorably to the accuracy on the test set although I did throw it a softball.
Please click here to view my project code
I hope this helped you to as base to start your way! | https://mc.ai/traffic-sign-recognition-udacity/ | CC-MAIN-2019-22 | refinedweb | 1,352 | 63.09 |
Hi Guys,
I’m relatively new to this and thought I’d share my first useful (sorta) sketch.
I’m building a HUGE LED clock, but so far I’ve only built a small display on a breadboard for testing. I’m working on getting the code done first.
I know a clock isn’t particularly exciting or original, but I wanted to do mine without the use of external driver or decoder ICs. I’ve seen lots of videos on youtube of people driving led displays with these ICs and usually doing a fairly useless count up from 0 to 9999.
I have common anode LED modules, the segment cathodes are driven directly by the arduino IO pins. The anodes are also driven by the arduino but via 4 transistors because the current would overload the arduino pins. (Each segment uses about 20mA.)
The multiplexing and INT > BCD > 7 Segment conversion is all done by the arduino so no external ICs (apart from the RTC and DS18B20 temp sensor).
I bought the RTC module via eBay from Hong Kong and it works a treat, and for less than £5 I’m not complaining.
This is still a work in progress so there’s more to come.
Things it does now:
-4 Digit multiplexing
-Reads time from RTC and displays in 24 hour format.
-Reads temperature from DS18B20 and displays in Celsius every 30 seconds.
Things to come:
-Ability to set the RTC via buttons.
-Make temperature scroll on and off the display (to hide the short blank while temperature is being read).
-Light sensor to turn off display at night.
-LCD module to show date and other info.
-Automatic adjustment for DST.
Here’s a youtube video of it working
I don’t know if anyone will find this useful but I thought i’d post it anyway.
Here’s the code. I haven’t done a circuit diagram yet but will at some point. It’s simple anyway so you should be able to work it out.
#include <DallasTemperature.h> // #include <OneWire.h> // #include <SimpleTimer.h> // #include <Wire.h> //Included with Arduino IDE #define DS1307_ADDRESS 0x68 //RTC Module Address #define ONE_WIRE_BUS 14 //DS18B20 temperature sensor connected to analog pin 0. byte digits[15] = { //Bit pattern for 7 segment display // -AFBGEDC Segments labelled as per datasheet for Kingbright DA56-11EW dual common anode LED display module. B10001000, // 0 1 represents segment OFF, 0 is ON B11011011, // 1 the MSB is not used. B10010100, // 2 B10010001, // 3 B11000011, // 4 B10100001, // 5 B10100000, // 6 B10011011, // 7 B10000000, // 8 B10000001, // 9 B10000111, // Degree symbol B10101100, // Letter C B10100100, // Letter E B10100110, // Letter F B11110111 // - Symbol }; int anodes[4] = {2,3,4,5}; //Common anodes for 7 segment displays connected to these digital outputs via NPN transistors //Pin 2 is first digit, 3 is second and so on. int cathodes[7] = {6,7,8,9,10,11,12}; //Cathodes for each segment all tied together and connected to these digital outputs. //Segment to pin assignment //6>C, 7>D, 8>E, 9>G, 10>B, 11>F, 12>A int data[4]; //4 byte array stores value to be displayed. int startAddress = 0; //Data start address on the DS1307 byte bcdToDec(byte val) { // Convert binary coded decimal to normal decimal numbers return ( (val/16*10) + (val%16) ); } SimpleTimer timer; //Set up timer instance. OneWire oneWire(ONE_WIRE_BUS); //Start the oneWire bus DallasTemperature sensors(&oneWire); //Create Dallas Temperature library instance. void setup() { for (int i = 0; i < 4; i++) //Set anode pins mode to output and put them LOW. { pinMode(anodes[i], OUTPUT); digitalWrite(anodes[i], LOW); } for (int i = 0; i < 8; i++) //Set cathode pins mode to output and put them HIGH. { pinMode(cathodes[i], OUTPUT); digitalWrite(cathodes[i], HIGH); } Wire.begin(); // Start the I2C bus. sensors.begin(); //Start the Dallas Temperature library. timer.setInterval(30000, showTemp); //Set to diaplay temperature every 30 seconds. } void loop() { readTime(); outputDisplay(data); timer.run(); } void outputDigit(int seg) //Outputs segment data for an individual digit. { for (int s = 0; s < 7; s++) //Read a bit at a time from the selected digit in the digits array and output it to the correct { //pin boolean bitState = bitRead(digits[seg], s); //Read the current bit. digitalWrite(cathodes[s], bitState); //and output it. } } void outputDisplay(int dig[4]) //Scan the display once with the 4 digit int array passed in. { for (int d = 0; d < 4; d++) { outputDigit(dig[d]); //Set up the segment pins with the correct data. digitalWrite(anodes[d], HIGH); //Turn on the digit. delay(2); //Hold it on for 2ms to improve brightness. digitalWrite(anodes[d], LOW); //And turn it off again before looping untl all digits have been displayed. } } void showTemp() { sensors.requestTemperatures(); //Request temperature from the sensor. float temp = (sensors.getTempCByIndex(0)); //Read temperature from the first (only) sensor on the bus. int t = (int) temp; //Convert the temperature to int for display int starttime = millis(); int endtime = starttime; //Store the internal timer counter value to make this loop run for a set period. while ((endtime - starttime) <=5000) // do this loop for 5000mS { data[0] = (t / 10); //Calculate and store temperature 10s value data[1] = (t % 10); //Calculate and store temperature 1s value. data[2] = 10; //Degree symbol data[3] = 11; //Letter C for Celcius outputDisplay(data); //Send to diaplay. endtime = millis(); //Read internal timer counter to see how long this loop has been running. } } void readTime() { // Reset the register pointer Wire.beginTransmission(DS1307_ADDRESS); Wire.send(startAddress); Wire.endTransmission(); Wire.requestFrom(DS1307_ADDRESS, 7); int second = bcdToDec(Wire.receive()); int minute = bcdToDec(Wire.receive()); int hour = bcdToDec(Wire.receive() & 0b111111); //24 hour time data[0] = (hour / 10); data[1] = (hour % 10); data[2] = (minute / 10); data[3] = (minute % 10); } | https://forum.arduino.cc/t/4-digit-led-clock-ds1307-rtc-and-ds18b20-temperature-sensor/82831 | CC-MAIN-2022-21 | refinedweb | 955 | 65.73 |
Libraries/WhenToRewriteOrRename
From HaskellWiki
Revision as of 10:35, 9 June 2010
There have been a few cases of major API changes / rewrites to famous old packages causing problems, including:
- QuickCheck 1 vs 2
- parsec 2 vs 3
- OpenGL
- Haxml 1.13,1.19 to call the new library 'fgl'.
It is a controversial step to write a new library and give it the same name as an existing, famous library. Let's look at the arguments.
1 Reasons to use the old.
- It makes development by new users simpler by not fracturing the package-space (the "Which version of QuickCheck should I use?" problem).
- It decreases the maintainer workload as the same person or team will often be responsible for both packages.
- A lot of respected members of the Haskell community (e.g. Cale) do not like many aspects of the current API (and thus refuse to use it) and we're taking their points of view into account.
- The new version of the library is keeping the "spirit" of old fgl alive, but modernising the interface and providing new functionality (e.g. the ability to restrict the label types or use a custom Node type).
- There will be a full transition guide between the old and new versions (something that was lacking for packages like QuickCheck from what I could tell).
- Major version numbers exist for a reason: to denote breakage. We really need to educate developers to avoid having too lax or open-ended package dependencies.
2 Reasons not to use the name
- Code that depends on 'fgl' will break. There are 23 direct and 25 indirect dependencies on fgl.
- Rebuttal:
- Have contacted all maintainers of packages on Hackage which have packages without an upper bound on the version of fgl used; most have already gotten back to me saying they will release a bug-fix version to resolve this.
- With the Package_versioning_policy, people should always have upper bounds on their dependencies anyway.
- Until new fgl is stabilised, Hackage can set the default version of fgl to be < 6 (same as what happened with base-3 to base-4 transition, etc., thus any packages that do not have an upper bound will not be affected by cabal-install, etc.
-.
- This is true. However, every now and then someone tries to work out what this mystical packedstring library is and tries to use it (old invalid deps, etc.).
- The package has been stable for ~10 years -- why change a stable API? It is already "perfect"
- As mentioned above: many people do not think that the current API is perfect.
- The new package really isn't the same package in any sense.
- Rewrites by new teams damage the brand of famous packages (e.g. parsec 3)
- No additional breakages are introduced.
- Not sure what your point is here.
- If you weren't maintainer of 'fgl' this rewrite wouldn't even be possible to call 'fgl' -- there's a conflict of interest.
- Of course not, but I volunteered to become the maintainer of fgl precisely to modernise the interface (which as far as I know is why Martin Erwig gave fgl up for adoption: he didn't have time to make changes that people were asking him for).
- Maintaining Haskell98 compatability. Keep it simple. (See regex-posix's mistakes here)
- Not sure what you mean by this point; what are regix-posix's mistakes? Whilst in general I can see Haskell98 (or Haskell2010) compatability being a good thing to keep (in case someone uses another compiler, etc.) if there's a good argument to be made for why a certain extension would be useful then why shouldn't we use it? Whilst I mightn't have been working on a major Haskell library back then, it was pointed out to me a while back that you shouldn't constrain yourself by enforcing Haskell98 compatability for no reason.
- Distros that support the Haskell Platform will have to keep an old version of fgl around for a long time anyway.
- I don't intend to have the new fgl be actually used by people for a while yet anyway, as I intend to get the ecosystem built up around it (fgl-algorithms, etc.) first.
- I think that keeping base-3 compatibility in xmonad just to ensure that people using the Long Term Release of Ubuntu has in a sense held it back, as it was more of a pain to transition to base-4 later on than it would have been to do it earlier (using extensible-exceptions if nothing else).
- The original author might not approve of the use of the name.
- If this is true, then why did he publicly state in mailing lists that he wanted someone to take over?
- having tutorials not work for later revisions is more confusing than having various packages doing the same thing.
- The current tutorials do not fully work with the current version anyway, and we will be writing tutorials (already had one offer to help out with this).
- separate names (both for the package name and module name-space) is that its easier to have both packages installed at the same time
- A valid argument, especially when seeing the fall-out between mtl and transformers.
3 Possible Compromises
- Until we're ready to release, either don't release fgl on Hackage or call it fgl-experimental or something.
- The name "Functional Graph Library" is rather vague anyway, whereas something like "inductive-graphs" makes more sense in terms of the actual data structures, etc. involved. As such we could give the new version of the library something like that if down the track we could officially deprecate the fgl library (like how packedstring has been deprecated).
- We could officially split up the fgl package namespace even further: rather than having fgl + fgl-algorithms, etc. we could have something like fgl-classes, fgl-algorithms, etc. As such the base name is kept whilst there is no ambiguity on which version is being used. | http://www.haskell.org/haskellwiki/index.php?title=Libraries/WhenToRewriteOrRename&diff=prev&oldid=34932 | CC-MAIN-2014-23 | refinedweb | 999 | 69.01 |
Check Duplicates
PabloTheDeveloper
Open Source Your Knowledge, Become a Contributor
Technology knowledge has to be shared and made accessible for free. Join the movement.
This Python template lets you get started quickly with a simple one-page playground.
import re
raw_input = """
Monday, November 30 (Sherriff)
Time Slot
Team
9:00 AM
Team 1-18
9:15 AM
Team 1-08
9:30 AM
Team 1-04
9:45 AM
Team 2-21
10:00 AM
Team 1-01
10:15 AM
Team 1-38
10:30 AM
Team 1-03
10:45 AM
Team 2-24
11:00 AM
Team 1-09
1:00 PM
Team 1-43
1:15 PM
Team 1-14
1:30 PM
Team 1-25
1:45 PM
Team 1-07
2:00 PM
Team 1-42
2:15 PM
Team 1-16
2:30 PM
Team 1-23
Monday, November 30 (McBurney)
Time Slot
Team
11:00 AM
Team 1-05
11:15 AM
Team 1-37
11:30 AM
Team 1-31
11:45 AM
Team 2-28
Enter to Rename, Shift+Enter to Preview
Advanced usage
If you want a more complex example (external libraries, viewers...), use the Advanced Python template
Open Source Your Knowledge: become a Contributor and help others learn. Create New Content | https://tech.io/playgrounds/55974/check-duplicates | CC-MAIN-2021-04 | refinedweb | 213 | 67.93 |
On Fri, 2012-05-04 at 07:13 -0700, Eric W. Biederman wrote: > M> > Interesting. I guess what truly puzzles me is what serializes all of> the processes. Even synchronize_rcu should sleep and thus let other> synchronize_rcu calls run in parallel.> > Did you have HZ=100 in that kernel? 400 tasks at 100Hz all serialized> somehow and then doing synchronize_rcu at a jiffy each would account> for 4 seconds. And the nsproxy certainly has a synchronize_rcu call.HZ=250> The network namespace is comparatively heavy weight, at least in the> amount of code and other things it has to go through, so that would be> my prime suspect for those 29 seconds. There are 2-4 synchronize_rcu> calls needed to put the loopback device. Still we use> synchronize_rcu_expedited and that work should be out of line and all of> those calls should batch.> > Mike is this something you are looking at a pursuing farther?Not really, but I can put it on my good intentions list.> I want to guess the serialization comes from waiting on children to be> reaped but the namespaces are all cleaned up in exit_notify() called> from do_exit() so that theory doesn't hold water. The worst case> I can see is detach_pid from exit_signal running under the task list lock.> but nothing sleeps under that lock. :(I'm up to my ears in zombies with several instances of the testcaserunning in parallel, so I imagine it's the same with hackbench.marge:/usr/local/tmp/starvation # taskset -c 3 ./hackbench -namespace& for i in 1 2 3 4 5 6 7 ; do ps ax|grep defunct|wc -l;sleep 1; done[1] 29985Running with 10*40 (== 400) tasks.139732726119913572marge:/usr/local/tmp/starvation # Time: 7.675-Mike | https://lkml.org/lkml/2012/5/4/172 | CC-MAIN-2015-48 | refinedweb | 292 | 65.83 |
Nice features. Since we are in the Visual C++ blog, i suppose this standard will be implemented faster in VC++ than the previous standard. At least, I hope so. :-)
The character types were sorely missing (wchar_t didn't cut it) and variadic templates are really cooooooool.
Other features I long for are foreach and auto (automatic type deduction)
Oh, i forgot.
Is there any plans for C99 support?
I know C99 has some weird features (tgmath and new meaning for static) and others that conflict with C++ (clog, _Bool), but there are some really useful cleanups there. For example: snprintf (slightly different than VC++'s _snprintf), declarations allowed after statements (just like C++), better aggregate initialization, VLAs, new math rounding functions and other library improvements.
/* I guess that long long, "//" comments and trailing comma in enum declarations are already supported. */
Does the next version of Visual C++ implements this part? or should we wait till ISO release the laguage Specification at the end of this decade?
Oh brother, guess we have to wait _yet_ another decade:
* to add 24-bit chars, char24_t "aaa"24
* get rid of that short, long, and long long crap
* bool constants, 0z11011101
* macros that are type safe, #macro foo( int x, float x)
* standardized way of disable up-casting; NO doubles for you Mr. PS2
* standardized & automatic __FUNCTION__
Mr Pohoreski, aren't templates /all about/ being typesafe macros?
But I agree on __FUNCTION__ and 0z01010101 bit constants.
Long time ago I have read about something like auto typing such as;
float a = 0.12f;
autotype b = a;
and b gets the type of a automatically , it would help much when writing iterators;
vector<int> a;
-vector<int>::iterator it = a.begin();
instead
+autotype it = a.begin();
it would have been much nicer ...
I'll address some of these comments:
C-99 - we have added support for 'long long' and variadic macros: we'll add more support for other features once there is enough customer demand (and up until now there has been very little).
auto, foreach, and __FUNCTION__ - these features are still under consideration by the C++ Committee: once they are in the Standard we will look at implementing them.
Nice!
Michael, what do you mean with __FUNCTION__?
Is it the same as C99's __func__?
Ahmet, autotype is going to be spelled "auto" (reusing the keyword). I really look forward to using this feature with iterators and other complex types!
It seems that this blog post is more up-to-date than this page:
Since there is no mention about the withdrawal of "Generalized Constant Expressions".
Cool, variadic macros are in VC++2005. I hadn't noticed them, but now that i know they are implemented it is likely that i will start using them.
Yet other features i forgot to mention:
stdint.h and inttypes.h
Fortunately, C++ has equivalents for them in boost :-)
And what about TR1?
Will we be able to use TR1 libraries before the next standard is complete?
Hi
How can I post message from one class to another? In detail, suppose I have class-1 which contains a user-defined message (say WM_OVER) and its corresponding function. I have another one class (say class-2). From class-2, by simply having reference to class-1, can I post/send the message (WM_OVER) so that the corresponding function in class-1 will be invoked?
MyClass1.h
----------
CPP / C++ / C Code:
#include "MyClass2.h"
class class1
{
......
....
BEGIN MESSAGE_MAP
OnMessage(WM_OVER,fun_in_class1)
END MESSAGE_MAP
.......
void fun_in_class1()
void call2class2()
class2 obj_of_class2;
obj_of_class2.fun_in_class2(this);
/***end of class-1****/
MyClass2.h
---------
class class2
...
..
void fun_in_class2(class1 &obj_of_class1)
obj_of_class1.PostMessage(WM_OVER,0,0);
/***end of class-2****/
Hey, did you forget rvalue reference and std::move? IMHO, that is the single-most important feature. I hope that will be part of the standard
The Microsoft compiler creates a warning when using "this" in a constructor's initializion list.
Is this because of a standards issue, or what exactly IS the issue?
I have rummaged around online for a sufficient amount of time to find some information on this question. I guess I am looking for answers in all the wrong places.
Any help would be appreciated.
[John (Jay) Meyer]
> The Microsoft compiler creates a warning when
> using "this" in a constructor's initializion list.
> Is this because of a standards issue, or what
> exactly IS the issue?
While unrelated to Jonathan's blog post, I can answer this.
C4355 exists for a simple reason: paragraph 12.6.2/8 of the C++ Standard. It is okay to call member functions of an object under construction in the constructor's body (inside the braces). However, during the initialization of bases and members (inside the init list, before the opening brace of the ctor body), calling member functions of the object being constructed results in undefined behavior.
In an init list, passing the "this" pointer to a base or member constructor makes it very easy to violate 12.6.2/8. Therefore, the compiler emits a warning.
[Sohail]
> Hey, did you forget rvalue reference and std::move?
> IMHO, that is the single-most important feature.
> I hope that will be part of the standard
You're in luck! Rvalue references and move constructors have already been integrated into the Working Paper (see ikk's link to the N2228 paper above). This means that, barring catastrophe, they will be a part of C++0x.
In his introduction, Jonathan explained that he was describing only those features added to the Working Paper in the most recent Committee meeting.
[ikk]
> And what about TR1?
> Will we be able to use TR1 libraries before the
> next standard is complete?
A Microsoft Connect bug was filed about adding TR1 to Visual Studio 2005 and Orcas. Please read Martyn Lovell's response here:
Summary: Although TR1 will not be in Orcas RTM, "We will definitely be revisiting TR1 in future versions."
--
If anyone has further questions, feel free to E-mail me.
Thanks,
Stephan T. Lavavej (stl@microsoft.com)
Visual C++ Libraries Developer! | http://blogs.msdn.com/vcblog/archive/2007/06/04/update-on-the-c-0x-language-standard.aspx | crawl-002 | refinedweb | 1,011 | 66.23 |
Type: Posts; User: sunny_sz
I am not sure whether or not I already correctly understand what you are try to explainReference to my first post directly.
You want to build a .DLL file with your present C# project or just invoke a C++ dll file in C#??
If it is the latter, as I mentioned above, just copy and compile simply, otherwise create a new dll...
You are doing an application with C# and want to invoke C++ DLL file in C#, right? If yes, please reference to my previous post above, otherwise, please clarify your questions clearly.
[DllImport(“MyDLL.dll")]
public static extern int mySum (int a,int b);
Reference Creating a reliable timer .
Yes, you are right :cool: ! Its my mistake in my written work. :blush: :blush:
You may use it directly.
Assume VB DLL name is project1.dll
The first thing you have to do in VC++ is to add #import ".\\Project1.dll" no_namespace, then
.........
CoInitialize(NULL);
_Class1Ptr ptr;...
DWORD to CString
DWORD a
CString str;
str.format("%d",a);
afxmessagebox(str);
CString to DWORD
You can try SetProcessWorkingSetSize(IntPtr process, int minSize, int maxSize) at a certain time.
-------------------------------------------
If finding this post helps, please Rate this post...
In addition to QuinnJohns post, please reference to How to use Progress Bar in C#..??.
Hi codeguru Members:
I am facing an issue about dataase backup, details as below:
One application is running real-time, and is read/write to the database, now I make some settings or download...
In addition to HanneSThEGreaT, you can reference How to Make a Program close itself if it has been inactive for long time also.
I don't clearly understand what you mean.
You said you want to change the for loops with some other alternate function, but we do not know to change the number of loops or change functions of loops ??
Firstly, please use code tag.
2nd, Take a look at this
Firstly, Please use code tag for your code snippet.
Second, your code:
while (m_nCurFrame++ <m_nMax)
{
if(bb)
{
break;
}
The files you provide so many and I am using VC6 environment not you are using, I cannot be sure you are missing what lib file, you have to read up your codes and find out. Good luck!
Those codes are written by yourself? and what kinds of plateform you are using? You have to go at these points first.
You have include head file stdio.h, I believe that printf will work in vista.
There is Visual C++ 6.0 version only in my pc, but seeing from you post above, I guess that lack of specified files (e.g. .LIB etc) may cause also, please double check.
Try SetWindowPos .
It is a little of complex system, it is impossible to demonstrate this just in few words here.
Anyway, please try to play vedio that acquired from server real time.
Theoretically speaking, nothing is impossible.
Do not forget to close file when writting string into a file. | http://forums.codeguru.com/search.php?s=c77e1238b127011f819807ea7cc879a1&searchid=6151573 | CC-MAIN-2015-06 | refinedweb | 498 | 75.2 |
Custom Pages
Table of Contents
Sahana Eden supports custom Pages as part of it's overall Templates system.
This allows customisation of:
- Totally new pages
Custom pages are defined within the templates folder to reduce issues with merging.
NOTE: This is still a work-in-progress: the details may change...
The web server is normally configured to redirect requests to:
to:
Which maps to
controllers/default.py & the
index() function therein.
The default application & controller can be easily edited in
web2py/routes.py or within the web server configuration, however editing the default function might cause problems for other Sahana modules.
Sahana Eden is designed to check for the presence of a custom template and, if one is configured, then it attempts to load:
/private/templates/<template>/controllers.py
& then run the
index() function inside there instead.
Notes:
- You must have an
__init__.pyin your template folder for this to work (it can be empty).
- Unlike normal controllers, the scope this function is run in will start empty, so import what you need
- If a custom View template is passed in from the Template folder then this will need to be provided as a File not a String to work in compiled mode:
import os from gluon import current from gluon.http import HTTP response = current.response path = os.path.join(current.request.folder, "private", "templates", response.s3.theme, "views", "index.html") try: # Pass view as file not str to work in compiled mode response.view = open(path, "rb") except IOError: raise HTTP("404", "Unable to open Custom View: %s" % path)
Custom Page
If a URL is specified like:<custompage>
The this alternate page will attempt to be loaded using the
custompage() function in:
/private/templates/<template>/controllers.py | https://eden.sahanafoundation.org/wiki/DeveloperGuidelines/Templates/CustomPages?version=3 | CC-MAIN-2022-27 | refinedweb | 289 | 56.86 |
We recently made a refresh of the Windows Phone documentation and samples. We've updated several samples:
- Background Agent
- Background Transfer Service
- Quick Card
- Search Extensibility
We also added the following new sample that shows how to use types in the NetworkInformation namespace to get the network status of the device, mobile network and other information.
- Network and Device Information
To get the latest samples go to the Code Samples page on MSDN.
In addition, we made fixes to several docs, some based on your feedback on UserVoice. For example, we added links to the camera sample from the camera docs. Please keep your feedback coming and we'll keep updating and making changes! | https://blogs.msdn.microsoft.com/silverlight_sdk/2011/12/20/updates-to-the-windows-phone-docs/ | CC-MAIN-2019-35 | refinedweb | 114 | 52.49 |
This guide will show you how to use the VIXL framework. We will see how to set up the VIXL assembler and generate some code. We will also go into details on a few useful features provided by VIXL and see how to run the generated code in the VIXL simulator.
The source code of the example developed in this guide can be found in the
examples directory (
examples/getting-started.cc).
First of all you need to make sure that the header files for the assembler and the simulator are included. You should have the following lines at the beginning of your source file:
#include "a64/simulator-a64.h" #include "a64/macro-assembler-a64.h"
VIXL's assembler will generate some code at run-time, and this code needs to be stored in a buffer. It must be large enough to contain all of the instructions and data that will be generated. In this guide we will use a default value of 4096 but you are free to change it to something that suits your needs.
#define BUF_SIZE (4096)
All VIXL components are declared in the
vixl namespace, so let's add this to the beginning of the file for convenience:
using namespace vixl;
Now we are ready to create and initialize the different components.
First of all we need to allocate the code buffer and to create a macro assembler object which uses this buffer.
byte assm_buf[BUF_SIZE]; MacroAssembler masm(assm_buf, BUF_SIZE);
We also need to set-up the simulator. The simulator uses a Decoder object to read and decode the instructions from the code buffer. We need to create a decoder and bind our simulator to this decoder.
Decoder decoder; Simulator simulator(&decoder);
We are now ready to generate some code. The macro assembler provides methods for all the instructions that you can use. As it's a macro assembler, the instructions that you tell it to generate may not directly map to a single hardware instruction. Instead, it can produce a short sequence of instructions that has the same effect.
For instance, the hardware
add instruction can only take a 12-bit immediate optionally shifted by 12, but the macro assembler can generate one or more instructions to handle any 64-bit immediate. For example,
Add(x0, x0, -1) will be turned into
Sub(x0, x0, 1).
Before looking at how to generate some code, let's introduce a simple but handy macro:
#define __ masm->
It allows us to write
__ Mov(x0, 42); instead of
masm->Mov(x0, 42); to generate code.
Now we are going to write a C++ function to generate our first assembly code fragment.
void GenerateDemoFunction(MacroAssembler *masm) { __ Ldr(x1, 0x1122334455667788); __ And(x0, x0, x1); __ Ret(); }
The generated code corresponds to a function with the following C prototype:
uint64_t demo_function(uint64_t x);
This function doesn‘t perform any useful operation. It loads the value 0x1122334455667788 into x1 and performs a bitwise
and operation with the function’s argument (stored in x0). The result of this
and operation is returned by the function in x0.
Now in our program main function, we only need to create a label to represent the entry point of the assembly function and to call
GenerateDemoFunction to generate the code.
Label demo_function; masm.Bind(&demo_function); GenerateDemoFunction(&masm); masm.Finalize();
Now we are going to learn a bit more on a couple of interesting VIXL features which are used in this example.
VIXL's assembler provides a mechanism to represent labels with
Label objects. They are easy to use: simply create the C++ object and bind it to a location in the generated instruction stream.
Creating a label is easy, since you only need to define the variable and bind it to a location using the macro assembler.
Label my_label; // Create the label object. __ Bind(&my_label); // Bind it to the current location.
The target of a branch using a label will be the address to which it has been bound. For example, let's consider the following code fragment:
Label foo; __ B(&foo); // Branch to foo. __ Mov(x0, 42); __ Bind(&foo); // Actual address of foo is here. __ Mov(x1, 0xc001);
If we run this code fragment the
Mov(x0, 42) will never be executed since the first thing this code does is to jump to
foo, which correspond to the
Mov(x1, 0xc001) instruction.
When working with labels you need to know that they are only to be used for local branches, and should be passed around with care. There are two reasons for this:
They can‘t safely be passed or returned by value because this can trigger multiple constructor and destructor calls. The destructor has assertions to check that we don’t try to branch to a label that hasn't been bound.
The
B instruction does not branch to labels which are out of range of the branch. The
B instruction has a range of 2^28 bytes, but other variants (such as conditional or
CBZ-like branches) have smaller ranges. Confining them to local ranges doesn‘t mean that we won’t hit these limits, but it makes the lifetime of the labels much shorter and eases the debugging of these kinds of issues.
On ARMv8 instructions are 32 bits long, thus immediate values encoded in the instructions have limited size. If you want to load a constant bigger than this limit you have two possibilities:
__ Mov(x0, 0x1122334455667788);
The previous instruction would not be legal since the immediate value is too big. However, VIXL's macro assembler will automatically rewrite this line into multiple instructions to efficiently generate the value.
VIXL also provides a way to do this:
__ Ldr(x0, 0x1122334455667788);
The assembler will store the immediate value in a “literal pool”, a set of constants embedded in the code. VIXL will emit literal pools after natural breaks in the control flow, such as unconditional branches or return instructions.
Literal pools are emitted regularly, such that they are within range of the instructions that refer to them. However, you can force a literal pool to be emitted using
masm.EmitLiteralPool().
Now we are going to see how to use the simulator to run the code that we generated previously.
Use the simulator to assign a value to the registers. Our previous code example uses the register x0 as an input, so let's set the value of this register.
simulator.set_xreg(0, 0x8899aabbccddeeff);
Now we can jump to the “entry” label to execute the code:
simulator.RunFrom(entry.target());
When the execution is finished and the simulator returned, you can inspect the value of the registers after the execution. For instance:
printf("x0 = %" PRIx64 "\n", simulator.xreg(0));
The example shown in this tutorial is very simple, because the goal was to demonstrate the basics of the VIXL framework. There are more complex code examples in the VIXL
examples directory showing more features of both the macro assembler and the ARMv8 architecture. | https://android.googlesource.com/platform/external/vixl/+/da619e4/doc/getting-started.md | CC-MAIN-2019-04 | refinedweb | 1,170 | 52.8 |
there is 1 unmatched record in excel which i need to get as my output
I would suggest you to use either VB.NET or C# and using the Except command. you would need to add additional DLL references (System.Data.DataSetExtensions.dll and System.Core.dll) and namespaces (System.Linq).
The C# code then would be:
colOut = col1.AsEnumerable().Except(col2.AsEnumerable(), DataRowComparer.Default).CopyToDataTable();
...where col1 and col2 are input collections and colOut is an output collection.
Mind you that you that the code above will find you rows from col1 that are not present in col2, to find rows from col2 that are not present in col1 you will either have to switch the inputs and run it again or tweak the code some more. | https://intellipaat.com/community/15982/how-do-i-get-unmatched-data-between-an-excel-and-sql-data-table-in-blue-prism | CC-MAIN-2021-04 | refinedweb | 128 | 73.78 |
. Overview
A high level overview of the Plone codebase
1.1. Audience
Who is this manual for?
This reference manual is intended to give an overview of the processes and conventions of the development of Plone, both as a point of reference for existing developers, and a source of explanation for new developers.
It is not intended as a general guide for how to develop applications on top of Plone, but some of the "best practice" documentation here may be useful to third party developers as well.
This document is a work in progress, and will probably continue to be so for a good while. We will endeavour to publish pages and sections when they are complete, but don't be surprised if you find that there are major pieces of the puzzle missing at present.
1.2. Contributing
How you can contribute to Plone
Like all open source products, Plone only moves forward on the contributions of volunteers. If you would like to contribute, we will be incredibly happy!
However, there are many people and companies who rely on Plone day-to-day, so we have to introduce some degree of quality control over the code base. Plone's source code is hosted in an open Subversion respository on, but only members of the developer team have commit-rights. Before you can get commit rights, three things need to happen:
- You must familiarise yourself a little with the community. Get on the chatroom, start asking and (especially) answering questions on the mailing lists, and get to know the more active developers a bit. If you arrive in the #plone chatroom and ask how you can help, chances are you will be met with gratitude and openness, so don't be afraid to introduce yourself. If no-one answers, it's probably just a bad time of the day: try the developers' list instead.
- You must prove that your code does not suck. There are two types of people in the world - those who can write code, and those who cannot. Actually, we take that back - there are also UI people. And accessibility experts. And translators. And documentation writers. And testers. However you wish to contribute, though, we'd like to see the calibre of what you produce. See below for ways of contributing without commit-access to the source code repository.
- You must sign the contributor agreement. This offers some intellectual copyright protection, and ensures that the Plone Foundation is able to exercise some control over the codebase to ensure it is not appropriated for someone's unethical purposes. Think IBM vs. SCO.
Now, there are a number of other ways you can help out, which do not require contributor agreements or anything else than the right attitude.
Testing
We always need more testers. If you believe you have found a bug in Plone, particularly if you are using a less common operating system, have a very large site, or operate in non-English locales, please report it in the bug collector. Please search the tracker thoroughly before submitting a bug, so that we avoid duplicates.
Before submitting a bug, please log in with your plone.org username. If you don't have an account, just click the "join" link in the blue bar at the top on plone.org. You can submit bugs without being logged in, but when you are logged in, you will be given email notification when your bug is responded to. This is important if we need clarification, for example - you can see a request for clarification, and then add a follow-up with more details.
Please think carefully before submitting a bug. If the bug is really in a third party product, do not use the Plone tracker, as we won't be able to help you - contact the authors directly, instead, or use their tracker in the products area. If it is a Plone bug, search for similar issues and add a follow-up to an existing bug if necessary - we will be notified of this just as clearly as with new bugs.
If you have to submit a new bug, try to write as clearly and concisely as you can to ensure that we understand your problem. Include relevant product versions, and always enter your Plone and Zope versions. Be honest about the severity - submitting a bug as "critical" if it is not is unlikely to win you any sympathy.
Patches and verification
Very often, issues end up in the collector for a long time because we don't know whether there was actually a problem, or if the submitter was experiencing a local configuration issue. If a bug looks sketchy, it's very helpful if you can try to replicate the problem on your own setup and add a follow-up about your experiences. If you have more insight to offer than original submitter, this is of course incredibly useful!
If you think you know how to fix a bug, this is your chance to shine. Once logged in, you can submit patches to the tracker. Even if you can't provide a patch, but can do some research and come up with a way in which the issue may be resolved easily, this will help the developers immensely. Adding patches to the collector is one of the best ways of showing off your skills and proving that you should be given commit-access to the Plone repository.
Documentation
Any member of plone.org (that includes you, if you want to be) can submit documentation to the Plone documentation area. If you have discovered something that you feel was poorly documented, you will gain much praise by writing a short How-to or a Tutorial on the subject. Once you are finished, submit your document for review, and we will publish it if it is not giving bad advice or too difficult to understand. You will probably find that writing documentation also helps you structure your own understanding of Plone, and forces you to consider issues you may have otherwise glossed over.
If you need help with writing documentation, or you're not sure where to start, send a mail to the plone-docs list/newsgroup
Helping others
The best way to gain respect in the community is by helping others. If you spend some time in the #plone chatroom or on the Plone users' mailing list answering people's questions when you know the answer, people will take notice. Helping others will help your own understanding, not to mention the warm fuzzy feeling you'll get afterwards.
1.
1.4. Special events
From time to time we organize certain special events to move Plone forward. This page describes the most common ones: Bug days, Sprints and Symposiums/Conferences.
Overview
Part of our development process are some special events that are organized from time to time. The main idea behind these is to get people together to work on Plone at the same time, both as it is more efficient as you can get immediate feedback and as it is certainly more fun to work with others.
What is a Bug Day?
From time to time, especially in the weeks leading up to a release, the Plone Community arranges so-called "Bug Days". These days focus on identifying and fixing bugs and other issues with the Plone core, and is an excellent chance to get to know both Plone and its developers better. We therefor get together on IRC and collectively fix a selected set of bugs.
The Bug-Wrangler is the person who determines what bugs needs to be fixed and prioritizes as well as allocates bugs to each individual developer. Bug Day's happen in many other open source projects. Anyone can participate in a bug day; general users who are comfortable with using Subversion can test that a developer has fixed a bug or that the user can not provoke the bug in a different manner. And no matter what your current skill level is - from totally new to Plone to experienced Plone developer - you can make a difference!
Bug days are a critical development and social event that brings the developers closer through producively exorcising software of evil bugs. Bug days are usually announced on the developer mailing list and on the frontpage of plone.org.
What is a Sprint?
Tres Seaver originally came up with this idea. While a bug day lasts only one day or a weekend and people usually meet online, a Sprint typically lasts for several days to a full week and people meet in person. The idea of a Sprint is for a group of people to enhance, create, or fix one or more pieces of infrastructure. Usually they are focused on a specific set of topics instead of the entire product line.
Sprints are either funded by organizations or individuals who need specific features or are interested in working on some. Sprints happen yearly in the Plone community from sunny San Francisco to the top of mountains in the Alps where the only electricity is produced by a generator and a satellite uplink is used for internet connectivity. Sprints are extremly productive cultural events in the world of Plone. You can have a look at the list of past and upcoming Sprints.
What is a Symposium/Conference?
This is not a special kind of event for Plone or the open source world, but exactly what happens in other businesses as well. Usually there is one official Plone conference per year and at least one regional Symosium. Most of the time there is a business and a development orientated track each consisting of a series of talks and tutorials. See the events section for past and upcoming events.
1.
1.6. Other resources
Other places you may want to go for help
This guide may not (and probably will not) give you all the details you need in order to resolve any confusion about Plone's internals. Luckily, there are several other places you can go.
- The rest of the documentation should be your first point of call.
- The Plone Developer list is a prime source of help. Please only post to this list with topics regarding the development of Plone itself, not third-party product development or customisation.
- Similarly, the #plone IRC chatroom is where most real-time discussion about Plone takes place. People with operator status are typically Plone developers.
- If you are contributing to Plone, you should make a point to both read the developer list regularly, and be online in #plone whenever you can, especially while you are working on Plone.
2. Conventions and professional practice
Plone conventions and professional practice - learn these!
2.1. Package naming conventions
How to name Python packages that contain plain-Python or Zope 3 components and thus do not need to live in the 'Products' directory.
Going forward, there is consensus that we should use Zope 3 style
programming practices whenever possible. One part of that is that things
that don't need to be Zope 2 products shouldn't live in in
$INSTANCE_HOME/Products/ - instead, they should be simple Python
packages, living in
$INSTANCE_HOME/lib/python or somewhere else on the
PYTHONPATH that Zope is given when it starts up.
In doing so, code that is part of Plone core should adhere to the following conventions
Generic components without specific Plone dependencies live
directly under the top-level
plone namespace
A good example is the the Zope3-style plone.i18n.
It is desirable to factor out generic interfaces and code into such packages whenever possible, to foster re-use.
Packages should have as few dependencies as possible. Thus, if plain-Python will do, that is better than plain-Zope 3, which is better than code that depends on specific facets of Zope 2.
Extensions of such general components (or separate components)
that provide a tighter integration with Plone-the-application should
live in the secondary namespace
plone.app
See for example plone.app.i18n.
Components that need to be Zope 2 products will most likely continue
to live in
$INSTANCE_HOME/Products still, at least for now. If possible
(which it may not be), however, you should try to not depend on
components being Zope 2 products.
Packages in the svn repository should follow modern Python guidelines and provide the necessary information to be packagable as eggs
The easiest way of achieving this is to use the ZopeSkel paste deploy script, as follows.
This will install the
easy_install program, normally to the Place
where the Python binary is found. Look in the log messages of the
installation script to see where they land. For more information, see
the documentation
- Install ZopeSkel:
$ easy_install ZopeSkel==dev
This will install Paste Deploy and the
paster script in the same
location as easy_install (again, watch the terminal output) and some
Zope skeletons to use with this tool. To see them, run:
$ paster create --list-templates
- To create a basic Plone package, run:
$ paster create -t plone
It will ask you a number of questions and then generate the basic
package layout. When asked for a project name, use the dotted name of
the package, e.g.
plone.i18n.
To create a plone.app package instead, run:
$ paster create -t plone_app
When ready, import the package to the Plone svn repository.
To learn more about setuptools, see its documentation. In particular, see the documentation on development-mode
2
2.
2.4. Unit testing Plone
Every feature, bugfix and other part of Plone should be properly unit tested, and all unit tests should be run and made to pass before each commit. Read on for more.
Plone is a large and complex system. Were you to blindly change some feature, you would probably break another one in ways you could never have imagined. Therefore, good unit testing is vital to the survival of Plone. Every feature needs a set of unit tests for base cases and edge cases.
Don't look at unit testing as some necessary evil: Unit testing actually makes testing fun! Ideally, you should write tests before you implement your code or make changes. Your mission is then to make the tests pass. This is generally much easier and more enjoyable than fiddling with things in UI. Unit tests also save you time in the long run, because you are less likely to let obscure bugs go unnoticed, or break code which you thought worked with seemingly "harmless" changes.
Please refer to the unit testing tutorial for the basics of unit testing. This is required reading!
Plone unit tests live in
CMFPlone/tests. You should most definitely read through some of the existing test files and familiarise yourself with how unit tests are written.
Fixing bugs
When you fix a bug in Plone, you must write at least one unit test to prove that it's there! That test should fail the first time you run it. Then fix the bug and run the test again. This time, the test must pass. Be sure to test Plone in the browser too, and run all the unit tests before commiting your bug fix.
Adding features
When you add a new feature, it is especially important that you write unit tests up front. If you need to write migrations (more on that later) you will need to test both the migration methods themselves, and the state the migration would bring the portal to.
If you are touching code that appears to have poor test coverage, it is your duty to expand the test coverage. Just because the guy before you was negligent doesn't mean you have a license to be. Plone's test coverage is improving all the time, but could be better. If anything, writing tests for code you are interacting with will reduce the chance that you'll get blamed for someone else's mistakes.
When to change a test
Sometimes you may be tempted to change a test. Be prepared to defend yourself! If you change a test, you are invalidating the unit test runs of every single person who's run the test suite with that test before you. Occasionally, tests are wrong - sometimes they test the wrong thing, sometimes they never can fail (which means the person who wrote the test to begin with was bad and wrote the test after the code - remember, all tests start with a failure!). If you do feel you should change a test, make sure your checkin message is clear on why you changed it, and discuss it with the release manager if you are at all in doubt.
3. Plone patterns and best practice
Sometimes, there is a right way to do things. This section deals with how to approach Plone development with performance, usability, testability and localisability in mind.
3.1. Performance
Some general guidelines about coding for performance.).
3!.
3
3.
3.
3.
4. General Plone concepts
Fundamental concepts used in Plone
4.1. Migration and portal creation
How to write and test Plone migrations, and how migrations are used during portal creation
4.1.1. Portal creation and migration concepts
What happens when the portal is created, and how does Plone handle migrations between versions?
Since Plone is a constantly evolving system, there is frequently a need to change the initial state of the system. However, to allow people to upgrade between releases, we cannot simply change the portal creation method. Instead, the
portal_migration tool is used to provide incremental changes to the base installation.
Prior to version 2.5 and the introduction of GenericSetup, when a new Plone Site was created, it would begin at a base version (e.g. Plone 2.0) and then run the migration steps for each release until the most current release. This was done to ensure that migrated and freshly created sites were always in sync.
With the introduction of GenericSetup, the state of the Plone site is persisted to XML files found in
CMFPlone/profiles/default. When a new Plone site is created (see
CMFPlone/factory.py) this profile is loaded using the
portal_setup tool. However, for users with an older version installed, migrations are run from their current installation to the most recent version.
Thus, writing migrations is vital when you change Plone in such a way that the base assumptions about what is available change. For example, should you need to add a new tool, this must be created during a migration step to ensure that it is always available. Similarly, when changing actions or default values for properties, adding new properties in portal_properties, adding a depedency on a new product, or changing anything else that becomes part of the "base configuration" in Plone, migrations are needed.
In fact, this means that you must make two changes: One to the XML files that set up the site, the other to the migration steps that update existing sites. This is expected to change over time, as GenericSetup becomes even more generic and usable for upgrades, but for now, migrations are necessary.
Luckily, changing the base profile XML is quite simple and self-explanatory. Consider the following, which sets up a few properties in 'portal_properties/navtree_properties':
<object name="portal_properties" meta_type="Plone Properties Tool" xmlns: <object name="navtree_properties" meta_type="Plone Property Sheet"> <property name="title">NavigationTree properties</property> <property name="sortAttribute" type="string">getObjPositionInParent</property> ...
Adding a new property to this propertysheet is as simple as adding a new
<property> element to that file. Remember to write a test first, though!
Migrations are a bit tricker, because during upgrades you general cannot make any assumptions about the system. For example:
- Tools which are available in a standard Plone installation may have been deleted
- Types which ship with Plone may have been disabled or deleted
- Users may have changed actions, properties etc. - in most cases, they would expect their changes to persist after an upgrade, but in some cases, overriding users customisations may be necessary
- Templates, scripts and other skin elements may have been customised, and the skin layers re-ordered in such a way that an acquired script or template may not be the exact version you expect
- Content-space objects with ids of skin elements may mess with acquisition
- Users may run migrations several times, for example if one migration step (hopefully not yours!) fails the first time around. Generally, migrations should not break (or break the site!) if this happens.
Hence, you have to be very careful when writing migrations. The next page describes how to write and unit test migrations. Not properly writing and testing migrations is a sure-fire way of getting your contributions rejected, so please read this carefully.
4.1.2. Writing migrations
How to write migrations
Migrations take the form of global functions, collected in files created for each version to which migration would be expected. At the time of writing, Plone is at 2.1 alpha 2, and so new migrations should go in
CMFPlone/migrations/v2_1/betas.py. Make sure that you select the latest appropriate file when adding new migrations.
In this file, you will notice a method:
def alpha2_beta1(portal): """2.1-alpha2 -> 2.1-beta1 """ out = [] #Make object paste action work with all default pages. fixObjectPasteActionForDefaultPages(portal, out) return out
This shows the alpha2-to-beta1 migration method with one migration. At the time of writing, this is the latest migration method, but there will be later ones added shortly, so make sure you use the right one!
The method
fixObjectPasteActionForDefaultPages is defined later in the same file. By convention, all migration methods take two arguments,
portal - the root portal object, and
out - a list to which string trace messages should be appended.
You should make a point to look at existing migrations before writing a new one. As mentioned on the previous page, you cannot make any assumptions about an installed system - people do delete tools and replace them with their own versions (for example CMFMember replaces MembershipTool), customise templates and scripts and generally try as hard as they can to make the life of a poor migration writer hard.
General migration advice
- Unit test carefully (see next page)
- Never assume tools and objects are there.
Always use the third default argument to
getToolByName and
getattr, for example:
portal_membership = getToolByName(portal, "portal_membership", None) or
child = getattr(parent, "childId", None). Then, make sure that you test the returned value:
if portal_membership is not None: ...
Notice the explicit test for
is not None - the shortcut
if portal_membership: is not sufficient.
- Always err on the side of caution - make sure your migration tool is safe to run twice, safe to run when none of your assumptions hold, and will never, ever destroy a user's site!
4.1.3. Testing migrations
How to write unit tests for migrations and portal creation
The easiest way to test your migration is to run a forced migration in the
portal_migration tool. You should do this to make sure the migration gets the expected result in the UI. Ideally, you should also create a fresh Plone site and make sure your changes show up there as well.
However, it is vital to properly unit test your migrations. Given the variety of conditions under which they may be run, migrations need to be as simple and well tested as possible.
There are two unit test files which pertain to migrations and portal creation:
CMFPlone/tests/testMigrations.py and
CMFPlone/tests/testPortalCreation.py.
First of all, you need to test your migration functions individually. Look at
testMigrations.py - you will need to import your migration method at the top of the file, and add your unit tests to the test class corresponding to the major version the migration is for - for example
TestMigrations_v2_1 for Plone 2.1 migrations.
Be sure to look at the existing unit tests there for examples. Typically, you will have at least one unit test for when your migration works as normal, one for when it is called twice, and one for each assumption which may be removed, e.g.
testMyMigration,
testMyMigrationTwice and
testMyMigrationNoTool. If you have more specific needs, you should have no problems finding examples of various ways of unit testing migrations in testMigrations.py.
Finally, you must add a test for the result of the migration to
testPortalCreation.py. Typically, this will test for the existence of some new property or object created by the migration - in other words, it will assert the assumptions introduced by the migration.
The crucial difference between these two files is that in testMigrations, you test the migration function itself, whilst in testPortalCreation you test the state of the portal that is asserted by the migration. As of Plone 2.5, the portal creation tests will actually test the GenericSetup profile, not the migrations, but it is important that the final test state for a successful migration step is consistent with the portal creation test you add here.
Again, you should have no trouble finding examples of tests in testPortalCreation.
A note about version control
The migration function files and migration unit test files are prime candidates for svn conflicts, because they are typically worked on by several developers simultanoeusly during periods of high activity. Hence, be careful when resolving conflicts, and always re-run all unit tests before commiting after resolving any conflicts. This goes for any conflicts, of course, but in these files it is particularly important!
5. Specific areas
Explanations of various areas of the codebase that may not be obvious
5.1. Content types
The relationship between CMF content types, Plone extensions, and ATContentTypes, as well as the machinery that publishes content objects to the browser
5.1.1. ATContentTypes
Since Plone 2.1, Plone has shipped with ATContentTypes for its default content types
ATContentTypes is a re-implementation of the standard CMF types as Archetypes content. It adds a numer of features to the standard CMF types and offers more flexibility in extending and re-using content types. The RichDocument tutorial explains how ATContentTypes are subclassed and how they make use of the latest conventions in Archetypes and CMF.
The ATContentTypes product is installed during the creation of a Plone site. It will migrate the base CMF content types to its own equivalents using its own highly generic migration framework.
Please note that ATContentTypes aims to be usable in plain CMF. It has a number of optional Plone dependencies, in the form:
if HAS_PLONE21: ... else: ...
Plone has no direct dependencies on ATContentTypes, nor on Archetypes. There are a few generic interfaces in
CMFPlone.interfaces that are used by both Archetypes/ATContentTypes and Plone, but we do not wish to have any direct dependency on Archetypes, since Archetypes is essentially just a development framework to make developing CMF content types easier. By minimising the number of dependencies, we ensure that plain-CMF (and in the future, plain-Zope 3) content types are still usable within Plone.
ATContentTypes and Plone both depend on
CMFDynamicViewFTI. This is a wrapper on the standard CMF FTI type that adds support for the
display menu by recording a few extra properties for the available and currently-selected view methods. It also provides a mixin class,
CMFDynamicViewFTI.browserdefault.BrowserDefaultMixin, which enables support for the
display menu (or rather, the interface
CMFDynamicViewFTI.interfaces.ISelectableBrowserDefault).
5.1.2. The 'display' menu
The 'display' menu is the drop-down that lets content authors select which view template to use, or which object to set as a default-page in a folder.
The
display menu is found in
global_contentmenu.pt and supports three different functions:
- Set the display template (aka "layout") of the current content object, provided that object supports this.
- Set the default-page of a folder, provided the folder supports this
- If viewing a folder with a default-page, allow selecting the standard view template/layout for that folder, thus unsetting the default-page.
There are two interfaces in
CMFDynamicViewFTI.interfaces that are used to support this functionality:
- IBrowserDefault
- Provides information about the layout current selection of a given content object, including any selected deafult-page
- ISelectableBrowserDefault
- Extends IBrowserDeafult with methods to manipulate the current selection
The canonical implementation of both these interfaces is in
CMFDynamicViewFTI.browserdefault.BrowserDefaultMixin. This in turn gets the vocabulary of available view methods from the FTI (and hence this can be edited through-the-web in
portal_types), and stores the current selection in two properties on each content object:
layout, for the currently selected view template, and
default_page if any default page is selected. If both are set, the default-page will take precedence.
BrowserDefaultMixin actually provides a
__call__ method which means that will render the object with its default layout template. However,
PloneTool.browserDefault() will actually query the interface directly to find out which template to display - please see the next page for the gory details.
5.1.3. Restricting addable types
The constrain-types machinery and how it drives the "restrict..." option under the "add item" menu.
As of Plone 2.1, the "add item" menu supports a "restrict..." page that lets the user decide which items can and cannot be added to that folder. This functionality is defined in a pair of interfaces in
CMFPlone.interfaces.constrains,
IConstrainTypes for read-only access and
ISelectableConstrainTypes for the mutators.
The canonical implementation of these interfaces is in
ATContentTypes.lib.constraintypes. This provides storage for the constraint mode (more below) and the list of locally allowed and "preferred" types. The preferred types are the ones that appear in the list immediately, and the rest of the allowed types appear behind a "more..." item.
The constraint type mode can be
ACQUIRE (the default),
DISABLED or
ENABLED. When disabled, the settings in
portal_types are used. When enabled, the list of types explicitly set are used. When set to acquire, the parent folder's types will be used if the parent is of the same portal type as the folder in question. If they are of different types the settings in
portal_types apply.
The rest of the
ConstrainTypesMixin class overrides CMFCore's
allowedContentTypes and
invokeFactory methods to ensure the constraints are enforced.
5.1.4. From Zope to the Browser
How do content types get "published" (in the Zope sense, not the workflow sense) to the web browser?
There is a fairly complex mechanism that determines how a content object ends up being displayed in the browser. The following is an adaptation of an email to the plone-devel list which aims to untangle this complexity. It pertains to Plone 2.1 only.
Assumptions:
- You want the
viewaction to be the same as what happens when you go to the object directly for most content types...
- ...but for some types, like File and Image, you want the "view" action to display a template, whereas if you go straight to the object, you get the file's contents
- You want to be able to redefine the
viewaction in your custom content types or TTW in portal_types explicitly. This will essentially override the current layout template selection. Probably this won't be done very often for things deriving from ATContentTypes, since here you can register new templates with the FTI and have those be used (via the "display" menu) in a more flexible (e.g. per-instance, user-selectable) way, but you still want the "view" action to give the same power to change the default view of an object as it always has.
- When you use the "display" menu (implemented with IBrowserDefault) to set a default page in a folderish container, you want it to display that item always, unless there is an index_html - index_html always wins (note - the "display" menu is disabled when there is an index_html in the folder, precisely because it will have no effect)
- When you use the "display" menu to set a layout template for an object (folderish or not), you want that to be displayed on the "view" tab (action), as well as by default when the object is traversed to without a template/action specified...
- ...except for ATFile and ATImage, which use a method index_html() to cut in when you don't explicitly specify an item. However, these types will still want their "view" action to show the selected layout, but will want a no-template invocation to result in the file content
Some implementation detail notes:
There are two distinct cases:
- CASE I: "New-style" content types using the paradigms of ATContentTypes
- These implement ISelectableBrowserDefault, now found in the generic CMFDynamicViewFTI product. They support the "display" menu with per-instance selectable views, including the ability to select a default-page for folders via the GUI. These use CMF 1.5 features explicitly.
- CASE II: "Old-style" content types, including CMF types and old AT types
- These do not implement this interface. The "display" menu is not used. The previous behaviour of Plone still holds.
The "old-style" behaviour is implemented using the Zope hook __browser_default__(), which exists to define what happens when you traverse to an object without an explicit page template or method. This is used to look up the default-page (e.g. index_html) or discover what page template to render. In Plone, __browser_default__() calls PloneTool.browserDefault() to give us a single place to keep track of this logic. The rules are (slightly simplified):
- A method, attribute or contained object
index_htmlwill always win. Files and Images use this to dump content (via a method index_html()); creating a content object index_html in a folder as a defualt page is the now-less-encouraged way, but should still be the method that trumps all others.
- A propery
default_pageset on a folderish object giving the id of a contained object to be the default-page is checked next.
- A property
default_pagein
site_propertiesgives us a list of ids to check and treat similarly to index_html. If a folder contains items with any of these magic ids, the first one found will be used as a default-page.
- If the object has a
folderlistingaction, use this. This is a funny fallback which is necessary for old-style folders to work (see below).
- Look up the object's
viewaction and use this if none of the above hold true.
In addition, we test for ITranslatable to allow the correct translation of returned pages to be selected (LinguaPlone), and have some WebDAV overrides.
Lastly, it has always been possible to put "/view" at the end of a URL and get the view of the object, regardless of any index_html() method. This means that you can go to /path/to/file/view and get the view of the file, even if /path/to/file would dump the content (since it has an index_html() method that does that).
This mechanism uses the method view(), defined in PortalContent in CMF (and also in BaseFolder in Archetypes). view() returns
self(), which results in a call to __call__(). In CMF 1.4, this would look up the
view action and resolve this. Note that for folders in Plone 2.0, the
view action is just
string:${object_url}/, which in turn results in __browser_default__() and the above rules. This means that /path/to/folder/view will render a default-page such as a content object index_html. The fallback on the
folderlisting action in PloneTool.browserDefault() mentioned above is there to ensure that when there isn't an index_html or other default-page, we get
folder_listing (instead of an infinite loop), essentially making the
folderlisting action on Folders the canonical place to specfy the view template. If you think that sounds messy, you're right. (With CMF 1.5 types, things are little different - more on that later.)
Enter CMF 1.5. CMF 1.5 introduces "Method Aliases". It is important to separate these from actions:
- Actions
- These generate the content action tabs (the green ones). You almost always have
viewand
edit. Other standard actions are
propertiesand
sharing. Each action has a target, which is typically something like
string:${object_url}/base_editfor the edit tab.
base_edithere is a page template.
- Method aliases
- These let you generalise actions. The alias
editcan point to
atct_editfor an ATContentTypes document, for example, and point to
document_edit_formfor a CMF document. Aliases can be traversed to, so /path/to/object/edit will send you to
atct_editon the object if the object is an ATContentTypes document, and to
document_edit_formif it is a CMF Document.
This level of indirection is actually quite useful. First of all, we get a standard set of URLs, so /path/to/object/edit is always edit, /path/to/object/view is always view. The actions (tabs) can point to these, meaning that we can pretty much use the same set of actions for all common types, with the variation happening in the aliases instead.
Secondly, a method alias with the name "(Default)" specifies what happens when you browse to the object without any template or action specified. That is, /path/to/object will look up the "(Default)" alias. This may specify a page template, for example, or a method (such as a file-dumping index_html()) to call.
Crucially, if "(Default)" is not set or is an empty string, CMF falls back on the old behaviour of calling the __browser_default__() method. In PloneFolder.py, this is defined to call PloneTool.browserDefault(), as mentioned above, which implements the Plone-specific rules for the lookup. Hence, if we need the old behaviour, we can just unset "(Default)"! This is what happens with old-style content types (that is, it is the default if you're not using ATContentTypes' base classes or setting up the aliases yourself).
Now, CMFDynamicViewFTI, which is used by ATContentTypes, extends the standard CMF FTI and a adds a few things:
- A pair of interfaces, ISelectableBrowserDefault and IBrowserDefault (the former extends the latter) describing various methods for getting dynamic views, as found in Plone in the "display" menu.
- A class BrowserDefaultMixin which gives you a sensible implementation of these. This uses two properties, "default_page" and "layout" to keep track of which default-page and/or view template (aka layout) is currently selected on an object.
- Two new properties in the FTI in portal_types - the default view, and the list of available views.
- A special target for a method alias called
(selected layout), which will return the selected view template (layout).
- Another special alias target called
(dynamic view), which will return a default-page, if set, or else the selected view template (layout) - you can think of "(dynamic view)" as a superset of "(selected layout)".
ATContentTypes uses BrowserDefaultMixin from CMFDynamicViewFTI, and sets up the standard aliases for "(Default)" and "view" to point to "(dynamic view)". The exceptions are File and Image, which have the "(Default)" alias pointing to "index_html", and the "view" alias pointing to "(selected layout)". This way, /path/to/file results in the file content (via the index_html() method) and /path/to/file/view shows the selected layout inside Plone. (Note that using "(dynamic view)" for the "view" alias would not work, because the index_html attribute would take precedence over the layout when testing for a default-page.) Additionaly, the
view action (tab) for each of these types must be
string:${object_url}/view to ensure it invokes the "view" alias, not the "(Default)" alias.
For Folders, the use of "(dynamic view)" takes care of the default-page and the selected view template. The
folderlisting fallback is no longer needed - the
view action can still be "string:${object_url}", and the "(Default)" alias pointing to "(dynamic view)" takes care of the rest.
In order for the "(dynamic view)" target to work as expected, it needs to delegate to PloneTool so that Plone's rules for lookup order and (especially) ITranslatable/LinguaPlone support are used. However, delegating to PloneTool.browserDefault() is not an option, because this does other checks which are not relevant (this essentially stems from the fact that browserDefault() is implementing both the "(Default)" and "view" cases above in a single method). Thus, the code for determining which, if any, contained content object should be used as a default-page has been factored out to its own method, PloneTool.getDefaultPage(). Helpfully, this can also be used by PloneTool.isDefaultPage(), radically simplifying that method.
Calling content objects
The last issue is what happens with view() and __call__() in this equation. The first thing to note is that view() method is masked by the
view method alias. Hence, /path/to/object/view will invoke the method alias
view if it exists, not call view(), making that method a lot less relevant.
However, we still want __call__() to have a well-defined behaviour. In CMF 1.4, __call__()used to look up the
view action, and this is still the default fallback, but if the "(Default)" alias is set, this is used instead. This may give somewhat unexpected behaviour, however: From the comments in the source code and the behaviour in Zope, where __call__() is the last fallback if neither __browser_default__() nor index_html are found, and to ensure that the "view() --> __call__()" mechanism always returns the object itself, never dumped file content, it seems to be the intention that __call__() should always return the object, never a default-page or file content dumped via an index_html() method. For Folders in Plone 2.0, this was actually not the case: __call__() would look up the
view action, which was "string:${object_url}", which with the use of __browser_default__() resulted in a lookup of a default-page if one was present. With the CMF 1.5 behaviour, the use of the "(Default)" alias in __call__() will mean that calling a File returns the dumped file content. Calling a Folder will return the default-page (or the Folder in its view if no default page is set) as in Plone 2.0.
The behaviour in Plone 2.1 is that __call__(), as overridden in BrowserDefaultMixin, should always return the object itself as it would be rendered in Plone without any index_html or default-page magic. Hence, __call__() in CMFDynamicViewFTI looks up the "(selected layout)" target and resolves this. This behaviour is thus consistent with the old behaviour of Documents and Files, but whereas Folders with a default-page in 2.0 used to return that default page from __call__(), in 2.1, it returns the Folder itself rendered in its selected layout. Again remember that this method will rarely if ever be called, since /path/to/object is intercepted by CMF's pre-traversal hook and ends up looking up the "(Default)" method alias (which does honour default-page for Folders), and /path/to/object/view uses the "view" method alias, as described above.
5.2. Navigation structures
How the navtree is constructed, and how it may be extended
5.
5.2.2. Constructing the navigation tree
How the navigation tree is constructed from a catalog query and how the settings in portal_properties are able to influence the shape of the final tree.
The navigation tree portlet uses a view, found in
CMFPlone.browser.interfaces.INavigationPortlet with a factory in
CMFPlone.browser.portlets.navigation. This view in turn uses a more general view,
CMFPlone.browser.interfaces.INavigationTree that is implemented in
CMFPlone.browser.navigation.CatalogNavigationTree.
Previously, the navigation tree used to be constructed in
CMFPlone.PloneTool, and there is still a deprecated
createNavTree() method there. A utility method that invokes the view is found in
CMFPlone.utils.createNavTree().
The actual construction of the navtree is delegated to a generic function found in
CMFPlone.browser.navtree called
buildFolderTree(). This function has a single purpose: Execute a catalog query and turn the results into a navigation tree-like structure. It is thus generic, and can be used to construct other navigational structures (such as the TOC for this reference manual).
Because the navigation tree needs to take several properties into account, the navtree builder can take a "strategy" object, which implements
CMFPlone.browser.interfaces.INavtreeStrategy. This can provide methods used to decide whether a given node should be included and/or whether a whole subtree should be pruned. It can also decorate the node dicts with additional keys, used to hold domain-specific information.
The standard navigation tree strategy is found in
CMFPlone.browser.navtree.DefaultNavtreeStrategy. In fact this extends the sitemap navtree strategy, because a navtree is essentially a more restricted form of a sitemap.
The navtree strategy is an adapter, looked up on the context (the content object being viewed - registered for
* by default) and the interface of the view that is constructing it (to distinguish the sitemap from the navtree):
<adapter for="* .interfaces.INavigationTree" factory=".navtree.DefaultNavtreeStrategy" provides=".interfaces.INavtreeStrategy" />
The
CatalogNavigationTree class that implements
INavigationTree thus does this:
def navigationTree(self): context = utils.context(self) queryBuilder = NavtreeQueryBuilder(context) query = queryBuilder() strategy = getMultiAdapter((context, self), INavtreeStrategy) return buildFolderTree(context, obj=context, query=query, strategy=strategy)
The
NavtreeQueryBuilder, is found in
CMFPlone.browser.navtree, with an interface in
CMFPlone.browser.interfaces.INavigationQueryBuilder. This simply wraps up the task of constructing the catalog query dict that is used to build the navtree, so that it may be re-used.
To make it easier to create custom navtrees and similar structures for site integrators, the contents of
CMFPlone.browser.navtree may be imported from protected code, by virtue of the following statement in 'CMFPlone/__init__.py':
allow_module('Products.CMFPlone.browser.navtree')
To ensure that the methods and attributes of the query builders and navtree strategies are accessible as well,
CMFPlone/browser/configure.zcml contains:
<content class="Products.CMFPlone.Portal.PloneSite"> <implements interface=".interfaces.INavigationRoot" /> </content> <content class=".navtree.NavtreeStrategyBase"> <allow interface=".interfaces.INavtreeStrategy" /> </content> <content class=".navtree.NavtreeQueryBuilder"> <allow interface=".interfaces.INavigationQueryBuilder" /> </content>
5.2.3. Constructing the sitemap
How the sitemap is constructed
The sitemap is constructed using the same mechanisms as the navigation tree - it simply uses a different query builder and navtree strategy.
The view that is used by the
CMFPlone.browser.sitemap.SitemapView, and implements
CMFPlone.browser.interfaces.ISitemapView. It in turn delegates to the more general view
CMFPlone.browser.navigation.CatalogSiteMap which implements
CMFPlone.browser.interfaces.ISitemap. This does:
def siteMap(self): context = utils.context(self) queryBuilder = SitemapQueryBuilder(context) query = queryBuilder() strategy = getMultiAdapter((context, self), INavtreeStrategy) return buildFolderTree(context, obj=context, query=query, strategy=strategy)
Note that the sitemap and navtree query builders and strategies share much code and are derived from one another.
There is a utilty method in
CMFPlone.utils called
createSiteMap() that invokes the
ISitemap view. The deprecated way of constructing the sitemap is through a call to
PloneTool.createNavTree setting
5.2.4. Navigation tabs
How the navigation tabs are constructed
Since Plone 2.1, the navigation tabs at the top of the site have been constructed from two sources:
- Any actions in the
portal_tabscategory
- Any folders in the root of the site
In Plone 2.1, these were created using
PloneTool.createTopLevelTabs(). This is now deprecated. A utility method in
PloneTool.utils with the same name now wraps the call to the view that implements this functionality. The view can be found in
CMFPlone.browser.navigation.CatalogNavigationTabs, with an interface in
CMFPlone.browser.INavigationTabs.
The view implementation performs an action lookup, and constructs a query that that finds folderish items in the navigation root. Note that the "navigation root" is not necessarily the portal root - see the section on the navigation root.
5. | http://plone.org/documentation/manual/plone-developer-reference/referencemanual-all-pages | crawl-002 | refinedweb | 7,876 | 61.87 |
Datapath¶
AWS ENI¶
The AWS ENI datapath is enabled when Cilium is run with the option
--ipam=eni. It is a special purpose datapath that is useful when running
Cilium in an AWS environment.
Advantages of the model¶
- Pods are assigned ENI IPs which are directly routable in the AWS VPC. This simplifies communication of pod traffic within VPCs and avoids the need for SNAT.
- Pod IPs are assigned a security group. The security groups for pods are configured per node which allows to create node pools and give different security group assignments to different pods. See section AWS ENI for more details.
Disadvantages of this model¶
- The number of ENI IPs is limited per instance. The limit depends on the EC2 instance type. This can become a problem when attempting to run a larger number of pods on very small instance types.
- Allocation of ENIs and ENI IPs requires interaction with the EC2 API which is subject to rate limiting. This is primarily mitigated via the operator design, see section AWS ENI for more details.
Architecture¶
Ingress¶
Traffic is received on one of the ENIs attached to the instance which is represented on the node as interface
ethN.
An IP routing rule ensures that traffic to all local pod IPs is done using the main routing table:
20: from all to 192.168.105.44 lookup main
The main routing table contains an exact match route to steer traffic into a veth pair which is hooked into the pod:
192.168.105.44 dev lxc5a4def8d96c5
All traffic passing
lxc5a4def8d96c5on the way into the pod is subject to Cilium’s BPF program to enforce network policies, provide service reverse load-balancing, and visibility.
Egress¶
The pod’s network namespace contains a default route which points to the node’s router IP via the veth pair which is named
eth0inside of the pod and
lxcXXXXXXin the host namespace. The router IP is allocated from the ENI space, allowing for sending of ICMP errors from the router IP for Path MTU purposes.
After passing through the veth pair and before reaching the Linux routing layer, all traffic is subject to Cilium’s BPF program to enforce network policies, implement load-balancing and provide networking features.
An IP routing rule ensures that traffic from individual endpoints are using a routing table specific to the ENI from which the endpoint IP was allocated:
30: from 192.168.105.44 to 192.168.0.0/16 lookup 92
The ENI specific routing table contains a default route which redirects to the router of the VPC via the ENI interface:
default via 192.168.0.1 dev eth2 192.168.0.1 dev eth2
Configuration¶
The AWS ENI datapath is enabled by setting the following option:
ipam: eniEnables the ENI specific IPAM backend and indicates to the datapath that ENI IPs will be used.
blacklist-conflicting-routes: "false"disables blacklisting of local routes. This is required as routes will exist covering ENI IPs pointing to interfaces that are not owned by Cilium. If blacklisting is not disabled, all ENI IPs would be considered used by another networking component.
enable-endpoint-routes: "true"enables direct routing to the ENI veth pairs without requiring to route via the
cilium_hostinterface.
auto-create-cilium-node-resource: "true"enables the automatic creation of the
CiliumNodecustom resource with all required ENI parameters. It is possible to disable this and provide the custom resource manually.
egress-masquerade-interfaces: eth+is the interface selector of all interfaces which are subject to masquerading. Masquerading can be disabled entirely with
masquerade: "false".
See the section AWS ENI for details on how to configure ENI IPAM specific parameters. | https://cilium.readthedocs.io/en/stable/concepts/datapath/ | CC-MAIN-2019-39 | refinedweb | 611 | 53.92 |
Wed.
I experimented once and found it works for function parameters too:
def print_point((x,y)):
print x,y
pt = 1,2
print_point(pt)
But every time I start to use it, I think it may be too clever.
So, Python has (some level of) pattern-matching - all the cool (IMO) languages (e.g. Haskell, OCaml, Erlang) do :-)
In "_, (bx, by) = points()", what does the underscore do?
Brandon, thanks, I didn't realize the same trick applied to function parameters, though I think I agree with you that it is too tricky.
Joe: The underscore is not special. It's just another identifier, used in this case because it an unobstrusive name for a variable I'll never use. It's used just like "dummy", or some other name.
Joe:
I'd be cautious of ever assigning a value to '_' when it's the interactive mode's object name for the last returned value.
>>> _
Traceback (most recent call last):
File "", line 1, in ?
NameError: name '_' is not defined
>>> print 5
5
>>> _
Traceback (most recent call last):
File "", line 1, in ?
NameError: name '_' is not defined
>>> 5
5
>>> _
5
>>>
===========
..and I'm glad to see Python uses a back-slash instead of an underscore to wrap an expression to a new line.
Great for use in list comprehension: [func(x) for (x,y) in points()]
Actually, the right-hand side doesn't even need to be a tuple -- it just needs to follow the iterator protocol. This can be very handy.
Assume we have:
x = set([1,2])
y = set([2,3])
And we want to get the element that is the intersection.
You could do something like:
z = x & y
assert len(z) == 1
z = z.pop()
but instead, you can just do:
z, = x & y
This looks a little funny, but it throws an exception if there is not exactly one element in the intersection, and binds z to that element.
2007,
Ned Batchelder | http://nedbatchelder.com/blog/200701/python_assignment_trickiness.html | crawl-003 | refinedweb | 332 | 71.14 |
NAME
people - fetch a structure containing all ttys, whose owner behaves like a human
SYNOPSIS
#include <sys/people.h> #include <asr.h> int people (struct ppl_tty **ttys);
DESCRIPTION
The people function fetches a short description of every tty, whose coupled process behaves as an actual human. It returns a newly malloc’ed array with just enough elements to contain all elements needed for this. The struct ppl_tty is declared as: struct ppl_tty { char tty_path[MAX_PATH_LENGTH]; int is_erratic; int uses_jobcontrol; int is_amoron; int is_aluser; int has_aclue; pid_t pgrp_leader; };
RETURN VALUES
On success people returns the number of elements in ttys , on failure it returns -1 and errno is set to an appropriate value.
ERRORS
ENOENT There are no human-behavioured processes on the system EBUSY The kernel is busy and will not stand this silly behaviour. Caution to call people again, from the same process, as the kernel might kill it right away. ENODEV See ENOENT above. EUSERS Too many of the people found were lusers. The cut-off for this error is system dependent, but is usually about 3.
EXAMPLE
#include <sys/people.h> #include <asr.h> #include <signal.h> int main (int argc, char **argv) { struct ppl_tty **ttys; int rv,c; rv=people(ttys); if (rv!=-1) { for (c=0;c<rv;c++) if ((ttys[c].is_amoron)||(ttys[c].is_aluser)) { kill(-(ttys[c].pgrp_leader),SIGKILL); } } else { ; /* Handle errors in a graceful way... */ } }
AUTHOR
This man page was written by Ingvar Mattsson, as a contribution to the a.s.r man page collection. | http://manpages.ubuntu.com/manpages/intrepid/man2/people.2fun.html | CC-MAIN-2015-11 | refinedweb | 253 | 58.89 |
#include <sys/types.h> #include <sys/ipc.h> #include <sys/sem.h>
The
semget() system call
returns the System, −1 is
returned, with
errno indicating
the error.
On failure,
errno will be set
to one of the following:
A semaphore set exists for
key, but the calling
process does not have permission to access the set, and
does not have the
CAP_IPC_OWNER capability.
IPC_CREAT and
IPC_EXCL were specified
in
semflg, but
a semaphore set already exists for
key.
nsems is
less than 0 or greater than the limit on the number of
semaphores per semaphore set (
SEMMSL).
A semaphore set corresponding to
key already exists, but
nsems is larger
than the number of semaphores in that set.
No semaphore set exists for
key and
semflg did not specify
IPC_CREAT.
A semaphore set has to be created but the system does not have enough memory for the new data structure.
A semaphore set has to be created but the system
limit for the maximum number of semaphore sets
(
SEMMNI), or the system
wide maximum number of semaphores (
SEMMNS), would be exceeded.
all but the least significant 9 bits of
semflg and creates a new
semaphore set (on success)..
The name choice
IPC_PRIVATE
was perhaps unfortunate,
IPC_NEW would more clearly show its
function.
semctl(2), semop(2), ftok(3), capabilities(7), sem_overview(7), svipc(7) | http://manpages.courier-mta.org/htmlman2/semget.2.html | CC-MAIN-2017-17 | refinedweb | 223 | 63.49 |
Working with JSON in strictly-typed languages (such as Go) can be tricky. Go actually does a remarkable job at this without much effort – like most things in Go.
Go provides native libraries for JSON and it integrates well with the language. There is a lot of great documentation in the encoding/json package.
I will try and cover the 90% cases in this article.
All examples will use the following basic imports so that the same lines don’t need to be repeated:
import ( "encoding/json" "fmt" )
Table of Contents
Decoding JSON
In a perfect world the schema of the object and the struct definition would both remain the same and have predicable value types. You could say this is easiest scenario, it’s also the most ideal place to start.
json.Unmarshal takes a byte array (we convert a
string to
[]byte ) and a reference to the object you wish to decode the value into.
type Person struct { FirstName string LastName string } func main() { theJson := `{"FirstName": "Bob", "LastName": "Smith"}` var person Person json.Unmarshal([]byte(theJson), &person) fmt.Printf("%+v/n", person) }
{FirstName:Bob LastName:Smith}
Go uses the case of the first letter to designate whether the entity is exported or not. However, this does not effect the decoding because it is not sensitive to case. For example
{"firstname": "Bob", "lastname": "Smith"} decoded into:
type Person struct { FirstName string LastName string }
Will still map the keys correctly. Just be careful if you need to encode this data back to JSON the keys will change case which will potentially break other systems. There is a simple way around this with tags explained later.
Encoding JSON
Encoding JSON is very easy. We simply provide the variable to be encoded to the
json.Marshal function:
type Person struct { FirstName string LastName string } func main() { person := Person{"James", "Bond"} theJson, _ := json.Marshal(person) fmt.Printf("%+v/n", string(theJson)) }
{"FirstName":"James","LastName":"Bond"}
Mapping Keys
If you need to map completely different JSON keys to a
struct you can use tags . Tags are text-based annotations added to variables:
type Person struct { FirstName string `json:"fn"` LastName string `json:"ln"` } func main() { theJson := `{"fn": "Bob", "ln": "Smith"}` var person Person json.Unmarshal([]byte(theJson), &person) fmt.Printf("%+v/n", person) }
This also applies to encoding the object so that the keys will be mapped back to their original names.
Default Values
The
json.Unmarshal function takes in an object rather than a type. Any values decoded in the JSON will replace values found in the object, but otherwise it will leave the values as they were which is ideal for setting up default properties for your object:
type Person struct { FirstName string LastName string } func newPerson() Person { return Person{"<No First>", "<No Last>"} } func main() { theJson := `{"FirstName": "Bob"}` person := newPerson() json.Unmarshal([]byte(theJson), &person) fmt.Printf("%+v/n", person) }
{FirstName:Bob LastName:<No Last>}
Handling Errors
If the JSON is unable to be parsed (like a syntax error) the
error is returned from the
json.Unmarshal function:
func main() { theJson := `invalid json` err := json.Unmarshal([]byte(theJson), nil) fmt.Printf("%+v/n", err) }
invalid character 'i' looking for beginning of value
More commonly is that the JSON values are of a different type than those defined in the Go structures and variables. Here is an example of trying to encode a JSON string into an integer variable:
func main() { theJson := `"123"` var value int err := json.Unmarshal([]byte(theJson), &value) fmt.Printf("%+v/n", err) }
json: cannot unmarshal string into Go value of type int
How you handle these errors is up to you, but if your dealing with JSON that can be messy or dynamic you need to keep reading…
Dynamic JSON
Dynamic JSON is the most difficult to deal with because you have to investigate the type of data before you can reliably use it.
There are two approaches to dealing with dynamic JSON:
1. Create a flexible skeleton to unserialise into and test the types inside that.
type PersonFlexible struct { Name interface{} } type Person struct { Name string } func main() { theJson := `{"Name": 123}` var personFlexible PersonFlexible json.Unmarshal([]byte(theJson), &personFlexible) if _, ok := personFlexible.Name.(string); !ok { panic("Name must be a string.") } // When validation passes we can use the real object and types. // This code will never be reached because the above will panic()... // But if you make the Name above a string it will run the following: var person Person json.Unmarshal([]byte(theJson), &person) fmt.Printf("%+v/n", person) }
2. Go totally rogue (pun intended) and investigate the entire value one step at a time.
This is akin to dealing with JSON in JavaScript with lots of
typeof conditions. You can do the same thing in Go like above or use thebeauty of the switch:
func main() { theJson := `123` var anything interface{} json.Unmarshal([]byte(theJson), &anything) switch v := anything.(type) { case float64: // v is an float64 fmt.Printf("NUMBER: %f/n", v) case string: // v is a string fmt.Printf("STRING: %s/n", v) default: panic("I don't know how to handle this!") } }
Important:Go uses fixed types for JSON values. Even though you would think
123 would be decoded as some type of
int it will actually always be a
float64 . This simplifies the type switches but may require you to also round or test for an actual integer value if that’s a requirement.
Validating JSON Schemas
If you have a complex JSON structure and/or more complex rules about how that data should be formatted it is easier to use the JSON Schema format to do the validation for you.
I won’t go into too much detail here because JSON Schema options could fill more than a whole article. However, here is an quick example using the xeipuuv/gojsonschema package with the example they provide:
import ( "fmt" "github.com/xeipuuv/gojsonschema" ) func main() { schemaLoader := gojsonschema.NewReferenceLoader("") documentLoader := gojsonschema.NewReferenceLoader("") result, err := gojsonschema.Validate(schemaLoader, documentLoader) if err != nil { panic(err.Error()) } if result.Valid() { fmt.Printf("The document is valid/n") } else { fmt.Printf("The document is not valid. see errors :/n") for _, desc := range result.Errors() { fmt.Printf("- %s/n", desc) } } } | http://www.shellsec.com/news/22889.html | CC-MAIN-2018-13 | refinedweb | 1,036 | 56.96 |
ServiceChange
A complex type that contains changes to an existing service.
Contents
- Description
A description for the service.
Type: String
Length Constraints: Maximum length of 1024.
Required: No
- DnsConfig
A complex type that contains information about the records that you want Route 53 to create when you register an instance.
Type: DnsConfigChange object
Required: Yes
- HealthCheckConfig
Public DNS namespaces only. A complex type that contains settings for an optional health check. If you specify settings for a health check, Amazon Route 53 associates the health check with all the records that you specify in
DnsConfig.
Important
If you specify a health check configuration, you can specify either
HealthCheckCustomConfigor
HealthCheckConfigbut not both.
Custom health checks are basic Route 53 health checks that monitor an AWS endpoint. For information about pricing for health checks, see Amazon Route 53 Pricing.
Note the following about configuring health checks.
A and AAAA records
If
DnsConfigincludes configurations for both A and AAAA records, Route 53 creates a health check that uses the IPv4 address to check the health of the resource. If the endpoint that is specified by the IPv4 address is unhealthy, Route 53 considers both the A and AAAA records to be unhealthy.
CNAME records, Route 53 creates an: HealthCheckConfig object
Required: No
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/Route53/latest/APIReference/API_autonaming_ServiceChange.html | CC-MAIN-2018-30 | refinedweb | 229 | 63.39 |
This is your resource to discuss support topics with your peers, and learn from each other.
12-09-2012 08:21 AM
Hi,
I use the cocos2d-x framework,
I have a game that runs fine on android and iOS, but when I try to create the project inside QNX Momentics, It tell that can't find box2d.h, how is the correct way to add box2d to a blackberry project in cocos2d-x?
12-09-2012 06:03 PM
12-10-2012 06:38 PM
I succesfully add the box2d file
Previous I use
#include "Box2d.h"
But it not work
now I use the
#include "Box2D/Box2d.h"
and works, but I still have an error:
../../../../external/Box2D/proj.blackberry/Device-
any idea of how to resolve that?
12-11-2012 04:16 AM
12-11-2012 05:32 AM
Unfortunately I think it's a bug from Momentics, because I already have the libBox2D.a file in the respective Device-Debug folder, but when I try to compile, Momentics can't find the file.
I'll try to add the file like you said, I'll post the result here, thanks
12-11-2012 06:19 PM
ThomasRiisbjerg, I can't find the 'linker' option that you said.
I open the properties, but i find two C++ options:
C/C++ Build and C/C++ General,
But no one of them have the linker option inside it.
Can you help me?
12-12-2012 04:12 PM
Project properties -> C/C++ Build -> Settings -> QCC Linker
12-12-2012 06:15 PM
I found it, thank you very much
12-12-2012 08:29 PM
Did that solve your linker error?
01-18-2013 05:47 AM | http://supportforums.blackberry.com/t5/Native-Development/How-to-correct-add-Box2D-to-blackberry-project-using-cocos2d-x/m-p/2027147 | CC-MAIN-2014-52 | refinedweb | 288 | 79.7 |
Ever since launching One Word Domains five months ago, I've been wanting to set up a blog where I could:
- Document my build process
- Write about some of the coolest programming tips and tricks that I learned in the process (this blog post is one of them)
- Share some insights on the domain industry - i.e. what are some of the top naming conventions in Silicon Valley
However, I quickly ran into a dilemma when trying to find a suitable blogging CMS (content management system) for my needs:
- Wordpress is easy to set up, but is also an overkill - I don't need 15 different subpages and a full-fledged user management system (I'm already using PostgreSQL for that)
- Ghost is a little more challenging to set up (here's a great guide if you're into that) but would require setting up an extra dyno on Heroku or a VPS on Digital Ocean - which would mean an extra $5 - $7 a month
- Medium is relatively pain-free to set up, but is pretty limited when it comes to customization + you're not really helping your site's SEO with your posts since you'll be contributing to Medium's SEO instead
What I was looking for was a simple and free static-site solution that was easy to customize + integrates well with my existing stack (Heroku, Flask, PostgreSQL, Python, HTML/CSS, JavaScript, jQuery).
I decided to consult my friend Linus, who recommended the Python-Markdown library - which is the same framework that Pelican (the Python version of Hugo) uses.
Intrigued, I began researching the origins of the Python-Markdown library, and that's when I came across this blog post by James Harding. 10 lines of code later, I've successfully set up my very own Markdown-powered static site for the One Word Domains Blog.
Here's how everything went down, step by step:
Requirements
First, I imported the
Flask-FlatPages and
Markdown libraries:
import markdown from flask_flatpages import FlatPages
...and declared them in my
requirements.txt file:
Flask-FlatPages==0.7.1 Markdown==3.2.1
Folder Structure
Since I already had an existing Flask app up and running, all I did next was add a
/posts folder at the root directory, a separate folder called
blog-images under the
/static/assets folder, and a few template files in the
/templates folder. Here's a rough overview on how my folders were structured:
├──app.py ├──posts │ └──post1.md │ └──post2.md ├──templates │ └──blog.html │ └──post.html └──static └──assets │ └──blog-images └──script └──styles
Define FlatPages ENV Variables
Before I started setting up the Flask routes for my blog, I defined the
ENV variables for
FlatPages in my
app.py file, right after initiating the Flask app:
FLATPAGES_EXTENSION = '.md' FLATPAGES_ROOT = '' POST_DIR = 'posts' flatpages = FlatPages(app) app.config.from_object(__name__)
Here, I defined
FLATPAGES_ROOT as
'' because the folder containing all my markdown files,
posts, is located in the root directory – which is why
POST_DIR is defined as
'post'.
Flask Routes
Here are the 10 lines of code that I mentioned earlier – which I inserted into my
app.py file:
@app.route("/blog") def blog(): posts = [p for p in flatpages if p.path.startswith('posts')] posts.sort(key=lambda item:dt.strptime(item['date'], "%B %d, %Y"), reverse=True) return render_template("blog.html", posts=posts) @app.route("/blog/<permalink>") def blog_post(permalink): path = '{}/{}'.format('posts', permalink) post = flatpages.get_or_404(path) return render_template('post.html', post=post)
I know, I couldn't believe it either.
10 lines of Python code was all I needed to get the One Word Domains Blog up and running.
Let's dive deeper into the lines of code above and see what each one of them does:
- The first route,
/bloghosts the landing page of the blog. Here, the code iterates across all the Markdown files present in the
/postsfolder and interprets them in the form of a
flatpagesobject. It then sorts them in descending order by published date – here, I'm using the
dt.strptime()method because my dates are written in natural language format (October 30, 2020). Lastly, the code renders the
blog.htmltemplate and sends over all the posts as jinja variables.
- The second route,
/blog/<permalink>takes care of the individual blog posts. The first line of code creates the composite path for each of the Markdown files, which is in the format
/posts/post1.md. It then gets the files with the
flatpagesmodule and renders the
post.htmltemplate along with all the attributes of the particular blog post.
Markdown Format
Let's take a look at the format of a given Markdown file, say, the one for this blog post, for example:
title: Building A Lightweight Blogging CMS In 10 Lines of Code subtitle: This is the full story of how The One Word Domains blog was built - with 10 lines of Python code, the Flask-Flatpages library, and a bunch of Markdown files. date: November 2, 2020 image: post2-thumbnail.png permalink: markdown-flask-lightweight-cms Ever since launching One Word Domains five months ago... (content)
As you can see, each Markdown file has the following attributes:
title: The title of the blog post
subtitle: The subtitle, or 'tagline' of a blog post, usually written to give more context on the post
date: The date the blog post was published
image: The thumbnail image for the blog post, stored within the
/static/assets/blog-imagesfolder that I mentioned earlier
permalink: The canonical URL for the blog post. Protip: try and use dashes and keep this below 74 characters so that it doesn't get truncated in the search results
content, or
html: The bulk of the blog post's content
HTML Templates
Here's a rough outline of my
blog.html template:
{% for post in posts %} <a href="/blog/{{ post.permalink }}"> <img src="/static/assets/blog-images/{{ post.image }}"/> <h1>{{ post.title }}</h1> <p>{{ post.date }}</p> <p>{{ post.subtitle }}</p> </a> {% endfor %}
This code will iterate across all the Markdown files in the
/posts folder that I set up earlier and auto-generate previews for each and every one of them.
And here's the one for my
post.html file:
<img src="/static/assets/blog-images/{{ post.image }}"/> <h1>{{ post.title }}</h1> <p>{{ post.date }}</p> {{ post.html|safe }}
Compile and Run
If everything went well, your blog should be live at
127.0.0.1:5000/blog once you run
$ python app.py in your terminal. Yay!
Or, if you're like me and you run into a bunch of errors in your first few attempts - don't give up! Debug your code by pasting the error messages into Google and clicking on the first Stackoverflow post that pops up.
Good luck!
Bonus: Typora
I first started editing my Markdown files in Sublime, which was rather mechanical and cumbersome.
Then, everything changed when the fire nation attacked I discovered this free tool, Typora (or at least, "free during beta", as stated on their site). The intuitive and seamless writing experience that Typora provides is unparalleled, and while **this is not an ad, I highly recommend trying it out.
Discussion
I think you are great! I built a display machine state using Python3 with Flask!
Flask State Github:github.com/yoobool/flask-state .
Should i can get some improvement suggestions from you? Thanks~
That's really sick, great job!
Hi, thanks for your reply. Would you give me a star on GitHub? because my project isn't active. ^.^
Okay, will do! :)
You should try stackedit too! It's a pretty cool markdown editor as well. Also hashnode might've been a decent fit for you I'd say
Hey, thanks a lot for the suggestion, Shenesh! I'll definitely look into Stackedit! ☺️
And yeah, I've been using hashnode for a while but I was looking for a more customizable solution for this project 😅
Thank you for the post.
I created a simple example: github.com/denpetrov/blog-simple
This is amazing, thanks for sharing this with me! Just starred your repo! :) | https://dev.to/steventey/building-a-lightweight-blogging-cms-in-10-lines-of-code-e1a | CC-MAIN-2020-50 | refinedweb | 1,345 | 64.1 |
My objective is to have a URL like - m.af.ia.
So, assuming I register a TLD web-site called m.af with Afghanistan (af) ... is there any way I can set up my server infrastructure so that the URL I handout would be - m.af.ia.
And not m.af/ia.
Is there a possibility of achieving this?
DNS is a hierarchy, with the topmost element at the far right.
If you want a DNS entry for m.af.ia you would need to register af.ia.
This is currently impossible: The .ia top-level domain does not exist according to the root zone database.
m.af.ia
af.ia
.ia
You can petition ICANN for a new top-level domain, but I doubt you'll get it (and it would be hideously expensive...)
In that case, you'd need to own the af domain under an ia top-level domain (which I'm sure ICANN will provide for a scant investment of hundreds-of-thousands per year), and make a subdomain of m.
af
ia
m
Under a domain that you control (af), you can create any number or structure of subdomains that you please.
No.
The only way this would work is if you could find a provider that allows you to register ".ia" addresses. Your current/example name is in the ".af" DNS namespace, so any domains you create will always end in
224 times
active | http://serverfault.com/questions/309005/how-to-register-a-url-with-3-part-naming | crawl-003 | refinedweb | 240 | 75.4 |
Simple Java SOAP Web Service Using JDK Tools
Simple Java SOAP Web Service Using JDK Tools
A tutorial on how to use JDK tools to publish and consume a simple SOAP web service.
Join the DZone community and get the full member experience.Join For Free
Java-based (JDBC) data connectivity to SaaS, NoSQL, and Big Data. Download Now.
The JDK allows us to both publish and consume a web service using some of its tools. The sample service “Hello world” will be responsible for saying hello to the name that I’ll send it to that service. This example also includes creating a client for this service (you can follow the same steps in client to communicate with any service you like).
A. Creating the Service
1. Construct Simple Hello Class
Suppose you have a simple class that receives a string and return another string
package wsserver; public class Hello { public String sayHello(String name) { return "Hello " + name; } }
2. Convert Hello Class to a Web Service
Simply we can convert this class to be a web service using some annotations
@WebService — This identifies the class as being a web service.
@SOAPBinding(style=SOAPBinding.Style.RPC) — This specifies the type of the communication, in this case RPC.
package wsserver; import javax.jws.WebMethod; import javax.jws.WebParam; import javax.jws.WebService; import javax.jws.soap.SOAPBinding; @WebService @SOAPBinding(style=SOAPBinding.Style.RPC) public class Hello { public String sayHello(String name) { return "Hello " + name; } }
3. Publish Hello Service
To publish this service, we can use the Endpoint class. We will provide the publish method with any URL and an instance of our service class
package wsserver; import javax.xml.ws.Endpoint; public class ServiceStarter { public static void main(String[] args) { String url = ""; Endpoint.publish(url, new Hello()); System.out.println("Service started @ " + url); } }
4. Compile Code
We can compile our two classes using the simple Javac command:
javac -d . *.java
5. Start Service
We can start our service by running ServiceStarter class using the following Java command:
java wsserver/ServiceStarter
6. Check Running Service
Now the service has been started, you can check your service by seeing its WSDL file by getting the url in setp 3. We can get the Service WSDL file by appending “?wsdl” to the URL:
The result of the WSDL file will look like the following XML file:
<?xml version="1.0" encoding="UTF-8"?><!-- Published by JAX-WS RI at. RI's version is JAX-WS RI 2.1.6 in JDK 6. --><!-- Generated by JAX-WS RI at. RI's version is JAX-WS RI 2.1.6 in JDK 6. --> <definitions xmlns: <types></types> <message name="sayHello"> <part name="arg0" type="xsd:string"></part> </message> <message name="sayHelloResponse"> <part name="return" type="xsd:string"></part> </message> <portType name="Hello"> <operation name="sayHello"> <input message="tns:sayHello"></input> <output message="tns:sayHelloResponse"></output> </operation> </portType> <binding name="HelloPortBinding" type="tns:Hello"> <soap:binding</soap:binding> <operation name="sayHello"> <soap:operation</soap:operation> <input> <soap:body</soap:body> </input> <output> <soap:body</soap:body> </output> </operation> </binding> <service name="HelloService"> <port name="HelloPort" binding="tns:HelloPortBinding"> <soap:address</soap:address> </port> </service> </definitions>
B. Creating the client
The first thing we should have is an interface of that service class to be able to call its methods using java code. After that we'll write some code to connect to that service. Fortunately there is a tool in JDK called wsimport that can do all of that if you just provided it with a valid WSDL URL.
1. Import Service Interface and Service Client Creator Class
Using wsimport tool we will write the following command:
wsimport -d . -p wsclient -keep
The -p arg tells the tool to put the generated classes into a specific package. Executing this command will result in generating two classes. The first class, called Hello.java and its interface that contains our method sayHello.
The code should be something like this:
package wsclient; import javax.jws.WebMethod; import javax.jws.WebParam; import javax.jws.WebResult; import javax.jws.WebService; import javax.jws.soap.SOAPBinding; /** * This class was generated by the JAX-WS RI. * JAX-WS RI 2.1.6 in JDK 6 * Generated source version: 2.1 * */ @WebService(name = "Hello", targetNamespace = "") @SOAPBinding(style = SOAPBinding.Style.RPC) public interface Hello { /** * * @param arg0 * @return * returns java.lang.String */ @WebMethod @WebResult(partName = "return") public String sayHello( @WebParam(name = "arg0", partName = "arg0") String arg0); }
The second file would be called HelloService.java, and it will contain the methods that would help us to connect to our service we are only concerned with the no-arg constructor and the getHelloPort() method:
package wsclient;; import javax.xml.ws.WebServiceFeature; /** * This class was generated by the JAX-WS RI. * JAX-WS RI 2.1.6 in JDK 6 * Generated source version: 2.1 * */ @WebServiceClient(name = "HelloService", targetNamespace = "", wsdlLocation = "") public class HelloService extends Service { private final static URL HELLOSERVICE_WSDL_LOCATION; private final static Logger logger = Logger.getLogger(wsclient.HelloService.class.getName()); static { URL url = null; try { URL baseUrl; baseUrl = wsclient.HelloService.class.getResource("."); url = new URL(baseUrl, ""); } catch (MalformedURLException e) { logger.warning("Failed to create URL for the wsdl Location: '',")); } /** * * @return * returns Hello */ @WebEndpoint(name = "HelloPort") public Hello getHelloPort() { return super.getPort(new QName("", "HelloPort"), Hello.class); } /** * * @param features * A list of {@link javax.xml.ws.WebServiceFeature} to configure on the proxy. Supported features not in the <code>features</code> parameter will have their default values. * @return * returns Hello */ @WebEndpoint(name = "HelloPort") public Hello getHelloPort(WebServiceFeature... features) { return super.getPort(new QName("", "HelloPort"), Hello.class, features); } }
2. Invoke the Web Service
We are now ready to write the code responsible for invoking the web service by making a new instance of the HelloService class, we are ready to get Hello interface by calling the method getHelloPort() from the HelloService instance. After that we can call the method and get the response as a simple Java method:
package wsclient; public class HelloClient { public static void main(String[] args) { HelloService service = new HelloService(); Hello hello = service.getHelloPort(); String text = hello.sayHello("hany"); System.out.println(text); } }
3. Compile Classes and Run
javac -d . *.java java wsclient/HelloClient
Connect any Java based application to your SaaS data. Over 100+ Java-based data source connectors.
Published at DZone with permission of Hany Ahmed . See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/simple-java-soap-web-service-using-jdk-tools | CC-MAIN-2019-09 | refinedweb | 1,074 | 51.04 |
The main problem with wiki syntaxes is that they have almost always fixed
features. Some wikis are extensible, like Trac, who let you create
macros.
CPSWiki is going to have its own system of macros, that will give the user
the ability to execute scripts.
Those scripts will be able to interact with the portal through an execution
context that will make the portal accessible to the code.
Zwiki has a similar approach in by letting
the user execute dtml.
The more interesting way to do it is to create a wiki namespace and to keep
all scripts in the wiki folder. This namespace will allow scripts to
interact and
users to construct complex features. (dynamic graph creation,
etc.)
This has to come with a library of base scripts that the user can start to
manipulate.
Let's get rid of all thoses heavy CMS features and get back to a good o'wiki
to create all our sites ;)
(Post originally written by Tarek Ziadé on the old Nuxeo blogs.) | https://www.nuxeo.com/blog/wiki-scripting/ | CC-MAIN-2017-51 | refinedweb | 170 | 71.34 |
!
This particular application had two functions, like /api/edit_user and /api/view_user. The CSRF token was found by going to view_user and it could then be sent to edit_user, but the token was only valid once. Therefore every time I wanted to send a payload to edit_user I first had to request a valid token from view_user (and so I had to teach burp scanner how to do this too!)
Example code for this is here! Again I’ll explain how it works below:
The way that I achieved this was to modify my original burp extension so that before it sent each request it would send a preliminary request to the token granting function but this request must also include the cookie supplied in the main request.
A further hurdle in this task was the fact that I was testing in a pre-production environment and the server I was testing had a self-signed certificate, something that Jython doesn’t like so much. Luckily I found a really nice solution for this written by Brandon of pedersen-live.com who has a nice write up about it here.
So to capture a new CSRF token first of all I have to determine the cookie in use for the original request and then use that cookie to request a new token from the view page, that’s covered by the following code:
requestHeaders = self._helpers.bytesToString(request) #print requestHeaders for header in requestHeaders.split("n"): if header.startswith("Cookie"): print "Found cookie: " + header[8:] BurpExtender.cookie = header[8:] requestBody = self._helpers.bytesToString(request[parsedRequest.getBodyOffset():]) resp = self.connect_to_untrusted_host("target.example.net", "/api/view_user") print "Requested new token from target page" data1 = resp.read() print data1
This code takes the original request’s headers, pulls them apart and looks for the header that starts with “Cookie” and then takes that whole line and stores it so that it can include those cookies in the request to the view page. Here that request to the view page is made by the function connect_to_untrusted_host which is a function that comes directly from Brandon’s code, I’ve just snuck into it the ability to inject the stored cookies, it looks like this:
def connect_to_untrusted_host(self, host, page): conn = httplib.HTTPSConnection(host) # Cookies required are pulled out of the request, this is good for most sites # but if you're using HTTP Basic, you can change this section here: print "Hacked a cookie into request" if BurpExtender.cookie != '': headers = {"Cookie": BurpExtender.cookie} else: headers = {} conn.request('GET', page, headers=headers) response = conn.getresponse() return response
Here you can see that it simply requests the given page but if a cookie was extracted earlier it sticks that into the new request and returns the response for processing! The response is handled in the exact some way that the original extension I blogged about handles tokens! The TL;DR of that is it extracts the CSRF token out of the response and injects it into the new request so that the request is valid!
Example code is all here. | https://www.gracefulsecurity.com/burp-suite-vs-csrf-tokens-round-two/ | CC-MAIN-2017-30 | refinedweb | 514 | 60.85 |
A utility class (aka helper class) is a "structure" that has only static methods and encapsulates no state.
StringUtils,
IOUtils,
FileUtils from Apache Commons;
Iterables and
Iterators from Guava, and
Files from JDK7 are perfect examples of utility classes.
This design idea is very popular in the Java world (as well as C#, Ruby, etc.) because utility classes provide common functionality used everywhere.
Here, we want to follow the DRY principle and avoid duplication. Therefore, we place common code blocks into utility classes and reuse them when necessary:
// This is a terrible design, don't reuse public class NumberUtils { public static int max(int a, int b) { return a > b ? a : b; } }
Indeed, this a very convenient technique!?
Utility Classes Are Evil
However, in an object-oriented world, utility classes are considered a very bad (some even may say "terrible") practice.
There have been many discussions of this subject; to name a few: Are Helper Classes Evil? by Nick Malik, Why helper, singletons and utility classes are mostly bad by Simon Hart, Avoiding Utility Classes by Marshal Ward, Kill That Util Class! by Dhaval Dalal, Helper Classes Are A Code Smell by Rob Bagby.
Additionally, there are a few questions on StackExchange about utility classes: If a “Utilities” class is evil, where do I put my generic code?, Utility Classes are Evil.
A dry summary of all their arguments is that utility classes are not proper objects; therefore, they don't fit into object-oriented world. They were inherited from procedural programming, mostly because we were used to a functional decomposition paradigm back then.
Assuming you agree with the arguments and want to stop using utility classes, I'll show by example how these creatures can be replaced with proper objects.
Procedural Example
Say, for instance, you want to read a text file, split it into lines, trim every line and then save the results in another file. This is can be done with
FileUtils from Apache Commons:
void transform(File in, File out) { Collection<String> src = FileUtils.readLines(in, "UTF-8"); Collection<String> dest = new ArrayList<>(src.size()); for (String line : src) { dest.add(line.trim()); } FileUtils.writeLines(out, dest, "UTF-8"); }
The above code may look clean; however, this is procedural programming, not object-oriented. We are manipulating data (bytes and bits) and explicitly instructing the computer from where to retrieve them and then where to put them on every single line of code. We're defining a procedure of execution.
Object-Oriented Alternative
In an object-oriented paradigm, we should instantiate and compose objects, thus letting them manage data when and how they desire. Instead of calling supplementary static functions, we should create objects that are capable of exposing the behavior we are seeking:
public class Max implements Number { private final int a; private final int b; public Max(int x, int y) { this.a = x; this.b = y; } @Override public int intValue() { return this.a > this.b ? this.a : this.b; } }
This procedural call:
int max = NumberUtils.max(10, 5);
Will become object-oriented:
int max = new Max(10, 5).intValue();
Potato, potato? Not really; just read on...
Objects Instead of Data Structures
This is how I would design the same file-transforming functionality as above but in an object-oriented manner:
void transform(File in, File out) { Collection<String> src = new Trimmed( new FileLines(new UnicodeFile(in)) ); Collection<String> dest = new FileLines( new UnicodeFile(out) ); dest.addAll(src); }
FileLines implements
Collection<String> and encapsulates all file reading and writing operations. An instance of
FileLines behaves exactly as a collection of strings and hides all I/O operations. When we iterate it—a file is being read. When we
addAll() to it—a file is being written.
Trimmed also implements
Collection<String> and encapsulates a collection of strings (Decorator pattern). Every time the next line is retrieved, it gets trimmed.
All classes taking participation in the snippet are rather small:
Trimmed,
FileLines, and
UnicodeFile. Each of them is responsible for its own single feature, thus following perfectly the single responsibility principle.
On our side, as users of the library, this may be not so important, but for their developers it is an imperative. It is much easier to develop, maintain and unit-test class
FileLines rather than using a
readLines() method in a 80+ methods and 3000 lines utility class
FileUtils. Seriously, look at its source code.
An object-oriented approach enables lazy execution. The
in file is not read until its data is required. If we fail to open
out due to some I/O error, the first file won't even be touched. The whole show starts only after we call
addAll().
All lines in the second snippet, except the last one, instantiate and compose smaller objects into bigger ones. This object composition is rather cheap for the CPU since it doesn't cause any data transformations.
Besides that, it is obvious that the second script runs in O(1) space, while the first one executes in O(n). This is the consequence of our procedural approach to data in the first script.
In an object-oriented world, there is no data; there are only objects and their behavior! | http://www.yegor256.com/2014/05/05/oop-alternative-to-utility-classes.html | CC-MAIN-2017-43 | refinedweb | 865 | 55.74 |
QT client to C server
I'm having a hard time trying to connect my QT client to a C server. I know that the server is fully functional. THe problem is in my client. I just want to establish a connection so that later I can send data. First, the client has a login dialog box. After a successful login, he should be redirected to the next menu. I know that the redirection part is correct too. The only problem remains at the connection.
Here is the code
@void Login::on_Login_clicked()
{
pSocket = new QTcpSocket (this); connect (pSocket, SIGNAL(readyRead()), SLOT(waitNextStep())); pSocket->connectToHost(ui->lineEdit->text(), ui->lineEdit_2->text().toInt()); if(pSocket->waitForConnected()) { Menu mMenu; mMenu.setModal(true); mMenu.exec(); } else { QMessageBox::critical(this,tr("Error"),tr("Error at Connectt!")); }
}@
I don't get any error. After I complete the line edits with IP and Port and hit connect, the windows freezes for about 15 seconds - is unresponsive. After that I get the Error at Connect dialog box. No matter if I complete with a correct IP and PORT set, the windows still freezes. Any help will be much appreciated.
@SLOT(waitNextStep())@
I'm worried mostly about this line. If I'm not planning to do anything with the data now, what shoud the function waitNextStep do ?
Hi,
Have a look at "fortuneclient": example.
"waitForConnected": will block until client is connected.
Instead use "connected": signal.
Ok so this makes my connect functions look something like
@connect (pSocket, SIGNAL(connected()), SLOT(handleConnected()));
connect (pSocket, SIGNAL(error(QAbstractSocket::SocketError)), SLOT (handleError()));@
But I'm getting the same result. In the fortuneclient example I couldn't find any waitForConnected function, so I reckon I must get rid of it. But how can I test whether a connection was established?
bq. But how can I test whether a connection was established?
The "connected": signal is emitted as soon as the connection is established with the server.
Thus your handleConnected() will get called.
Ok, I understood. Now it's not getting unresponsive. However, it's not doing anything when I press on login - it's not calling handleConnected nor handleError
@void Login::on_pushButton_clicked()
{
pSocket = new QTcpSocket (this); connect (pSocket, SIGNAL(connected()), SLOT(handleConnected())); connect (pSocket, SIGNAL(error(QAbstractSocket::SocketError)), SLOT (handleError())); pSocket->connectToHost(ui->lineEdit->text(), ui->lineEdit_2->text().toInt());
}@
The handleConnect looks like
@void Login::handleConnected()
{
Menu mMenu;
mMenu.setModal(true);
mMenu.exec();
}@
I expect it to open my next dialog box, but fail.
Sorry for asking so many questions but I really need to make it work.
It seems that the client is not getting connected to the server.
Is the server ip and port correct ?
Now as you have connected error signal what error you get in handleError() ?
You can ask as many relevant questions as you like :)
The IP and port are both correct.
In my handleError() i just got a message box with an error message, like this:
@void Login::handleError()
{
QMessageBox::critical(this,tr("Error"),tr("Connection Error"));
}
@
Both handleConnected() and handleError() are declared in private slots section in Login class
I should have looked at this first. You need to connect the signal error to slot with proper parameters.
@
connect (pSocket, SIGNAL(error(QAbstractSocket::SocketError)), SLOT (handleError(QAbstractSocket::SocketError)));
@
And then in the Slot definition,
@
void Login::handleError(QAbstractSocket::SocketError e)
{
qDebug() << e;
}
@
Here you would get the actual error.
Ok, I understood that. But I'm getting this:
error: no match for 'operator<<' (operand types are '<unresolved overloaded function type>' and 'QAbstractSocket::SocketError')
qDebug << e;
I'm coming from visual studio so i tried to include the using namespace std; but with no result
Ofcourse you can use the qDebug() in GUI Application. Just include the header file.
Whatever error you get during the connection handleError Slot will be called and the error will get printed.
Did you put the parentheses for qDebug? "qDebug": is a function.
Ok I fixed it and now it runs. But still nothing happens as I press login. I assume the problem is in the login button slot
("127.0.0.1", 9000);
}@
Are you sure it is getting called on click ?
Have you declared on_pushButton_clicked() as a SLOT ?
Yes. I already tested it by opening my next dialog directly. And it works. Isn't there a specific in order in the calling of connect and connectToHost? Or maybe should I place the connectToHost in my handleConnected?
The order is correct. All the signals must be connected before calling "connectToHost":
The signal connected() gets emitted when the connection is established.
Apart from these,
Which OS are you using ?
Try changing the port number. It may be blocked by the firewall.
I'm on windows with my client. However, the server is on a linux machine. I know i wrote 127.0.0.1 but I did that just to hide the real IP. I tried with ui->lineEdit->text() and ui->lineEdit_2->text().toInt() but still doesn't works.
The server is designed to listen to 9000 port. I changed the port and no result. After the client is connected, he should send a message to the server. The server should send back the message 'Hello message'. It's pretty straightforward, but I know it's functional since I got connected from a C client and successfully sent and recieved data. I don't think that the message thing should represent a problem, as I should be able to establish a connection anyway. The server uses select for client acceptance - don't know if it's a relevant info
Is it a public IP ? May be it is taking time to connect to the Server.
Try waiting for some time as the handleError() should return some error atleast like QAbstractSocket::SocketTimeoutError or QAbstractSocket::HostNotFoundError or another..
Another way to test would be to connect to a server using diifferent program.
As a quick test i using the python server, for e.g you can start a python HTTP server like this,
@
python -m SimpleHTTPServer
@
or
@
python3.3 -m http.server
@
By default it listens to port 8000.
So you can try connecting the client to python server with port 8000.
I started a python HTTP server as you said. The server was serving HTTP on 0.0.0.0 port 8000. When I completed the fields of my client with 0.0.0.0 and 8000, i got QAbstractSocket::NetworkError in the bottom side of the screen.
If i completed with the IP i knew and the port 8000, still nothing as I press login
It is public but I just connected to the server with a pre-made C client via putty. The server works, it transmited the message back to the C client
Can you post your complete client code here ?
Sure.
The login.h
@#ifndef LOGIN_H
#define LOGIN_H
#include <QMainWindow>
#include <QtNetwork>
#include <QTcpServer>
#include <QTcpSocket>
#include <QMessageBox>
#include <QDebug>
namespace Ui {
class Login;
}
class Login : public QMainWindow
{
Q_OBJECT
public:
explicit Login(QWidget *parent = 0);
private slots:
void on_pushButton_clicked();
void on_ExitLogin_clicked();
void handleConnected();
void handleError(QAbstractSocket::SocketError);
private:
Ui::Login *ui;
QTcpSocket *pSocket;
};
#endif // LOGIN_H
@
The login.cpp
@#include "login.h"
#include "ui_login.h"
#include <QMessageBox>
#include "menu.h"
#include <QtNetwork>
#include <QLineEdit>
#include <QTcpSocket>
#include <QTcpServer>
#include <QAbstractSocket>
#include <QDebug>
using namespace std;
Login::Login(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::Login)
{
ui->setupUi(this);
}
Login::~Login()
{
delete ui;
}(ui->lineEdit->text(),ui->lineEdit_2->text().toInt());
}
void Login::on_ExitLogin_clicked()
{
QCoreApplication::instance()->exit();
}
void Login::handleConnected()
{
Menu mMenu;
mMenu.setModal(true);
mMenu.exec();
}
void Login::handleError(QAbstractSocket::SocketError e)
{
qDebug() << e;
}
@
The main.cpp
@#include "login.h"
#include <QApplication>
#include <QtNetwork>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
Login w;
w.show();
return a.exec();
}
@
I have some other classes like football, tennis etc since this should be a sports app. However, I have almost no code in them since I just designed them with drag and drop
Anyway it seemed like the client was functional and I would like to thank you for your interest. However, I have another problem now and I'd be glad if you could help. My app offers a menu after I pass the login part. There are a lot of options in the menu. I want to communicate with the server via the same socket used at the login part. My pSocket is declared in the Login class and I have no idea how to make all my other functions in other classes to use it. | https://forum.qt.io/topic/36500/qt-client-to-c-server/9 | CC-MAIN-2018-17 | refinedweb | 1,419 | 59.09 |
Including the JRE in a NetBeans Platform Installer on Ubuntu Linux
The NetBeans Platform provides a very nicely integrated installer infrastructure (NBI) which alllows for creating native installers for various platforms (Linux and Windows, in particular). This does, in a cross-platform way, what various other installers do (NSIS, IzPack, etc), but with the advantage of being NetBeans Platform aware and completely configurable.
It is not immediately clear how to bundle a JVM because using the default "Package As | Installers" option in NetBeans IDE 7.0 will build installers that are dependent on an already installed JVM on the system. In this guide, with help from Geertjan and Dmitry, we aim to configure the build process so that:
- Native installers will be built for (at least) Windows and Linux.
- These native installers will have a bundled JVM, and thus not require the system they are installed on to have a pre-installed JVM.
- Ensure that the installed NetBeans Platform application uses a private JRE (in fact, the same one that was bundled before) which is therefore not visible to other applications in the system.
Getting a nice installer cycle with private JRE under Ubuntu Linux (Windows, very similar), to build both Windows and Linux installers from within Ubuntu.
Create JRE's to bundle
# Creating a directory where I will keep the JREs to bundle in various installers,
mkdir ~/bundled_jres
cd ~/bundled_jres
# 1. Set up Linux JRE
cp -r /some/path/to/linux/jre linux_jre (if using Windows, use xcopy /E or Windows Explorer)
cd linux_jre
# pack200 it
pack200 -J-Xmx1024m lib/rt.jar.pack.gz lib/rt.jar
# zip it
zip -9 -r -y ../linux_jvm.zip .
cd ..
# get unzipsfx for Linux
cp /usr/bin/unzipsfx unzipsfx.bin
# concatenate sfx header with zip payload, this results in a self-extracting jvm executable
cat unzipsfx.bin linux_jvm.zip > linux_jvm.bin (if using Windows, use copy /B unzipsfx.bin + linux_jvm.zip linux_jvm.bin)
# 2. Set up Windows JRE
cp -r /some/path/to/windows/jre windows_jre (if using Windows, use xcopy /E or Windows Explorer)
cd windows_jre
# pack200 it
pack200 -J-Xmx1024m lib/rt.jar.pack.gz lib/rt.jar
# zip it
zip -9 -r -y ../windows_jvm.zip .
cd ..
# get unzipsfx.exe for Windows, can find in
cp /path/to/unzipsfx.exe unzipsfx.exe
# concatenate sfx header with zip payload, this results in a self-extracting jvm executable (Windows)
cat unzipsfx.exe windows_jvm.zip > windows_jvm.exe (for Windows, use copy /B as above)
At this point, we have a set of JVMs (not necessarily only Windows and Linux) which we can re-use for perpetuity, or update when significant new releases of the JRE becomes available.
Set up build.xml for NBI
Now, open <netbeans-install-dir>/harness/nbi/stub/build.xml :
Search for "-generate-bundles" target and replace the <create-bundle> call by the following set of conditional calls (changing /home/ernest to rather be your home dir of choice):
(In the above, one could have multiple JVM's for multiple platforms, and just hook them up in the correct way as here. For example, in the Mac installer above we don't bundle any JVM.)(In the above, one could have multiple JVM's for multiple platforms, and just hook them up in the correct way as here. For example, in the Mac installer above we don't bundle any JVM.)
<!-- Linux installer -->
<if property="platform" value="linux">
>
<!-- Solaris installer -->
<if property="platform" value="solaris">
>
<!-- Windows installer -->
<if property="platform" value="windows">
<create-bundle
<component uid="${main.product.uid}" version="1.0.0.0.0"/>
<property name="nbi.bundled.jvm.file" value="/home/ernest/bundled_jres/windows_jvm.exe"/>
</create-bundle>
</if>
<!-- Mac installer -->
<if property="platform" value="macosx">
<create-bundle
<component uid="${main.product.uid}" version="1.0.0.0.0"/>
</create-bundle>
</if>
After doing this, generated installers should run off their bundled JVM's, but the application itself will not use this JVM, and search for a system JVM. To rather let it depend on this private JRE, continue …
Letting the application depend on the private JVM
Now, we edit ConfigurationLogic.java in <netbeans-install-dir>/harness/nbi/stub/ext/components/products/helloworld/src/org/mycompany/ConfigurationLogic.java to (a) extract the previously bundled JRE to a subdirectory called “jre” upon installation and (b) ensure it is also removed when the application gets uninstalled.
Add import statement:
import org.netbeans.installer.utils.system.launchers.LauncherResource;
At the end of install(), add:
File javaHome = new File(System.getProperty("java.home"));
File target = new File(installLocation, "jre");
try {
FileUtils.copyFile(javaHome, target, true); //FileUtils is one of the NBI core classes, already imported in ConfigurationLogic.java
} catch (IOException e) {
throw new InstallationException("Cannot copy JRE",e);
}
// set permissions:
File binDir = new File(target, "bin");
for (File file : binDir.listFiles()) {
try {
file.setExecutable(true);
} catch (Exception ex) { ex.printStackTrace(); }
}
// to add uninstaller logic:
SystemUtils.getNativeUtils().addUninstallerJVM(new LauncherResource(false, target));
and at the end of uninstall(), but just before progress.setPercentage(Progress.COMPLETE);, add
File jre = new File(installLocation, "jre"); if (jre.exists()) { try { for (File file : FileUtils.listFiles(jre).toList()) { FileUtils.deleteOnExit(file); } FileUtils.deleteOnExit(installLocation); } catch (IOException e) { //ignore } }
Hook up a custom application configuration file, following Geertjan's approach, but ensure that my.conf contains the line
jdkhome="jre"which will be a relative path to the JRE that was installed by ConfigurationLogic's install() method.
Finally, to create our set of installers for various platforms and their respective bundled JRE's, right-click on your suite application, select "Package As", then choose "Installers". To create more compact installers (compressing .jar files using pack200), you can enable this by right-clicking, selecting Properties, select "Installer" and check "Use pack200 compression" - this has the effect that building the installers takes quite a bit longer due to the pack200 compression, but your users will be quite happy to have significantly smaller installers to download.
Many thanks to Geertjan Wielenga and Dmitry Lipin (the NetBeans Installer guru) without whom finding all the necessary ingredients would not have been possible!
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Tito Sanchez replied on Tue, 2011/04/26 - 2:26pm
Excellent article.
Now I'm using NSIS to include JRE in my NBP project but I need deploy it also in MAC and Linux.
Javier Ortiz replied on Tue, 2011/04/26 - 7:57pm
in response to:
Tito Sanchez
Mark Clarke replied on Wed, 2011/04/27 - 4:49am
Ernest Lotter replied on Wed, 2011/04/27 - 8:52am
in response to:
Tito Sanchez
Tito Sanchez replied on Thu, 2011/04/28 - 10:11am
in response to:
Ernest Lotter
Hi Ernest.
I will try to create our installer using NB (currently we have some dependencies: mysql and other tools) but looking this article I have some ideas to apply.
Thanks!
Drew Willis replied on Fri, 2011/11/04 - 1:13pm
This all works wonderfully for windows .exe installers. I am experiencing a small problem in Linux. Despite the fact that the self-extracting bundled linux_jvm.sh file extracts itself correctly, when packaged into the application installer, the application installer extracts the jvm to the jre directory but the execute flag for the files in the <apppath>/jre/bin directory have not been set (which is important for /jre/bin/java). Strangely, it allows the installer to run from the bundled jvm but when I attempt to launch the application, it sees the jre directory and the jvm as <apppath>/jre/bin/java but does not execute the jvm as this file does not have it's execute flag set.
I have verified file access privs and validated the self-extracting zip linux_jvm.sh preserves the execute flags. I have also verified that the app runs properly when I go into the <apppath>/jre/bin directory and execute the command "chmod 755" which sets the execute flag for all these files.
Any ideas?
--dreww
Ernest Lotter replied on Thu, 2011/11/24 - 2:39pm
Adam Metzler replied on Sun, 2011/11/27 - 12:17am
Brian McCormick replied on Wed, 2012/05/02 - 9:09am
Ernest Lotter replied on Sat, 2012/06/30 - 8:50am
Hi Brian, hopefully rather late than never, but here goes: I've made self-extracting images for 32-bit and 64-bit Linux and Windows JRE's, then you can modifiy the Linux section of the build.xml above to:
Same trick works for Windows side, and this results in 32-bit and 64-bit installers placed next to each other for each platform.
Jemiah Aitch replied on Sun, 2012/10/07 - 5:48pm
Javier Ortiz replied on Tue, 2012/11/06 - 9:45am
Would be great to have the Mavenized version of this as well!
Brian McCormick replied on Thu, 2012/12/13 - 5:33pm
Ernest,
Thank for the followup on how to make builds for both 32 and 64 bit versions in the installer. I have a new issue very closely related. I have this app written for a very large organization where only administratiors can install applications. Regular users cannot. They want me to give them a deployment with the embedded JREs like the installer but packaged as a zip distribution instead so that they don't have to do a formal install. I am not an ant script expert but I think if I knew which sectionin the ant scripts I should make the additions/changes I might just be able to figure it out.
Any help would be greatly appreciated.
Thanks!
Matt Coleman replied on Thu, 2013/02/21 - 1:18am
in response to:
Javier Ortiz
YES!!for Maven please!!
buffalo freelance web designer
Gary Chen replied on Tue, 2013/05/21 - 4:25am
One question, who will unpack rt.jar.pack.gz?
Winston Dehaney replied on Tue, 2013/05/21 - 1:10pm
Is this the same way to include any folder at all in the installation process? What happened is that I was using the NetBeans Platform Application module to create a desktop application for a class project. I have my pictures in my modules suite folder, but when I try to make an installation file it doesn't use that folder so my application ends up not showing some pictures. So I was wondering if this would automatically include these folders. Don't know if my request is clear.
Winston Dehaney replied on Mon, 2013/05/27 - 9:30am
OK, guess I'll seee if it's possible
Cata Nic replied on Mon, 2013/09/02 - 4:27am
The archive must be decompressed? I think is a good solution if the process can be explained in a better way.
Enrico Scantamburlo replied on Thu, 2013/12/19 - 7:03am
Ciao and thanks for the tutorial . I took a look also on the NetBeans wiki and they say that I cannot pack some jars in my installer, like jce.jar, .
I really need those files for my application. Do you know something about that? | http://netbeans.dzone.com/including-jre-in-nbi | CC-MAIN-2014-15 | refinedweb | 1,841 | 53.41 |
why raspberry pi 3 fail to read 1000 p/r rotary encoder while my 8 bit nuvoton micro controller did that properly?
-1
I am using LS s40-6-1000ZT(H) 1000 p/r rotary encoder for my project with raspberry pi 3,but raspberry not working properly as per encoder specification(i.e not counting event properly).What could be the issue? Any solution??
from RPi import GPIO from time import sleep import tkinter as tk a=22 b=23 GPIO.setmode(GPIO.BCM) GPIO.setup(a,GPIO.IN) GPIO.setup(b,GPIO.IN) root=tk.Tk() clkLastState=GPIO.input(a) def call(): global clkLastState global counter clkState=GPIO.input(a) if clkState != clkLastState: dtState=GPIO.input(b) if dtState != clkState: counter +=1 else: counter -=1 text=str(counter/2) print (text[0:-2]) var.set(text) clkLastState=clkState def my_callback(channel): call() counter=0 clkLastState=GPIO.input(a) #GPIO.add_event_detect(23,GPIO.BOTH,callback=my_callback) print (counter) var=tk.IntVar() L1=tk.Label(root,bg="orange",fg="red",textvariable=var) L1.pack() # var.set(counter) # root.after(500,main) root.geometry("100x100") #GPIO.add_event_detect(22,GPIO.BOTH,callback=my_callback) GPIO.add_event_detect(23,GPIO.BOTH,callback=my_callback,debouncetime=6) root.mainloop() #var=tk.IntVar() #L1=tk.Label(root,bg="orange",fg="red",textvariable=var) #L1.pack() #GPIO.add_event_detect(23,GPIO.BOTH,callback=my_callback) GPIO.cleanup()
- Ah, let me see. (1) Nuvoton is MCU (Micro Controller), like Arduino, so you can ask her to pay attention to do one thing and just one thing, counting pulses, so never missing any. On the other hand, Rpi is a “multi-tasking” computer, like a naught girl, doing many tasks at the same time, jumping up and down, here and there, therefore you cannot ask her to pay attention to just one thing. (2) If you like, Rpi is sort of ADHD (Attention-deficit/hyperactivity disorder), always absent mindedly miscount pulses. / to continue, … – tlfong01 1 hour ago
- (3) In real life, one workaround is big sister Rpi partnering with the little sister WiFi MCU, ESP8266/32, which won’t miscount pulses and at the same time doing WiFi work (OK, she is doing two tasks at the same time, but not multitasking. Me giving up explaining deeper, you google :)) – tlfong01 1 hour ago
- Actually there are a couple of other work around to reduce miscount. Say if you are using Rpi to do a Micky Mouse project using low speed geared cheapie, ugly looking, yellow toy motors, you can use the “Discard Odd Man Out” moving average counting method. Take an oversimplified sample. Suppose your motor speed is about 600 rpm (revolutions per minute). You can either (1) count pulses every minute, and get the results 600, 601, 599 rpm etc. – tlfong01 13 mins ago
- But this is not very accurate, because Rpi, during that minute, might pause a bit to do other side jobs. Now the trick is to count pulses not every minute, but every second. So now you would get almost always 10, 11, 9, 10 revolutions and so on. If is likely that once in a blue moons, you get abnormal results in some seconds, like 5, or 6 counts in a second. The reason for this odd man out result is because Rpi during that second pause counting and divert to do other house keeping chores, such as answer the door etc. – tlfong01 9 mins ago
- Now you have have already guessed that you simply discard the odd man out and use say only 10 normal second results and multiple by 10 to get the rpm. Actually this trick is not used in toy projects. Even CalTech JPL (Jet Propulsation Lab)’s rocket scientist, using the same, yes, same trick, but using analog hardware, such as operational amplifiers to find the odd man out. They called this trick digital filtering and the most famous version is Karman Filter. – tlfong01 2 mins ago Edit
- You might heard of other Rpi and Arduino guys using this digital filtering trick their balancing their two wheel car algorithms. Ah, evening tea time. See you later. PS I am just thinking aloud, my apologies for all the typos. – tlfong01 26 secs ago Edit
1
pigpio has a more deterministic response to GPIO level changes.
Try my pigpio Python example.
It requires the pigpio daemon.
sudo pigpiod
If Python can’t keep up you will have to use C instead. Perhaps try this pdif2 C example. It also use the pigpio daemon.
Categories: Uncategorized | https://tlfong01.blog/2020/02/13/rotary-encoder-reading-error-discussion-notes/ | CC-MAIN-2021-04 | refinedweb | 750 | 55.24 |
From: John Maddock (john_at_[hidden])
Date: 2007-03-01 04:40:11
Boris Gubenko wrote:
> John Maddock wrote:
>> Can you test the attached patch then? If you can assure me that the
>> RW version check is correct (presumably you will up the version
>> number if the issue gets fixed?), and that it has no unfortunate
>> effects on the other Boost regression tests [...]
>
> Unfortunately, the patch broke some tests, in particular, 9 tr1
> library tests. This is because roguewave.hpp #include's <iterator>.
> Just
> inclusion of <iterator> makes the tests fail. I did not analyze the
> failures, I guess, the tests don't expect <iterator> to be included.
>
> I think, we have 3 options:
>
> 1) put your fix in some regex library header which includes <iterator>
> anyway.
>
> 2) put my original fix which does not require any headers in
> roguewave.hpp
>
> 3) do nothing: mark tests affected by this RW bug expected failures
> and wait for the compiler kit with the fix (for the reference, this
> RW bug is tracked internally as CR JAGag33243).
>
> I'm running the tests with 2) now, to verify that it indeed is a
> viable solution.
Oh shucks, I forgot about that, try:
# ifndef BOOST_TR1_NO_RECURSION
# define BOOST_TR1_NO_RECURSION
# define BOOST_CONFIG_NO_ITERATOR_RECURSION
# endif
# include <iterator>
# ifdef BOOST_CONFIG_NO_ITERATOR_RECURSION
# undef BOOST_TR1_NO_RECURSION
# undef BOOST_CONFIG_NO_ITERATOR_RECURSION
# endif
In any case this is too late for 1.34 now whatever.
Thanks,
John.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2007/03/117235.php | CC-MAIN-2020-29 | refinedweb | 249 | 58.38 |
Defines the.
<Field
ID="Text"
Group="Text"
MaxLength="Text"
SourceID="Text"
StaticName="Text"
/>
ID
Required Text. Specifies the GUID of the field.
Group
Optional Text. Specified the site column group to which this site column belongs.
MaxLength
Optional Integer. Specifies the maximum number of characters allowed in a field value. Edit forms are adjusted to enforce this value, which is validated on the client. If the user attempts to enter more than the number of characters set by MaxLength, an error message appears.
SourceID
Optional Text. Contains the namespace that defines the field, such as, or the GUID of the list in which the custom field was created.
StaticName
Optional Text. Contains the internal name of the field.
None. (See Remarks)
Elements Element (Field).
There seem to be a lot of attibutes that are not documented? Here are a few:
rowOrdinal, Sealed, Hidden, required, fromBaseType... the list goes on....
Where is the "complete" documentation located for a field? | http://msdn.microsoft.com/en-us/library/aa979575.aspx | crawl-002 | refinedweb | 158 | 62.34 |
SQLAlchemy Frequently Asked Questions
- General
- Setting Up/Connecting
- How do I configure logging ?
- The "help" keyword (i.e. help(sqlalchemy)) and "pydoc" commands dont work
- Do I have to do something special to use connection pooling ? Are my connections pooled ?
- How do i specify (insert custom DBAPI argument here) to my connect() function?
- MySQL server has gone away / psycopg.InterfaceError?: connection already closed
- MetaData / Schema
- I am using autoload=True and <insert your problem here>
- My program is hanging when I say table.drop()/metadata.drop_all()
- Does SQLAlchemy support ALTER TABLE, CREATE VIEW, CREATE TRIGGER, Schema Upgrade Functionality ?
- SQLAlchemy gets confused when i try to drop arbitrary database objects that have dependencies on other things it doesn't know about
- How can I sort Table objects in order of their dependency?
- How can I get the CREATE TABLE/ DROP TABLE output as a string ?
- How do I map a column that is a Python reserved word or already used by SA?
- SQL/Transactions
- Object Relational Mapping
- How do I attach an aggregate column (or other SQL expression) to an ORM Query? i.e. max(), count(), x*y etc.
- FlushError: instance <someinstance> is an unsaved, pending instance and is an orphan
- I have a schema where my table doesnt have a primary key, can SA's ORM handle it?
- a single object's primary key can change, can SA's ORM handle it?
- why isnt my __init__ called when I load objects?
- how do I use ON DELETE CASCADE with SA's ORM ?
- I've created a mapper against an Outer Join, and while the query returns rows, no objects are returned. Why not ?
- I'm using "lazy=False" to create a JOIN/OUTER JOIN and SQLAlchemy is not constructing the query when I try to add a WHERE, ORDER BY, LIMIT, etc. (which relies upon the (OUTER) JOIN)
- I set the "foo_id" attribute on my instance to "7", but the "foo" attribute is still None - shouldn't it have loaded Foo with id #7 ?
- How Do I use Textual SQL with ORM Queries ?
- Why is my PickleType column issuing unnecessary UPDATE statements ?
- Is there a way to automagically have only unique keywords (or other kinds of objects) without doing a query for the keyword and getting a reference to the row containing that keyword?
- Integrating with external tools
General
Whos using SQLAlchemy?
We have a wiki page set up at Applications and Sites which collects everything going on right now.
What license is SQLAlchemy licensed under ?
SQLAlchemy is under the MIT License.
Setting Up/Connecting
How do I configure logging ?
SQLAlchemy as of 0.3.0 uses Python's logging module for logging. This is enabled via the standard logging methodology, using named loggers. SQLAlchemy logs database conversations underneath the sqlalchemy.engine namespace, and the ORM dumps debugging messages to a variety of names underneath the sqlalchemy.orm namespace.
A generic logging setup looks like:
import logging logging.basicConfig() logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO) logging.getLogger('sqlalchemy.orm').setLevel(logging.INFO)
Also note that using echo=True on the create_engine or create_session functions will try to set up its own logging configuration, which may conflict with an explicit logger config.
The "help" keyword (i.e. help(sqlalchemy)) and "pydoc" commands dont work
This is a known Python bug which is fixed in version 2.5:
Do I have to do something special to use connection pooling ? Are my connections pooled ?
No and yes. If you have called create_engine(), the returned Engine object has a reference to a connection pool in all cases. The class used for this pool in most cases is sqlalchemy.pool.QueuePool, but in the case of SQLite is sqlalchemy.pool.SingletonThreadPool. The type of pool used as well as its characteristics can be controlled, see the docs.
How do i specify (insert custom DBAPI argument here) to my connect() function?
Method #1:
e = create_engine('postgres://url', connect_args={'sslmode':True})
Method #2:
e = create_engine('postgres://url?sslmode=something')
Method #3:
def conn(): return psycopg.connect(url, arg1, arg2, kwarg1=foo, kwarg2=bar) e = create_engine('postgres://', creator=conn)
MySQL server has gone away / psycopg.InterfaceError?: connection already closed
This is usually symptomatic of the database closing connections which have been idle for some period of time. On MySQL, this defaults to eight hours.
Use the pool_recycle setting on the create_engine() call, which is a fixed number of seconds for which an underlying pooled connection will be closed and then re-opened. Note that this recycling only occurs at the point at which the connection is checked out from the pool; meaning if you hold a connection checked out from the pool for a duration greater than the timeout value, the timeout will not work. (it should be noted that it is generally bad practice for a web application to hold a single connection open globally; Connection objects should be obtained via connect() and closed/removed from scope as needed).
Note: With MySQLdb specifically, this error was also occuring due to sloppy re-use of connections within SA's connection pool. As of SA 0.3.3 these issues have been fixed.
MetaData / Schema
I am using autoload=True and <insert your problem here>
The autoload feature, which is also known as table reflection, is by far the most troublesome feature of SA not because SA has skimped on it, but because it encompasses the widest number of variables, including not only the specific quirks of a particular schema, but also the myriad quirks/limitations/"features" of the database, the platform it runs on, the DBAPI driver, and the specific versions and configurations of both. Lots of scenarios simply are not possible in certain scenarios, such as case-sensitive schemas, foreign key reflection, etc.
Before reporting an issue with reflection, please read fully the notes for your particular database over on the DatabaseNotes page to see if its a known issue.
It should be noted that no application really needs reflection in order to function....so while a reflection issue is frustrating, keep in mind that to work around it, just define the Table and Column objects explicitly...which is how many other ORMs such as Hibernate require you to work anyway.
My program is hanging when I say table.drop()/metadata.drop_all()
This usually corresponds to two conditions: 1. using postgres, which is really strict about table locks, and 2. you have a connection still open which contains locks on the table and is distinct from the connection being used for the DROP statement. Heres the most minimal version of the pattern:
result = mytable.select() mytable.drop()
the "result" from mytable.select() still holds onto a connection, which was pulled from the connection pool. Saying mytable.drop() then pulls a second connection from the pool, and hangs because postgres still has open table locks in the first connection.
Two solutions:
1. close the result. this will return the underlying connection to the connection pool, which issues an unconditional ROLLBACK statement on all returned connections, thus releasing any locks.
result = mytable.select() result.close() mytable.drop()
2. use the same connection for both operations
conn = engine.connect() result = conn.execute(mytable.select()) mytable.drop(connectable=conn)
Does SQLAlchemy support ALTER TABLE, CREATE VIEW, CREATE TRIGGER, Schema Upgrade Functionality ?
The only instance where SA automatically renders ALTER TABLE is when you use the use_alter flag on a ForeignKey construct, which is used to create tables with mutually dependent foreign keys.
Everything else related to schemas and things beyond basic table creates and drops, such as CREATE SCHEMA, CREATE VIEW, CREATE PROCEDURE, aren't "built in", but as of 0.4.3 the DDL() construct is ideal for specifying statements of this nature, and can be attached to Table or MetaData objects to be executed at specific times during create or drop calls.
The Migrate project at also seeks to provide a full suite of schema migration utilities.
SQLAlchemy gets confused when i try to drop arbitrary database objects that have dependencies on other things it doesn't know about
Like the previous question, SA is not a comprehensive DDL management solution and does not intend to be (however if you were to write a DDL management solution in Python, SQLAlchemy would be a great foundation to start with). It does not have the ability to magically figure out everything there is to know about a database, nor can it strategize the creation/dropping of objects beyond its basic topological sort of a set of Table objects within a single MetaData.
How can I sort Table objects in order of their dependency?
metadata = MetaData() # ... add Table objects to metadata ti = metadata.table_iterator() for t in ti: print t
As of the sort_tables function is available individually:
from sqlalchemy.sql.util import sort_tables for table in sort_tables([t1, t2, t3], reverse=True): print table.description
How can I get the CREATE TABLE/ DROP TABLE output as a string ?
The "mock" engine strategy supports sending a callable which will intercept all execute calls.
from sqlalchemy import * from StringIO import StringIO buf = StringIO() engine = create_engine('postgres://', strategy='mock', executor=lambda s, p='': buf.write(s + p)) meta = MetaData() t1 = Table('sometable', meta, Column('foo', String(30))) meta.create_all(engine) print buf.getvalue()
How do I map a column that is a Python reserved word or already used by SA?
table_a = Table ('tbl_a', metadata, autoload=True) mapper(AObj, table_a, properties={'type_col' : table_a.c.type, 'pass_col':table_a.c['pass']})
SQL/Transactions
I am using multiple connections with a SQLite database (typically to test transaction operation), and my test program is not working !
SQLite by default uses SingletonThreadPool to manage connections. This means that within a single thread, the connection pool will return the same connection instance in all cases. There are several reasons for this. The best one is that when using a memory: database its required, since a new connection to memory: gives a new database. Another is that with file-based databases theres usually not much reason you'd want multiple connections to the same file within a single thread, and some SQLite versions can deadlock since the database file is locked on write operations (second connection tries to execute in the same thread -> deadlock). A third reason is that many versions of SQLite also do not allow connection instances to be moved between threads (often due to the non-threadsafety of flock()), hence the regular multithreaded connection pool will fail.
Since SQLite changes frequently, there may be options/switches to enable regular multithreaded communication with a particular SQLite file-based database. If this is your case, you can instruct the create_engine call to use the normal connection pool as follows:
from sqlalchemy import pool e = create_engine('sqlite:///foo.db', poolclass=pool.QueuePool)
SQLite: SQL logic error or missing database
related to the previous FAQ entry, this error occurs when results still remain on a select statement, the ResultProxy from that statement has not been closed, and you have tried to issue another command (like an INSERT) to the SQLite database. It goes to use the same connection (since only one connection per thread) and fails. Ensure that all previous result sets are fully fetched and/or the results are closed using close().
How do I get at the raw DBAPI connection when using an Engine ?
With a regular SA engine-level Connection, you can get at the psycopg2 connection via conn.connection. the connection and cursor returned by cursor() are proxied for garbage collection/connection pool purposes but call upon the underlying DBAPI API for their behavior and methods:
engine = create_engine(...) conn = engine.connect() conn.connection.<do DBAPI things> cursor = conn.connection.cursor(<DBAPI specific arguments..>)
you also want to make sure you revert any isolation level settings on the connection back to normal before returning it to the pool.
or, as an alternative to that, you can call the detach() method on either Connection or the proxied connection, which will remove it from the pool; it gets closed for real when you close() it and whatever changes you've made on it don't pollute the overall pool of connections:
conn = engine.connect() conn.detach() # detaches the DBAPI connection from the connection pool conn.connection.<go nuts> conn.close() # connection is closed for real, the pool replaces it with a new connection
Object Relational Mapping
How do I attach an aggregate column (or other SQL expression) to an ORM Query? i.e. max(), count(), x*y etc.
At the query level the column can be added using add_column(), and then adding in the appropriate join and group by criterion:
session.query(MyClass).group_by([c for c in MyClass.c]).add_column(func.count(MyClass.c.whatever).label( 'count')).list()
Additional criterion can be added using generative functions like join(), filter(), filter_by(), etc. Such as if the aggregate function were against a different table, these functions would be used to add the appropriate join criterion to the second table.
The returned result will be tuples consisting of instances and scalars, i.e.
[ (<MyClass instance>, 5), (<MyClass instance>, 12) ]
Other ways to add aggregates include:
- creating the mapper() against the desired select()
q = select([mytable, func.count(mytable.c.somecolumn)], group_by=[c for c in mytable.c if c is not mytable.c.somecolumn]) mapper(MyClass, q)
- adding the expression using column_property():
mapper(MyClass, mytable, properties={ 'count':column_property(select([func.count(othertable.c.related_id)], mytable.c.id==othertable.c.related_id, scalar=True).label('count')), 'product':column_property((mytable.c.somecol * 5).label('product')) })
- using the desired select() directly with Query:
q = select([mytable, func.count(somecolumn)], group_by=[c for c in mytable.c]) result = session.query(MyClass).select(q) # or result = session.query(MyClass).instances(q.execute())
Of course, if the aggregate value need not be queried inline with the main mapped query, you can just initiate a regular SQL query for the count value (which can be assembled into a class property easy access).
FlushError: instance <someinstance> is an unsaved, pending instance and is an orphan
This corresponds to the usage of the delete-orphan cascade rule. This rule establishes the "parent" object as required for a particular relationship, such as:
mapper(Parent, sometable, properties={ relation(Child, cascade="all, delete-orphan") })
With the above relationship, no newly created instance of Child can be flushed without being attached to a Parent.
The error occurs when you do specifically this:
c = Child() session.save(c) session.flush()
I.e., a new Child was created and saved to the session. This establishes its state as "pending". The operation is invalid because "c" cannot be inserted (its an orphan), but also cannot be deleted (its not saved). Since it doesn't make any sense to save() a Child instance without a parent, the error is raised (SQLAlchemy does not want to guess if a strange user operation is intentional or not).
I have a schema where my table doesnt have a primary key, can SA's ORM handle it?
jonathan ellis pointed us all to a great series of articles concerning primary keys:
the gist of the articles is that *any* concept you are trying to store in a database usually has some kind of unique identifying key.
if you dont have any columns explicitly defined as part of a primary key in your table schema (which you probably should), you can still tell a mapper what columns compose the primary key like this:
mapper(someclass, sometable, primary_key=[sometable.c.uid,sometable.c.bar])
a single object's primary key can change, can SA's ORM handle it?
Yes ! As of version 0.4.2 primary keys on entities can change without issue. After a flush the instance's "identity_key" is updated to reflect the change, and the instance is moved in the identity map to the new identity.
What needs to be kept in mind is that references to related objects, who have a foreign key referencing the primary key which is changing, need to be updated as well. SQLAlchemy will issue the UPDATE statements if the passive_updates=False flag is set on a relation(). However, this flag is not appropriate if the database is actually enforcing referential integrity; in that case, ON DELETE CASCADE must be configured on the database tables themselves so that the database can cascade the update, this includes databases like Postgres, Oracle, and MySQL when InnoDB tables are used.
why isnt my __init__ called when I load objects?
SQLAlchemy instantiates objects that are loaded from the database via the __new__() method, which bypasses the __init__() method. This allows objects to follow the semantics of being stored and later reloaded into memory.
Typically, initialization that takes place in an object's constructor is to set up internal state variables for the first time. It is usually the case that these internal state values are part of the state which is persisted to the database; therefore when the object is loaded again, it would be inappropriate to re-initialize this state since its being loaded from the database !
For example, an Order object used in an ecommerce application:
class Order(object): def __init__(self, user, items): self.user = user self.items = [Item(name) for name in items] myorder = Order(user, items=['item 1', 'item 2']) session.save(myorder) session.flush()
Above, the user and items objects are part of the state which is saved to the database. Additionally, the Order constructor created new Item objects upon construction, which automatically get saved to the session using cascade rules. Its clear that the above Order object would fail badly when reconstructed from a database row, since no arguments would be available. If __init__() were expected to be called with no arguments, constructors would be forced to not have any required arguments and would have to check all incoming arguments for None, which actually was the case in early versions of SA. But also, the Order would have no indication if it were an already-saved instance or a brand new one.
Another way to view it, is that calling __new__ is more equivalent to your objects being "pickled" and restored from their "pickled" state.
In general, having loaded objects created via __new__() alone instead of __init__() allows objects to differentiate between construction and loading.
if you need customized things to be done when the instance is loaded back from the database, the idea is that youd make a MapperExtension with create_instance() overridden, which returns a new instance with whatever else you wanted (i.e. it can just create the object normally):
class MyExt(MapperExtension): def create_instance(self, mapper, selectcontext, row, class_): return MyObject() mapper(MyObject, table, extension=MyExt())
how do I use ON DELETE CASCADE with SA's ORM ?
you should be able to have ON DELETE CASCADE set up in your database normally. however, you have to make sure that the mappers you set up contain the relationships that correspond to those cascade rules. What will happen then is if you mark an object instance as deleted and then flush, SQLAlchemy will also delete all child objects that have been set up with the "delete" cascade rule. In this scenario, SA will do the deletions in the correct order before the database-level CASCADE rule ever gets to it.
SQLAlchemy will either "actively" load in child instances during a cascading delete operation, or "passively" process only those child instances which were already loaded into the session. This setting is controlled by the passive_deletes boolean flag on the relation() function, which defaults to False, indicating that it will "actively" load in child instances for deletion.
Setting this flag to True when ON DELETE CASCADE is configured on the database may save a lot of extra load operations, particularly for large tables.
I've created a mapper against an Outer Join, and while the query returns rows, no objects are returned. Why not ?
The ORM Query object will not return an instance if the row does not contain a full primary key. For an outerjoin such as books_table.outerjoin(fulltext_table), the natural primary key of this join would be the primary key column(s) of books_table + the primary key column(s) of fulltext_table. Such as:
[books_table.c.id, fulltext_table.c.id]
When mapping to a join like this, you have to make the determination what the "primary key" of a particular row from your outerjoin should be. By default, it will be the set of primary key columns from all involved tables, "reduced" down to the minimal set of unique columns among them, as is the case above.
If you decide that the books_table alone determines primary key, then you can override this default using the primary_key argument:
mapper(MyClass, books_table.outerjoin(fulltext_table), primary_key=[books_table.c.id])
If, on the other hand, the combination of columns from both tables do comprise the primary key, since we are mapping to an outerjoin which may contain NULLs for the outer joined table, the mapper needs to be told that the composite primary key may contain a NULL for one or more (but not all) of the columns, but is still a valid primary key. For that you add the flag allow_null_pks=True to your mapper:
mapper(MyClass, books_table.outerjoin(fulltext_table), allow_null_pks=True)
I'm using "lazy=False" to create a JOIN/OUTER JOIN and SQLAlchemy is not constructing the query when I try to add a WHERE, ORDER BY, LIMIT, etc. (which relies upon the (OUTER) JOIN)
If the query you are constructing requires joining to a second table, that is not what eager loading is used for. You instead need to tell your Query object to join explicitly. Pretend the eager loading is not there !
SA's eager loading capability is designed to load full sets of child objects on a parent object, in such a way that it is completely transparent if lazy or eager loading is used. To accomplish this, it uses aliases so that the extra tables are "anonymized" against the normal query criterion, which is why explicit query criterion does not affect them. The goal is that whether or not eager loading is used, the same Query criterion will produce the *identical* result.
For information on how to set up explicit joins, see the tutorial on "Querying with Joins" at.
I set the "foo_id" attribute on my instance to "7", but the "foo" attribute is still None - shouldn't it have loaded Foo with id #7 ?
SQLAlchemy doesn't currently tie event handlers to foreign key-holding object attributes. While this is something we may try in a future release, traditionally SA's usage model has focused on dealing with object instances and their direct association with each other in Python, rather than associating them through manual manipulation of foreign key attributes. Particularly with objects that aren't even persisted in the database yet (i.e. transient), we don't like to populate attributes "automagically" based on some other event, with the one notable exception being backreferences via the backref flag.
For this particular behavior, the foreign key attribute would "expire" the related item. So setting foo_id on your Bar to 7 means that Bar.foo gets *deleted*, and will load Foo id #7 the next time you access Bar.foo; otherwise setting a whole set of individual attributes would immediately trigger that many individual SELECT statements and be extremely wasteful.
But, what if you had previously set Bar.foo to something already, which conflicted with id #7, do we just wipe it out, or raise an error ? Furthermore, the presence of the reverse collection of bars on Foo #7 is concerning as well...do we expire that collection? What about other Bar instances which may have been newly added to it? Or, do we immediately flush and reload Foo's collection of Bars ? Again the setting of attributes has created a complex set of decisions to make. For reasons like these, the level of surprise behavior and potential for either wasteful or outright erroneous situations seems high, and in the spirit of "don't guess" we've done just that.
So, when you do set a foreign key attribute and you want to load the related items in, just use session.refresh() or session.expire(). If operating upon the whole instance, issue a flush() first. But, as of 0.4.1 these methods also take a list of individual attribute names as arguments, so you can even do it without flushing, by just expiring the individual attribute:
bar.foo_id = 7 session.expire(bar, ['foo']) # expires the 'foo' attribute on `Bar` print "foo #7:", bar.foo # immediately loads Foo #7
How Do I use Textual SQL with ORM Queries ?
Textual blocks can be assembled at any point within Query. Individual components can be sent to filter():
session.query(User).filter("id<:value and name=:name").\ params(value=224, name='fred').one()
and full statements can be used with from_statement():
session.query(User).from_statement("SELECT * FROM users where name=:name").params(name='ed').all()
Why is my PickleType column issuing unnecessary UPDATE statements ?
When pickling a Python datastructure such as dict or set, Python's pickle module does not produce the same picklestring for the same collection each time. The default behavior of PickleType when used in the ORM is to check for changes by comparing the full pickled string, so that it can detect changes within the structure. To disable this behavior and instead use the equals operator, set up the column using PickleType(mutable=False):
Column('mycol', PickleType(mutable=False))
Note that PickleType was using the is operator in this case, previous to version 0.4.1.
Alternatively, any desired comparison function can be installed on PickleType using the comparator argument:
def mycomparator(x, y): return x == y # ... Column('mycol', PickleType(comparator=mycomparator))
Is there a way to automagically have only unique keywords (or other kinds of objects) without doing a query for the keyword and getting a reference to the row containing that keyword?
When people read the many-to-many example in the docs, they get hit with the fact that if you create the same Keyword twice, it gets put in the DB twice. Which is somewhat inconvenient.
This recipe was created to address this issue: UsageRecipes/UniqueObject
Integrating with external tools
py2exe
There is a problem when converting scripts using SQLAlchemy to windows executable with py2exe. Problem and its possible solution was described in this post. And patch for SQLAlchemy trunk (as for 2007-02-08) is in attachment to this post. | http://www.sqlalchemy.org/trac/wiki/FAQ | crawl-001 | refinedweb | 4,414 | 53.51 |
Raspberry Pi
Raspberry Pi Logging Program
Written by Matthew Kirk.
In the previous section, we wrote a program to log temperature data. But it was all done from the command line, and our program was designed around that one type of data— we couldn't use it to log anything else.
So to make our temperature logging program into a fully fledged data logger that can deal with any type of measurement, and to make it easier to use, I wrote a graphical interface to the program, that lets you select what kind of data you want to measure using a "plugin" system (see below), and lets you fiddle with the settings of how you record the data.
The GUI that I have created allows us to write small pieces of code that will measure data from a new type of sensor, and easily get the sort of data logger that was created in my other tutorial for a temperature sensor, without having to write anything more.
You can download the program from here. This package also contains the memory sensor and temperature sensor plugins, so you can get straight down to measuring something. To run it, you will need to install the PyQt4 package using sudo apt-get install python-qt4. If you want to be able to live plot the data as you are measuring it, you will need to install gnuplot using sudo apt-get install gnuplot-x11. To use the temperature sensor plugin, you will need to update the kernel, using the instructions in this step and then install the RPi.GPIO Python module, as described in the first two bullet points of this step. To run the program, first extract the files from graphical-logger.tar.gz by right clicking on the file and selecting "Extract here" from the menu, or by using tar -xzf graphical-logger.tar.gz. Then find the file main.py, double-click on it and select "Execute".
Our extra piece of code that "plugs in" to the already written logging code to give it extra abilities is known as a plugin, and this document will lead you through the simple steps to write a plugin.
These screenshots show the two tabs of the GUI logging program, the plugin tab and the configuration tab. The main tab lists what plugins have been written, and gives you the option of plotting the data live as it is collected. The configuration tab lets you change the settings of how the data is collected, and how it gets stored.
First Things First
To start with, we'll need to find a sensor we want use, and work out how to get measurements from it in Python. Once you have done this, we can start writing our plugin.
Creating a plugin
To create a plugin, we need to write a Python program, but in a particular way, an example of which is shown below:
class MyNewSensor(Sensor):
return data
What we have here is a Python class. Put simply, a class is a way of representing a "thing" to a computer. In our case, the thing we want to represent is a sensor. This MyNewSensor class is based on the Sensor class, and has a single function called get_data. This function should do whatever it needs to to get data from the new sensor, and then should return the data.
Because we are using the Sensor class that I have written, all the rest of the functions that could be used (see the end of this tutorial) are already taken care of for us. But if we want to make our plugin program use these extra functions, then all we have to do is add our own version to our code, and the logging program will make use of those instead. This is one of the most important features of this "class-y" way of programming.
For those who are interested, this business with the classes is part of what is known as "Object Oriented Programming", or "OOP". This is a style of programming where you try and write your program using classes to represent different things in your program. When you use a class in your program, we call them objects, hence the name Object Oriented Programming. Some more modern programming languages like Java were designed around this style, and others, like Python, allow you to use classes and objects when you want to.
The feature I described above, whereby the predefined code from Sensor.py is used, unless you write your own, is known as inheritance, and is one of the most useful and powerful features of OOP. It is called inheritance because the Sensor class I have written is called the "parent" class, and the class we write ourselves the "child" class, and functions in the parent class get "inherited" by the child.
The naming scheme is important - if your class is called MyNewSensor, it needs to be put in a file called my_new_sensor.py, otherwise it won't work with the logging program.
What more can we do?
What is shown here is the minimum amount of code that we need to write. There are a few other functions that we could write, which are understood by the logging program. These functions are setup, wait_for_start, stop_measuring and finished. Using these extra functions, we can make use of more complicated sensors, that need some setting up before measurement, or have our logging program react to input. In the file _sensor.py, there are more detailed descriptions of what these functions can do, but here is an overview:
- setup - Any bits that need to be done before taking measurements, such as setting up variables or turning on something.
- wait_for_start - If your circuit has some way of indicating that logging should start, like a button, this function is where you should wait for it.
- stop_measuring - If you have another button or something to indicate that you want to stop measuring, this is where you should check this. If you want this function to be used, you will also have to set can_quit to True in your class.
- finished - Shutdown or stop anything that needs to stopped once logging stops, such as turning off an LED.
If you read the temperature_sensor.py file, you will see how each of these functions are used to make that plugin behave exactly as the temperature logger program that was developed in the other tutorial. If you haven't worked through that tutorial, and so don't know how that temperature plugin should work, don't worry. Included in the program download along with the temperature sensor plugin is a memory sensor plugin (which uses all the available functions), and a complete guide to writing this plugin is in the next section.
Plugin Tutorial
This is a tutorial on how to write your own plugin script, but with a slight twist. Instead of making use of a hardware sensor, we are going to get our data from the Pi itself - this means you can do this yourself without having to get hold of any bits and pieces, and there's a much lower risk of burning your finger on anything.
What we are going to measure is how much memory is available on the Pi - that is, how much free RAM we have, in megabytes. Our plugin will also record the percentage of free RAM. We are going to use all five functions available to us, to make our plugin start and stop is slightly interesting ways. As we are measuring memory let's call our class MemorySensor, and so save it to a file called memory_sensor.py.
Let's start with setup. In our plugin, we will write the word "started" to a file in the current directory.def setup(self):
filename = "logging" with open(filename, "w") as f:f.write("started")
The with statement here you may or may not have seen before - it is a Python-y way of opening files to avoid having to remember to close them later. The code above is completely equivalent to:def setup(self):self.filename = "logging"
f = open(self.filename, "w")
f.write("started")
f.close()
However, the version using with is generally considered better.
Next, we are going to write the wait_for_start function. In our plugin, the wait for start will be a simple delay, rather than being a wait for input.def wait_for_start(self):time.sleep(5)
The function sleep is in the time module, as you can see, so in order to use this, we will also have to add the line import time to the begining of the text file.
Now, for the most important function - the get_data function. Our data comes from the file /proc/meminfo, which contains lots of information about the RAM. Try doing cat /proc/info to see what format the information comes in. You'll see something like this:
As you can see, the information in the file has the format of a description, folllowed by a colon, then some blank space, then a number, and a unit.
So to get out the amount of free RAM, we will need to read this file, select the second line, separate out the number, and then divide by 1000 to get the value in MB. To get the percentage free, we also need the value from the first line, the total memory, and we'll need to divide the free memory by the total and multiply by 100.def get_data(self):with open("/proc/meminfo") as f:line1 = f.readline()total_mem_KiB = float(line1.split()[1]))
In this code, we are opening the file /proc/meminfo and reading the first two lines. Then we take the strings we get, and split them up using the whitespace, and select the middle section, and turn that piece of text into a float.
Then we do something that is (possibly) slightly unexpected. We take our number, and multiply by 1.024 and then divide by 1000. As you might guess from the variable names, this is because, despite what the file says, the numbers is gives us are in kibibytes (KiB), not kilobytes (kB). The reason behind this is slightly complicated, but unless you really care, it's not worth worrying about.[1] All I'm doing is just changing the units.
Finally, we also calculate the percentage of free memory, and return a tuple contain both pieces of data. Because we are returning two bits of data, we also need to change the value of the variable no_of_measurements to 2. Because we know we will always want to return two different pieces of data, we can change this variable at the begining of the program. But if we weere unsure about how many pieces of data we would be returning, we could set this variable in the setup function, as is done in the temperature plugin.
Remember the file we created in setup? Well, how about we make it so that deleting the file will stop the logging?def stop_measuring(self):file_exists = os.path.isfile(self.filename)
return not file_exists
The function isfile, which is in the module os.path (so we need to do import os.path), returns True or False depending on whether or not a file exists at the name the we give it. So if it returns True, the file does still exist, and so we return not True which is False.
There is one last thing we need to do to make this work. If you read the _sensor.py file, you'll see that if we want to allow stopping half way through, we need to set the value of the variable can_quit to True. This is because the code in logger.py checks this variable and doesn't use the stop_measuring function.[2]
The last function we are going to write is the finished function. To finish off, we will get rid of the file that we created at the begining in setup.def finished(self):if os.path.isfile(self.filename):os.remove(self.filename)
Now we've written all our code, this is what the full program looks like:
import os.path
from plugins._sensor import Sensor
class MemorySensor(Sensor):
def setup(self):
with open(self.filename, "w") as f:
def wait_for_start(self):
def get_data(self):)
def stop_measuring(self):
return not file_exists
def finished(self): | https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/temperature/plugins.html | CC-MAIN-2017-09 | refinedweb | 2,069 | 69.92 |
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
You can subscribe to this list here.
Showing
2
results of 2
Hello folks-
Here is the latest batch of plugin updates, covering the last 2 months.
It contains 16 updates and additions to Plugin Central. All of the new
releases work with both jEdit 4.0 and 4.1 except LaTeXTools 0.1.1 and
XML 0.9, which are 4.1 only.
* AntelopePlugin 1.141: added Assert, If, Try, and Variable Ant tasks;
now works in multiple views in jEdit; rewrote docs in docbook format,
added a lot of documentation; improved AntLogger; added option for
display of targets starting with a dash; requires jEdit 4.0pre1,
Console 3.3, ErrorList 1.2, CommonControls 0.3, and JDK 1.4
* BufferList 0.8: changed display so that files are grouped by directory
now; removed list of recent files; entries are shown in File System
Browser colors; allow operations on multiple selections of buffers:
save, close, reload (This allows you to close, for example, all files
of one directory at once); bugfix: bufferlist showed up on buffer
change even if "Auto-show on buffer change" was off; requires jEdit
4.0final and JDK 1.3
* CodeBrowser 1.0.0: initial Plugin Central release; requires jEdit
4.0pre4 and JDK 1.3; requires separate download of Exuberant C Tags
from
* CodeLint 0.1: initial Plugin Central release; requires jEdit 4.0pre1,
ErrorList 1.2, Console 3.1, and JDK 1.3; requires separate download of
JLint from
* CommonControls 0.5: something new in PopupList.java; requires jEdit
4.0pre8 and JDK 1.3
* Dict 0.3: initial Plugin Central release; requires jEdit 4.0pre8 and
JDK 1.3
* GruntspudPlugin 0.1.0-beta: it is now possible to turn off the use of
the native file mode handler (i.e. chmod on *nix, attrib on windows)
and use Java's File.setReadOnly method; all the files now show in
report dialog when commands are in different directories (multiple
commands run); an empty message on import is now caught before the
command starts to run; Revision / Branch options are now blank upon a
new import; text file detection now takes tabs into account; refreshes
better upon file updates (CVS, external); last selected view is now
reselected correctly when a view is closed; more spurious rescans
removed; selection is kept when flat mode is toggled; many other
changes; requires jEdit 4.0pre4, JDiff 1.3, Console 3.1, InfoViewer
1.1 (optional), and JDK 1.3
* JTools 1.2.1: adds a couple of options to the toggle comments tools: 1
- The selection can be retained after toggling comments. 2 - A new
mode for line comments allows comment symbols to be inserted at the
leftmost indent of the selected lines; requires jEdit 4.0pre1,
CommonControls 0.2, ErrorList 1.2, and JDK 1.3
* JythonInterpreter 0.9: fixed bugs #606782, #536885, #524056; added
option to uset UP/DOWN for history navigation; added option to reuse
the export to file; added option for the export to file to remember
the original source file (when export to is pressed it will use again
the original source file); removed all old Console dependencies; added
option pane to define colors and font; requires jEdit 4.0pre6 and JDK
1.3
* LaTeXTools 0.1.1: initial Plugin Central release; requires jEdit
4.1pre4 and JDK 1.3
* LookAndFeel 0.10: updated Skin LAF to version 1.2.3; updated Kunstoff
to version 2.0.1; updated Metouia to version 1.0 beta; added option
for setting all widget fonts to use fonts specified from Global
Options > Appearance (Chris Kent); requires jEdit 3.1final and JDK
1.2; includes Skin L&F 1.2.3, Kunstoff 2.0.1, Metouia 1.0 beta, and
Oyoaha 3.0rc1
* PMDPlugin 1.0: updated to use PMD-1.01; requires jEdit 4.0final,
ErrorList 1.2, and JDK 1.3; includes PMD 1.01
* Sessions 1.0.1: new option to add the session toolbar at the top of
the BufferList plugin (needs BufferList 0.8 or higher); fixed bug
#611660: Sessions 0.7.3 referencing wrong icons; requires jEdit
4.0final and JDK 1.3
* TextTools 1.10: "Sort" action now sorts rectangular selections; more
options in "Advanced Sort"; added "Delete Duplicates" action; insert
variable-length text in "Block Fill/Insert"; fixed minor bugs;
requires jEdit 4.0pre1 and JDK 1.3
* XML 0.9: if a DTD does not appear in a catalog file, the plugin will
now offer to download it; downloaded DTDs are cached in ~/.jedit/dtds;
added "Split Tag" command; updated for 4.1 icons; many bug fixes;
requires jEdit 4.1pre5, ErrorList 1.2, and JDK 1.3
* Xrefactory 1.5.10je-final: generation of import statement from
completions; a few minor bug fixes; requires jEdit 4.0final and JDK
1.4
-md
jEdit 4.1pre6 is now available from <>.
Thanks to Cullen Linn, Giulio Piancastelli, John Perry, Joshua Miller,
Kris Kopicki, Marcelo Vanzin, Peter Cox, Rex Young, Ryan Grove,
Rudolf Widmann and Steve Snider for contributing to this release.
+ Help System Changes:
- The user's guide, FAQ, and plugin documentation is now searchable. An
in-memory index is built the first time a search is performed during a
jEdit session. Currently, the search engine is very simple and only
performs whole-word matching (so searching for "auto save" and
"autosave" will yield different results). Results are ranked according
to how many times that word appears in the document. Searching for
multiple words returns all documents that contain at least one of the
words, however documents containing multiple search terms are ranked
higher.
+ Editing Changes:
- The "Indent on Enter" and "Indent on Tab" options are no longer.
(Actually, the former was removed in pre5, but I forgot to mention it
in the changelog).
The new way to change these options is to just rebind the ENTER and
TAB keys in the Shortcuts option pane. There are several actions they
can be bound to:
Insert Enter
Insert Enter and Indent
Insert Tab
Insert Tab and Indent
Indent Line (if you want TAB to indent anywhere in a line, not just in
the leading whitespace, like in emacs).
+ Syntax Highlighting Changes:
- Added .NET CIL syntax highlighting (Cullen Linn).
- Added Maple syntax highlighting (John Perry).
- Added NSIS2 syntax highlighting (Ryan Grove).
- Updated ColdFusion syntax highlighting (Joshua Miller).
- Updated Fortran syntax highlighting (Yifan Song).
- Updated PL-SQL syntax highlighting (Steve Snider).
- Updated Prolog syntax highlighting (Giulio Piancastelli)
+ Miscellaneous Changes:
- Holding down Alt while scrolling with the wheel mouse now moves the
caret up or down. Holding down Shift and Alt will extend the selection
up or down (Rudolf Widmann).
- The "HyperSearch Results" window now remembers old search results.
They can be cleared out by right-clicking and selecting "Remove Node"
from the resulting popup menu (Peter Cox).
- The content of the status bar can be customized in the new
"Status Bar" pane of the Global Options dialog box (Kenrick Drew).
+ Platform-Specific Changes:
- Updated MacOS plugin (Kris Kopicki).
- Added MacOS menu to the Plugins menu. Provides time saving features
like revealing files and folders in the Finder, and running
AppleScripts.
- You can run AppleScripts (compiled, uncompiled and standalone).
Scripts must be located in the scripts folder in the jEdit folder.
They can be either plain text or compiled scripts. Scripts must have
their type and creator set correctly, have a .scpt (for compiled
scripts) or .applescript (for uncompiled scripts) extension, or
both. Note: Scripts that require user interaction are not supported.
- Added and option to Mac OS Plugin settings to change the script
folder search depth.
+ Bug Fixes:
- Removed erronous check box from Buffer Options dialog box.
- Changed the code that makes sure windows are always displayed in the
visible region of the screen. It now only moves windows if their
top-left corner is invisible; windows with other parts obscured are
not moved.
- Fixed possible ArrayIndexOutOfBoundsException with regular expression
syntax rules.
- Plugin manager's dialog boxes are now parented by the plugin manager,
not the view (Marcelo Vanzin).
- Fixed repaint problems in tool bar option pane's buttons (and possibly
other places) by making RolloverButton's isOpaque() method return
false.
- Fixed highlighting of digits in C, C++, Java and similar modes; now
hex digits are only highlighted if the number begins with "0x".
- Print output would be clipped if the "print header" option was
switched off.
- Fixed obscure deadlock condition in ReadWriteLock.java (Rex Young).
- The drop down box on a history text field was painted in the menu
foreground, not the text field foreground. This could look bad if text
field colors were customized using the "Use jEdit text area colors in
all text components".
- Fixed a problem where the Complete Word popup would not be closed
under some circumstances.
- "Unsplit Current" command no longer changes the divider location to
zero.
- "Copy Append to Register" and "Cut Append to Register" commands would
throw NullPointerExceptions when appending to a register that didn't
exist. The correct behavior is to put the selected text in the
- $ at the end of an abbreviation expansion is now treated literally.
- Fixed an obscure bug that could result in exceptions being thrown by
the caret status display.
- Fixed a problem with BeanShell namespace handling that would cause
some macros to fail (eg, Add_Prefix_and_Suffix.bsh).
+ API Changes
- Macros.Handler.accept() now takes a path name, not a file name. This
should not affect existing plugins since none that I know of
override this method. | http://sourceforge.net/p/jedit/mailman/jedit-announce/?viewmonth=200211 | CC-MAIN-2015-22 | refinedweb | 1,599 | 69.28 |
Forum Index
Am Thu, 26 Apr 2012 21:24:46 -0700
schrieb bcs <bcs@example.com>:
> On 04/26/2012 05:37 AM, Steven Schveighoffer wrote:
> >
> > versions should be defined for the *entire program*, not just for certain files. And if they are defined just for certain files, define them in the file itself.
> >
>
> Versions should be defined for the *entire program*, not just for whatever you happen to be compiling right now.
>
> Is there any way to make different modules complain if you link object files built with diffident versions set?
You would need to modify the linker (optlink/ld) to understand that two object files compiled by a D compiler include a section that informs it about enabled version tags, or maybe abuse existing systems to the end that the poor programmer mixing versions up gets a generic error message that doesn't make any sense.
But more importantly, you can bundle up object files into .a archives aka static _libraries_ with yet another tool, I think ar is its name. It would not work out if e.g. GtkD compiled its static libraries with one set of versions and you compile your app with another set. Or in contrast, you would have to enable versions like 'cairo_new_backend', 'druntime_use_old_gc', 'whatever_any_other_object_file_was_compiled_with' for your program, even if you don't need them.
To sort that mess out again, the compiler would have to create a list of versions that affect the currently compiled module at all. It would then find out that for your program 'druntime_use_old_gc' is irrelevant and exclude it from the version check.
my_app GtkD Phobos <conflicts?>
my_ver on - - no
unused off - - no
cairo_new_backend - on - no
druntime_use_old_gc - - off no
debug on off off yes
X86_64 on - on no
In this case the linker would complain only about a version named 'debug' that was enabled in your application, but not for the object files of GtkD and Phobos.
--
Marco
P.S.: Can I have a software patent on this?
What about a templated module?
module test(bool option1, T);
Imported like this:
import test!(true, Foo);
It could act like the entire module was wrapped in a template, and the import would become:
import test;
mixin test.test_template!(true, Foo);
On 28.04.2012 0:22, Matt Peterson wrote:
> What about a templated module?
>
> module test(bool option1, T);
>
> Imported like this:
>
> import test!(true, Foo);
>
> It could act like the entire module was wrapped in a template, and the
> import would become:
>
> import test;
> mixin test.test_template!(true, Foo);
I would rather see this as import test with specified version identifiers.
import test!(some_version);
//imports module but treats it contents as if with "version = some_version;" added at the top of it
--
Dmitry Olshansky
On Friday, 27 April 2012 at 20:26:38 UTC, Dmitry Olshansky wrote:
>
> I would rather see this as import test with specified version identifiers.
>
> import test!(some_version);
> //imports module but treats it contents as if with "version = some_version;" added at the top of it
This is inconsistent with the syntax and the way templates work in the rest of the language, as well as being a lot less flexible.
It wouldn't be hard to mixin a string built from a list of identifiers passed in to set those versions. | http://forum.dlang.org/thread/yaomhbsosaxailktbtcw@forum.dlang.org?page=2 | CC-MAIN-2016-18 | refinedweb | 546 | 68.91 |
![if gte IE 9]><![endif]>
Documentation says: “The minimum non-zero value is 1”documentation.progress.com/.../index.html
If the -usernotifytime is set to non-zero value then each client locks USR latch once per the -usernotifytime interval or once per 10 seconds if the -usernotifytime is less than 10 seconds. Does it mean that the minimum value of the -usernotifytime is 10 seconds?
I saw the real cases where the -usernotifytime was set to a few minutes (3-10 minutes). Would it be wrong to set the -usernotifytime let’s say to 10 seconds? For comparison: enabling statement cache would create much higher USR latch activity.
Yes, for ABL clients the minimum is effectively 10 seconds. This limit is imposed because the client could be a remote client connected to multiple databases, and the network IO could be significant in extreme cases. There is also some synchronization within the client process. Note that SQL does not have this limit because the SQL server is not a remote connection to the database.
You could set the value to 10 seconds, but this could also increase the network IO in addition to the USR latch contention.
Off-topic note:
V11 introduced the -noneedsattnthread parameter: no needs an attention thread used for -usernotifytime. ABL session will not launch the second thread when it's connected to a database with non-zero value of the -usernotifytime. Othewise session will consist of two threads. We can see them: ps -T p <abl-client-pid>.
PID SPID TTY TIME CMD
21456 21456 pts/0 00:00:00 _progres
21456 21473 pts/0 00:00:00 _progres
gdb -p <abl-client-pid>
(gdb) info thread
(gdb) thread 1
(gdb) where
(gdb) thread 2
(gdb) where
Output for the second thread will contain rnNeedsAttnThread
Thanks, Kyle!
And yes, the remote ABL clients send to a server and get back one 64-byte message per notification.
By the way, IMPORT statement blocks the client's notifications. Is it expected?
The following code will reproduce the issue again any database started with non-zero -usernotifytime:
output to value("test.input").
output close.
input through value("tail -f test.input").
import unformatted ^.
input close.
message "Done" view-as alert-box.
The session will run an attention thread but it will not lock the USR latch and will block the idxactivate. Program above immitates the import of data from the third-party systems.
READKEY, UPDATE or WAIT-FOR don't have such issue.
I believe that's a bug in the ABL client. The client should allow notifications to be checked whenever the client is in a long wait for external input outside of a transaction, and this should include that usage of the IMPORT statement.
I will open a case. | https://community.progress.com/community_groups/openedge_rdbms/f/18/p/59239/202292 | CC-MAIN-2019-39 | refinedweb | 461 | 65.01 |
.
Understanding the HTML image element>
Creating graphics on-the-fly with JavaScript
If you're not an X Window user, you probably haven't heard of the XBM graphics format. This format is supported by Netscape and many other browsers. It's a simple format that offers only black-and-white rendition and is intended to be used in X Window applications for basic icons. It's occasionally used in Web pages to produce icons and hit counter digits, and for various other graphic images.
What makes the XBM format particularly interesting to us "JavaScripters" is that you can create the XBM image in JavaScript and display it in Netscape. This allows you to create and manipulate simple images in real time, using nothing more than JavaScript. The images are limited to black-and-white only.
Understanding the XBM format
The XBM format is really a C-language variable assignment. The XBM file format begins with two define statements that indicate the width and height, in pixels, of the image. The width is usually, but not always, stated in multiples of eight. The height can be any value.
Next comes the bit representation of the image. This consists of one or more values, specified in hexadecimal format. Each value defines an area of up to eight pixels wide by one pixel in height. For instance, the following XBM file produces a small 8-by-1-pixel line:
#define text_width 8 #define text_height 1 static unsigned char text_bits[] = {0xff};
Note that you can make the line shorter by specifying a smaller width. But you cannot make a line longer without adding more hex values. You can do this by separating each value with a comma, like this:
#define text_width 16 #define text_height 1 static unsigned char text_bits[] = {0xff,0xff};
This defines a 16-by-1-pixel line. Each 0xff value is used for eight pixels of the line. Of course, an XBM graphic of just a very short line isn't too useful. Most of the time you will want to add height to the image. This is done by increasing the height value, and by adding more hex values in the list. Netscape will read the image going from left to right, top to bottom. You can better visualize the arrangement of the values if you put them in "spreadsheet" column/row format. Each column represents eight bits of width; each row represents one bit of height.
The following produces a 16 by 8 bar. Notice the formatting of the 0xff values. This is purely for easy readability. It helps you visualize that there are two 8-pixel columns (for a total of 16 pixels), and eight rows.
#define text_width 16 #define text_height 8 static unsigned char text_bits[] = { 0xff,0xff, 0xff,0xff, 0xff,0xff, 0xff,0xff, 0xff,0xff, 0xff,0xff, 0xff,0xff, 0xff,0xff};
In all of the above examples the graphic is composed of all-black line segments. You can use other hex values to produce line segments with black and white bits. White (actually, the absence of black) is represented by a 0 bit; black is represented by a 1 bit. Put eight bits together and you get a value from 0 to 255. The hex value 0xff is decimal 255, or binary 11111111. All those ones create the all-black line. Conversely, 0x00 will make an all-white line, as it's decimal 0, or binary 00000000.
To make an image different, number values are used to represent the bit pattern desired. The pattern 1000001, which is decimal 128 (or hex 0x81), produces a line with only one bit turned on at each end. The middle is white. The bits of the digit are organized right to left with respect to how the bits appear on the screen. Remember this when calculating the bits to make a picture.
A practical XBM example
One practical use for XBM graphics is a hit counter. JavaScript provides all the images for each number. By itself, JavaScript is not capable of storing the number of times someone visits your Web site; this requires a CGI program. For the following example, though, we'll create a "fake" hit counter that can be used instead. This hit counter makes up an absurdly large number and displays that as the number of people who have visited the page. It's intended to serve as comic relief; of course you should not actually represent the number as valid, because some people take "hit counters" pretty seriously.
The random hit-counter script shown below uses a randomizing function based on the Park-Miller random number generator algorithm. A specific random number routine is used instead of JavaScript's Math.random property because in Netscape 2.0 the random property only functions when using the Unix platform. Here is the complete random_hit.html example.
Random hit count
<HTML> <HEAD> <TITLE>Random Hit Count</TITLE> <SCRIPT> function setHits() { RandNum = new randomNumberGenerator(); val = RandNum.next() + "" val = val.substring (2, 12) PicSize = "HEIGHT=16 WIDTH=8"; Digit = new Array(); Head = "#define count_width 8\n#define count_height 16\nstatic char count_bits[] = {" Digit[0]=Head+"0xff,0xff,0xff,0xc3,0x99,0x99,0x99,0x99,0x99,0x99,0x99,0x99,0xc3,0xff,0xff,0xff};" Digit[1]=Head+"0xff,0xff,0xff,0xcf,0xc7,0xcf,0xcf,0xcf,0xcf,0xcf,0xcf,0xcf,0xc7,0xff,0xff,0xff};" Digit[2]=Head+"0xff,0xff,0xff,0xc3,0x99,0x9f,0x9f,0xcf,0xe7,0xf3,0xf9,0xf9,0x81,0xff,0xff,0xff};" Digit[3]=Head+"0xff,0xff,0xff,0xc3,0x99,0x9f,0x9f,0xc7,0x9f,0x9f,0x9f,0x99,0xc3,0xff,0xff,0xff};" Digit[4]=Head+"0xff,0xff,0xff,0xcf,0xcf,0xc7,0xc7,0xcb,0xcb,0xcd,0x81,0xcf,0x87,0xff,0xff,0xff};" Digit[5]=Head+"0xff,0xff,0xff,0x81,0xf9,0xf9,0xf9,0xc1,0x9f,0x9f,0x9f,0x99,0xc3,0xff,0xff,0xff};" Digit[6]=Head+"0xff,0xff,0xff,0xc7,0xf3,0xf9,0xf9,0xc1,0x99,0x99,0x99,0x99,0xc3,0xff,0xff,0xff};" Digit[7]=Head+"0xff,0xff,0xff,0x81,0x99,0x9f,0x9f,0xcf,0xcf,0xe7,0xe7,0xf3,0xf3,0xff,0xff,0xff};" Digit[8]=Head+"0xff,0xff,0xff,0xc3,0x99,0x99,0x99,0xc3,0x99,0x99,0x99,0x99,0xc3,0xff,0xff,0xff};" Digit[9]=Head+"0xff,0xff,0xff,0xc3,0x99,0x99,0x99,0x99,0x83,0x9f,0x9f,0xcf,0xe3,0xff,0xff,0xff};" for (Count = 0; Count < val.length; Count++) { dig= val.substring (Count, Count+1) setDigit (dig, PicSize) } }
function setDigit (Dig, PicSize) { // this function to overcome a bug if (Dig=="0") { document.write ("<IMG SRC='JavaScript:Digit[0]'" + PicSize) } else if (Dig=="1") { document.write ("<IMG SRC='JavaScript:Digit[1]'" + PicSize) } else if (Dig=="2") { document.write ("<IMG SRC='JavaScript:Digit[2]'" + PicSize) } else if (Dig=="3") { document.write ("<IMG SRC='JavaScript:Digit[3]'" + PicSize) } else if (Dig=="4") { document.write ("<IMG SRC='JavaScript:Digit[4]'" + PicSize) } else if (Dig=="5") { document.write ("<IMG SRC='JavaScript:Digit[5]'" + PicSize) } else if (Dig=="6") { document.write ("<IMG SRC='JavaScript:Digit[6]'" + PicSize) } else if (Dig=="7") { document.write ("<IMG SRC='JavaScript:Digit[7]'" + PicSize) } else if (Dig=="8") { document.write ("<IMG SRC='JavaScript:Digit[8]'" + PicSize) } else if (Dig=="9") { document.write ("<IMG SRC='JavaScript:Digit[9]'" + PicSize) } }; } </SCRIPT> </HEAD> <BODY> A whopping <SCRIPT>setHits();</SCRIPT> people have visited this page since tea-time yesterday. Click Reload to see the number magically change. </BODY> </HTML>
Using JavaScript for on-the-fly image sizing
As mentioned earlier in this column, Netscape will scale images to fit the HEIGHT and WIDTH attributes you have defined for them. One common application of this resizing capability is using a small GIF image to create any length of bar you want. Suppose the original GIF file is 16 by 16 pixels. An <IMG> tag like the following will "stretch" it to 16 by 200 pixels:
<IMG SRC="small_box.gif" HEIGHT=16 WIDTH=200>
As long as the image is a solid color, it will not degrade when you resize it. The image can even contain stripes of other colors, but the colors must be horizontal. The stripes stretch with the rest of the image, and it looks like one long bar.
Now here's where JavaScript comes in. Instead of hard coding the width and/or height of the image, have JavaScript do it for you. You can use this technique to create horizontal bar charts or to design special effects for your page. Here's an example of using JavaScript to make a bar chart. The function stretches a small solid GIF image (the actual size is unimportant; the smaller the better for transmission speed) to the width specified.
JavaScript graphic sizing example
<HTML> <HEAD> <TITLE>JavaScript Graphic Sizing Example</TITLE> <SCRIPT> makeBar ("250", "Cat"); makeBar ("175", "Dog"); makeBar ("80", "Rat");
function makeBar (Width, Text) { var GifName = pathOnly (location.href) + "blu.gif"; var Height = 12; var Temp = "<IMG SRC='" + GifName + "' HEIGHT='" + Height + "' WIDTH=" + Width + "' ALIGN=baseline> " + Text + "<P>"; document.write (Temp); }
function pathOnly (InString) { LastSlash=InString.lastIndexOf ('/', InString.length-1); OutString=InString.substring (0, LastSlash+1); return (OutString); } </SCRIPT> </HEAD> <BODYv This is body text. </BODY> </HTML>
You can readily modify the script to use a different GIF image and to increase or decrease the height of the bars. I have the bars set at 12 pixels high.
Conclusion
Graphics can enhance any Web page, no matter what its content. Combining JavaScript with graphics is a quick and simple way to give your Web page a dynamic element and will help set it apart from the rest. You can use JavaScript to select the graphic to use, to size a graphic, and even to create the graphic, using the convenient XBM graphic format.
Learn more about this topic
-
Netscape's documentation for JavaScript.
-
Yahoo's JavaScript pages. | http://www.javaworld.com/article/2077223/learn-java/using-javascript-and-graphics.html | CC-MAIN-2014-52 | refinedweb | 1,610 | 56.86 |
I am a newbie in Django and I would really appreciate it if you could offer me some guidance. I am trying to create a form that allows a user to tick one or more options. I understood that I must use MultipleChoiceField field with a CheckboxSelectMultiple widget but the Django documentation doesn't offer an example on this topic. I would be grateful if you could offer me an example and explain how do I handle the results. For example if I have a form with the options a b c d, and the user ticks c and d. Also how do I specify the choices(I don't want to use a db, a list of strings is what I have in mind)? Thanks a lot
hope this helps :D
from django import forms class Test(forms.Form): OPTIONS = ( ("a", "A"), ("b", "B"), ) name = forms.MultipleChoiceField(widget=forms.CheckboxSelectMultiple, choices=OPTIONS) | https://codedump.io/share/teaeDvBmcEhx/1/django-form-multiple-choice | CC-MAIN-2017-26 | refinedweb | 153 | 61.77 |
NAME
Tk_DoOneEvent, Tk_MainLoop, Tk_HandleEvent - wait for events and invoke event handlers
SYNOPSIS
#include <tk.h>
int Tk_DoOneEvent(flags)
Tk_MainLoop()
Tk_HandleEvent(eventPtr)
ARGUMENTS
- int flags (in)
This parameter is normally zero. It may be an OR-ed combination of any of the following flag bits: TK_X_EVENTS, TK_FILE_EVENTS, TK_TIMER_EVENTS, TK_IDLE_EVENTS, TK_ALL_EVENTS, or TK_DONT_WAIT.
- XEvent *eventPtr (in)
Pointer to X event to dispatch to relevant handler(s).
DESCRIPTION
These three procedures are responsible for waiting for events and dispatching to event handlers created with the procedures Tk_CreateEventHandler, Tk_CreateFileHandler, Tk_CreateTimerHandler, and Tk_DoWhenIdle. Tk_DoOneEvent is the key procedure. It waits for a single event of any sort to occur, invokes the handler(s) for that event, and then returns. Tk_DoOneEvent first checks for X events and file-related events; if one is found then it calls the handler(s) for the event and returns. If there are no X or file events pending, then Tk_DoOneEvent checks to see if timer callbacks are ready; if so, it makes a single callback and returns. If no timer callbacks are ready, Tk_DoOneEvent checks for Tk_DoWhenIdle callbacks; if any are found, it invokes all of them and returns. Finally, if no events or work have been found, Tk_DoOneEvent sleeps until a timer, file, or X event occurs; then it processes the first event found (in the order given above) and returns. The normal return value is 1 to signify that some event or callback was processed. If no event or callback is processed (under various conditions described below), then 0 is returned.
If the flags argument to Tk_DoOneEvent is non-zero then it restricts the kinds of events that will be processed by Tk_DoOneEvent. Flags may be an OR-ed combination of any of the following bits:
- TK_X_EVENTS -
Process X events.
- TK_FILE_EVENTS -
Process file events.
- TK_TIMER_EVENTS -
Process timer events.
- TK_IDLE_EVENTS -
Process Tk_DoWhenIdle callbacks.
- TK_ALL_EVENTS -
Process all kinds of events: equivalent to OR-ing together all of the above flags or specifying none of them.
- TK_DONT_WAIT -
Don't sleep: process only events that are ready at the time of the call.
If any of the flags TK_X_EVENTS, TK_FILE_EVENTS, TK_TIMER_EVENTS, or TK_IDLE_EVENTS is set, then the only events that will be considered are those for which flags are set. Setting none of these flags is equivalent to the value TK_ALL_EVENTS, which causes all event types to be processed.
The TK_DONT_WAIT flag causes Tk_DoWhenIdle not to put the process to sleep: it will check for events but if none are found then it returns immediately with a return value of 0 to indicate that no work was done. Tk_DoOneEvent will also return 0 without doing anything if flags is TK_IDLE_EVENTS and there are no Tk_DoWhenIdle callbacks pending. Lastly, Tk_DoOneEvent will return 0 without doing anything if there are no events or work found and if there are no files, displays, or timer handlers to wait for.
Tk_MainLoop is a procedure that loops repeatedly calling Tk_DoOneEvent. It returns only when there are no applications left in this process (i.e. no main windows exist anymore). Most X applications will call Tk_MainLoop after initialization; the main execution of the application will consist entirely of callbacks invoked by Tk_DoOneEvent.
Tk_HandleEvent is a lower-level procedure invoked by Tk_DoOneEvent. It makes callbacks to any event handlers (created by calls to Tk_CreateEventHandler) that match eventPtr and then returns. In some cases it may be useful for an application to read events directly from X and dispatch them by calling Tk_HandleEvent, without going through the additional mechanism provided by Tk_DoOneEvent.
These procedures may be invoked recursively. For example, it is possible to invoke Tk_DoOneEvent recursively from a handler called by Tk_DoOneEvent. This sort of operation is useful in some modal situations, such as when a notifier has been popped up and an application wishes to wait for the user to click a button in the notifier before doing anything else.
KEYWORDS
callback, event, handler, idle, timer | https://metacpan.org/release/NI-S/Tk-804.027/view/pod/pTk/DoOneEvent.pod | CC-MAIN-2021-39 | refinedweb | 648 | 62.07 |
akoon2Members
Content count314
Joined
Last visited
Community Reputation140 Neutral
About rakoon2
- RankMember
rakoon2 posted a topic in Math and PhysicsHello! I have a problem with my Matrix2.RotateClockWise( angle ) method! It works perfectly for a while. But it changes from clockwise rotation to countercockwise rotation after ~ 2/3 * PI. Itthen changes back to clockwise after ~2/3 * PI. Why? =§ Here the pseudo function: cos = cos(angle); sin = sin(angle); rotmat = [ cos -sin ] [ sin cos ] newRotatedMatrix = oldMatrix * rotmat; Maybe my drawing code is wrong.. or the function that produces the angle of a matrix2. ( I will post the source code a lil bit later ) Angle method: Returns the angle to the identity matrix. return Acos( m00 / sqrt( m00² + m01² ) ) Thank you guys! :)
rakoon2 replied to rakoon2's topic in General and Gameplay ProgrammingLoose trees.. ohh. But a perfect tree for static/non obing objects would be fine? I would have to reload the tree if I change levels or if the player moves over a portal. ( Eg leaving a closed areal. ) Thanks. See you later!
rakoon2 replied to rakoon2's topic in General and Gameplay Programminga) It doesn't seem to work with my dlls. If I enable debugging only the.exe cam be debugged. "No iformation aviable". b) Entity[] for a non changing list with random acess List<Entity> for a non chaning list with random acess LinkedList<Enitity> for a changing list. (Iteration only.) Thanks ^^,
rakoon2 posted a topic in General and Gameplay ProgrammingHello guys! I am currently developing 2d stuff for my engine. 1.) Every non static object is a node in a SceneGraph. 2.) Static collision-only objects are put into a AABRTree, axis aligned bounding recteangle tree. This allows fast collision detection. I have a few questions here: a:) Are AABRTress only useful for static / nonmoving objects? I would have to recalculate the tree everytime a object moves! b.) What type of objects should I make the tree for? Currently my collision objects have a Polygon and a Position and are not scenegraph nodes. Bad? =| 3.) How should I lay out my moveable entitys? Entity2 (2 meens 2d) inherites from Node and has a rigid body object inside. The rigid body object itself has the following things: 1.) A polygon 2.) A 2x2 matrix used for rotations 3.) Velocity 4.) Angular Velocity 5.) Mass, Inertia, Density, ... I have already coded a complete collision system btwn polygons. .) More to come :D Thank you for your time. Good bye! :D
rakoon2 posted a topic in General and Gameplay ProgrammingHello all, I have some questions. :) a) How can I enable debugging? ( VC# 2005 Beta1 ) b) What list/collection type should I use for a list of entitys/ collisions objects? List>Bla, LinkedList<Bla>, .. ? Thank you very much. Good bye! :p
rakoon2 replied to Nightwalk's topic in Math and PhysicsIsnt the gravity force calculated by this formula: F = (G * m1 * m2) / r² G.. Gravity constant 6.6742e-11 m1.. Mass of object one. m2.. Mass of object two. r.. Distance btwn the objects. Eg: const double cGravity = 6.6742e-11d; Vector2 Gravity = Vector2.Zero; double distancex = planet1.x - planet2.x; double distancey = planet1.y - planet2.y; double cgf = cGravity * planet1.mass * planet2.mass; Gravity.x = cgf / distancex; Gravity.y = cgf / distancey; planet1.AddForce( Gravity );
rakoon2 replied to rakoon2's topic in General and Gameplay ProgrammingHi! :) "Axis Aligned Bounding Box", well I already have that class. ( 3d ) Im going to use the first class concept. Im going to provide functions that take an extra offset( e.g. position ). @Matrix, well I didnt have that at school yet. Gonna take some time till we will do that. Any >good< tutorials? @Statics, ok. Normal and static functions have the same speed. ( If they are build the same, and use member data. ) A function that doesnt use member data and is static if better than a nonstatic. ( The this pointer isnt used. ) Thanks all. :)
rakoon2 posted a topic in General and Gameplay ProgrammingHello all! I have some design questions. First let me say im using C#. 1.) I have a general quad class representing a quad aligned on the axis: Looks something like this: sealed class AAQuad { private Vector2 position; private Vector2 size; //... public static bool Collides( AAQuad qA, AAQuad qB ){ // .. } } So I store the quad as a position and a size. Well, this isn't so good! Here a new attempt: sealed class AAQuad { private Vector2 min; private Vector2 max; //... public static bool Collides( AAQuad qA, Vector2 posA, AAQuad qB, Vector2 posB ) { // .. } } The quad is represented as a min and a max vector. The Collides() function uses the quad to get the collision mask and an extra vectors for an offset, that is normally inside the position Vector2 in attempt one. Which one is better? I think the 2nd, bcs. a quad is normally used by other higher systems that contain a position. 2.) When should I use static functions? Stroustrup.(spelled wrong) said that you should make as few as possible member functions. ( global in C++, static in C# ) Eg, wat about the Collide() function in my first question. 3.) More will come! ^^ Thank you! :D [/source]
rakoon2 replied to pmouse's topic in Math and PhysicsHe guys, how about precompiled sin / cos / etc tables? I mean something like that: ( C# code ) Class Tables { float Sin( float angle ) { switch( angle ) { case 12.0f: return 12312.0f; // example case 12.01f: return 12313.0f; // example //.. default: // NOt in the list, calculate it. return System.Math.Sin( angle ); } } }
rakoon2 replied to raptorstrike's topic in Engines and MiddlewareDo you mean something like that: ? I used it in on of my first SDL "games". Worked fine. int screenWidth = 640; int screenHeight = 480; int scrollx = 0; while( false == done ) { //... scrollx =(scrollx % screenWidth); ++scrollx;# DrawBackground( scrollx, 0, image, screen, screenWidth, screenHeight ); //... } /// i void DrawBackground( int scrollx, int scrolly, SDL_Surface* image, SDL_Surface* screen, int screenW, int screenH ) { SDL_Rect dest; SDL_Rect dest2; dest.x = 0 - scrollx; dest.y = 0; dest2.x = screenW - scrollx; dest2.y = 0; SDL_BlitSurface( image, 0, screen, &dest ); SDL_BlitSurface( image, 0, screen, &dest2 ); }
rakoon2 replied to rakoon2's topic in General and Gameplay ProgrammingOkay, thanks! :) I used thee .. ermm "public" method. I don't need error checking. Stupid! :) So functions can't return a reference? Are there any ogl font librarys for C#? ( multi platform ) Thanks guys.
rakoon2 replied to rakoon2's topic in General and Gameplay ProgrammingArgh, thats it. Thank you! :) I am used to C++! :) ::edit:: New question: (: How can i get a reference to a prim type using properties? See: I have following function: void Bla( ref float f, ref float b ); Now I want to use this function: Bla( ref body.LinearVelocity, ref body.AngVelocity ); But that does not work, because the Properity returns a number, not a variable. I tried to make a properity, that returns a reference( ref float LinearVelocity{ .. } ). The compiler doesn't like that. How could I solve that? It's easy in C++.. ! (: Thank you! [Edited by - rakoon2 on November 6, 2004 10:11:26 AM]
rakoon2 posted a topic in General and Gameplay ProgrammingHello guys! I have a problem with c# delegates. I get this strange compile error: "Inconsistent access. Fieldtyp Enigma.Math.PlayerContactHandle is less accessable than Enigma.Math.RigidBody2D contactHandle." Here some code: namespace Enigma.Math { delegate PlayerContactHandle( Vector2 norm, ref float t, RigidBody2D body, RigidBody2D otherBody ); //... class RigidBody2D { protected PlayerContactHandle contactHandle; //... } } I made a new test project and did the same thing and all works fine. But not so in my real project. =/ The intelisense can find PlayerContactHandle. I am using Vs 2003 and C#. Thank you. :/ [Edited by - rakoon2 on November 6, 2004 10:29:59 AM]
rakoon2 replied to rakoon2's topic in General and Gameplay ProgrammingAh great! :) I also found the sdl sublibs! Thank you! :)
rakoon2 replied to rakoon2's topic in General and Gameplay ProgrammingAhh, thank you! :). I can't find the .zip that contains the newest SDL_video SDL_image SDL_ttf SDL_mixer SDL_event dlls(.NET). does not has them? hmm... | https://www.gamedev.net/profile/62904-rakoon2/?tab=topics | CC-MAIN-2017-30 | refinedweb | 1,331 | 70.19 |
Hidden Powers of Python (2)
Let’s tackle a bit more how the hidden powers of Python are used in backtrader and how this is implemented to try to hit the main goal: ease of use
What are those definitions?
For example an indicator:
import backtrader as bt class MyIndicator(bt.Indicator): lines = ('myline',) params = (('period', 20),) ...
Anyone capable of reading python would say:
linesis a
tuple, actually containing a single item, a string
paramsis also a
tuple, containing another
tuplewith 2 items
But later on
Extending the example:
import backtrader as bt class MyIndicator(bt.Indicator): lines = ('myline',) params = (('period', 20),) def __init__(self): self.lines.myline = (self.data.high - self.data.low) / self.p.period
It should be obvious for anyone, here that:
- The definition of
linesin the class has been turned into an attribute which can be reached as
self.linesand contains in turn the attribute
mylineas specified in the definition
And
The definition of
paramsin the class has been turned into an attribute which can be reached as
self.p(or
self.params) and contains in turn the attribute
periodas specified in the definition
And
self.p.periodseems to have a value, because it is being directly used in an arithmetic operation (obviously the value is the one from the definition:
20)
The answer: Metaclasses
bt.Indicator and therefore also
MyIndicator have a metaclass and this
allows applying metaprogramming concepts.
In this case the interception of the definitions of ``lines`` and ``params`` to make them be:
Attributes of the instances, ie: reachable as
self.linesand
self.params
Attributes of the classes
Contain the
atributes(and defined values) which are defined in them
Part of the secret
For those not versed in metaclasses, it is more or less done so:
class MyMetaClass(type): def __new__(meta, name, bases, dct): ... lines = dct.pop('lines', ()) params = dct.pop('params', ()) # Some processing of lines and params ... takes place here ... dct['lines'] = MyLinesClass(info_from_lines) dct['params'] = MyParamsClass(info_from_params) ...
Here the creation of the class has been intercepted and the definitions of
lines and
params has been replaced with a class based in information
extracted from the definitions.
This alone would not reach, so the creation of the instances is also intercepted. With Pyton 3.x syntax:
class MyClass(Parent, metaclass=MyMetaClass): def __new__(cls, *args, **kwargs): obj = super(MyClass, cls).__new__(cls, *args, **kwargs) obj.lines = cls.lines() obj.params = cls.params() return obj
And here, in the instance instances of what above was defined as
MyLinesClass and
MyParamsClass have been put into the instance of
MyClass.
No, there is no conflict:
The class is so to say: “system wide” and contains its own attributes for
linesand
paramswhich are classes
The instance is so to say: “system local” and each instance contains instances (different each time) of
linesand
params
Usually one will work for example with
self.lines accessing the instance,
but one could also use
MyClass.lines accessing the class.
The latter gives the user access to methods, which are not meant for general use, but this is Python and nothing can be forbidden and even less with Open Source
Conclusion
Metaclasses are working behind the scenes to provide a machinery which enables
almos a metalanguage by processing things like the
tuple definitions of
lines and
params
Being the goal to make the life easier for anyone using the platform | https://www.backtrader.com/blog/posts/2016-11-23-hidden-powers-2/hidden-powers/ | CC-MAIN-2019-26 | refinedweb | 560 | 51.48 |
Library order is important
By Darryl Gove-Oracle on Dec 05, 2012
I've written quite extensively about link ordering issues, but I've not discussed the interaction between archive libraries and shared libraries. So let's take a simple program that calls a maths library function:
#include <math.h> int main() { for (int i=0; i<10000000; i++) { sin(i); } }
We compile and run it to get the following performance:
bash-3.2$ cc -g -O fp.c -lm bash-3.2$ timex ./a.out real 6.06 user 6.04 sys 0.01
Now most people will have heard of the optimised maths library which is added by the flag -xlibmopt. This contains optimised versions of key mathematical functions, in this instance, using the library doubles performance:
bash-3.2$ cc -g -O -xlibmopt fp.c -lm bash-3.2$ timex ./a.out real 2.70 user 2.69 sys 0.00
The optimised maths library is provided as an archive library (libmopt.a), and the driver adds it to the link line just before the maths library - this causes the linker to pick the definitions provided by the static library in preference to those provided by libm. We can see the processing by asking the compiler to print out the link line:
bash-3.2$ cc -### -g -O -xlibmopt fp.c -lm /usr/ccs/bin/ld ... fp.o -lmopt -lm -o a.out...
The flag to the linker is -lmopt, and this is placed before the -lm flag. So what happens when the -lm flag is in the wrong place on the command line:
bash-3.2$ cc -g -O -xlibmopt -lm fp.c bash-3.2$ timex ./a.out real 6.02 user 6.01 sys 0.01
If the -lm flag is before the source file (or object file for that matter), we get the slower performance from the system maths library. Why's that? If we look at the link line we can see the following ordering:
/usr/ccs/bin/ld ... -lmopt -lm fp.o -o a.out
So the optimised maths library is still placed before the system maths library, but the object file is placed afterwards. This would be ok if the optimised maths library were a shared library, but it is not - instead it's an archive library, and archive library processing is different - as described in the linker and library guide:
"The link-editor searches an archive only to resolve undefined or tentative external references that have previously been encountered."
An archive library can only be used resolve symbols that are outstanding at that point in the link processing. When fp.o is placed before the libmopt.a archive library, then the linker has an unresolved symbol defined in fp.o, and it will search the archive library to resolve that symbol. If the archive library is placed before fp.o then there are no unresolved symbols at that point, and so the linker doesn't need to use the archive library. This is why libmopt needs to be placed after the object files on the link line.
On the other hand if the linker has observed any shared libraries, then at any point these are checked for any unresolved symbols. The consequence of this is that once the linker "sees" libm it will resolve any symbols it can to that library, and it will not check the archive library to resolve them. This is why libmopt needs to be placed before libm on the link line.
This leads to the following order for placing files on the link line:
- Object files
- Archive libraries
- Shared libraries
If you use this order, then things will consistently get resolved to the archive libraries rather than to the shared libaries. | https://blogs.oracle.com/d/date/20121205 | CC-MAIN-2015-40 | refinedweb | 627 | 75.2 |
Grizzled Unix vet Paul Venezia tips his cap to the Windows Server crew, suggesting that the lessons of Unix history have not been lost on Microsoft — and that’s one reason why Windows Server has become so complex. ‘The.
Why Windows Server Deserves Unix Admin’s Respect
41 Comments
2011-03-09 8:37 pmYamin
Been saying it for years.
“Given enough time and money, Microsoft will eventually invent UNIX….and call it innovative.” – me.
And I’ve been saying this for years ‘stupid’ ‘server’ like behaviors that have been in Unix since day one.
Who knows if they’ll succeed or if their old baggage is just too much.
2011-03-10 8:47 amKebabbert
I just don’t buy the attitude that MS was ‘stupid’..
2011-03-09 8:57 amLennie ‘machines’ running on the same hardware there is nothing more efficient than just creating seperate namespaces like these container technologies do.
2011-03-09 8:56 pmnt_jerkface
Virtualization is popular because it increases portability and security.
Both Nix and Windows seem like a mess if you don’t read a good book first.
-
-
-
2011-03-09 8:09 pmnt_jerkface
It probably bothers you because you hold a dated view of Windows Server.
Windows Server 2008 is a refined OS. There are plenty of web hosts that guarantee 99.9% uptime with it.
There was even a Windows 2000 server that went over 2 years.…
2011-03-09 8:23 pmSoulbender
Oh man, does this mean that the Windows guys are going to start this pointless uptime penis-measurement thing too now?
2011-03-09 8:53 pmnt_jerkface
No it means that the derisive attitude towards Windows Server as seen by the parent’s comment about stability is unfounded.
-
2011-03-10 8:33 amKebabbert
It probably bothers you because you hold a dated view of Windows Server.)
2011-03-10 8:26 pmlucas_maximus
2011-03-11 11:14 amKebabbert
All he was saying is that Windows Server is usually not thought of as good as Unix and uptime is one indicator of server stability. We have one Web server that can deal with 100% CPU load at peak periods and it doesn’t fall over.
I can run numerical computations at 100%, if the computer runs without crashing – it proves nothing.
The London Stock exchange problems were probably more to do with the fact it was running on SQL Server 2000 which is far slower than newer versions than anything else.
The major problems with LSE was it crashed a lot. And also, the latency was to high. Exchange systems need low latency. Ive heard from several sources that Windows latency is too high (something with the TCP/IP stack)
2011-03-11 11:22 amlucas_maximus
The only thing I think we have found out is that you have an attitude problem.
-
2011-03-09 10:12 amWorknMan
They can even use find in /etc if they don’t know what file they should ‘list’. ‘help’ didn’t work; I had to buy a book to figure out what ‘man’
2011-03-09 1:57 pmTrenien
2011-03-09 3:36 pmDoc Pain
Well, we’re talking about sysadmin, there.
If you have to poke around and click haphazardly to find out how to set up your server, you probably should stop immediatly.
“Trial & error” is not how a sysadmin should work. 🙂 ‘i’. 🙂
2011-03-09 11:16 pmthe old rang
Anyone remember ‘ancient windows server history’ about…
Skype??
AT&T??
HotMail??
and about 4 other large events in the last year??
Knocking down Massive PRIME customer business…
THAT is a Feature Microsoft ‘diners at the table’ from the media fail to mention…
-
2011-03-09 11:23 pmDoc Pain. 🙂
2011-03-09 6:59 pmSoulbender
2011-03-09 10:27 amZaitch.
2011-03-09 11:06 amThom Holwerda
You could get Server 2008 and run without a GUI at all.
2011-03-09 12:15 pmsaso.
2011-03-09 2:28 pmjptros.
2011-03-09 3:51 pmDoc Pain. 🙂.
2011-03-09 3:39 pmBill Shooter of Bul.
2011-03-09 3:52 pmdpJudas.
2011-03-09 9:04 pmnt_jerkface.
2011-03-10 5:25 amelsewhere.
2011-03-09 10:41 pmnt_jerkface
I believe that, but only because you have no control..
What a bunch of malarkey, yes you heard me, malarkey.
Been saying it for years.
“Given enough time and money, Microsoft will eventually invent UNIX….and call it innovative.” – me.
Edited 2011-03-09 04:26 UTC | https://www.osnews.com/story/24498/why-windows-server-deserves-unix-admins-respect/ | CC-MAIN-2021-49 | refinedweb | 753 | 74.59 |
TERENCE KB CHIKORWA3,456 Points
i dont even undestand the issue with white spaces
def first_function(arg1): return 'arg1 is {}'.format(arg1) def second_function(arg1): return 'arg1 is {}'.format(arg1) class MyClass: args = [1, 2, 3] def class_func(self): return self.args
2 Answers
Chris FreemanTreehouse Moderator 63,984 Points
You are very close. Add a blank line at the end code.
Sean HernandezPro Student 9,350 Points
Yea i just finished my code and it seem that these code challenges are picky. Any blank lines with tabs in them need to be emptied up to the beginning of the new line, in other words create a new empty line but don't tab or indent them, if that makes any sense | https://teamtreehouse.com/community/i-dont-even-undestand-the-issue-with-white-spaces | CC-MAIN-2021-21 | refinedweb | 122 | 72.16 |
Technical Articles
Face Recognition app me too, more and more lately… To avoid awkward situations, I’ve been thinking about a solution for this. I’ve started with an app to store all the people I know together with a picture. This should help me to find the name of someone I already knew. I would still need to search in the list, which is not easy when it becomes a big list and you don’t remember the name…
I needed to add something to the app that could provide me the name of a person by just taking a picture. This way, I don’t need to search through a big list and don’t have to know the name.
Challenges
Before I could start on this app, I still had to do some research to answer a few questions like:
- Where can I store the people?
- Where can I store the images?
- How can I recognize a face and find it in my list?
After some investigation, I came up with the following:
- Storing data? à It would be a good opportunity to use the new Cloud Application Programming model (CAPM). CAPM enables you to create a complete Java or NodeJS OData service in SAP Cloud Platform Cloud Foundry by only creating a Core Data Service (CDS) file.
- Images? à For this, I came across the Document Repository in the SAP Cloud Platform NEO
- Face recognition à SAP Leonardo Machine Learning Service offers several API’s related to human faces, for example an API to detect the face of a person in an image, an API to extract face features and so on… . Although there is no specific API for comparing faces, it can be done by combining the face feature extraction api with the similarity scoring api. The face feature extraction returns a vector of the face that the api found on the image. The vectors of all images can be used in the similarity scoring api to compare images and find a match between images.
- Technical steps are defined in this blog:
That’s all I needed to know and was ready to start ?
Solution
I developed a solution on SAP Cloud Platform Cloud Foundry that offers me two possibilities. On the one hand it offers me a nice UI5 app that allows me to store all my contact persons together with a picture. On the other hand, it offers me the great functionality to search through all my contacts by just uploading a new picture of the person.
In case I come across an old contact person, who I don’t remember his name. I can just take a new picture and upload it to the app and it will tell me who it is. Off course I’ll need to take the picture without that person notices …
Here you have a small demo on how I can easily add a new person and how to search in the list with another image:
- First I add Daniel Craig to the list with a picture
- You’ll see how to search in the list by using another picture of one of the contactpersons
This solution is built as an MTA project on SAP Cloud Platform CloudFoundry. In this MTA project I have added the following layers:
- HANA module as the Database layer to store my contact persons with their related Face Feature Extraction Vector
- Java Service module to expose the data in HANA as an OData service
- UI5 for the UI Layer
Besides the MTA project I also needed other services:
- Machine Learning for the Face Feature extraction and similarity scoring to compare vectors and find the right person.
- SCP Document Service to store the images. (I could have stored them as a base64 string in the database, but I prefer a document store for images)
Machine Learning is being used in the UI and the Service layer. It is needed in the UI layer to validate the image before showing the create dialog. But then again for searching in the list of contact persons, it’s better to do this on the server side. Imagine that we have a list of 100 persons and the UI only shows 20 of them due to lazy loading…
Architecture overview
I will publish a blog with more technical details soon.
Install the solution
Because this is an MTA project, it can be easily installed on every SCP account. Follow these steps to install it on your account.
Your always welcome to improve this app via pull requests on the git project:
Deploy MTAR
Download the MTAR file here:
(It should be public)
Login to Cloud Foundry with the command CF Login
Run the command
cf deploy FaceRecognition_0.0.1.mtar
Or clone my github repository in SAP Web IDE
Right click on the project and select build and build again.
This will generate an MTAR file which you can deploy:
If the deployment is successful, you’ll see the app in your space:
Activate Document Service
Activate the Document Service on your NEO account and create a repository (store the generated key):
Create a Java Proxy to you NEO Document Service like described in the documentation. You’ll need the generated key in your proxy.
Go back to Cloud Foundry and create a destination to your Document Service Proxy with this name:
Create a new destination with the following properties:
URL=https://<your cmis proxy>.hanatrial.ondemand.com Name=documentservicewl ProxyType=Internet Type=HTTP Authentication=BasicAuthentication Description=Connection to Proxy Bridge App User=<your user> Password=<your password>
Activate ML
A Machine Learning instance is automatic instantiated by the MTA project because it’s being used by the Java Service. Now, we also want to use it in our UI layer. Therefore, we still need to add two destinations, so the UI layer can access the same Machine Learning Instance.
(This could be improved by defining the destinations in the mta.yml and maybe one instead of two)
If you go to your space and open Service Instances, you’ll see the ml-service. This is your Machine Learning instance that’s being used by the Java service. (it could be that you need to open the Service Instance menu two times before the instances show up)
Click on it to see all the required information like clientid, secret, … You need the clientid, clientsecret and the url at the bottom for the destination.
We need to create two destinations, one for fetching the bearer token and one to access the Face Feature Extraction API. This is needed because we use this API directly in the UI layer to detect faces on images.
URL=<ml keys url example:> Name=ml_auth_api ProxyType=Internet Type=HTTP Authentication=BasicAuthentication User=<clientid> Password=<clientsecret>
Next, add a destination to the Face Feature Extraction API:
URL= Name=ml_api ProxyType=Internet Type=HTTP Authentication=NoAuthentication
You should have 3 destinations now, make sure the names are exactly the same for the destinations:
Run the app
Go to your space and click on “FaceRecognition_appRouter”
Click on the link in the section Application Routes
Not working…
We need to add the path to the UI layer to the url. Add the following to the url:
/bewlmlFaceUI/index.html
This is generated based on the namespace and name of the UI layer.
You get a login screen now.
You’ll see the app after you login:
You can start uploading persons ?
Be aware that the UI is not yet completely productive ready and might still have some bugs. For example, it won’t do anything when it can’t find a face. The error handling is not yet finished.
You’re free to help improving this solution by contributing on this github repo:
Enjoy!
Super stuff
Nice post. Thanks for this fantastic elaboration too. 🙂
Br,
Pavan Golesar
Good content. Thank you for share.
Ha! This is crazy fun! Can’t wait to try this out and see what it does to pictures of cats!!!
Not sure if it will detect a face... but you could try with monkeys and see who comes out 🙂
Hey Wouter Lemaire,
Great blog post! I am struggling a bit with the proxy bridge and the explaination on the help site.
Could you explain where and what kind of files we have to modify to create the bridge?
I added the code snippet in the web.xml file and create a new Java file CMISProxyServlet.java.
Best case scenario with a code snippet, explaining how you did it.
Thank you!
You can take the code of the sap help and just change the id's. I explained this more in detail here:
Hope this helps!
Thank you for the link. Do I have to add the code in the web.xml and the servlet class in the code you provided and then deploy the application also to NEO? Or do I have to create two new files and deploy them to NEO?
Maybe you could share your web.xml and java servlet 🙂
Thank you.
here is the full project:
Thanks a lot! I just forgot to assign the ecm developer role to my user in the cockpit...
i just open this post and not yet explore whole stuff, but one thing i am sure you are a genius man.
Hey Wouter!
Is it possible to use this solution with SAP Success Factors? | https://blogs.sap.com/2019/05/28/face-recognition-app/ | CC-MAIN-2022-33 | refinedweb | 1,569 | 68.7 |
Opened 2 years ago
Last modified 20 months ago
#12019 new bug
Profiling option -hb is not thread safe
Description (last modified by )
This ticket is a continuation of #11978 and #12009. After fixing a couple of issues in those two tickets I found that the profiling run time is not thread safe.
Have a trivial test program (written as one of the tests for #11978):
import Control.Concurrent import Control.Concurrent.MVar import Control.Exception import Control.Monad main :: IO () main = do putStrLn "Start ..." mvar <- newMVar (0 :: Int) let count = 50 forM_ [ 1 .. count ] $ const $ forkIO $ do threadDelay 100 i <- takeMVar mvar putMVar mvar $! i + 1 threadDelay 1000000 end <- takeMVar mvar putStrLn $ "Final result " ++ show end assert (end == count) $ return ()
Compiling that with a compiler that has bug fixes arising from #11978 and #12009 as:
inplace/bin/ghc-stage2 testsuite/tests/profiling/should_run/T11978b.hs \ -fforce-recomp -rtsopts -fno-warn-tabs -O -prof -static -auto-all \ -threaded -debug -o T11978b
and run as:
./T11978b +RTS -p -hb -N10
crashes in a number of different ways. I've seen at least 3 different assertion failures and numerous segfaults (in different
stg_ap_* functions).
Replace
-hb with other profiling options like
-hr etc do not seem to crash.
Looking at code, one example of lack of thread safetly is the function
LDV_recordDead which mutates global variable
censuses which does not have any locking around it. Only figured this out because the following assert (in
LDV_recordDead) was being triggered occasionally.
ASSERT(censuses[t].void_total < censuses[t].not_used);
Change History (12)
comment:1 Changed 2 years ago by
comment:2 Changed 2 years ago by
Yes, I'm going to try to fix it.
comment:3 Changed 2 years ago by
Fixing this is definitely not trivial.
I've placed locks around some of the shared mutable data structures, but I still get the same assertion (in function
processHeapClosureForDead) failing:
ASSERT(((LDVW(c) & LDV_CREATE_MASK) >> LDV_SHIFT) > 0);
because the
overwritingClosure has already been called on it.
comment:4 Changed 21 months ago by
We should at least try to disable it except at
-N1.
comment:5 Changed 21 months ago by
Phab:D2516 throws an error when
-hb is used with more than one capability.
comment:6 Changed 20 months ago by
In 6555c6bb/ghc:
comment:7 Changed 20 months ago by
Merged to
ghc-8.0 as c51caafae7669d4246f4efd3d1a6858020780e02.
comment:8 Changed 20 months ago by
comment:9 Changed 20 months ago by
Reopening since this is still an issue; we just happen to fail more gracefully now.
comment:10 Changed 20 months ago by
The fix on the 8.0 branch broke the build on my Mac. Reverting it for now for my local build. But surely it's impacted other folks
comment:11 Changed 20 months ago by
rts/ProfHeap.c: In function 'initHeapProfiling': rts/ProfHeap.c:389:49: error: error: 'PAR_FLAGS {aka struct _PAR_FLAGS}' has no member named 'nCapabilities' if (doingLDVProfiling() && RtsFlags.ParFlags.nCapabilities > 1) { ^
I think the patch / CPP somehow doesn't quite work on the build ways matrix of the RTS in the expected way, i'll try to poke at it myself if i have time, but it def died
comment:12 Changed 20 months ago by
In d1b4fec1/ghc:
Yes, when I made the rest of profiling thread-safe I didn't look at +RTS -hb. Perhaps we should disable it except at
-N1, unless you want to have a go at fixing it? | https://ghc.haskell.org/trac/ghc/ticket/12019 | CC-MAIN-2018-22 | refinedweb | 576 | 63.49 |
1. Is the string "0" converted to a boolean false or true in PHP and js?
php: false; php weak language '0' is the same as 0;
js: true; strings are true except for the empty string ('') (including '' with spaces in the middle);
2, echo, print, print_r, var_dump difference
The echo language structure is not a real function. It can output multiple values at once, and multiple values are separated by commas.
print is a function and can only output one value.
print_r prints arrays and objects.
var_dump can print arrays, objects, and data types.
3. The program $ a = "www"; settype ($ a, 'array'); (string) $ a; floatval ($ a); echo gettype ($ a); The output after running:
array Reason: settype will change the data type of the original value. String and floatval are forced conversions. The data type of the original value is not changed.
4. Implement the bubble sort algorithm with PHP.
控制轮次数 // number of control rounds ( $i = 1; $i < count ( $arr ); $i ++ ) { for ( $ i = 1; $ i < count ( $ arr ); $ i ++ ) { 控制次数,并判断大小交换位置 // Control the number of times and determine the size exchange position ( $j = 0; $j < count ( $arr ) - $i ; $j ++ ) { for ( $ j = 0; $ j < count ( $ arr )- $ i ; $ j ++ ) { 如果当前值大于后面的值 // If the current value is greater than the subsequent value ( $arr [ $j ] > $arr [ $j + 1 ]) { if ( $ arr [ $ j ]> $ arr [ $ j + 1 ]) { 交换 // exchange = $arr [ $j ]; $ temp = $ arr [ $ j ]; [ $j ] = $arr [ $j + 1 ]; $ arr [ $ j ] = $ arr [ $ j + 1 ]; [ $j + 1] = $temp ; $ arr [ $ j + 1] = $ temp ; } } }
5. A group of monkeys are arranged in a circle, and they are numbered according to 1, 2, …, n. Then start counting from the first one to the mth, kick it out of the circle, start counting from the back, count to the mth, and kick it out … so keep on going until the end Until there is only one monkey left, that monkey is called King. Programming is required to simulate this process, input m, n, and output the number of the last king.
$n猴子个数$m第几个位置 // $ n number of monkeys $ m fn( $n , $m ) function fn ( $ n , $ m ) { 将猴子数量放到数组内 // Put the number of monkeys into the array ( $i = 1; $i < $n + 1; $i ++ ) { for ( $ i = 1; $ i < $ n + 1; $ i ++ ) { [] = $i ; $ arr [] = $ i ; } = 0 ; $ i = 0 ; 当数组内只剩下一个值跳出数组 // When there is only one value left in the array, jump out of the array ( count ( $arr ) > 1 ) { while ( count ( $ arr )> 1 ) { 遍历数组,判断当前猴子是否为出局序号,如果是则出局,否则放到数组最后 // Walk through the array to determine whether the current monkey is an outbound sequence number. If it is, then out, otherwise put it at the end of the array (( $i + 1) % $m == 0 ) { if (( $ i + 1)% $ m == 0 ) { 当循环次数满足m值去除掉当前值 // When the number of cycles meets the value of m, remove the current value ( $arr [ $i ]); unset ( $ arr [ $ i ]); else { } else { 不满足循环次数放到数组对尾 // Not meet the number of loops to the end of the array pair ( $arr , $arr [ $i ]); array_push ( $ arr , $ arr [ $ i ]); 删除掉当前循环内容 // Delete the current loop content ( $arr [ $i ]); unset ( $ arr [ $ i ]); } ++ ; $ i ++ ; } $arr ; return $ arr ; } 调用 // call (fn(15,7)); var_dump (fn (15,7));
6. What are the difficulties in sub-tables, partitions, and sub-databases? How to store data evenly?
Sub-table: the data of a large table is divided into several tables.
The sub-table is divided into vertical split and horizontal split.
Vertical split: split the fields; you can put unused fields in a table, large fields in a table, and frequently used fields in a table.
Horizontal split: split the table data; you can split the data by modulo id, for example, to split into 100 tables, user0, user1, user2 …, the remainder obtained by id% 100 is stored Which table to go to.
Difficulties in dividing tables: according to what strategy to divide tables; how to query data after dividing tables ( which table should be accessed under what circumstances ).
Partitioning: All data is still in a table, but the physical storage data is stored in different files according to certain rules, and the files can also be placed on different disks. Partition types: range partition, list partition, hash partition, and key partition.
Common partitioning methods:
1. Partition according to the time interval, such as the year, the partition stores data.
2. According to the self-incrementing primary key id, hash (id div 10000000) means to create a partition with 1 million data.
Depot: Divide the data into several libraries for storage. Divided into vertical sub-database and horizontal sub-database.
Vertical database: according to the table to separate the database, the same type of table one database; for example, a database blog, a forum database.
Horizontal database: according to some rules, the data of the same table is distributed in different databases; for example, user blog articles are distributed among 5 databases according to user id.
7. The difference between single quotes and double quotes in PHP.
。 In general, single quotes and double quotes can communicate with each other, but internal variables of double quotes will be resolved, while internal variables of single quotes will not be resolved .
8. The difference between require () and include ().
All introduce other pages;
An error in require () will terminate the program; include () will continue to execute with an error warning;
In actual projects, it is generally better to use require_one ().
9. Superglobal variables, magic variables, and magic methods in PHP:
Super global variables (9):
Reference:
$ GLOBALS
$ _SERVER
$ _REQUEST
$ _GET
$ _FILES
$ _ENV
$ _COOKIE
$ _SESSION
Magic variables (8):
__LINE__ The current line number in the file.
__FILE__ The full path and file name of the file. If used in an included file, returns the included file name.
__DIR__ The directory where the file is located. If used in an included file, returns the directory where the included file is located.
__FUNCTION__ returns the name of the function when it is defined (case sensitive)
__CLASS__ returns the name (case sensitive) of the class when it was defined.
__TRAIT__ Trait name (new in PHP 5.4.0). As of PHP 5.4.0, PHP implements a method of code reuse called traits.
__METHOD__ returns the name (case sensitive) of the method when it was defined.
__NAMESPACE__ Name of the current namespace (case sensitive).
Magic method:
Reference:
10.Stacks and queues
Stack: A special linear table that allows inserts and deletes on the same end.
表。 The stack is also called a first-in -first-out table.
Note: Linear tables are the most basic, simplest, and most commonly used data structure. The relationship between data elements in a linear table is a one-to-one relationship.
Queue: is a special linear table. What is special is that only delete operations are allowed on the front end and insert operations are performed on the back end of the table.
表。 Queues are also called first-in, first-out tables.
11.Symmetric encryption and asymmetric encryption
Symmetric encryption refers to the same key used for encryption and decryption, so it is called symmetric encryption. Symmetric encryption has only one secret key as a private key.
Common symmetric encryption algorithms: DES, AES, 3DES, etc.
Asymmetric encryption refers to: different keys are used for encryption and decryption, one as a public key and the other as a private key. Information encrypted by the public key can only be decrypted by the private key. The information encrypted by the private key can only be decrypted by the public key.
Common asymmetric encryption algorithms: RSA, ECC (for mobile devices), DSA (for digital signatures)
12, time complexity and space complexity
Algorithm complexity is divided into time complexity and space complexity.
Its role: time complexity refers to the computational workload required to execute the algorithm; and space complexity refers to the memory space required to execute this algorithm.
13, the difference between abstract classes and interfaces
Reference:
14, PHP creates a multi-level directory
makedir( $path ) function makedir ( $ path ) { ( is_dir ( $path )){ if ( is_dir ( $ path )) { "目录已存在" ; echo "Directory already exists" ; else { } else { = mkdir ( $path , 0777, true ); $ res = mkdir ( $ path , 0777, true ); ( $res ) { if ( $ res ) { "创建成功" ; echo "Created successfully" ; else { } else { "创建失败" ; echo "create failed" ; } } }
15. PHP writes a piece of code to ensure that multiple processes successfully write to a file at the same time
writeData( $filepath , $data ) { $fp = fopen ( $filepath , 'a'); // 以追加的方式打开文件,返回的是指针 do { usleep (100); // 暂停执行程序,参数是以微秒为单位的 } while (! flock ( $fp , LOCK_EX)); // LOCK_EX 取得独占锁定(写入的程序)进行排它型锁定获取锁有锁就写入,没锁就得 $res = fwrite ( $fp , $data . "\n"); // 以追加的方式写入数据到打开的文件 flock ( $fp , LOCK_UN); // LOCK_UN 释放锁定(无论共享或独占)。 function writeData ( $ filepath , $ data ) { $ fp = fopen ( $ filepath , 'a'); // Open the file in append mode and return the pointer do { usleep (100); // Pause program execution, parameter } While (! Flock ( $ fp , LOCK_EX)); // LOCK_EX acquires an exclusive lock (written program) for exclusive lock. Get lock if you have a lock. Write if you have a lock. res = fwrite ( $ fp , $ data . "\ n"); // Write data to the open file in append mode flock ( $ fp , LOCK_UN); // LOCK_UN releases the lock (whether shared or exclusive). ( $fp ); // 关闭打开的文件指针 return $res ; } fclose ( $ fp ); // close the open file pointer return $ res ;}
16. There is a bug in PHP's is_writeable () function, which cannot accurately determine whether a directory / file is writable. Please write a function to determine whether the directory / file is absolutely writable.
The following is the solution of the is_really_writable function in CodeIgniter, see the function comment for details
There are two aspects to the bug,
1. In windows, the is_writeable () function returns false when the file has only read-only properties. When it returns true, the file is not necessarily writable.
If it is a directory, create a new file in the directory and judge by opening the file;
If it is a file, you can test whether the file is writable by opening the file (fopen).
2. In Unix, when safe_mode is enabled in the php configuration file (safe_mode = on), is_writeable () is also not available.
Read if the configuration file is safe_mode on.
* / * * * Tests for file writability * * is_writable () returns TRUE on Windows servers when you really can't write to * the file, based on the read-only attribute. is_writable () is also unreliable * on Unix servers if safe_mode is on. * * @access private * @return void * / ( ! function_exists ('is_really_writable' )) { if (! function_exists ('is_really_writable' )) { is_really_writable( $file ) function is_really_writable ( $ file ) { If we're on a Unix server with safe_mode off we call is_writable // If we're on a Unix server with safe_mode off we call is_writable (DIRECTORY_SEPARATOR == '/' AND @ ini_get ("safe_mode") == FALSE ) { if (DIRECTORY_SEPARATOR == '/' AND @ ini_get ("safe_mode") == FALSE ) { is_writable ( $file ); return is_writable ( $ file ); } For windows servers and safe_mode "on" installations we'll actually // For windows servers and safe_mode "on" installations we'll actually // write a file then read it. Bah ... ( is_dir ( $file )) { if ( is_dir ( $ file )) { = rtrim ( $file , '/') . '/' . md5 ( mt_rand (1, 100) . mt_rand (1, 100 )); $ file = rtrim ( $ file , '/'). '/'. md5 ( mt_rand (1, 100). mt_rand (1, 100 )); (( $fp = @ fopen ( $file , FOPEN_WRITE_CREATE)) === FALSE ) { if (( $ fp = @ fopen ( $ file , FOPEN_WRITE_CREATE)) === FALSE ) { FALSE ; return FALSE ; } ( $fp ); fclose ( $ fp ); chmod ( $file , DIR_WRITE_MODE); @ chmod ( $ file , DIR_WRITE_MODE); unlink ( $file ); @unlink ( $ file ); TRUE ; return TRUE ; elseif (! is_file ( $file ) OR ( $fp = @ fopen ( $file , FOPEN_WRITE_CREATE)) === FALSE ) { } elseif (! is_file ( $ file ) OR ( $ fp = @ fopen ( $ file , FOPEN_WRITE_CREATE)) === FALSE ) { FALSE ; return FALSE ; } ( $fp ); fclose ( $ fp ); TRUE ; return TRUE ; } }
17, stripping non-letter parts of a string in php
('/[^az]/i', '', $str ); preg_replace ('/ [^ az] / i', '', $ str );
18. Remove a non-letter part of a string, and capitalize the first letter after the '_' in the string and the first letter of the string.
getStr( $str ) function getStr ( $ str ) { [^az]用来匹配任何不在a和z之间的字符,i表示不区分大小写。 // [^ az] is used to match any character that is not between a and z, i means case-insensitive. = preg_replace ('/[^a-z_]/i', '', $str ); $ str = preg_replace ('/ [^ a-z _] / i', '', $ str ); = explode ('_', $str ); $ arr = explode ('_', $ str ); ( $arr as $key => $value ){ foreach ( $ arr as $ key => $ value ) { ucfirst()首字母大写 // ucfirst () capitalize [ $key ] = ucfirst ( $value ); $ arr [ $ key ] = ucfirst ( $ value ); } = implode ('', $arr ); $ str = implode ('', $ arr ); $str ; echo $ str ; } 'a2b_ab23c'); getStr ( 'a2b_ab23c');
19, use js to enter a page 10s pop-up prompt box, the content of the prompt box is 'hello world'.
setTimeout ("alert ('hello world')", 10000)
20. Write a sql statement to query all the data in the user_name field in table A twice or more.
user_name , COUNT ( user_name ) AS num SELECT user_name , COUNT ( user_name ) AS num A GROUP BY user_name HAVING num >= 2 ; FROM A GROUP BY user_name HAVING num > = 2 ;
: Note :
- having can only be used after group by to filter the results after grouping (that is, the prerequisite for using having is grouping).
- where must be before group by.
- Aggregate functions are not allowed in conditional expressions after where, while having is allowed.
21.What to do after redis memory is full
Reference blog:
If the set limit is reached, the Redis write command will return an error message (but the read command can return normally.) Or you can use Redis as a cache to use the configuration elimination mechanism. When Redis reaches the memory limit, the old content will be flushed out. There are 5 memory elimination mechanisms, see the reference blog for details.
22.PHP's method of swapping the values of two variables (without using a third variable)
exchange() { /* * * 双方变量为字符串或者数字时,可用此交换方法* 使用异或运算 */ $a = "This is A"; // a变量原始值 $b = "This is B"; // b变量原始值 echo '交换之前$a 的值:' . $a . ', $b 的值:' . $b , '<br>'; // 输出原始值 /* * * 原始二进制: * $a:010101000110100001101001011100110010000001101001011100110010000001000001 * $b:010101000110100001101001011100110010000001101001011100110010000001000010 * * 下面主要使用按位异或交换,具体请参照下列给出的二进制过程, */ $a = $a ^ $b ; // 此刻$a:000000000000000000000000000000000000000000000000000000000000000000000011 $b = $b ^ $a ; // 此刻$b:010101000110100001101001011100110010000001101001011100110010000001000001 $a = $a ^ $b ; // 此刻$a:010101000110100001101001011100110010000001101001011100110010000001000010 echo '交换之后$a 的值:' . $a . ', $b 的值:' . $b , '<br>'; // 输出结果值 } function exchange () { / * * * When both variables are strings or numbers, this exchange method can be used * XOR operation is used * / $ a = "This is A"; // the original value of a variable $ b = "This is B "; // The original value of the b variable echo 'The value of $ a before the exchange:'. $ A. ', The value of $ b:'. $ B , '<br>'; // Output the original value / * * * Original binary: * $ a: 010101000110100001101001011100110010000001101001011100110010000001000001 * $ b: 01010100011010000110100101110011001001001001011100110010000001000010 * * The following mainly uses bitwise XOR exchange. For details, please refer to the binary process given below. * / $ A = $ a ^ $ b ; // now $ a : 000000000000000000000000000000000000000000000000000000000000000000000000000011 $ b = $ b ^ $ a ; // At the moment $ b: 01010100011010000110100101101110011000000001101001011100110010000001000001 $ a = $ a ^ $ b ; // At this moment $ a: 01010100011010000110100110100101110011001 10010011001101011100110010000001000010 echo 'The value of $ a after . The value of $ b: '. $ b ,' <br> '; // output result value } | http://www.itworkman.com/77051.html | CC-MAIN-2020-40 | refinedweb | 2,321 | 59.84 |
Question
Galveston Shipyards is considering the replacement of an eight-year-old riveting machine with a new one that will increase earnings before depreciation from $27,000 to $54,000 per year. The new machine will cost $82,500, and it will have an estimated life of eight years and no salvage value. The new machine will be depreciated over its 5-year MACRS recovery period. The firm’s marginal tax rate is 40 percent, and the firm’s required rate of return is 12 percent. The old machine has been fully depreciated and has no salvage value. Should the old riveting machine be replaced by the new one?
Answer to relevant QuestionsExit Corporation is evaluating a capital budgeting project that costs $320,000 and will generate $67,910 for the next seven years. If Exit’s required rate of return is 12 percent, should the project be purchased?Project P costs $15,000 and is expected to produce benefits (cash flows) of $4,500 per year for five years. Project Q costs $37,500 and is expected to produce cash flows of $11,100 per year for five years.a. Calculate the ...A college intern working at Anderson Paints evaluated potential investments using the firm’s average required rate of return (r), and he produced the following report for the capital budgeting manager:The capital ...Goodtread Rubber Company has two divisions: the tire division, which manufactures tires for new autos, and the recap division, which manufactures recapping materials that are sold to independent tire recapping shops ...Use the computerized model in File C13 to work this problem. Golden State Bakers, Inc. (GSB) has an opportunity to invest in a new dough machine. GSB needs more productive capacity, so the new machine will not replace an ...
Post your question | http://www.solutioninn.com/galveston-shipyards-is-considering-the-replacement-of-an-eightyearold-riveting | CC-MAIN-2016-44 | refinedweb | 299 | 54.73 |
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
#include <file_config.h>
BOOL RecBuf (
U8* buf, /* buffer to store data */
U32 sz); /* buffer size */
The function RecBuf stores data read from the SPI
interface. The parameter buf is a pointer to the
buffer to store the data. The parameter sz
specifies the buffer size in bytes.
The function is part of the SPI
Driver. The prototype is defined in the file
File_Config.h. Developers must customize the function.
spi.BusSpeed, spi.CheckMedia, spi.Init, spi.Send, spi.SendBuf, spi.SetSS, spi.UnInit
/* SPI Device Driver Control Block */
SPI_DRV spi0_drv = {
Init,
UnInit,
Send,
SendBuf,
RecBuf,
BusSpeed,
SetSS,
CheckMedia
};
/* Receive SPI data to buffer. */
static BOOL RecBuf (U8 *buf, U32 sz) {
U32 i;
for (i = 0; i < sz; i++) {
SSPDR = 0xFF;
while (!(SSPSR & RNE)); /* Wait while Rx FIFO is empty. */
buf[i] = SSPDR;
}
return (_. | http://www.keil.com/support/man/docs/rlarm/rlarm_spi_recbuf.htm | CC-MAIN-2020-05 | refinedweb | 146 | 67.65 |
Has only three relevant APIs
xp_parse(s::String) returns a parsed object of type
ETree (used to be called
ParsedData).
LibExpat.find(pd::ETree, element_path::String) is used to search for elements within the parsed data object as returned by
xp_parse
(pd::ETree)[xpath::String] or
xpath(pd::ETree, xpath::String) is also used to search for elements within the parsed
data object as returned by
xp_parse, but using a subset of the xpath specification
Examples for
element_path are:
"foo/bar/baz" returns an array of elements, i.e. ETree objects with tag
"baz" under
foo/bar
"foo//baz" returns an array of elements, i.e. ETree objects with tag
"baz" anywhere under
foo
"foo/bar/baz[1]" returns a
ETree object representing the first element of type
"baz"
"foo/bar/baz[1]{qux}" returns a String representing the attribute
"qux" of the first element of type
"baz" which
has the
"qux" attribute
"foo/bar[2]/baz[1]{qux}" in the case there is more than one
"bar" element, this picks up
"baz" from the 2nd
"bar"
"foo/bar{qux}" returns a String representing the attribute
"qux" of
foo/bar
"foo/bar/baz[1]#string" returns a String representing the "string-value" for the given element path. The string-value is the
concatenation of all text nodes that are descendants of the given node. NOTE: All whitespace is preserved in the concatenated string.
If only one sub-element exists, the index is assumed to be 1 and may be omitted.
"foo/bar/baz[2]{qux}" is the same as
"foo[1]/bar[1]/baz[2]{qux}"
returns an empty list or
nothing if an element in the path is not found
NOTE: If the
element_path starts with a
/ then the search starts from pd as the root pd (the first argument)
If
element_path does NOT start with a
/ then the search starts with the children of the root pd (the first argument)
You can also navigate the returned ETree object directly, i.e., without using
LibExpat.find.
The relevant members of ETree are:
type ETree name # XML Tag attr # Dict of tag attributes as name-value pairs elements # Vector of child nodes (ETree or String) end
The xpath search consists of two parts: the parser and the search. Calling
xpath"some/xpath[expression]"
xpath(xp::String) will construct an XPath object that can be passed as the second argument to the xpath search. The search can be used via
parseddata[xpath"string"] or
xpath(parseddata, xpath"string"). The use of the xpath string macro is not required, but is recommended for performance, and the ability to use $variable interpolation. When xpath is called as a macro, it will parse path elements starting with $ as julia variables and perform limited string interpolation:
xpath"/a/$b/c[contains(.,'\$x$y$(z)!\'')]"
The parser handles most of the XPath 1.0 specification. The following features are currently missing:
julia-observer-quote-cut-paste-0__work#39;or
julia-observer-quote-cut-paste-1__workquot;as escape sequences when using the
xpath""string macro)
If you do not want to store the whole tree in memory, LibExpat offers the abbility to define callbacks for streaming parsing too. To parse a document, you creata a new
XPCallbacks instance and define all callbacks you want to receive.
type XPCallbacks # These are all (yet) available callbacks, by default initialised with a dummy function. # Each callback will be handed as first argument a XPStreamHandler and the following other parameters: start_cdata # (..) -- Start of a CDATA section end_cdata # (..) -- End of a CDATA sections comment # (.., comment::String) -- A comment character_data # (.., txt::String) -- A character data section default # (.., txt::String) -- Handler for any characters in the document which wouldn't otherwise be handled. default_expand # (.., txt::String) -- Default handler that doesn't inhibit the expansion of internal entity reference. start_element # (.., name::String, attrs::Dict{String,String}) -- Start of a tag/element end_element # (.., name::String) -- End of a tag/element start_namespace # (.., prefix::String, uri::String) -- Start of a namespace declaration end_namespace # (.., prefix::String) -- End of the scope of a namespace end
Using an initialized
XPCallbacks object, one can start parsing using
xp_streaming_parse which takes the XML document as a string, the
XPCallbacks object and an arbitrary data object which can be used to reference some context during parsing. This data object is accessible through the
data attribute of the
XPStreamHandler instance passed to each callback.
If your data is too large to fit into memory, as an alternative you can use
xp_streaming_parsefile to parse the XML document line-by-line (the number of lines read and passed to expat is controlled by the keyword argument
bufferlines).
05/10/2013
3 months ago
150 commits | https://juliaobserver.com/packages/LibExpat | CC-MAIN-2019-51 | refinedweb | 778 | 50.06 |
Sem acts in a similar way, but with threads (as opposed to people). You can setup how many threads can enter, before it begins to deny entry and wait until some threads leave to begin accepting requests again, like my friend Gandalf up there :) Hopefully now you understand the gif!
The C standard library provides a very compact way of using semaphores. Functions dealing with semaphores have a sem_ prefix. And the data type for a semaphore is a sem_t. It's also easy enough to import, #include<semaphore.h>. However, the purpose of this blog entry is to explain how to setup semaphores, using the C standard library, under Mac OS X. So this begs the question, how are semaphores under Mac different than in Linux? ( I can't make a claim for Windows).
Most tutorials online for setting up semaphores under Linux make use of something called unnamed semaphores, which are exactly as it sounds -- Semaphores that have no named identification. Semaphores that are unnamed are usually located in an agreed upon memory location so that other threads can access this semaphore, shared memory, or stored on the heap. However, Mac OS X does not make use of this unnamed semaphore scheme, and if you've ever developed a threaded application for Mac OS X, you'd be surprised to know this compiles and would appear to work, but fails miserably:
int main()
{
sem_t semaphore;
sem_init(&semaphore,0,0);
// begin using the semaphore
// call sem_wait() to block until a sem_post() from a different thread
}
If on a separate thread you're sem_wait()'ing, the unnamed semaphore will not block and the supposedly waiting thread will actually start executing, even if the semaphore count is 0!! This bug is usually pretty tough to catch, unless you analyze the return value of the call to sem_init(). It returns -1 on error, and sets errno appropriately. If you do happen to spin up this code:
#include <semaphore.h>
#include <iostream> int main() { sem_t semaphore; int retVal = sem_init(&semaphore,0,0); if(retVal == -1) { perror("Failed to initialize semaphore"); } return 0; }
You'll get this wonderful error message:You'll get this wonderful error message:
Function not implemented
Which can puzzle many a concurrent coder. Mac OS X ( as far as version Mavericks), does not support unnamed semaphores, but instead requires you to use a named semaphore. A named semaphore allows for better IPC between two otherwise unrelated threads, since the semaphore has a distinct name ( which is actually a path internally). For more information about named vs unnamed semaphores, view this stackoverflow link,(). Creating a named semaphore has a different procedure for initializing and destructing it, but the functionality is otherwise the same, except for the fact that you'll be dealing with a sem_t pointer (sem_t*) as opposed to a regular sem_t.
A named semaphore, as you may have intelligently guessed, requires a name upon initialization. Unfortunately, sem_init does not have a parameter for const char*, you'll need to use another function, sem_open. The fine-grain details for using sem_open are highlighted within the man pages, but there is an overload for that function.
- sem_open(const char* name, int oflag)
- sem_open(const char* name, int oflag, mode_t mode, unsigned int value)
The common parameters between those two functions, namely const char* name and int oflag specify the name that you wish to give the semaphore. Note, there is a special "syntax" for the naming conventions of semaphores, which you can find in this link. oflag specifies bitwise flags that control the operation of the creating of the named semaphore ( all the flags can be found in <fcntl.h>). The two main ones you'll encounter will be O_CREAT and O_EXCL. The former tells the function to actually create the named semaphore, if it does not already exist. O_EXCL is more of a "companion" flag, that you'll usually end up OR-ing it with O_CREAT to specify exclusivity. If used in conjunction with O_CREAT, the function call will fail if the named semaphore already exists. Again, the details can be read in the man pages, but to put it into context, these flags are the same as when using open().
Specifically in the overload, mode_t refers to the permissions given to the class of users regarding access to this semaphore. The actual flags can be found in <sys/stat.h> but if you're savvy with chmod, the permission syntax is the same ( it uses octal as its counting system and specifies permissions for user,group, and others).
And perhaps the most trivial to understand parameter, value, specifies the initial value of the semaphore ( how many threads can sem_wait() and get accepted at a time).
A sample named semaphore can be spun up like this:
#include <iostream>
#include <fcntl.h>
int main()
{
sem_t* semaphore = sem_init("/my_semaphore",O_CREAT|O_EXCL,0666,2); // notice how now I am using a sem_t pointer
}
Now, I have a pointer to a semaphore! Errors can happen here as well, such as if the name "/my_semaphore" is already taken (since we specified to fail with the O_EXCL flag). In the case of an error, sem_init will return a constant SEM_FAILED. Proper error handling would include checking if semaphore == SEM_FAILED
If this part clears, then the rest of the semaphore usage is the same old song and dance as an unnamed semaphore (until we get to the destruction phase). I'll assume you know how to use the sem_post and sem_wait functions, as they are the same with named and unnamed semaphores, except that with unnamed semaphores you'll have to pass a reference to your sem_t object into each one of the functions, since its expecting a pointer, but lucky for us, using named semaphores provides us with pointers :)
Destruction of a named semaphore requires sem_close()'ing and sem_unlink()'ing it, since in a way, its sort of like an open file.
sem_close(semaphore);
sem_unlink("/my_semaphore");
The call to sem_close() essentially states that the semaphore is to no longer be used. It is in someways analogous to sem_destroy() for unnamed semaphores. sem_unlink() however, takes a const char* to the name of the named semaphore. If done successfully, it removes/dissociates the semaphore from that name.
I hope this gentle introduction to semaphores on Mac OS X has proved helpful! My next few blog posts will be about more concurrency utilities and methods, all leading up to a big secret project that I'll release shortly as an open source project! :)
The elevator access is ideal for mulch-tenant premises since it restricts access to floors that you have no business on. This is good in terms of security and the
convenience it gives to the tenants. There are two types of controllers you can choose from there is the 1 cab and the 2 cab which can greatly make a difference on your facility.
elevator access control a great degree adaptable and secure. By utilizing a lift access control framework, you can isolate the overall population and unapproved clients, from clients, from approved clients.
elevator access control | http://blog.cs4u.us/2014/04/using-semaphores-on-mac-os-x.html | CC-MAIN-2020-05 | refinedweb | 1,181 | 58.21 |
This page describes how to troubleshoot issues that you might encounter while developing Android games with the Play Games SDK.
Unable to sign in
If you are unable to sign players into your game, first make sure that you have followed the instructions to create your client IDs and configure the games services. If you still encounter sign-in errors, check the following items to make sure that your game is set up correctly.
Check your metadata tags
Your
AndroidManifest.xml must contain a games metadata tag. To verify that
your metadata tags are correctly set up:
Open your
AndroidManifest.xmland verify that it contains a
meta-datatag as shown below:
<meta-data android:
Locate the definition of your
@string/app_idresource. It is usually defined in an XML file located in the
res/xmldirectory, for example
res/xml/strings.xmlor
res/xml/ids.xml.
Verify that the value of the
@string/app_idresource matches your application's numeric ID. The value of this resource should only contain digits. For example:
<string name="app_id">123456789012</string>
Check your package name
Your game's package name must match the package name on your client ID. To verify the package name:
- Open your
AndroidManifest.xmland verify that your game's package name is correct. The package name is the value of the
packageattribute in the
manifesttag.
- Verify the package name you supplied when creating your client ID. To verify the package name in the Google Play Console, go to the Google Play Console and click on the entry corresponding to your game. Go to the Linked Apps tab and examine the list of client IDs. There should be an Android linked app in this list whose package name matches the package name in your
AndroidManifest.xml.
- If there is a mismatch, create a new client ID with the correct package name and try to sign in again.
Check the certificate fingerprint
The certificate with which you are signing your game should match the certificate fingerprint associated to your client ID. To verify this, first check your certificate's SHA1 fingerprint:
Find your certificate file and obtain its SHA1 fingerprint. To obtain the SHA1 fingerprint, run this command:
keytool -exportcert -alias your-key-name -keystore /path/to/your/keystore/file -list -v
Take note of the sequence of hexadecimal digits labeled
SHA1:in the output. That is your certificate's fingerprint.
Next, check that your build tool is using this certificate:
- Generate your game's APK from your build tool and sign it with the desired certificate. Copy the generated APK to a temporary directory.
In the temporary directory, run the following command to unzip your APK.
unzip YourGame.apk
Generate a private key using an RSA certificate file:
keytool -printcert -file META-INF/CERT.RSA
Alternatively, you can generate the private key using a DSA certificate file:
keytool -printcert -file META-INF/CERT.DSA
Note the sequence of hexadecimal digits on the line labeled
SHA1:.
This sequence of digits should match your certificate fingerprint from the previous step. If there is a mismatch, your build tool or system is not configured to sign your application with your certificate. In this case, consult your build environment's documentation to determine how to configure it correctly and try to sign in again.
Next, check if the certificate fingerprint matches the fingerprint configured in your client ID. To do this:
- Open the Google Play Console and navigate to your game.
- On the Game Details page, scroll to the bottom and click the link to the linked Google Cloud Platform project.
- note the certificate fingerprint (SHA1).
If this fingerprint does not match your certificate's fingerprint from the previous steps, you must create a new client ID with the correct certificate fingerprint. You must create the new client ID in the Google Play Console, not in the Google Cloud Platform.
Check that test accounts are enabled
Before a game is published, the account that created the game in the Google Play Console must also be enabled as a tester. To check that this is correctly configured:
- Open the Google Play Console and navigate to your game.
- Open the Testing tab.
- Check that the account you are trying to sign in with is in the list of testers.
If the account you are trying to sign in with is not listed, add it to the list, wait a few minutes and try to sign in again.
Proguard issues
If you are using Proguard and are seeing errors on the obfuscated APK, check the target API level
on your
AndroidManifest.xml. Make sure to set it to 17 or above.
Other causes of setup issues
Check for other common causes of errors:
- If your game is published, check that the game settings are also published (it is possible to publish the application without publishing the games settings). To do this, go to the Google Play Console and navigate to your app, and check that the box next to the game's name indicates that it's published. If indicates that it is in another state, such as "Ready to Publish" or "Ready to Test", click the box and select Publish Game.
- If you can't publish your game, check that exactly one of the client IDs has the This app is preferred for new installations option enabled.
Anonymous listeners
Do not use anonymous listeners. Anonymous listeners are implementations of a listener interface that are defined inline, as illustrated below.
ImageManager im = ...; // Anonymous listener -- dangerous: im.loadImage(new ImageManager.OnImageLoadedListener() { @Override public void onImageLoaded(Uri uri, Drawable drawable) { // ...code... } }
Anonymous listeners are unreliable because the Play Games SDK maintains them as weak references,
which means that they might be reclaimed by the garbage collector before they are
invoked. Instead, you should implement the listener using a persistent object
such as the
Activity.
public class MyActivity extends Activity implements ImageManager.OnImageLoadedListener { private void loadOurImages() { ImageManager im = ...; im.loadImage(this); } @Override public void onImageLoaded(Uri uri, Drawable drawable) { // ...code... } } | https://developers.google.com/games/services/android/troubleshooting?hl=bg | CC-MAIN-2022-33 | refinedweb | 996 | 63.39 |
MPI_IprobeNonblocking test for a message
int MPI_Iprobe( int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status );
Parameters
- source
- [in] source rank, or MPI_ANY_SOURCE (integer)
- tag
- [in] tag value or MPI_ANY_TAG (integer)
- comm
- [in] communicator (handle)
- flag
- [out] True if a message with the specified source, tag, and communicator is available (logical)
- status
- [out] status object (Status)
Remarks_Iprobe.#include "mpi.h"
#include <stdio.h>
int main( int argc, char * argv[] )
{
int rank;
int sendMsg = 123;
int recvMsg = 0;
int flag = 0;
int count;
MPI_Status status;
MPI_Request request;
int errs = 0;
MPI_Init( 0, 0 );
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if(rank == 0)
{
MPI_Isend( &sendMsg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request );
while(!flag)
{
MPI_Iprobe( 0, 0, MPI_COMM_WORLD, &flag, &status );
}
MPI_Get_count( &status, MPI_INT, &count );
if(count != 1)
{
errs++;
}
MPI_Recv( &recvMsg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
if (recvMsg != 123)
{
errs++;
}
MPI_Wait( &request, &status );
}
MPI_Finalize();
return errs;
} | http://mpi.deino.net/mpi_functions/MPI_Iprobe.html | CC-MAIN-2014-42 | refinedweb | 141 | 57.3 |
how databases into a dump: mysqldump --all-databases > all_databases_export.sql To import one of these mysql databases from the dump into a database: mysql --one-database database_name < all_databases_export.sql
i am trying upload databases from phpmyadmin.
hostgator and bluehost have 50Mb limit for sql file.
time i split my databases
using sqldumpsplitter.
and uploads parts one by one. it worked fine last time.
but when try to upload getting connection timeout.
tried to Upload database from MySQL workbench. failed.
i need to try ssh access.
also
i uploaded to home directory and asking them for import it to phpmyadmin.
they are not replied they enabled shell access for my hosted domain.
I have found 4 methods to uploaded but those also not working because of shared server limits I think.
- using PHPMyAdmin.
- from the Cpanel control panel
- by splitting DB to small files
- MySQL workbench these 4 options not worked for me.
- using Putty ssh now I am going to try this. (simple & quick way worked later)
Really I don’t Know what to do every time searches on google new results and new methods.
Every time I get a problem like this I need to search research on google. sometimes may not found the perfect solution and many times not able to understand and do that. because some of them require technical knowledge.
Totally disappointed registered hosting plan 2 days still not uploaded database files which are 150 MB and 194 Mb. just created a thread on web hosting talk forum.
Also, I tried to compare all methods at one place below still confused. HoIe i think may of the newbies like me facing. they may understand my problem.
Ways to Import /Export database Large ad small fewer than 50 Mb
- Using Php Admin
- Using cpanel backup options
- SQL dump splitter by large database into Parts
- Php script to import large database files
- Mysql workbench
- Using SSH putty commands Very easy.
Import exporting database bu SSH Command line Process
- Enable SSH Access
- Generate SSH Key
- Login with putty by entering below details
- Username: Cpanel user namePassword: Your cPanel passwordHostname: your website.com or IPPort: 22
- Sometimes you have to whitelist your ip to gain Jail Shell Access only from your IP.
- Upload your database to your cpanel Home Directory by gzip compressing to reduce database size.
- Create database and username and password to access.
- Run Import or export commands on Putty client as shown below.
Finally I’ve Uploaded my database to phpmyadmin?
uploaded to root folder
logged in putty by entering the shared ip, port 2222, cpanel username and password.
and i entered this command
mysql -u root -p db-name < backup.sql
ex u database username
p password
d
database< path of sql file
mysql -u seobackl_final -pRg22tg22 seobackl_bmarks < /home2/seobackl/public_html/DBS/2.sql.
password no space after P.
this is also got some error connection timed out. also updated a ticket with hostgator. they uploaded. they uploaded but database size bigger than original sql file.
i need to repair db using cpanel>>databases>>checkDB>Repair DB.
Importing database In shared Server?
The Only way is contact support they will do it for you in case file size is larger than 50 MB.
Becuase SSH / Shell access not works on major shared providers. use SQL dump splitter.
Check Phpmyadmin max upload SIZe limit.
How to use ssh in the shared server to import database from home directory to phpmyAdmin?
- To access SSH, download WinSCP or PuTTY. Enter your IP address and port 2222; login with your cPanel username and password.
ssh access you can try this command:
mysql -u {database username} -p -h localhost -D {database name} < YourBackupName.sql
You will be prompted for the password for that database user.
upload to root folder
and try this command
mysql -u<user> -p<password> <database name> < /path/to/dump.sql
Try to import it from mysql console as per the taste of your OS.
mysql -u {DB-USER-NAME} -p {DB-NAME} < {db.file.sql path}
mysqldump -u username -ppassword –all-databases > dump.sql
import export database using ssh
How to import and export a MySQL database using ssh.
To export a database use:
mysqldump -u root -p db-name > backup.sql
mysql -u root -p db-name < backup.sql
Ex:mysqldump –u seobackl –pg93s9wYjC2 seobackl_333 > /home2/seobackl/public_html/DBS/manabadi_atmarks.sql
mysql -u [username] -p [database_name] < [dumpfilename.sql]
Example: mysql -u seobackl_final -pRg22tg22 seobackl_bmarks < /home2/seobackl/public_html/DBS/2.sql
-
-
connected database from standard tcp connection database username and
mysql import database command line | http://theonlineking.com/import-export-sql-file-into-mysql/ | CC-MAIN-2018-09 | refinedweb | 758 | 67.86 |
NAME
kill - send signal to a process
SYNOPSIS
#include <sys/types.h> #include <signal.h> int kill(pid_t pid, int sig);
DESCRIPTION()ed for.
NOTES
The only signals that can be sent task number one, the init process, are those for which init has explicitly installed signal handlers. This is done to assure the system is not brought down accidentally. 1003.1-2003 requires that if a process sends a signal to itself, and that process does not have the signal blocked, and no other thread has it unblocked or is waiting for it in sigwait(), at least one unblocked signal must be delivered to the sending thread before the call of kill() returns. embers of the process group. Notwithstanding this error return, the signal was still delivered to all of the processes for which the caller had permission to signal.
LINUX HISTORY 1003.1-2001, were adopted in kernel 1.3.78.
CONFORMING TO
SVr4, SVID, POSIX.1, X/OPEN, 4.3BSD, POSIX 1003.1-2001
SEE ALSO
_exit(2), killpg(2), signal(2), sigqueue(2), tkill(2), exit(3), capabilities(7), signal(7) | http://manpages.ubuntu.com/manpages/dapper/man2/kill.2.html | CC-MAIN-2015-35 | refinedweb | 185 | 76.72 |
.
Part of the road to solving a programming problem is to take big problems and split them into smaller problems and continue that process until all your problems are small and easy to solve.
So step 1 is to ignore problem 2 and only look at problem 1. Then when problem 1 is solved we can look at problem 2 and try to solve it. However, my posting will be big enough only dealing with problem 1 so I'll ignore problem 2 for now.
Secondly, the road to object oriented program design and analysis is to first take a good look at your problem text. Try to identify nouns and verbs that are important to the problem.
For example in your problem the word 'account' appear quite often, it is safe to say that you should have a class named 'account' in your problem.
An account has a balance (a monetary amount), a PIN code and a name. It does not appear to have much else so this imnplies that an account look like this:
class account {
private:
moneyType balance;
string name;
int pin;
public:
....
};
The pin code could have been a string if you prefer. The balance must be some type that can implement the monetary amount. In C++ a double or an int counting cents would be good enough. so the moneyType can be double, this is the easiest I guess and should be sufficient for our purposes.
Now, the bank is then a collection of these accounts. There are only three acounts so you need a collection of three elements. You can use a map, a list, an array, a tree or simply have three global variables for each of the accounts. The latter is not useful in a real life situation though since it is hard to expand when the bank get a fourth customer. I would therefore suggest some other structure. Since you often need to look up the accounts based on account name you can probably use the name as a lookup key. Again, in real life situations a bank would never use name as lookup key since many people can have the same name. A real bank would associate each customer with a customer ID and use that as the lookup key. The natural thing is therefore to make a map. STL has a map type for you already, I suggest you use that.
map<string,account> accounts;
Also, you might want to say that "bank" is also an object - even if it is only one such object present - and thus make a class bank to hold this account collection:
class bank {
private:
std::map<string,account> accounts;
public:
...
};
Also, in the classes above I have only written the members, the class probably need a lot of accessor functions etc but I leave all that to you. I believe you can figure it out.
However, since I introduced the map type I can show you how to find an account given a name.
account * bank::find(const char * name)
{
std::map<string,account>::
if (p == accounts.end())
return 0; // not found.
return & *p;
}
bank the_bank;
I will also assume a 'using namespace std;' someplace in early in the code so that I don't have to write std:: everywhere.
The next little piece is the main program, it is a program that read info from screen and output a response. It is essentially something that goes like this:
[edited by moderator Mindphaser]
Several things about this code.
1. it is not perfect and I have purposely made it so, there are lots of room for improvement on it. But it does what you want it to do.
2. All the account methods used are not implemented anywhere I believe you know how to do that yourself.
3. This is essentially a 'request/response' engine which receives a request and then respond to it. It is also known as a 'state machine'. In my implementation the state is kept by being in a certain place in the program. A better solution would be to have a separate state variable and have the state determined by that variable. In this case determining the states would be an important part of the solution:
enum state_t {
st_insert_card,
st_check_pin,
st_get_cmd,
st_deposit,
st_withdraw,
st_balance,
};
[edited by moderator Mindphaser]
This code has several advantages over the first even though it does the same thing. It is a state machine so you can easily add new states to it as needed, so the code is more flexible. I believe it is also easier to read.
Each request/response is done by one loop through the while loop while the first solution had a nested loop solution where you had an inner loop to control the transactions from one customer and an outer loop for dealing with new customers.
In any case, your problem 2 is very similar and I am pretty sure you know how to solve problem 2 if you read through this solution for problem 1 given above.
Alf
EE policy is against making homework for students.
This is not the first time that you make homework for them.
You should restrain yourself to explain how to do things rather then giving them source codes.
This apply also for comments posted to answer non-students programmers, you should lead the way, enlight them, etc. but never write the whole code for them.
Doing so you prevent them from learning !
However, I did try to explain how one thinks when translating a problem to a computer program and it is that process that is of importance in my posting. The code itself is only to illustrate it. The main reason why there's so much code in my answer was that I showed lots of code that did the same thing, i.e. several ways to do the same thing. None of the code is by itself enough to make the program. For example the account class is essentially missing - and I did that on purpose.
I did include the code for locating an account given a name if you use std::map, since I assumed this might not be something crystalfish is familiar with.
I also included large portions of the main program for the explicite reason that I showed several ways of doing it. In any case, this main program in itself doesn't really contain the hard logic, it is just a bunch of cout and cins. It is the code in the account class that contain the main logic.
I did possibly provide too much in that I listed up the data members of the account class. This - I admit - is a big spoiler, but it is also something that follows automatically when you list up the nouns used in describing the problem. As such the problem description is very important in that it describes all the object types encountered in the problem and the relationships between them.
No, I do not try to go against EE policy and provide homework solutions to people - at least not intentionally. There are times when I did write a bit too much and I have apologized for that. I am not sure that this is one of them though.
Alf
We value your feedback.
Take our survey and automatically be enter to win anyone of the following:
Yeti Cooler, Amazon eGift Card, and Movie eGift Card!
I know that it is hard to define 'academic honesty', but this goes too far. We usually accept that our experts help students with their homework as long as they have a specific question they need help with. However we do not accept to get their homework done. Our Member Agreement at has "violating the guidelines for academic honesty or other unethical behavior" in its text and warns you that both asker and expert can get their account suspended if they violate these rules.
I removed the main parts of the program you posted and send a copy of the original to Admin. Please consider this as a warning, nothing else, I know that you are a very active member of this community. But the next time you are all exited about solving somebody's problem, take a good look at the question first and if it is homework take a deep breath and request a specific problem!
** Mindphaser - Community Support Moderator **
Thank you for your support. I know that Mafalda and Mindphaser is probably thinking that I am a just lazy person who just wants other people to do the complete programs for for him.
Guess what!! If I'm that kind of asshole, I could just ask you guys to give me the straight programming codes. Alf is a really good person. For those guys who are already experts in programming, how can they understand my difficult time learning C++ and other programming languages.
If Mindphaser is gonna delete my account, FINE! I am just a beginner. I don't have any rights to anything here.
Salte, thanks again for your help. I think I will never come to this site again.
I never said I am suspending your account. I pointed Salte to our Member Agreement and told him to be careful ... for his own sake.
I removed some of his code for these reasons and since you said you wouldn't need that complete code ther should be plenty of information to continue with your problems.
As I pointed out, there is no problem with helping but in these academic cases we agreed to provide help for specific questions, not for the whole problem.
Unfortunatelly we HAVE many students join EE and ask us to do their complete work ...
** Mindphaser - Community Support Moderator ** | https://www.experts-exchange.com/questions/20554153/I'm-getting-so-confused-with-the-C-Analysis.html | CC-MAIN-2018-05 | refinedweb | 1,625 | 70.73 |
One of the easiest ways to master NCover is simply to dive into some code and improve tests, so this week we'll be looking at improving the test quality for an open source project, Json.NET, which is available on CodePlex here. Json.NET is a tool for reading and writing JSON from .NET, and includes a JsonSerializer for easily serializing .NET objects into Json.
Json stands for JavaScript Object Notation, and is a way that AJAX developers communicate between the browser and the web server, and increasingly a way that web services developers communicate between various web services clients and the web server. You can learn more about Json at Introducing JSON.
We'll start off by compiling and running the tests for Json.NET through NCoverExplorer in order to visualize code coverage, and then proceed to actually improve the quality of the test suite for Json.NET by increasing the project's code coverage.
During this article we'll be utilizing Visual Studio 2005, NUnit, and of course, NCover. If you use a tool other than Visual Studio to build your projects, you should be fine, as long as you can compile the Json.NET project. If you do not already have NUnit installed, you can simply download the latest MSI installer from the Nunit home page, install it, and add the NUnit bin directory to your path. NCover can be downloaded from here, and you'll need either a paid or trial license to complete the steps outlined in this article.
Building Json.NET is fairly straightforward. Download the project's source code from the Json.NET website. We'll be working with version 1.3.1 in this article. The source is included with the 1.3.1 release, which you can download here. Once you have downloaded the code, unzip it to your hard drive and open the solution file located at Src\Newtonsoft.Json.sln in Visual Studio 2005.
Building the project is as easy as clicking on Build -> Build Solution within Visual Studio. After a moment you'll see some warnings show up in the Error List in Visual Studio, but the build should complete successfully.
Now that the solution is finished building, let's test it in NUnit to make sure that all of the tests succeed. Open up a console window and cd into the location where you unzipped Json.NET. Next, cd into the Src\Newtonsoft.Json.Tests\bin\Debug folder, which is where your build output should be. Listing the files in the directory should reveal several DLL and PDB files, as well as a couple of XML documents.
To test Json.NET with NUnit, execute the following command at the command line:
nunit-console Newtonsoft.Json.Tests.dll
You'll see some output scroll by as the tests are being executed and should see that… wait a minute! One of the tests failed. Let's go ahead and fix that before we continue.
You'll notice that the output of the test failure should look something like this:
We can tell right off that the failure most likely happened because the developer of Json.NET coded the test against his time zone, rather than against whatever time zone the test is being run from, so a couple of changes around line 339 of the XmlNodeConverterTest.cs file should fix the issue.
So, let's add a couple of lines before line 337 to get a
DateTime in our own time zone:
// Get a DateTime set to our current locale to put into our expected XML DateTime dt = new DateTime(1970, 1, 1); dt = dt.AddMilliseconds(954374400000);
And then update what was line 337 to serialize the
DateTime that we created where the old serialized
DateTime was:
string expected = @"<?xml version=""1.0"" standalone=""no""?>" + @"<root><person id=""1""><Float>2.5</Float><Integer>99</Integer></person>" + @"<person id=""2""><Boolean>true</Boolean><date>" + XmlConvert.ToString(dt, "yyyy-MM-ddTHH:mm:ss.fffffffzzzzzz") + @"</date></person></root>";
Now rebuild the project and run it through NUnit on the console again, and you should see that it completes successfully.
Installing NCover should have added a shortcut to NCoverExplorer in your Start Menu, so go ahead and open it up. Now, go ahead and click on the Run NCover button on the toolbar or select Run NCover... from the File menu. You'll be presented with the NCoverExplorer Run dialogue.
The run dialogue presents you with all of the options available from the NCover.Console command line. We'll set the following options on the Application to Profile page:
/noshadowflag simply tells NUnit to not shadow copy the assemblies that it is testing. Shadow copying gives NUnit problems at times, so we recommend you always test with it off.
Additionally, ensure that the "Close window & load coverage file" checkbox in the bottom left corner of the Run dialog is unchecked, so that we can review the output of NCover after it completes.
Now click the Run button in the bottom right corner of the dialog, and you should see output begin to scroll in the Run dialog. The first few lines of the output show the NCover settings, and then you should notice the output of NUnit, just as it was displayed when we ran the NUnit tests before. Finally, you should see a summary of the coverage data and the return code from NUnit (which should be 0).
Now that you've collected coverage data on the application, click the "Close" button in the Run dialog to expose the main NCoverExplorer interface. You'll notice that you can see the two project namespaces,
Newtonsoft.Json and
Newtonsoft.Json.Tests in the far left window. You can also see percentages next to the namespaces. The percentages represent the percentage of lines of code in the namespaces that were visited during the execution of the tests.
Expanding the
Newtonsoft.Json namespace will reveal the namespaces within the namespace Expanding those namespaces will reveal classes, and expanding the classes will reveal methods. Clicking on a class or method will take us to the source file where the class or method is declared. You'll see within the source view that lines of code that were not visited during the course of the tests are highlighted in red, and lines that were visited are highlighted in green.
We're going to target a few of the easier to cover methods in the namespace, and attempt to improve their coverage to 100%. Let's start with the
ParseProperty method in the
Newtonsoft.Json.JsonReader class. It's got a single line of code that isn't covered — line 380. The code surrounding line 380 is presented below, with line 380 highlighted for emphasis:
if (ValidIdentifierChar(_currentChar)) { ParseUnquotedProperty(); } else if (_currentChar == '"' || _currentChar == '\'') { ParseQuotedProperty(_currentChar); } else { throw new JsonReaderException("Invalid property identifier character: " + _currentChar); }
We can tell fairly easily that the line will only be reached if a Json property doesn't return true from the ValidIdentifierChar method (which says that a property should begin with a letter, a digit, an underscore, or a dollar sign), or if it doesn't match a " or a '. So, let's write a quick test that creates just such an invalid property and attempts to parse it through a JsonReader. Our test should check that a
JsonReaderException is thrown.
We'll add a test to the bottom of JsonReaderTest.cs with the following code to cover that exception being thrown:
[Test] [ExpectedException(typeof(JsonReaderException))] public void TestInvalidPropertyName() { string input = @"{@CPU: 'Intel'}"; StringReader sr = new StringReader(input); using (JsonReader jsonReader = new JsonReader(sr)) { // just read until the reader is exhausted while (jsonReader.Read()) { }; } }
The test basically just creates some Json with an invalid identifier (@CPU) and then reads the Json out with a JsonReader until it gets an exception. How do we know that the exception is thrown at the correct place, though?
Well, build your project and go back to NCover Explorer. Open the Run dialog again, and hit the run button to run the tests through NCover. Once the tests are finished executing and you have confirmed that they all succeeded close the run dialog and click on File -> Reload to load the new coverage data from the execution. You should see that line 380 of JsonReader.cs is now covered, so we know that our changes caused that line to be visited and that the exception was thrown, since our test passed.
We'll go ahead and improve the coverage for one more method in JsonReader before we wrap this article up. Let's focus our attention on a method that is completely uncovered. A little digging around in NCoverExplorer reveals that the private method ParseComment (which starts on line 724) is completely uncovered. Let's focus on that method.
Let's start with a baseline test that includes a Json comment in it. That should cause some lines of
ParseComment to be covered, and will let us know that we're on the right track. I added the following test, which causes some Json with a comment in it to be parsed:
// Tests a comment [Test] public void TestParseComment() { string input = @"{CPU: 'Intel', /* Some Comment */ Manufacturer: 'Dell'}"; StringReader sr = new StringReader(input); using (JsonReader jsonReader = new JsonReader(sr)) { // just read until the reader is exhausted while (jsonReader.Read()) { }; } }
Run that through NCoverExplorer, and you should see that most of the lines of
ParseComment are now covered. It looks like we're on the right track. Now, let's get some of those uncovered conditions in the code covered.
The first bit of uncovered code in the method, lines 744 and 745, is the handling of asterisks in the middle of a comment. As Json.NET parses a comment, it looks for an asterisk, and then if the asterisk is followed by a forward slash it finishes handling the comment. But, if the asterisk is not followed by a forward slash, it simply adds the forward slash to the buffer and continues. So, let's add an asterisk without a trailing forward slash to the middle of our comment, and that should cover the condition. We'll just change the first line of the test we just wrote:
string input = @"{CPU: 'Intel', /* Some * Comment */ Manufacturer: 'Dell'}";
Run that change through NCoverExplorer and you'll see that we only have one more line of code (line 757) in the method to cover and we'll be finished. I think that next line requires its own test. That line looks to be throwing an exception whenever someone has a forward slash in their code without a following asterisk, which basically means that they entered half of the start comment token.
The following test should do the trick:
// Tests a comment with a missing opening asterisk [Test] [ExpectedException(typeof(JsonReaderException))] public void TestParseCommentMissingOpeningAsterisk() { string input = @"{CPU: 'Intel', / Some Comment With Missing Asterisk */ Manufacturer: 'Dell'}"; StringReader sr = new StringReader(input); using (JsonReader jsonReader = new JsonReader(sr)) { // just read until the reader is exhausted while (jsonReader.Read()) { }; } }
Run that through NCoverExplorer and you'll see that it does in fact cover the last uncovered line of the method.
In this article we saw just how easy it is to improve the quality of your NUnit tests using NCoverExplorer. We can now feel confident that the
ParseComment and
ParseProperty methods in the
JsonReader class have been thoroughly tested. A great exercise would be getting the
JsonReader class to have 100% coverage.
There's no better way to hone your skills as a programmer than going out and reading someone else's code, and even better, improving their code. Make it a habit to find some open source projects that you would like to be involved in and help them reach 100% coverage. By the time you improve a few classes in a project, you should see that you have a solid understanding of their project and that you are ready to add new features.
All code from Json.NET is licensed under the following:
Copyright (c) 2007 James New.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/showcase/NCover_Basics.aspx | crawl-002 | refinedweb | 2,022 | 61.77 |
Chapter. Both documents are vital, not only to show the corporate health of the organisation, but also as an indication to various shareholders of how well or badly the organisation is performing, as proof to potential investors or lenders for the raising of capital and as a statutory record for taxation and other purposes.
This chapter is intended to provide:
· An introduction to the basic principles of the accounting equation
· An introduction to, and the construction of, manufacturing, trading and profit and loss accounts and their use
· An understanding of the principles and construction of a balance sheet and its interpretation
· A detailed explanation of the interpretation of company accounts using ratio analyses and the uses of these.
This chapter is structured in a logical way, building up from the basic tenets of financial analysis - the dual effect and the accounting equation. From this, the chapter looks at the construction of manufacturing, trading and profit and loss accounts and the drawing up of a balance sheet. Ratio analysis is a particularly powerful technique aimed at helping marketers to compare sets of figures over time and between companies. This is dealt with in considerable detail.
All aspects of accounts are governed by these two principles.
a) First principle: Dual effect
Every transaction has two effects, not one, e.g. if a Cerial Marketing Board (C.M.B.) purchases grain it has:· more stock
· less cash.
If the Dairiboard Company of Zimbabwe sells milk to a retailer, it has:· less stock
· an amount owed by the customer if he does not pay immediately.
b) Second principle: The accounting equation
The second principle stems from the first. Every transaction has two effects; these two are equal and balance each other. Thus, at any given moment the net assets of a business are equal to the funds which the owner or proprietor has invested in the business.
Net Assets = Proprietor's funds
is the ultimate accounting equation. An explanation of the terms is as follows;· Net assets are defined as a business's total assets less total liabilities.
· An asset is defined as something owned by a business, available for use in the business.
· A liability is defined as the amount owed by the business, i.e. an obligation to pay money at a future date.
· Proprietor's funds represents the total amount which the business owes to its owner or proprietor. This consists of:Capital: (amount proprietor invested in the business)
plus
Profits: (funds generated by the business)
or minus
Losses: (funds lost by the business) minus
Drawings: (amounts taken out of the business).
We normally arrive at a business's profit or loss by means of a profit and loss account, but where information about income and expenditure is lacking, the accounting equation can be a useful way of finding profit. If:
Net assets = Proprietor's funds
then an increase in net assets = an increase in a proprietor's funds.
Considering what causes an increase in the proprietor's funds, we can say that INCREASE IN NET ASSETS (from the beginning of a period to the end) is equal to:
New capital introduced + Profit - Drawings
during the same period. If three of these four amounts are known, the fourth can be calculated.
There are many firms, whether parastatal, sole trader, partnership or limited company, which manufacture the final product to be sold from raw materials, e.g. a fertiliser company uses phosphates, ammonia and so on to produce finished fertiliser pellets.:
a) Different types of cost
The costs needed to prepare a manufacturing account can be broken down into two main categories known as direct and indirect costs..
b) The effect of stocks
One complication in constructing the manufacturing account is to remember that there may be opening and closing stocks of raw materials and opening and closing values to attach to partly completed items (work in progress).
Figure 2.1 Pro forma manufacturing account
Now attempt exercise 2.1 and 2.2.
These adjustments can be seen in the pro forma manufacturing account which follows. (see figure 2.1.)
Exercise 2.1 A simple manufacturing account
The following are details of production costs of Aroma Pvt Ltd for the year ended 31 December 19X5.
Prepare a manufacturing account for the year ended 31 December 19X5.
Exercise 2.2 A manufacturing account with an adjustment of work-in-progress.
The purpose of the trading account is to show the gross profit on the sale of goods..e. the carriage inwards of those goods.
Preparing a trading account
The trading account is calculated by using a sequence of steps. It is essential that these steps are carried out in the order indicated.
a) The first step is to transfer the balance on the sales account to the trading account:
b) Next, debit the trading account with the cost of goods sold, starting with the opening stock:
The opening stock is obviously the same as the closing stock of the previous period; in the first year of trading, of course, there will be no opening stock.
c) The balance on the purchases account is then transferred to the trading account and added to the opening stock figure:
d) Transfer any balance on the carriage inwards account to the trading account:
Add the carriage to the total arrived at in c) above. This gives the total cost of goods available for sale.
e) Deduct the value of closing stock from the:
(by deduction from the debit side).
We have now arrived at the cost of sales.
f) The balance on the trading account will be the difference between sales and cost of sales, i.e. gross profit, which is carried down to the profit and loss account.
Point to Note:.
Point to Note:
The order of items is most important. Sales returns must be deducted from sales; purchases returns must be deducted from purchases; carriage inwards, if any, must be debited in the account before closing stock is deducted. Figure 2.2 shows a pro forma trading account.
Figure 2.2 Pro forma trading account
N.B. A trading account is prepared very much like a manufacturing account but substituting the production cost of completed goods for the usual purchasing figure (see exercise 2.3: Preparation of trading account)
Appendix I shows a sample trading account for the Cerial Marketing Board, Zimbabwe.
Now attempt exercise 2.3.
Exercise 2.3 Preparation of trading account
Prepare a trading account from the following balances included in the trial balance of K. Smith at 31 December 19X8.
Introduction: The remaining nominal accounts in the ledger represent non-trading income, gains and profits of the business in the case of credit balances, e.g. rent, discount and interest receivable. Debit balances represent expenses and losses of the business and are known as overheads, e.g. salaries and wages, rent and rates payable, lighting, heating, cleaning and sundry office expenses.. For manufacturers the cost of goods sold involves the cost of manufacturing products (raw materials, labour and overheads). For retailers, the cost of goods sold involves the cost of merchandise purchased for resale (purchase price plus freight charges).
The balance sheet shows that the profit for an accounting period increases proprietor's funds. The trading and profit and loss account shows, in detail, how that profit or loss has arisen. The profit and loss statement consists of these major components:-
· Gross sales - the total resources generated by the firm's products and services
· Net sales - the revenues received by the firm after subtracting returns and discounts (such as trade, quantity, cash)
· Cost of goods sold - the cost of merchandise sold by the manufacturer or retailer.
· Gross margin (profit) - the difference between sales and the cost of goods sold: consists of operating expenses plus net profits
· Operating expenses - the costs of running a business, including marketing
· Net profit before taxes - the profit earned after all costs have been deducted. Figure 2.3 shows a pro forma trading and profit and loss account.
Figure 2.3 Trading, profit and loss a/c for the year ended 31 Dec 19X0
Explanations
It is essential that the difference between a trading and profit and loss account is clearly understood. The following provides an explanation.
· The trading account shows the gross profit generated by the business.:Opening Stock + Opening Purchases (for year or period) - Closing Stock (cost of goods unsold at the end of the same period).
This gives the cost of goods which were sold. Sales and cost of goods sold should relate to the same number of units.
· The profit and loss account shows items of income or expenditure which although earned or expended by the business are incidental to it and not part of the actual manufacturing, buying or selling of goods.
· In a complicated manufacturing industry and in service industries, different definitions of "goods", "net profit" and "cost of sales" may exist.
Capital and revenue expenditure
Only revenue expenditure (e.g. heating bills) is charged to the profit and loss account; capital expenditure (e.g. the purchase of a new plant) is not.
a) Revenue expenditure is expended on:· acquiring assets for conversion into cash (resale goods)
· manufacturing, selling and distribution of goods and day-to-day administration of the business
· maintenance of fixed assets (e.g. repairs);
It is well to note that "cash" need not be paid or received to be accounted.
b) Capital expenditure is expended on:· start up of the business
· acquisition of fixed assets (not for resale)
· alterations or improvements of assets to improve their revenue earning capacity.
Capital expenditure is not charged to the profit and loss account as the benefits are spread over a considerable period of time.
Now attempt exercise 2.4.
Exercise 2.4 Trading and profit and loss account
Nigel Munyati and his friends opened a small scale horticultural "co-operative" in Concession, growing and retailing. The business started on 1 August 19X6.
The following is a summary of the transactions for the first year:
You are required to prepare a trading and profit and loss account for the year ended 31 July 19X7..e. the accounting equation.
Assets - Liabilities = Capital + Profit - Drawings
expressed in the form of a balance sheet is as follows:-
This is a simplified form; in reality the assets and liabilities will be further sub-divided and analysed to give more detailed information. Figure 2.4 shows a pro forma balance sheet.
Figure 2.4 Pro forma balance sheet
Balance sheet at 31 December 19X0
Explanations
As with trading and profit and loss accounts, the balance sheet has its own nomenclature. These are fixed accounts, current accounts, current liabilities and funds:
A) Fixed assets: assets acquired for use within the business with a view to earning profits, but not for resale. They are normally valued at cost less accumulated depreciation.
B) Current assets: assets acquired for conversion into cash in the ordinary course of business; they should not be valued at a figure greater than their net realisable value.
C) Current liabilities: amounts owed by the business, payable within one year.
D) Net current assets: funds of the business available for day-to-day transactions. This can also be called working capital.
E) Loans: funds provided for the business on a medium to long term basis by an individual or organisation other than the proprietor.
F) This total is the total of the business's net assets.
G) This total is the total of proprietor's funds, i.e. the extent of his investment in the business. Within these main headings the following items should be noted.· Fixed assets
Depreciation is an amount charged in the accounts to write off the cost of an asset over its useful life.
· Current assets
Debtors are people who owe amounts to the business.
Prepayments are items paid before the balance sheet date but relating to a subsequent period.
· Current liabilities
Trade creditors are those suppliers to whom the business owes money.
Accrued charges are amounts owed by the business, but not yet paid, for other expenses at the date of the balance sheet.
Note:
Working capital. This is a term given to net current assets, or total current assets less total current liabilities, e.g.
Working capital is important because it is the fund of ready resources that a business has in excess of the amount required to pay its current liabilities as they fall due. Working capital is important; lack of it leads to business failure.
Appendix i shows a sample balance sheet and a full set of accounts for the Cerial Marketing Board of Zimbabwe.
Now attempt exercise 2.5.
Exercise 2.5 Balance sheet
Prepare a balance sheet for year ended 31 July 19X7 for Nigel Munyati's horticultural co-operative.
Accounting for stocks: Almost every company carries stocks of some sort. In an agricultural business, these may be fertilisers, chemicals, produce, etc. Accounting for stocks presents a problem, because stocks in hand at the end of the financial year are regarded as current assets, whereas stocks used during the year form part of the company's costs. Hence, stocks (assets) appear in the balance sheet, and stocks (used) must be accounted for in the trading and profit and loss account.
Valuation of stocks: Valuing closing stocks has always been a problem and a source of disagreement. There are many methods of establishing the value of stocks. Three common alternatives are average cost, first in first out (Fifo) and last in first out (Lifo).
i) Average cost
Cost is calculated by taking the average price computed by dividing the total costs of production by the total number of units produced. This average price may be derived by means of a continuous update, a periodic calculation, or a moving period calculation. This method is often used to calculate the cost of low value items, e.g. in the manufacture of nails.
ii) First in first out (Fifo)
The calculation of the cost of stocks and work-in-progress is on the basis that the stocks in hand at the year end represent the latest purchases or production, as the items going into stock at the earliest date are assumed to leave first, e.g. a greengrocer will obviously wish to sell the oldest stocks first.
iii) Last in first out (Lifo)
The calculation of the cost of stocks and work-in-progress is on the basis that the stocks in hand represent the earliest purchases or production, as it is assumed that the latest stocks into store are the first to be taken out, e.g. a 'bin' system, where purchases are added to the top and sales will be removed from the top.
Consider the following example comparing the effect of valuing stock of 240 units:
FIFO trading account
LIFO trading account
Note: In both cases, there are 240 items in stock. Valuing stocks using the latest prices, the gross profit is $320, whereas using the earliest prices the figure is $220.
The lower of cost and net realisable value: The most fundamental accounting concept with regards to the valuation of stocks and work-in-progress is that they need to be stated at cost, or if lower, at net realisable value. Net realisable value is the amount at which it is expected that items of stock and work-in-progress could be sold after allowing for the costs of completion and disposal. If net realisable value is higher than cost, then cost is taken, as valuing stocks at a higher value would not be prudent, i.e. profit would have been taken into account before it is actually earned. It is important to check against the net realisable value to ensure that the current asset, stock, is not stated at a figure above that for which it could be realised at the balance sheet date.
Stock provision: If it is decided to reduce the value of certain items of stock from cost to net realisable value, e.g. obsolete, slow moving or unsaleable stocks, this is done by means of a provision.
Stock is reduced in value, and a charge is made against profits. The full amount is deducted from stock in the balance sheet, but only the decrease between the beginning and end of a period is shown in that period's trading and profit and loss account.
Now attempt exercise 2.6.
Exercise 2.6 Valuation of stocks
Kubi Dwili began business as a small scale peanut importer in July 19X6. Purchases of peanuts were made as follows:
On 10 October, 100 tons of peanuts were sold and on 31 December 70 tons were sold. The total proceeds of the sale were $8,500.
You are required to calculate the value of closing stock and to prepare the trading account on the following bases:
a) first in first out
b) last in first out
c) average cost.
Why ratios: Ratios are the means of presenting information, in the form of a ratio or percentage, which enables a comparison to be made between one significant figure and another. Often the same ratios of like firms are used to compare the performance of one firm with another. A "one off" ratio is often useless - trends need to be established by company ratios over a number of years.
· The great volume of statistics made available in the annual accounts of companies must be simplified in some way. Present and potential investors can therefore quickly assess whether the company is a good investment or not.
· Financial ratio analysis is helpful in assessing an organisation's internal strengths and weaknesses. Potential suppliers will, for example, want to judge credit worthiness.
· Ratios by themselves provide no information; they simply indicate by exceptions where further study may improve company performance. Management can compare current performance with previous periods and competing companies.
Which areas are used for analysis
Four key areas are generally used for analysis:
· profitability
· liquidity
· leverage (capital structure)
· activity or management effectiveness (efficiency).
a) Profitability
In most organisations profits are limited by the cost of production and by the marketability of the product. Therefore, "profit maximisation" entails the most efficient allocation of resources by management, and "profitability ratios" when compared to others in the industry will indicate how well management has performed this task.
Key questions to be identified in profitability analysis include:
· Does the company make a profit?
· Is the profit reasonable in relation to the capital employed in the business?
· Are the profits adequate to meet the returns required by the providers of capital, for the maintenance of the business and to provide for growth?
· How are sales and trading profit split among the major activities?
· To what extent are changes due to price change?
· To what extent does volume change?
· Does inter-company transfer pricing policy distort the analysis?
· Has the appropriate proportion of profit been taken in tax charged?
· What deferred taxation policy is being followed?
· Has the share of profit (or loss) attributable to minority interests in subsidiaries changed? If so, is it clear why?
· Are profits and losses on sales of fixed assets:-treated as adjustments of depreciation charges?
-disclosed separately "above the line" in the profit and loss account?
-treated as "below the line" items in the profit and loss account?
-transferred directly to reserves?
· What has been included in Extraordinary Items?
· Should any of these items be regarded as part of the ordinary business of the company?
· Do any items tend to recur year after year?
· Is it clear which items have been transferred directly to reserves without going through the profit and loss account?
· Is such treatment appropriate in each case?
b) Liquidity
"Liquidity measures" are based on the notion that a business cannot operate if it is unable to pay its bills. A sufficient amount of cash and other short-term assets must be available when needed. On the other hand, because most short term assets do not produce any return, a strong liquidity position will be damaging to profits. Therefore, management must try to keep the firm's liquidity as low as possible whilst ensuring that short term obligations will be met. This means that industries with stable and predictable conditions will generally require smaller current ratios than will more volatile industries.
Key questions to be identified in liquidity analysis include:
· Has the business sufficient liquid resources to meet immediate demands from creditors?
· Has the business sufficient resources to meet the requirements of creditors due for payment in the next 12 months i.e. creditors payable within one year?
· Has the business sufficient resources to meet the demands of its fixed asset replacement programme and its commitments to providers of long-term capital falling due for repayment in say, the next five years?
c) Leverage
"Leverage ratios" show how a company's operations are financed. Too much equity in a firm often means the management is not taking advantage of the leverage available with long-term debt. On the other hand, outside financing will become more expensive as the debt-to-equity ratio increases. Thus, the leverage of an organisation has to be considered with respect both to its profitability and the volatility of the industry.
Key questions to be identified in leverage analysis include:
· What sort of capital has the company issued?
· Who owns the capital?
· What is the cost of capital in terms of interest or dividend?
· What proportions of the capital have a financed return (gearing or leverage)?
· Is the mix of capital optimum for the company?
· Is further capital available if required?
· Is total capital employed analysed among different classes of business?
· If so, can return on capital be calculated for each class?
· Has issued Ordinary share capital increased during the period?
· If so, why? e.g. Rights issue? Bonus (scrip) issue? Acquisition?
· Are "per share" figures calculated using appropriately weighted number of shares?
· Are prior years' figures comparable?
· What individual items have caused significant movements on Reserves?
· Do any of them really belong in the profit and loss account?
· Is any long term debt convertible into ordinary shares?
· On what terms?
· Calculate appropriate measures on "fully diluted
· basis
· Is any long term debt repayable within a short period?
· If so, should it be treated as a current liability?
· Are there significant borrowings in foreign currencies?
· Are they matched by foreign assets?
· How are exchange losses and gains thereon treated?
· Is there any preference capital?
· Is short term borrowing included in capital employed? Should it be?
· Is the treatment of pensions appropriate? Is information revealed?
· Would capitalising leases significantly affect long term debt and gearing ratios?
d) Activity
"Activity ratios" are used to measure the productivity and efficiency of a firm. When compared to the industry average, the fixed-asset turnover ratio, for example, will show how well the company is using its productive capacity. Similarly, the inventory turnover ratio will indicate whether the company used too much inventory in generating sales and whether the company may be carrying obsolete inventory.
Key questions to be identified in activity analysis are:
· Does management control the costs of the business well?
· Which costs, if any, have changed significantly, thus reducing or improving apparent profitability?
· Does management control the investment in assets well?
· Are fixed assets sufficient for the current level of activity? Are they replaced on a regular basis and adequately maintained?
· Are the stock levels adequate for the level of activity, or excessive?
· Are debts collected promptly?
· Are creditors paid within a reasonable period of time?
· Are surplus cash resources invested to increase overall returns?
· How variable are the profits before interest and tax?
· How many times can the interest be paid from the available profit?
· How many times can the existing dividend be paid from the available profit?
e) Other
Other questions can be asked in interpreting final accounts. These may relate to long-term trends in the business or to fixed assets, e.g.
i) Long-term trends in the business· Are profits increasing or decreasing?
· Is the size of the business growing faster or slower than inflation?
· How has past growth been financed?
· Are the levels of stocks, debtors and creditors consistent with the long-term growth of the business?
· Are dividends increasing?
· Have any radical changes occurred in the past, giving rise to major changes in the business?
ii) Fixed assets· Where fixed assets are shown "at historical cost":-How old are they? What is their estimated current value?
-How would revaluation affect the depreciation charge?
· Where fixed assets are shown "at valuation":-When was the valuation made, and on what basis?
-How have values changed since that date?
-Might the assets be more valuable if used for other purposes?
· What method of depreciation is used for valuation?
· What asset lives are used? Are different lives used for Current Cost Accounting?
· Has adequate provision been made for technological obsolescence?
· Are any assets leased? What is their value?
· How much are the annual rentals? How long is the commitment?
· Is goodwill:-Shown as an asset?
-Written off against reserves?
-Being amortised by charges against profit?
· How does the book value of goodwill compare with the estimated surplus of the current value of fixed assets over their net book value?
· Has the status of any investments changed during the period? Subsidiaries? Associated companies? Trade investments? Non-consolidated subsidiaries?
· Are investments in associated companies shown by the "cost" method or by the "equity" method?
· What is the difference between cost and market value of quoted investments? Is market value used if it is lower than cost?
· Are there any long-term debtors? How have they been treated in the balance sheet?
Methods used to evaluate organisational performance
To evaluate the performance of a company with respect to these ratios, three methods are used, namely industry comparisons, time series analysis and absolute standards.
a) Industry comparisons
Data are used, such as that provided by commercial firms like Dun and Bradstreet and Profit Impact of Marketing Strategies (PIMS), are used for comparing the company with others of about the same size, that serve the same market and have similar products. The danger is that when industry averages include companies with different products or markets, averages can be misleading.
b) Time series analysis
Ratios for several periods are used to determine whether significant changes have occurred. These time series can also be used to project the future financial performance of the company.
c) Absolute standards
Most organisations have some minimum requirements for corporate performance regulations of the particular industry, e.g. the long-term debt-to-equity ratio should not exceed one. A thorough financial analysis usually is a condition of these three approaches.
Figure 2.5 shows an example of how a time series analysis can be used to back financial and business objectives.
Figure 2.5 Framework for linking financial business objectives
1) Profitability
a) Gross profit margin or profit margin on sales:
or
b) Net profit margin:
c) Return on assets:
d) Return on equity
Note that:
NAV = Net asset value
i.e. Net asset turnover X profit margin = return on assets
2) Liquidity
a) Current ratio
b) Quick (liquidity or acid test ratio):
c) Defensive interval ratio:
d) Inventory to net working capital:
3(i) Leverage (coverage ratios or gearing) - debt cover
a) Conventional leverage:
b) Murphy Prussman Gearing:
c) U.S. measure of leverage:
(ii) Leverage - interest cover
d) Interest coverage:
e) Cumulative interest coverage:
4) Activity (efficiency ratios)
a) Debtors turnover:
b) Creditors turnover:
c) Inventory turnover:
d) Wages turnover:
e) Net asset turnover:
f) Profits per employee:
1) Stock market ratios
a) Earnings per share:
b) Price earnings ratio:
c) Dividend yield (net):
d) Dividend cover:
If an organisation is made up of multiple divisions or Strategic Business Units (SBU's), then the following measures can be computed, provided that balance sheet and income statement data are available at the divisional or SBU level. These analyses enable corporate management to assess the performance of divisions, SBU's and/or their management.
a) Return on sales (ROS)
ROS is computed by dividing net income (NI), or profit (P) before or after interest and taxes, by total revenue:
Some argue that interest expenses and tax should not be considered as they are outside the SBU manager's control. However, interest may be added to show managers that invested funds are not a free resource. This, however, understates the true cost of capital employed, because the interest is a charge for only the debt portion of capital.
b) Return on investment (ROI), return on net assets (RONA) and return on equity (ROE)
Note: NET ASSETS = TOTAL ASSETS- TOTAL LIABILITIES
Note: Owner's equity = total assets - total liabilities
In using any of these measures to assess an SBU manager's ability to use assets efficiently, account should be taken of whether cash is centrally controlled or headquarters determines both credit and payment policies. If the latter, then cash receivables or payables or both should be omitted from the investment base.
c) Cash flow (CF)
Cash flow is not the same as net income (NI) or profit (P). It differs in two ways:
i) Cash flow includes depreciation, as this is a bookkeeping transaction, and tax, because tax is a cash cost. Thus,CF = NI (or P) after tax and depreciation
ii) Cash flow is affected by balance sheet changes, e.g. increase in accounts receivable or additions to fixed assets (FA), e.g. plant and equipment and changes in working capital (WC).CF = Net Income (or Profit) after tax plus Depreciation minus changes in FA and minus changes in WC
Note: If no tax is paid or if tax is deferred, use Net Income (or Profit) before tax.
The changes in (D) are calculated by the company's balance sheet entries for two consecutive periods.
Working Capital =
D Cash plus or minus
D Stock plus or minus
D Accounts receivable plus or minus
D Accounts payable (and other short-term liabilities)
Example:
Balance sheet
d) Sustainable growth rate (SGR)
This is a measure of the ability of the business to grow within the constraints of its current financial policies. What is required is a balance sheet for the SBU which includes a justified assignment of the proportion of the total corporate short-run liabilities and long-run debt.
Once accomplished, the maximum sustainable growth rate (a measure of the ability of the business to fund the new assets needed to support increased sales) is estimated by:
where:
p = profit margin after taxes
d = dividend payout ratio (for a business unit this is computed from the corporate overhead charge plus any dividend paid to corporate?)
L = debt to equity ratio
t = ratio of assets (physical plant and equipment plus working capital) to sales.
The growth rate is expressed in nominal terms. Real SGR is reduced by 2.2% for every 5% of inflation for two reasons:
i) Depreciation charges based on historical costs overstate taxable income because they fail to fully recover the economic value of depreciating assets.
ii) Working capital increase solely due to inflation requires financing.
If the actual growth rate exceeds SGR, then the organisation can consider a number of strategic actions which affect the "productivity" side of the quest for increased profits ("productivity" as opposed to "volume" strategies to increase profits). These are:
i) reduce investment intensity (cut stocks and/or receivables)
ii) reduce dividends
iii) obtain injections of "equity" from the corporate body
iv) increase debt.
Now attempt exercise 2.7.
Exercise 2.7 Ratio analysis
Nigel Munyati's horticultural business continued to flourish. Six years later his condensed financial accounts for the last three years are summarised below (N.B. he introduced fresh capital into the business):
Compute the following ratios for 19X0, 19X1 and 19X2.
a) gross profit on sales
b) gross profit on cost of goods sold
c) stock turnover
d) return on capital employed
e) current ratio
f) liquidity (or quick) ratio
g) debtor collection period
h) working capital
i) ratio of current assets to total assets
j) ratio of cash to current liabilities
Comment briefly on the results of the business over the last three years.
Activity ratio
Average cost
Balance sheet
Capital expenditure
Cost of goods sold
Credit balances
Current liabilities
Debit balances
Direct and indirect costs
Dual effect
First-In-first-out
Fixed and current assets
Gross margin
Gross sales
Last-in-first-out
Leverage
Liquidity ratios
Loans
Manufacturing account
Net profit
Net sales
Net sales and net purchases
Opening and closing stock
Overheads
Profit
Profitability ratios
Profit and loss account
Ratio analysis
Revenue expenditure
Trading account
The accounting equation
Working capital
Work-in-progress | http://www.fao.org/docrep/W4343E/w4343e03.htm | crawl-002 | refinedweb | 5,432 | 53.92 |
Managing may also want to install these optional features:
- Windows Server Backup The new backup utility included with Windows Server 2008.
- Storage Manager for SANs Allows you to provision storage for storage area networks (SANs).
- Multipath IO Provides support for using multiple data paths between a file server and a storage device. Servers use multiple IO paths for redundancy in case of failure of a path and to improve transfer performance.
Table 12-1 Role Services for File Servers
.
Note During the setup process, shared files are created on the server. If you encounter a problem that causes the setup process to fail, you will need to resume the setup process using the Add Role Services Wizard. After you restart Server Manager, select the File Services node under Roles. In the main pane scroll down and then click Add Role Services. You can continue with the installation, starting with step 3. If you were in the process of configuring domain-based DFS, you'll need to provide administrator credentials.
-. If you are installing DFS Namespaces, you'll have three can only have up to 5,000 DFS folders. Stand-alone namespaces can have up to 50,000 DFS folders but are replicated only when you use failover server clusters and configure replication.
-. Next,.
Note You do not have to configure DFS Namespaces at this time. Once you've installed DFS Namespaces, DFS Replication, or both, you can use the DFS Management console to manage the related features. This console is installed and available on the Administrative Tools menu. See Chapter 15, "Data Sharing, Security, and Auditing," for more information.
- With File Server Resource Manager, you can monitor the amount of space used on disk volumes and create storage reports. If you are installing File Server Resource Manager, you'll have two additional configuration pages:
- On the Configure Storage Usage Monitoring page, you can previously selected type.
Note You do not have to configure monitoring and reporting at this time. After you've installed FSRM, you can use the File Server Resource Manager console to manage the related features. This console is installed and available on the Administrative Tools menu. See Chapter 14 for more information.
- If you are installing Windows Search Service, you'll see an additional configuration page that allows you to select the volumes to index. Indexing a volume makes it possible for users to search a volume quickly. However, indexing entire volumes can affect service performance, especially if you index the system volume. Therefore, you may only want to index specific shared folders on volumes, which you'll be able to do later on a per-folder basis.
Note You do not have to configure indexing at this time. After you've installed Windows Search Service, you can use the Indexing Options utility in Control Panel to manage the related features.
- When you've completed all the optional pages, click Next. You'll see the Confirm Installation Options page. Click Install to begin the installation process. When Setup finishes installing the server with the features you've selected, you'll see the Installation Results page. Review the installation details to ensure that all phases of the installation completed successfully.
If the File Services role is installed already on a server and you want to install additional services for a file server, you can add role services to the server.< Back Next > | https://technet.microsoft.com/en-us/library/dd163554.aspx | CC-MAIN-2017-47 | refinedweb | 564 | 55.64 |
Key Takeaways
- Amazon Web Services is a member of the .NET Foundation and fully supports .NET development and developers.
- AWS providers a Software Development Kit for .NET that gives .NET developers a library for working with AWS services.
- There are toolkits for all major .NET IDEs that are designed to help create, debug, and deploy .NET applications on AWS.
- For those developers that prefer using command line tools, AWS has created extensions for both PowerShell and the .NET CLI.
Amazon Web Services (AWS) is a member of the .NET Foundation and is a major proponent of .NET development and .NET developers. AWS engineers are currently supporting several different .NET open source products with the support for those projects, and others, continuing to grow.
This support includes most modern .NET platforms, including the recently released version of .NET Core, v3.1.
As its name implies, AWS provides a large set of online services ranging from infrastructure solutions such as virtual machines and container management to storage products to purpose-built or managed databases with many, many more.
Since these are all cloud-based services, all interaction is done through API calls (generally RESTful). A developer can choose to interact directly with these APIs, however AWS recognizes that this adds complexity to a project and so provides language or framework-specific software development kits (SDK) that perform all of this interaction for the developer.
All of the AWS SDKs provide support for API lifecycle consideration such as credential management, retries, data marshaling, and serialization; all of which save time and effort for a developer looking to interact with AWS services and products.
One example of a framework-specific SDK is the AWS Software Development Kit for .NET.
.NET Software Development Kit
In November 2009, AWS released the initial AWS Software Development Kit for .NET (SDK). The SDK gives .NET applications a framework that allows them easy access to AWS services.
AWS updates the .NET SDK daily, so it is a rare situation when AWS releases a new feature or service and the SDK is not updated that day. The .NET SDK simplifies the use of AWS services by providing a set of libraries in a way that is consistent and familiar for .NET developers.
Every AWS service that is accessible through an API has an SDK that is available through NuGet from nuget.org. The service-specific SDKs follow the naming pattern of AWSSDK.ServiceName, as shown in Figure 1.
Figure 1- NuGet configuration in JetBrains Rider after searching for AWS
There are also some higher-level abstractions to help developers better manage some common tasks such as logging, transferring files, and providing authentication. These higher-level abstractions have a different naming convention, AWS.AbstractionName. An example of one of these abstractions is AWS.Logger. This SDK is a higher-level library that wraps common .NET logging libraries, giving them the capability to save log entries directly into AWS’ monitoring and observability service, CloudWatch. You can see the different AWS.Logger NuGet packages in Figure 2.
Figure 2 - Visual Studio 2019 NuGet Package Manager after searching for AWS.Logger.
When you install an API-specific AWSSDK package, you will generally see that there are two packages being installed, AWSSDK.Core and AWSSDK.ServiceName as shown in Figure 3.
Figure 3- Installing AWSSDK.S3 package in Visual Studio
All service packages depend on a common core library that does the marshaling, request signing, and other tasks to enable sending API requests to, and receiving responses from, AWS. The service-specific packages simply expose the service API and related model classes. The same may happen if you install a higher-level SDK. Figure 2, above, shows a package named AWS.Logger.Core. Installing any of the other AWS.Logger packages will also install the Core package. Note, this is not true across all of the higher-level SDKs.
Reviewing the package and .dlls will show that the .dll and the package have the same name. Thus, adding the AWSSDK.S3 package to a project will result in adding AWSSDK.S3.dll to the \bin directory. One thing to note, however, is that while the namespace is similar to the package and .dll name, they are not identical. While the .dll name is AWSSDK.S3.dll, as Figure 4 shows, the namespace used when accessing that functionality is Amazon.S3.
Figure 4 - Visual Studio Object Browser for AWSSDK.S3.
Working with the SDK is similar to working with any other project reference. For example, the AWS service shown in Figure 4 is S3, also known as Amazon Simple Storage Service. S3 is an object storage service that offers scalability, data availability, security, and performance. The following code example demonstrates using the SDK to persist a file to S3:
/// <summary>Example running in an ASP.NET MVC web application that synchronously saves a file to S3</summary> using Amazon; using Amazon.S3; using Amazon.S3.Model; public async Task<string> SaveAsync(string path) { try { string fileName = "NDC-Booth.jpg"; string filePath = Path.Combine(path, fileName); RegionEndpoint bucketRegion = RegionEndpoint.USWest2; var putRequest = new PutObjectRequest { BucketName = "infoq-article-bucket", Key = fileName, ContentType = "image/jpeg", FilePath = filePath }; putRequest.Metadata.Add("x-amz-meta-title", "2020 AWS Booth at NDC London"); using (IAmazonS3 client = new AmazonS3Client(bucketRegion)) { var results = await client.PutObjectAsync(putRequest).ConfigureAwait(false); return results.ETag; } } catch (AmazonS3Exception e) { logger.Error("Amazon error saving to S3", e); } catch (Exception e) { logger.Error("Unknown error saving to S3", e); } return null; }
This simple example copies an image from the local web directory and puts it in S3. By using the SDK, you do not have to worry about managing authentication, verification, or retries, and it just works. In addition, while this example uses an asynchronous method, a synchronous version that does not return a Task is also available for those cases where a synchronous approach would be more appropriate. In that case, your synchronous method would look more like:
public string Save(string path) { … using (IAmazonS3 client = new AmazonS3Client(bucketRegion)) { var response1 = client.PutObject(putRequest); return response.ETag; } … }
Your calling method would have to change as well because of the change in return type. For those of you working in .NET Core, asynchronous communication is the standard, however synchronous communication was common in earlier .NET applications.
AWS Toolkits
As you can see, the SDK helps developers interact with AWS services from within their .NET application. However, developers use SDK programmatically, so it does not support personal interaction with AWS services. That is where the Toolkits come in. Each of the major IDEs used by .NET developers for developing their applications has their own Toolkit. There are currently Toolkits for Visual Studio, VS Code, and JetBrains Rider. These toolkits are extensions designed to help create, debug, and deploy .NET applications on AWS.
You can install the toolkit through the IDE. In Visual Studio, it will be through the Extensions > Manage extensions > Visual Studio Marketplace as shown in Figure 5.
Figure 5 - Adding the AWS Toolkit to Visual Studio 2019
In JetBrains’ Rider, you install the toolkit through the Configure > Plugins link on the opening splash screen, as shown in Figure 6.
Figure 6 - Add AWS Toolkit to JetBrains Rider
After installation, the toolkit gives you access to AWS services through the IDE. For example, when working in Visual Studio, the toolkit provides you an additional module, the AWS Explorer. After opening the AWS Explorer, you will see a list of different AWS services with which you will be able to work. Most of these allow you to create or delete items as well as modify some of the properties of those items. Figure 7 shows the file uploaded from the previous code sample.
Figure 7 - Using the toolkit to look at an uploaded file in an S3 bucket
The various toolkits have different levels of maturity and support. The VS Code and JetBrains toolkits, for example, focus on serverless development so do not have the same list of services available in their AWS Explorers windows. Figure 8 shows the AWS Explorer window in JetBrains Rider and VS Code together.
Figure 8 - AWS Explorer windows in JetBrains Rider and VS Code
The toolkits provide more functionality to the IDE than the ability to manage some of the AWS services. Another feature that the toolkit brings is the addition of additional project templates, as shown in Figure 9.
Figure 9 - Visual Studio 2019 New Project dialogue filtered to AWS templates
There are nine templates added in Visual Studio, mostly around creating AWS serverless applications or Lambda projects in C#, F#, and node.js; all languages supported by Visual Studio and AWS. JetBrains Rider has an additional template as well, “AWS Serverless Application.” That there are only serverless application templates created is appropriate; this is the only new “application type” in AWS as all of the rest of the server-side applications created in .NET are already supported.
The last set of features provided by the toolkit is help around deploying the .NET application. The assistance provided by the toolkit is dependent upon the type of project and application with which you are working. An ASP.NET Core application will have an option to “Publish to AWS Elastic Beanstalk.” If that same application has Docker support enabled within the .NET solution then they will get an additional deployment option of “Publish Container to AWS.” You can see what this looks like in Figure 10.
Figure 10 - Publish options when working in a "Docker-ized" ASP.NET Core application
Choosing the “Publish to AWS Elastic Beanstalk” approach will have the toolkit walk you through deploying to an existing Elastic Beanstalk environment or help you configure a new Elastic Beanstalk application. While not exactly a platform-as-a-service (PaaS), Elastic Beanstalk feels very similar to a PaaS because it removes the hard work from provisioning and configuring AWS resources. Deploying to a new instance requires you to make several choices: (1) the type of EC2 (virtual machine) instance you would like to use, (2) whether or not you want to deploy to multiple instances, and (3) what type of load balancer to use if you do deploy to multiple instances. You also can select various security settings or go with the default. This is a good starting point for those wanting to toss a web application over the AWS wall and see what it looks like
Applications already set up to support Docker have additional capabilities. These capabilities revolve around Amazon Elastic Container Service (ECS) and allow developers to have the application run as a service, a “run-right-now” task, or as a scheduled task. You can also simply upload the container definition to the Elastic Container Registry so that it is available for selection when working within ECS. If you are deploying a website, you will want to select the “service” option as shown below in Figure 11.
Figure 11 - Using Visual Studio to publish a container to AWS
Proceeding through the publish path will also give you the ability to define and select load balancers and permissions. Once that is completed, your application will be available and can be managed the same as any other container – you can increase or decrease the amount of containers running behind the load balancer, set up auto-scaling, or perform any other container-specific tasks needed to ensure your application is available for consumption.
Other .NET Support
The .NET SDK and IDE-based toolkits are not the only tools that AWS has built to support .NET developers. There are also AWS Tools for PowerShell, AWS Tools for Azure DevOps (previously called AWS Tools for Visual Studio Team Services), and AWS Extensions for the .NET CLI. The AWS Tools for PowerShell let developers and administrators manage their AWS services and resources in the PowerShell scripting environment and support many of the same actions as the AWS SDK for .NET. The PowerShell tools also support the automation of AWS service management, so that you can manage your AWS resources with the same PowerShell tools you use to manage your Windows, Linux, and MacOS environments.
The AWS Tools for Azure DevOps is not specific to .NET but is, instead, a set of extensions for on-premises Microsoft Team Foundation Server (TFS) and Azure DevOps that make it easy to deploy .NET applications using AWS. You can use your existing build and release process to deploy applications directly to AWS Elastic Beanstalk or AWS Lambda as well as interact with other services such as Amazon S3 or AWS CodeDeploy.
The AWS Extensions for .NET CLI are additional tool extensions focused on building and deploying .NET Core and ASP.NET Core applications. Many of these commands are the same as used in the AWS Toolkit for Visual Studio so that you can test the initial deployment through Visual Studio and then use the command line to automate the deployment going forward. There are three different sets of .NET CLI tool extensions, one for Amazon ECS, one for AWS Elastic Beanstalk, and one for AWS Lambda.
AWS has been supporting .NET development for more than a decade. As mentioned earlier, many of the AWS compute services, such as EC2 and Lambda, have support for various versions of .NET built into their systems. Windows virtual machines or containers support all .NET frameworks while the Linux-based VMs, containers, and serverless environments all support .NET Core. The SDKs provide programmatic support for AWS services, allowing developers to not worry about authentication, or retries, or other infrastructure concerns and instead focus on building their application.
Other available tools include the IDE-specific toolkits that give developers the ability to interact with various AWS services from within the browser, the AWS Tools for PowerShell for managing AWS services within the PowerShell scripting environment, the AWS Tools for Visual Studio Team Service, and the AWS Extensions for .NET CLI. All of these tools are updated regularly, some daily, to keep up with changes that happen in AWS services, changes that happen in the .NET Framework, and to help keep up as development paradigms change to better support the cloud.
AWS is also looking to hire experienced .NET developers across the world – please review open positions if you are interested in helping AWS get even better in the .NET space!
About the Author
Having over 25 years of experience in software development (more than 15 of which is .NET), Dr. Bill brings a pragmatic approach to software development. With much of that time spent in consulting and project-based work, he has worked on many different projects and used many different development designs and approaches. He recently switched to the dark side and uses his development experience in a product management role where he acts as a .NET Developer Advocate with AWS, helping AWS to build a better and more rich .NET developer experience.
Community comments | https://www.infoq.com/articles/cloud-development-aws-sdk/?itm_source=articles_about_dotnet&itm_medium=link&itm_campaign=dotnet | CC-MAIN-2020-34 | refinedweb | 2,479 | 56.35 |
What is the use/advantage of function overloading
Please suggest use of FO
Nandini
- Mar 2nd, 2018
Code maintenance is easy
supriya
- May 24th, 2017
With the help of function loading we can use multiple function without declare again and again as per our requirement we can use our parameters in same function and call them from our subclasses.
Difference between data encapsulation vs abstraction
PRANIT PATIL
- Nov 27th, 2017
Encapsulation and abstraction both solve same problem: Complexity; but in different dimensions. Abstraction: Hides the implementation details of your methods. Provides a base to variations in the app...
tuba
- Jun 10th, 2016
Abstraction is the method in which we write its coding in another class. In which we do not know how our function work example SMS in which we only send message we do not know how we send it or what m...
How do you differentiate between a constructor and normal function?
Sumit Payasi
- Oct 2nd, 2017
Constructor is an special function, which has same name as class name, constructor is an automatic initialization of object. constructor should not have any return type. constructor must be invoked wi...
udayakumar
- Sep 28th, 2016
Method: 1) The method may or may not be present in the class 2) The method will be having the return type 3) The method is invoked with reference to object of the class Constructor: 1) If you write o...
Compile Time Overloading
What is the requirement of compile time overloading?
Mamta
- Sep 14th, 2017
Compile time overloading or static binding or early binding means to have same method name with different arguments. Example
sample(int a)
sample(int a, int b)
sample(char a)
sample(int a, char b)
sample(int a, char b, string c)
All of these are example of compile time overloading.
Praveen
- Jan 16th, 2012
We know that because of polymorphism, we can declare any number of function with change in signature. Signature considers three parts 1. No.of parameters 2. Type of parameters 3. Order of parameters...
What is static identifier?
Amit
- Jun 21st, 2017
Variable declared as static in a function is initialized once, and retains its value between function calls. The default initial value of an uninitialized static variable is zero.
yogeshpanda
- Sep 21st, 2005
The static variables persist their values between the function calls and the static functions is out of the class scoop means it can be invoked without the use of the object( cab be invocked by classname::functionname).
What does static variable mean?
supriya
- May 24th, 2017
Static variable is used with in class in anywhere.
vignesh
- Feb 10th, 2017
A function-scope or block-scope variable that is declared as static is visible only within that scope. Furthermore, static variables only have a single instance. In the case of function or block-scope...
Explain the advantages of OOPs with the examples.
nimesh diyora
- Feb 11th, 2017
Code Reuse and Recycling: Objects created for Object Oriented Programs can easily be reused in other programs. Encapsulation (Part 1): Once an Object is created, knowledge of its implementation is no...
sabari
- Nov 18th, 2016
Code reuse
Reduce lines of coding
What is the similarity between a Structure, Union and enumeration?
dhivyanatesh
- Aug 19th, 2016
Structure and Union both allows us to group variables of mixed datatypes into a single entity.
Resh
- Jul 20th, 2016
All are helpful in defining new datatypes advantages of Object Oriented Modeling?
Elizabeth
- Jun 15th, 2016
Increase consistency between analysis design and programming or activities
sripri
- May 2nd, 2007
The advantage of object-oriented modeling is the fact that objects encapsulate state expressed by attributes and behavior specified methods or operation and they are able to communicate by sending and...
What is near, far and huge pointers? How many bytes are occupied by them?
PAWAN KUMAR TIWARI
- Jun 12th, 2016
Near=2 far=4 huge=4
ankit_k7
- Dec 7th, 2007
Near pinter occupy 16 bytesnear and far occupy 32 bytes
What are the different types of polymorphism?
syedafarzana
- Mar 31st, 2016
Compile time Polymorphysim and Runtime Polymorphism
tux
- Jul 4th, 2015
There are two types controlled and uncontrolled polymorphism.
Overloading wont belong to type of polymorphism
Overriding is tool to implement polymorphism.
What is OOPS?
OOP is the common abbreviation for Object-Oriented Programming.
Read Best Answer
Editorial / Best Answer
Answered by: shahistha
-.
MD SADDAM HUSSAIN
- Mar 22nd, 2016
OOPs. Object Oriented Programming is a paradigm that provides many concepts such as inheritance, data binding, polymorphism etc.
pooja odhekar
- Feb 28th, 2013
OOPS provides features like polymorphism, inheritance, abstraction.
What is an Abstraction?
Does the c supports abstraction?with example...and what is the diff. b/w abstraction and encapsulation?
Samyuktha Reddy
- Mar 22nd, 2016
Hiding the programming elements which are encapsulated.
PRIYADHARSHINI
- Mar 16th, 2016
Abstraction means Hiding Information
Write a c++ program to accept three digits (i.e 0-9)and print all possible combinations fro these digits.(for exanple if the three digits are 1,2,3 then all possible combinations are 123,132,231,213,312,321 etc)
Lineesh K Mohan
- Mar 22nd, 2016
"cpp #include #include using namespace std; int main() { int arrNum[3] = {0}; int nTemp = 0; int nI = 0,nJ = 0,nK = 0; coutnTemp; ...
adnan
- Jan 13th, 2016
"cpp Yanesh Tyagi Dec 23rd, 2006 /* Program to write combination of 3 digit number Author : Rachit Tyagi Date : 23-Dec-2006 2345 IST */ #include #include void ...
Can you use the function fprintf() to display the output on the screen?
Helal
- Jan 28th, 2016
Which function is used to display the output on the screen?
veena
- Dec 15th, 2006
yes we can #include <stdio.h>#include <conio.h>void main(){fprintf(stdout,"sdfsdf");getch();}fprintf() accepts the first argument which is a file pointer to a stream. the stdout is also a file pointer to the standard output(screen).
Merits and demerits of friend function
PRERNA
- Nov 7th, 2015
- Provides additional functionality which is kept outside the class.
- Provides functions that need data which is not normally used by the class.
- Allows sharing private class information by a non member function.
vadivel kumar
- Sep 17th, 2015
It can access the member function of another class
Are the variables argc and argv are local to main?
yogesh kumar
- Aug 21st, 2015
Can we change these argv and argc command line arguments ?
laki.sreekanth
- Jul 17th, 2010
Yes why because these variables written in main function only, not outside main. These belong to main block but not to other. So according to my knowledge these command line arguments are local to main.
Controlled and uncontrolled redundancy
What is the difference between controlled and uncontrolled redundancy? Illustrate with examples.
Dhanalakshmi
- Jul 19th, 2015
Both are the type of polymorphisms. Compile time polymorphism is controlled one. Dynamic is uncontrolled.
Why we need to create object for final class and what are its benefits?
Ahmed reda
- Jun 29th, 2015
A final class cannot have any subclasses. This just means that nobody can create a subclass that modifies behaviour of the final class by modifying internal data or overriding methods so they behave d...
Bimal Modak
- Jan 14th, 2015
We can create the object of final class and this will act like any simple object created for simple class."java public final class BlockTest { public void testMethod(){ S...
OOPS Interview Questions
Ans | http://www.geekinterview.com/Interview-Questions/Concepts/OOPS | CC-MAIN-2018-13 | refinedweb | 1,218 | 56.45 |
I'm trying to work out why a larger problem is occurring, using a smaller program as an example. This smaller program does not work, leading me to believe it is my understanding of the function that is flawed.
As far as I (had) believed, the following program should initialise a string with up to 30 characters, then take the number '5' to nine significant figures, and turn it into that string. The program should then print the value '5.00000000'. However, the program prints the value 7.96788(...). Why is this?
#include <stdio.h>
int main()
{
char word[30];
sprintf(word, "%.9g", 5);
printf(word);
return 0;
}
This is because
5 is an integer (
int), and you're telling
sprintf to pretend that it's a double-precision floating-point number (
double). You need to change this:
sprintf(word,"%.9g", 5);
to either of these:
sprintf(word,"%.9g", 5.0); sprintf(word,"%.9g", (double) 5); | https://codedump.io/share/7P22KFWWHAPj/1/simple-use-of-sprintf---c | CC-MAIN-2019-26 | refinedweb | 156 | 67.25 |
- Nesting Parentheses
- Using an Input Stream
- Using int Variables and Constants
- Types of Variables and Valid Names
- Summing Up
Using int Variables and Constants
A variable allows the program to perform its calculation outside of the cout statement.
1: #include <iostream> 2: 3: using namespace std; 4: 5: int main(int argc, char* argv[]) 6: { *7: const int Dividend = 6; *8: const int Divisor = 2; *9: *10: int Result = (Dividend/Divisor); *11: Result = Result + 3;// Result is now its old value+3=6 *12: *13: cout << Result << endl; 14: 15: // Note: You must type something before the Enter key 16: char StopCharacter; 17: cout << endl << "Press a key and \"Enter\": "; 18: cin >> StopCharacter; 19: 20: return 0; 21: }
Lines 713 have been changed.
Lines 7 and 8 declare variables named Dividend and Divisor and set their values to 6 and 3, respectively. The = is called the assignment operator and puts the value on the right-hand side into the variable on the left-hand side. These variables are declared as type int, which is a number with no decimal places.). Declarations with const are often called constants (mathematically inclined readers will recall that pi is the name of the constant whose value is 3.14159).
Line 10 declares a variable and assigns the result of part of an expression to the variable. It uses the names of the constants on lines 7 and 8 in the expression, so the value in Result depends on the content of those constants.
Line 11 is perhaps the hardest one for non-programmers. Remember that the variable is a named location in memory and that its content can change over time. Line 11 says, "Add the current content of Result and the number 3 together and put the calculated value into the location named by Result, wiping out what used to be there."
The output of this example is still 6. This shows that you can change the implementation of a design (that is, the code you have written to accomplish a task), and still produce the same result. Therefore, it is possible to alter a program to make it more readable or maintainable. | https://www.informit.com/articles/article.aspx?p=28582&seqNum=4 | CC-MAIN-2022-21 | refinedweb | 360 | 62.92 |
Just a few weeks ago we announced the first preview release of an experimental web UI framework called Blazor. Blazor enables full-stack web development using C# and WebAssembly. So far thousands of web developers have taken on the challenge to try out Blazor and done some pretty remarkable things:
- Started using Blazor to build RealWorld web apps
- Used Blazor to control a Christmas tree from a Raspberry Pi
- Built a Blazor app for looking up lyrics to vocaloid music
- Integrated Blazor with the Redux DevTools to do time-traveling debugging
The feedback and support from the community has been tremendous. Thank you for your support!
Today we are happy to announce the release of Blazor 0.2.0. Blazor 0.2.0 includes a whole bunch of improvements and new goodies to play with.
New features in this release include:
- Build your own reusable component libraries
- Improved syntax for event handling and data binding
- Build on save in Visual Studio
- Conditional attributes
- HttpClient improvements
A full list of the changes in this release can be found in the Blazor 0.2.0 release notes.
Many of these improvements were contributed by our friends in the community, for which, again, we thank you!
You can find getting started instructions, docs, and tutorials for this release on our new documentation site at.
Get Blazor 0.2.0
To get setup with Blazor 0.2.0:
- Install the .NET Core 2.1 Preview 2 SDK.
- If you've installed the .NET Core 2.1 Preview 2 SDK previously, make sure the version is 2.1.300-preview2-008533 by running
dotnet --version. If not, then you need to install it again to get the updated build.
- Install the latest preview of Visual Studio 2017 (15.7) with the ASP.NET and web development workload.
- You can install Visual Studio previews side-by-side with an existing Visual Studio installation without impacting your existing development environment.
- Install the ASP.NET Core Blazor Language Services extension from the Visual Studio Marketplace.
To install the Blazor templates on the command-line:
dotnet new -i Microsoft.AspNetCore.Blazor.Templates
Upgrade a Blazor project
To upgrade an existing Blazor project from 0.1.0 to 0.2.0:
- Install all of the required bits listed above
- Update your Blazor package and .NET CLI tool references to 0.2.0
- Update the package reference for
Microsoft.AspNetCore.Razor.Designto 2.1.0-preview2-final.
- Update the SDK version in
global.jsonto
2.1.300-preview2-008533
- For Blazor client app projects, update the
Projectelement in the project file to
<Project Sdk="Microsoft.NET.Sdk.Web">
- Update to the new bind and event handling syntax
Your upgraded Blazor project file should look like this:
<Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netstandard2.0</TargetFramework> <RunCommand>dotnet</RunCommand> <RunArguments>blazor serve</RunArguments> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.Razor.Design" Version="2.1.0-preview2-final" PrivateAssets="all" /> <PackageReference Include="Microsoft.AspNetCore.Blazor.Browser" Version="0.2.0" /> <PackageReference Include="Microsoft.AspNetCore.Blazor.Build" Version="0.2.0" /> <DotNetCliToolReference Include="Microsoft.AspNetCore.Blazor.Cli" Version="0.2.0" /> </ItemGroup> </Project>
Build reusable component libraries
Blazor components are reusable pieces of web UI that can maintain state and handle events. In this release we've made it easy to build reusable component libraries that you can package and share.
To create a new Blazor component library:
Install the Blazor templates on the command-line if you haven't already
dotnet new -i Microsoft.AspNetCore.Blazor.Templates
Create a new Blazor library project
dotnet new blazorlib -o BlazorLib1
Create a new Blazor app so we can try out our component.
dotnet new blazor -o BlazorApp1
Add a reference from the Blazor app to the Blazor library.
dotnet add BlazorApp1 reference BlazorLib1
Edit the home page of the Blazor app (
/Pages/Index.cshtml) to use the component from the component library.
@addTagHelper *, BlazorLib1 @using BlazorLib1 @page "/" <h1>Hello, world!</h1> Welcome to your new app. <SurveyPrompt Title="How is Blazor working for you?" /> <Component1 />
Build and run the app to see the updated home page
cd BlazorApp1 dotnet run
JavaScript interop
Blazor apps can call browser APIs or JavaScript libraries through JavaScript interop. Library authors can create .NET wrappers for browser APIs or JavaScript libraries and share them as reusable class libraries.
To call a JavaScript function from Blazor the function must first be registered by calling
Blazor.registerFunction. In the Blazor library we just created
exampleJsInterop.js registers a function to display a prompt.
Blazor.registerFunction('BlazorLib1.ExampleJsInterop.Prompt', function (message) { return prompt(message, 'Type anything here'); });
To call a registered function from C# use the
RegisteredFunction.Invoke method as shown in
ExampleJsInterop.cs
public class ExampleJsInterop { public static string Prompt(string message) { return RegisteredFunction.Invoke<string>( "BlazorLib1.ExampleJsInterop.Prompt", message); } }
In the Blazor app we can now update the
Counter component in
/Pages/Counter.cshtml to display a prompt whenever the button is clicked.
"/counter" <h1>Counter</h1> <p>Current count: <button onclick="@IncrementCount">Click me</button> { int currentCount = 0; void IncrementCount() { currentCount++; ExampleJsInterop.Prompt("+1!"); } }BlazorLib1
Build and run the app and click the counter button to see the prompt.
We can now package our Blazor library as a NuGet package and share it with the world!
cd ../BlazorLib1 dotnet pack
Improved event handling
To handle events Blazor components can register C# delegates that should be called when UI events occur. In the previous release of Blazor components could register delegates using a specialized syntax (ex
<button @onclick(Foo)> or
<button onclick=@{ Foo(); }>) that only worked for specific cases and for specific types. In Blazor 0.2.0 we've replaced the old syntax with a new syntax that is much more powerful and flexible.
To register an event handler add an attribute of the form
on[event] where
[event] is the name of the event you wish to handle. The value of the attribute should be the delegate you wish to register preceded by an
@ sign. For example:
<button onclick="@OnClick" /> { void OnClick(UIMouseEventArgs e) { Console.WriteLine("hello, world"); } }
or using a lambda:
<button onclick="@(e => Console.WriteLine("hello, world"))"
If you don't need access to the
UIEventArgs in the delegate you can just leave it out.
<button onclick="@OnClick" /> { void OnClick() { Console.WriteLine("hello, world"); } }
With the new syntax you can register a handler for any event, including custom ones. The new syntax also enables better support for tool tips and completions for specific event types.
The new syntax also allows for normal HTML style event handling attributes. If the value of the attribute is a string without a leading
@ character then the attribute is treated as normal HTML.
For some events we define event specific event argument types (ex
UIMouseEventArgs as shown above). We only have a limited set of these right now, but we expect to have the majority of events covered in the future.
Improved data binding
Data binding allows you to populate the DOM using some component state and then also update the component state based on DOM events. In this release we are replacing the previous
@bind(...) syntax with something more first class and that works better with tooling.
To create setup a data binding you use the
bind attribute.
<input bind="@CurrentValue" /> @functions { public string CurrentValue { get; set; } }
The C# expression provided to
bind should be something that can be assigned (i.e. an LValue).
Using the
bind attribute is essentially equivalent to the following:
<input value="@CurrentValue" onchange="@((UIValueEventArgs __e) => CurrentValue = __e.Value)/> @functions { public string CurrentValue { get; set; } }
When the component is rendered the
value of the input element will come from the
CurrentValue property. When the user types in the text box the
onchange is fired and the
CurrentValue property is set to the changed value. In reality the code generation is a little more complex because
bind deals with a few cases of type conversions. But, in principle,
bind will associate the current value of an expression with a value attribute, and will handle changes using the registered handler.
Data binding is frequently used with
input elements of various types. For example, binding to a checkbox looks like this:
<input type="checkbox" bind="@IsSelected" /> @functions { public bool IsSelected { get; set; } }
Blazor has a set of mappings between the structure of input tags and the attributes that need to be set on the generated DOM elements. Right now this set is pretty minimal, but we plan to provide a complete set of mappings in the future.
There is also limited support for type conversions (
string,
int,
DataTime) and error handling is limited right now. This is another area that we plan to improve in the future.
Binding format strings
You can use the
format-... attribute to provide a format string to specify how .NET values should be bound to attribute values.
<input bind="@StartDate" format- @functions { public DateTime StartDate { get; set; } }
Currently you can define a format string for any type you want … as long as it's a
DateTime ;). Adding better support for formating and conversions is another area we plan to address in the future.
Binding to components
You can use
bind-... to bind to component parameters that follow a specific pattern:
@* in Counter.cshtml *@ <div>...html omitted for brevity...</div> @functions { public int Value { get; set; } = 1; public Action<int> ValueChanged { get; set; } } @* in another file *@ <Counter bind- @functions { public int CurrentValue { get; set; } }
The
Value parameter is bindable because it has a companion
ValueChanged event that matches the type of the
Value parameter.
Build on save
The typical development workflow for many web developers is to edit the code, save it, and then refresh the browser. This workflow is made possible by the interpreted nature of JavaScript, HTML, and CSS. Blazor is a bit different because it is based on compiling C# and Razor code to .NET assemblies.
To enable the standard web development workflow with Blazor, Visual Studio will now watch for file changes in your Blazor project and rebuild and restart your app as things are changed. You can then refresh the browser to see the changes without having to manually rebuild.
Conditional attributes
Blazor will now handle conditionally rendering attributes based on the .NET value they are bound to. If the value you're binding to is
false or
null, then Blazor won't render the attribute. If the value is
true, then the attribute is rendered minimized.
For example:
<input type="checkbox" checked="@IsCompleted" /> @functions { public bool IsCompleted { get; set; } } @* if IsCompleted is true, render as: *@ <input type="checkbox" checked /> @* if IsCompleted is false, render as: *@ <input type="checkbox" />
HttpClient improvements
Thanks to a number of contributions from the community, there are a number of improvements in using
HttpClient in Blazor apps:
- Support deserialization of structs from JSON
- Support specifying arbitrary fetch API arguments using the HttpRequestMessage property bag.
- Including cookies by default for same-origin requests
Summary
We hope you enjoy this updated preview of Blazor. Your feedback is especially important to us during this experimental phase for Blazor. If you run into issues or have questions while trying out Blazor please file issues on GitHub. You can also chat with us and the Blazor community on Gitter if you get stuck or to share how Blazor is working for you. After you've tried out Blazor for a while please also let us know what you think by taking our in-product survey. Just click the survey link shown on the app home page when running one of the Blazor project templates:
Have fun!
Join the conversationAdd Comment
WOW that is looking pretty impressive and heading in the direction that it should, IMO. Used to be that anything Razor-based was easy to gloss over since it was all JS and a total PITA to work with… no longer the case here, obviously. This is what it’s all about and what the promise of WASM is supposed to deliver. Nice work here, team.
Could not be more stoked about a technology. The webassembly hype train is real. LETS GO.
Cool 🙂 Looking forward to debugger-support. 🙂
Hi, people. It is great.
But, is there a way to build Blazor projects using VS2015?
This could be a salvation to a lot of C# developer.
Thanks.
Unfortunately no, the Blazor tooling requires features that are only available in the latest preview of Visual Studio 2017.
I’m absolutely sold on Blazor. Even at v0.2 it’s looking really, really good. I would encourage all C# people to give it a try.
I have 2 questions:
1- Can i access/manipulate the DOM thru C#?
2- Can i have my C# code – as ViewModel classes for example – in its own files, separate from the view (HTML files) and bind to it any exposed properties, aka MVVM-style?
Thnx
G.P.
Yes, you can access anything that the browser can do from C# via JavaScript interop. However, when using Blazor you generally don’t interact with the DOM directly. Instead you author Blazor components that render to a separate rendering tree that the Blazor runtime then uses to efficiently update the DOM.
You can also keep your view model code separate from the view using an inheritance pattern as described here: | https://blogs.msdn.microsoft.com/webdev/2018/04/17/blazor-0-2-0-release-now-available/ | CC-MAIN-2018-39 | refinedweb | 2,211 | 56.86 |
Im trying to do a HTTPS GET with basic authentication using python. Im very new to python and the guides seem to use diffrent librarys to do things. (http.client, httplib and urllib). Can anyone show me how its done? How can you tell the standard library to use?
In Python 3 the following will work. I am using the lower level http.client from the standard library. Also check out section 2 of rfc2617 for details of basic authorization. This code won't check the certificate is valid, but will set up a https connection. See the http.client docs on how to do that.
from http.client import HTTPSConnection from base64 import b64encode #This sets up the https connection c = HTTPSConnection("") #we need to base 64 encode it #and then decode it to acsii as python 3 stores it as a byte string userAndPass = b64encode(b"username:password").decode("ascii") headers = { 'Authorization' : 'Basic %s' % userAndPass } #then connect c.request('GET', '/', headers=headers) #get the response back res = c.getresponse() # at this point you could check the status etc # this gets the page text data = res.read() | https://codedump.io/share/kcfFnwsa4iDh/1/python-https-get-with-basic-authentication | CC-MAIN-2016-44 | refinedweb | 188 | 77.43 |
Database connections can sometimes be overlooked and done wrong. For serverless developers, we just wish there was a way as simple as Lambda itself for DB connection management.
First things first
Since a serverless architecture is quite different from what we were used to, there are some important considerations to take.
The first thing we need to consider is scalability. By default, AWS Lambda can scale to 1,000 concurrent requests in a matter of milliseconds. And it’s possible to lift this limit by asking AWS. This has important impacts on DB connection management, for example.
Stretch or jam
Is our database infrastructure capable of coping with this demand? What happens if DB becomes a bottleneck, how will our application behave? Is it going to lose data, or does it have some sort of "emergency plan" coded in its logic? Does it respond to end-user requests gracefully in case of DB failure?
Think of Lambda as a wide highway capable of absorbing huge amounts of traffic, from cars to heavy trucks to buses. We shouldn’t redirect all this traffic into a narrow neighborhood road, it will cause major traffic jams.
It can be so bad that it falls into gridlock state and no one can get home, like in the intersection below.
This could happen in the event of DB errors triggering Lambda auto-retries, which can escalate in a bad way.
It is important to know how much load our DB can handle and monitor Lambda closely. AWS CloudWatch metrics can be very helpful. For professional apps running in production, third party monitoring services can come in handy for more in-depth metrics and tight control, such as Dashbird, DataDog, and New Relic.
One size fits… only one!
There's no one-size-fits-all here. The way we approach the alignment of scalability between Lambda and databases will depend on the application and the storage system in place.
Consider an app that has a large number of users generating small chunks of data occasionally and a fewer number of big customers that are eagerly generating a sheer amount of data every second.
We set up our database infrastructure for distributed read/write requests and seamless horizontal scaling. Nice and sweet.
The database may be split across multiple servers (call it partitions), but we may not be able to distribute the load evenly across all of them. Let’s say we distribute load across partitions based on the customer. Customers 1 to 10 will end up in Partition-1, Customers 11 to 20 in Partition-2, etc (oversimplification for illustration purposes).
When a big customer generates data to be processed, everything will be funneled into a single partition, which may get overburdened.
While designing our processing and storing logic, we must take into account that the distribution of data per customer is skewed. The easiest way would be to allocate capacity per customer in advance.
For instance, if a given customer manifests it needs up to X TB of data per second, we can calculate in advance how many servers will be needed to accommodate the demand and set up dedicated partitions.
Eternity is not in DB connection vocabulary
Every DB server will have some sort of limit into how many connections it can have opened at any single point in time. Having idle connections opened for eternity is obviously a bad idea since connection slots are scarce.
In a traditional infrastructure, where we have one or multiple servers, each handling a considerable portion of the load, we can distribute a certain number of connections in each server, which are shared to serve multiple requesters.
Since our servers have a long lifespan and we are in full control, we can more easily manage the DB connections in a performant way.
In Lambda, things are quite different. We have very limited control over container management, not to mention the underlying servers running them. It's even impossible to control whether two Lambda instances are running on the same server.
If we're following the Lambda distributed model appropriately, each Lambda instance will be taking care of a single request. This means: inside a given Lambda container, we can’t have a pool of DB connections to share among a multitude of requesters.
One might say: "I can open a connection outside my Lambda function handler and leave it open for the next invocations".
It sounds interesting. We reduce overall latency by reusing connections. But that is a bad idea. Two questions: if we open connections outside the Lambda function handler, how do we control...
- ...when and how connections are closed?
- ...how many connections are opened and where they’re being used?
The answer is: we don't. And that's not a good way of managing DB connections.
A Lambda container might remain idle for several minutes, up to a few hours after serving an invocation. When an invocation comes in and a DB connection is opened outside the handler, it will remain open with the container for a variable period of time, and we can’t enforce a performant way of reusing that connection.
The best practice for Lambda is to open connections inside the function handler and close them as soon as the job is done, before terminating the invocation. Opening a connection at every invocation adds up latency, but there’s no better way of managing it.
When timeout does not timeout
One of the Lambda limitations is the execution time. Say a function is limited to 10 seconds. You open a DB connection, but for some reason, the job is taking a lot longer than normal to process. Ten seconds pass and the execution is halted due to the Lambda limit, and the DB connection may remain open.
The Lambda timeout will not immediately and automatically timeout the DB connection!
Now consider the issue persists and the timeout occurs frequently for a given period of time. The database server may get quickly overburdened with multiple idle connections, making it difficult or even impossible for other Lambda invocations to access it until those connections are closed.
To avoid this issue, we could use the Lambda context information to monitor the number of milliseconds left before the execution times out:
- For each Lambda invocation, launch a separate thread running concurrently with our code;
- Check the remaining time every few seconds;
- When it is too close to zero (meaning, the function is about to timeout), it will preventively close the DB connection before the invocation is halted;
Illustrating the concept in simplified terms:
import logging import time import threading # Dummy database objects from database import ( DatabaseConnection, DatabaseException, db_host, ) import handler_executor def handler(event, context): try: # Open a connection db_conn = DatabaseConn(**db_host) # Start a background thread to monitor risk of timeout timeout_thread = threading.Thread( target=monitor_timeout, args=(context, db_conn), daemon=True ) response = handler_executor(event, context, db_conn) except DatabaseException as error: logging.exception(error) response = {'status': 500, 'error': type(error).__name__} finally: # Make sure the connection is finally closed, if still open if 'db_conn' in locals() and isinstance(db_conn, DatabaseConn) and \ db_conn.is_open(): db_conn.close() return response def monitor_timeout(context, db_conn): # Check remaining time every two seconds while context.get_remaining_time_in_millis() >= 2500: time.sleep(2) # System will exit the loop if the remaining time < 2500 ms, # in which case we close the connection preventively if db_conn.is_open(): db_conn.close()
Connection-less Databases
Perhaps mixing a serverless compute engine, such as Lambda, with server-based storage systems is not the best approach. It will always be difficult to have scalability well aligned.
To our rescue, there are good serverless, API-based storage systems nowadays. We don't need to manage servers, nor establish a connection before firing data queries. API Databases will accept queries right upfront through HTTP endpoints, for example.
AWS has been doing a great job in this area, having serverless databases in Graph, SQL and No-SQL flavors.
DynamoDB is the oldest of the options: a No-SQL, key-value store database. It had some scalability issues in the past that draw negative attention to it, but they are a thing of the past. Currently, one can easily scale Dynamo to 40,000 concurrent requests, which should be enough for most Lambda applications. With some clever DB architecture designing, DynamoDB can scale to virtual infinity. I’ve personally used it in many projects with great success.
For fans of SQL, Aurora Serverless now supports API queries through the Data API. Unfortunately, only MySQL is supported at the time of this writing, but the Aurora team is certainly working to release Postgres support in the near future.
In the Graph space, we have Neptune and Cloud Directory. Graphs aren't exactly the purpose of the last one but it can be used for this purpose.
All these API-based databases align well with the Lambda model: they are fully managed, scale rapidly to accommodate shifting demand, don’t require connection overhead.
Posted on by:
Renato Byrro
The Quick Hello World Jumped Over The Lazy FooBar
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/byrro/6-keys-to-managing-db-load-in-aws-lambda-99k | CC-MAIN-2020-40 | refinedweb | 1,508 | 53.92 |
- Lowercase to Uppercase
This article is created to cover some programs in Python, to convert lowercase character or string to uppercase with and without using user-defined and predefined function. Here are the list of programs:
- Lowercase to Uppercase Character using ASCII Value
- Lowercase to Uppercase String using ASCII Value
- Lowercase to Uppercase using upper() Method
- Using user-defined Function
- Using Class and Object
Note - ASCII value of A is 65, B is 66, C is 67 and so on. Similarly, ASCII value of a is 97, b is 98, c is 99, and so on.
Lowercase to Uppercase Character using ASCII Value
The question is, write a Python program to convert lowercase character entered by user to uppercase. Here is its answer. This program uses ASCII value of character to convert the character in uppercase.
print("Enter the Character: ") ch = input() chup = ord(ch) chup = chup-32 chup = chr(chup) print("\nIts Uppercase is: ", chup)
Here is the initial output produced by this Python program:
Now supply the input say c as character in lowercase, then press
ENTER key to convert and print
its uppercase as shown in the snapshot given below:
Note - The ord() method is used to find Unicode value of a character passed as its argument.
Note - The chr() method is used to find the character corresponding to the Unicode value passed as its argument.
In above program, the following statement:
chup = chup-32
states that the value of chup gets subtracted with 32. This statement is used to initialize the ASCII value of corresponding uppercase character. For example, if entered value in lowercase is c and it gets stored in ch variable. Therefore using following statement (from above program):
chup = ord(ch)
the ASCII value of c, that is 99 gets initialized to chup. And using the following statement:
chup = chup-32
chup-32 or 99-32 or 67 gets initialized to chup. And finally, using the statement given below:
chup = chr(chup)
the character corresponding to the ASCII value of 67, that is C gets initialized to chup. In this way, the character gets converted into uppercase.
Modified Version of Previous Program
This program is the modified version of previous program, created to handle invalid input. This program uses end= to skip printing of an automatic newline using print()
print("Enter the Character: ", end="") ch = input() chlen = len(ch) if chlen==1: if ch>='a' and ch<='z': chup = ord(ch) chup = chup-32 chup = chr(chup) print("\nIts Uppercase is: ", chup) elif ch>='A' and ch<='Z': print("\nAlready in Uppercase!") else: print("\nInvalid Input!") else: print("\nInvalid Input!")
Here is its sample run with user input C:
Here is another sample run with user input codescracker:
Lowercase to Uppercase String without Function
This program is created to operate with string. That is, this program converts lowercase string entered by user to its equivalent uppercase string.
print("Enter String: ", end="") text = input() for i in range(len(text)): if text[i]>='a' and text[i]<='z': ch = text[i] ch = ord(ch) ch = ch-32 ch = chr(ch) text = text[:i] + ch + text[i+1:] print("\nIts Uppercase:", text)
Here is its sample run with string input codescracker:
In above program, I've scanned all characters of string using for loop, one by one. And compared whether the character at current index is a lowercase character or not. If the character found as a lowercase character, then that character gets converted into uppercase and using the following statement:
text = text[:i] + ch + text[i+1:]
new string after converting the current index's character to uppercase, gets initialized as new value of text. In above statement, text[:i] refers to all characters from 0th to (i-1)th index. And text[i+1:] refers to all characters from (i+1)th to (len-1)th index. Here len is the length of string.
For example, if text = "CODEscracker", i=4, and ch="S" therefore the following statement:
text = text[:i] + ch + text[i+1:]
evaluates to be, after putting the values:
text = "CODE" + "S" + "cracker"
states that, "CODEScracker" gets initialized to text as its new value. While slicing the string using indexing, that is [:], the index number before colon is included, whereas the index number after colon is excluded.
Lowercase to Uppercase using upper()
This program uses upper(), a predefined method to do the same job as of previous program.
print("Enter String: ", end="") text = input() text = text.upper() print("\nIts Uppercase:", text)
Lowercase to Uppercase using Function
This program is created using a user-defined and a predefined method. The user-defined function used in this program is LowerToUpper(). This function returns the uppercase of string passed as its argument, using upper() method.
def LowerToUpper(s): return s.upper() print("Enter String: ", end="") text = input() text = LowerToUpper(text) print("\nIts Uppercase:", text)
Lowercase to Uppercase using Class
This is the last program of this article, created using a class, an object-oriented feature of Python.
class CodesCracker: def LowerToUpper(self, s): return s.upper() print("Enter String: ", end="") text = input() ob = CodesCracker() text = ob.LowerToUpper(text) print("\nIts Uppercase:", text)
In above program, an object named ob is created of class CodesCracker to access its member function, LowerToUpper() using dot (.) operator.
Same Program in Other Languages
- Java Convert Lowercase to Uppercase
- C Convert Lowercase to Uppercase
- C++ Convert Lowercase to Uppercase
« Previous Program Next Program » | https://codescracker.com/python/program/python-program-convert-lowercase-to-uppercase.htm | CC-MAIN-2022-21 | refinedweb | 903 | 50.57 |
Something went wrong getting user information from Channel 9
Something went wrong getting user information from MSDN
Something went wrong getting the Visual Studio Achievements
Interactive Panel: Ask Us AnythingSep 27, 2013 at 12:15 AM
@Mordachai
You prefer signed integers so that math works correctly in perfectly normal use cases. If x < y, is it unreasonable to expect that x - y < 0?
Yes, you can program around this incorrect behavior, but you shouldn't have to.
C++ and Beyond 2012: Andrei Alexandrescu - Systematic Error Handling in C++Dec 10, 2012 at 10:34 PM
.
C++ and Beyond 2012: Scott Meyers - Universal References in C++11Oct 12, 2012 at 7:38 AM.
C++ and Beyond 2012: Scott Meyers - Universal References in C++11Oct 11, 2012 at 1:45 PM
@Vincent: You have another option actually. Write one constructor, taking all of your parameters by value, then move them where they need to be. This may result in additional moves, but never additional copies (unless there is no move constructor, which is not the case for strings).
C++ and Beyond 2012: Scott Meyers - Universal References in C++11Oct 10, 2012 at 2:37 PM
The quality of information in this talk was almost at STL's level. Impressive.
GoingNative 8: Introducing Casablanca - A Modern C++ API for Connected ComputingMay 03, 2012 at 6:30 AM
?
Herb Sutter: (Not Your Father’s) C++Apr 14, 2012 at 8:49 AM
@Luna: Could you please elaborate on why you believe it would be a hazard?
Herb Sutter: (Not Your Father’s) C++Apr 13, 2012 at 5:22 PM
What about this idea. Have a separate namespace within or beside std, call it safestd. In there have safe substitutes for various functionality in the standard. Then you could choose between safe and efficient selectively with a typedef or a `using` statement. | http://channel9.msdn.com/Niners/benjaminLindley | CC-MAIN-2014-42 | refinedweb | 308 | 60.35 |
It seems like ST2 should do this. I just haven't hit upon the correct keystroke combo. It would be really great if it could do this across multiple columns, but just 2 would be great. Thanks for any help
You could upvote this: sublimetext.userecho.com/topic/9 ... r-cmd-key/
This is a very basic plugin that synchronized cursor movements (triggered by keyboard only, not mouse scrolling, look at the keybindings) across all groups.It may be useful...
import sublime, sublime_plugin
class SynchViewCommand(sublime_plugin.WindowCommand):
def run(self, cmd="", **kwargs):
if cmd:
grpcount = self.window.num_groups()
if grpcount > 1:
for grp in range(grpcount):
synchview = self.window.active_view_in_group(grp)
if synchview:
synchview.run_command(cmd, kwargs)
{ "keys": "shift+alt+up"], "command": "synch_view", "args": {"cmd": "move", "by": "lines", "forward": false} },
{ "keys": "shift+alt+down"], "command": "synch_view", "args": {"cmd": "move", "by": "lines", "forward": true} },
{ "keys": "shift+alt+pageup"], "command": "synch_view", "args": {"cmd": "move", "by": "pages", "forward": false} },
{ "keys": "shift+alt+pagedown"], "command": "synch_view", "args": {"cmd": "move", "by": "pages", "forward": true} },
{ "keys": "ctrl+shift+alt+home"], "command": "synch_view", "args": {"cmd": "move_to", "to": "bof", "extend": false} },
{ "keys": "ctrl+shift+alt+end"], "command": "synch_view", "args": {"cmd": "move_to", "to": "eof", "extend": false} }
It's not working for me (installed from sublime package: github.com/atbell/SublimeSynchroScroll)
I'm on a mac, I tried to add the keys to my user file but it didn't work neither.I'll try later on my pc.
Maybe I'm doing it wrong tho.
Didn't try it but as it's the same code as mine that work fine, it must work.
Try typing this 2 lines in the console to activate logging:
sublime.log_input(True)
sublime.log_commands(True)
And try again.The console show you the keys you press (from ST2 view) and the command that is executed.Maybe the keybindings doesn't work on OSX.
Type the same lines with a False argument to deactivate the logging. | https://forum.sublimetext.com/t/scroll-multiple-columns-like-bbedit-synchro-scroll/3813/3 | CC-MAIN-2016-18 | refinedweb | 322 | 66.13 |
Kubernetes with Flannel
Kubernetes is an excellent tool for handling containerized applications at scale. But as you may know, working with kubernetes is not an easy road, mainly the backend networking implementation. Many developers have met many problems in networking and it costs much time to figure out how it works.
In this article, we want to use the simplest implementation as an example, to explain kubernetes networking works. So, let’s dive deep!
KUBERNETES NETWORKING MODEL
Kubernetes manages a cluster of Linux machines, on each host machine, kubernetes runs any number of Pods, in each Pod there can be any number of containers. User’s application will be running in one of those containers.
For kubernetes, Pod is the least management unit, and all containers inside one Pod shares the same network namespace, which means they have same network interface and can connect each other by using localhost.
KUBERNETES NETWORKING MODEL NECESSITATES
All containers can communicate with all other containers without NAT.
All nodes can communicate with all containers without NAT.
The user can replace all containers to Pods in above requirements, as containers share with Pod network.
Basically it means all Pods should be able to easily communicate with any other Pods in the cluster, even they are in different Hosts, and they recognized each other with their own IP address, just as the underlying Host does not exists. Also the Host should also be able to connect with any Pod with its own IP address, without any address translation.
THE OVERLAY NETWORK
Flannel is created by CoreOS for Kubernetes networking, it can also be used as a general software defined network solution for other purpose.
To achieve kubernetes network requirements, create flat network which runs above the host network, this is called overlay network. All containers(Pod) will be assigned one IP address in overlay network, they communicate with each other by calling each other’s IP address directly.
In the above cluster there are three networks:
AWS VPC Network: All instances are in one VPC subnet 172.20.32.0/19. They have been allocated IP addresses in this range, all hosts can connect to each other because they are in same LAN.
Flannel overlay network: Flannel has created another network 100.96.0.0/16, it is a bigger network which can hold 216 (65536) addresses, and it is across all kubernetes nodes, each pod will be assigned one address in this range.
In-Host docker network: Inside each host, flannel assigned a 100.96.x.0/24 network to all pods in this host, it can hold 28 (256) addresses. The Docker bridge interface docker0 will use this network to create new containers.
By the above design, each container has its own IP address, all fall into the overlay subnet 100.96.0.0/16. The containers inside the same host can connect with each other by the Docker bridge docker0. To connect across hosts with other containers in the overlay network, flannel uses kernel route table and UDP encapsulation to attain it.
PACKET COPY AND PERFORMANCE
The newer version of flannel does not recommend to use UDP encapsulation for production, it should be only used for debugging and testing purpose. One reason is the performance.
Though the flannel0 TUN device provides a simple way to get and send packet through the kernel, it has a performance penalty, the packet has to be copied back and forth from the user space to kernel space.
as the above, from the original container process send packet, it has to be copied three times between user space and kernel space, this will upsurge network overhead in a significant way.
THE VERDICT
Flannel is one of the simplest implementation of kubernetes network model. It uses the standing Docker bridge network and an extra Tun device with a daemon process to do UDP encapsulation. We hope this article helps you to understand the fundamentals of kuberentes networking, with this information you can start exploring the more interesting realm of kubernetes networking. | http://www.pro-tekconsulting.com/blog/kubernetes-with-flannel/ | CC-MAIN-2019-35 | refinedweb | 672 | 51.48 |
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/DynamicLibrary.h"
#include "llvm/Support/Error.h"
#include <cstdint>
#include <string>
Go to the source code of this file.
LLVM_PLUGIN_API_VERSION Identifies the API version understood by this plugin.
When a plugin is loaded, the driver will check it's supported plugin version against that of the plugin. A mismatch is an error. The supported version will be incremented for ABI-breaking changes to the
PassPluginLibraryInfo struct, i.e. when callbacks are added, removed, or reordered.
Definition at line 33 of file PassPlugin.h.
Referenced by llvm::PassPlugin::Load().
The public entry point for a pass plugin.
When a plugin is loaded by the driver, it will call this entry point to obtain information about this plugin and about how to register its passes. This function needs to be implemented by the plugin, see the example below:
Referenced by llvm::PassPlugin::Load(). | http://www.llvm.org/doxygen/PassPlugin_8h.html | CC-MAIN-2019-51 | refinedweb | 154 | 53.27 |
Let’s just say no conversion completes 100%. You can make demos that do, but they are surprisingly challenging to write and normally aren’t very full-featured. So, if every conversion needs a little work afterwards then SupportClasses represent areas where we have done that post-conversion work for you. SupportClasses are C# source-code that we have written and include (when needed) in your project within a file called SupportClass.cs (imagine that). This file can have multiple C# classes within it; the result of us trying to break the SupportClass into logical chunks. This also helps us to only put the sections that you need into your project. After a conversion completes, you will see a new C# project file (ok, sometimes more than one), a conversion HTML report, your new C# classes, misc. resource files and most likely a SupportClass.cs file. What should you expect to find within the SupportClasses? A small snippet of the Java-API implemented in C#. The goal of a SupportClass is to exactly mimic the functionality found in the old Java-call but in C#. This sometimes requires multiple .NET framework calls, or more parameters, or maybe an interface or two… But, in the end, a SupportClass should look exactly how you would have fixed the conversion “errors” if we hadn’t.
This, of course, begs the question of why didn’t we implement SupportClasses for ALL of the additional work? Right… No, seriously there are only a small number of cases where we can implement all of the changes necessary to have exact runtime equivalence in every variation of that snippet without massive (read: many megabytes) of code and years of time. Maybe an example will help: I have partially implemented a SupportClass for System.Property.getProperty(String property_name). This is not something that is included in the JLCA because this relies on parsing user-string (the property’s name) and we try very hard not to get involved with user-string parsing.
Example:
public class Properties {
private IDictionary environmentVariables;
public Properties() {
environmentVariables = Environment.GetEnvironmentVariables();
}
public string GetProperty(string property_name) {
string property = null;
switch(property_name) {
case "user.dir" :
property = Environment.CurrentDirectory;
break;
case "user.name" :
property = Environment.UserName;
case "user.home" :
property = environmentVariables["HOMEDRIVE"] + environmentVariables["HOMEPATH"];
//... continued for a long time ...
}
return property;
}
This actually works pretty well (if you can overlook my hard-coded strings, random concatenation without using StringBuilder, …) and around 12 of the “regular” environmental variables would easily map over in a class like this, but there are others that don’t map over as well (like the “java.version” environmental variable). Most people would probably want the “jave.version” changed to check the runtime environment version of the .NET framework that is running their new C# application. So do we look for a version of the JRE (maybe) installed on the machine and return that version? Or would it be better to “correct” the user and return the version of the .NET runtime? This gets even a little more complex when you look at the long list of “java.vm.*” variables that really don’t map to anything in .NET.
Hopefully, now you start to see why every difference or post-conversion head-ache isn’t “solved” with SupportClasses. Can we do more? Of course! J# did them all (up to JDK 1.1.4/VJ++) and a few years from now I might be giving you a very different answer but for now SupportClasses are the exception instead of the standard. That said; if you have a great idea for where we should spend our time writing the next round of SupportClasses please leave it in a comment below.
Now a bunch of you are looking at the fact I started saying (and showing an example of) why we can’t have SupportClasses for everything. But then I stated that you should feel free to suggest areas for new SupportClasses… Do I think that we have written every SupportClass that should be part of the JLCA? No. But I can’t think of too many areas that I would like to add them. But as my disclaimer below says, I am fallible. I do not claim to even assume I have thought of everything.
Since there is code in this post:
Use of included script samples are subject to the terms specified at
Normal disclaimer:
This is my blog, my opinion, and as such is subject to my limitations. Therefore, this posting is provided "AS IS" with no warranties, and confers no rights. | http://blogs.msdn.com/b/tbright/archive/2004/02/11/71662.aspx?Redirected=true&t=What%20are%20JLCA%20SupportClasses | CC-MAIN-2014-35 | refinedweb | 759 | 64.41 |
Hi,
I have a program which do a lot of compare and bool operations. e.g ( bool_result = a&b&c ; bool_result1 = a||b||c; ...). My question is does GPGPU also efficient for such operations? Can I treat such bool operations and try to optimize it just like the other computation operations like '+' or '*' ?
Or is there anything I shall pay special attention for such program run on GPGPU?
Thanks
Pretty much the same as any cpu: boolean logic is fast, branches are not (and much worse than a cpu, but slow is slow).
So avoid the short-circut stuff on scalars (like result1 = a || b || c) since it would have to be implemented as some branches if 'a', 'b', or 'c' have side effects (reading memory is a side effect. Sometimes it's better to avoid the read tho).
As you're no doubt aware, bitwise | can be used rather than || if you have well behaved values, 0 = false, 1 = true which is what the boolean operators return (but not what they operate on, in which 0 = false, anything else = true).
Except the vector logic operators which return 0 = false, and -1/~0 == true ... sigh, so beware if mixing them. And the short-circuit operators don't short-circuit with vector types. | https://community.amd.com/thread/154685 | CC-MAIN-2018-43 | refinedweb | 212 | 71.44 |
Results 1 to 2 of 2
- Join Date
- Jun 2010
- 1
- Thanks
- 0
- Thanked 0 Times in 0 Posts
Java output problem; cannot connect methods
Hey, I'm having problems getting my program to output. I've tried numerous ways, but I'm still pretty new at this. The rate at which my professor teaches isn't really helping, so it would be greatly appreciated if I could get some help. (I don't see a spoiler tag button, so I'm just going to post this, it's not very long)
import javax.swing.JOptionPane;
public class TestAccount {
public static void main(String[] args){
//Accounts array
Account[] accounts = new Account[4];
accounts[0] = new Account("111222333",1500.75,.25,5175.42);
accounts[1] = new Account("111222334",501.80,.15,150.32);
accounts[2] = new Account("111222335",1122.56,.225,3007.78);
accounts[3] = new Account("111222336",1201.77,.25,2004.35);
}
//Loop and Display
public void showMessage(Account[] accounts) {
String output = "";
for (int i=0; i<accounts.length; i++){
output += accounts[i].toString() + "\n";
}
JOptionPane.showMessageDialog(null, output, "Current Account", JOptionPane.PLAIN_MESSAGE);
}
}
Everything works but the output part, so if you have any hints...
You are not calling your showMessage method anywhere. At the end of your main method try:
Code:
showMessage(accounts);
AdSlot6 | http://www.codingforums.com/java-and-jsp/199164-java-output-problem;-cannot-connect-methods.html | CC-MAIN-2015-40 | refinedweb | 215 | 68.67 |
is:
Image
Title
HeadLine
CTA Text – To be inherited from CTA template
CTA Link – To be inherited from CTA template
- Go to the Sitecore ()
- Go to Content Editor >> Template >> Sitecore Demo >> Banners.
- Right click on Banners template folder >> Insert >> New template and name it Banner.
- Add the fields – Title, HeadLine and Image
- Lets inherit the CTA template under Common in this template. Click on Content tab and double click on CTA template.
- Save the changes and configure the icon for the template.
- Click on the Builder Options tab and Standard Values.
- Enter $name in Title field as a default value.
- In the Containers template folder we will create one Container folder for Banner and name it “Banners Folder”. And set an icon to the template.
- Click on the Builder Options tab >> Standard values. Go the Insert options section. In case if you cant see any sections then go to View tab and click on Standard fields check box.
- Now we can create a content in Content under Global folder.
- Add the Banners folder under Global.
- Right Click on Banners >> Insert >> Banner and Name it as “Main Banner”.
- Lets save the background image and upload it in the Media Library.
- Go back to Main Banner item and insert the image there. Also add the CTA text as “Our Services”. Link we will add once the Service Listing page is ready.
- We are ready with the Sitecore Content now. Lets move to Visual studio solution.
- Create a new view MainBanner.cshtml and copy the HTML markup for the Banner from index.html under the Logistico template folder.
- First lets create a rendering in Sitecore, assign it to Home item and see if it is rendering on front end then we will fetch the content from Sitecore.
- Go to /sitecore/layout/Renderings/Sitecore Demo and create a View Rendering & name it “Main Banner”.
- Lets assign it to the Home item (/sitecore/content/Home). Click on Presentation tab >> Details.
- Click on the Edit click.
- Click on Controls >> Add.
- Expand Rendering and select the Main Banner rendering.
- Add the placeholder key (“main”) in the Add to Placeholder textbox as highlighted in Green in above screenshot. The key “main” is used as we used it for dynamic binding in the Main.cshtml. So this rendering will replace that placeholder in the Main.cshtml file.
- Click on Select. >> OK >> OK.
- Publish the site.
- Once the publish is completed. Publish MainBanner.cshtml from Visual Studio and load the front end ()
- So now we have to fetch the content from Sitecore.
- Go to Home item >> Presentation tab >> Details >> Edit >> Controls.
- Select the Main Banner rendering and click Edit.
- In the Data source field, click Browse and select the Main Banner item under Global.
- Click OK >> OK >> OK.
- Publish the Home item.
- Move to Visual Studio solution.
- Copy & paste Common_Templates.tt file and rename it to Banner_Templates.tt file. Change the value of models to “Banners”.
- Open the MainBanner.cshtml file and add the below namespaces and code to fetch the data source item.
@using Sitecore
@using Sitecore.Mvc
@using Sitecore.Mvc.Presentation
@using Sitecore.Data
@using Sitecore.Data.Items
@using DummyWebsite@{
//Extract the current Db
Database currentDb = Sitecore.Context.Database;//Get the value of the Datasource
string datasource = RenderingContext.Current.Rendering.DataSource;
//Get the datasource item
Item item = currentDb.GetItem(datasource);
}
- Lets replace the hard coded values by the Sitecore Content.
The title can be replaced by : @Html.Sitecore().Field(Templates.Banner.Fields.Title_FieldName, item)
The Headline can be replaced by : @Html.Sitecore().Field(Templates.Banner.Fields.Title_FieldName, item)
The CTA Text can be replaced by : @Html.Sitecore().Field(Templates.Banner.Fields.Title_FieldName, item)
The CTA Link can be replaced by : @Html.Sitecore().Field(Templates.Banner.Fields.Title_FieldName, item)
- Replace the HTML markup by
<!– slider_area_start –>
<div class=”slider_area”>
<div class=”single_slider d-flex align-items-center slider_bg_1″>
<div class=”container”>
<div class=”row align-items-center justify-content-center”>
<div class=”col-xl-8″>
<div class=”slider_text text-center justify-content-center”>
<p>@Html.Sitecore().Field(Templates.Banner.Fields.Title_FieldName, item)</p>
<h3>
@Html.Sitecore().Field(Templates.Banner.Fields.HeadLine_FieldName, item)
</h3>
@Html.Sitecore().Field(Common_Templates.CTA.Fields.Link_FieldName, item,
new
{
@class = “boxed-btn3”,
text = @Html.Sitecore().Field(Common_Templates.CTA.Fields.Text_FieldName, item)
})
</div>
</div>
</div>
</div>
</div>
</div>
<!– slider_area_end –>
- Publish Visual Studio solution and load the front end.
- We are almost done with the component. If you check the background image is not loading and at the same time there is no img tag in HTML markup to enter it.
- So the background img is coming through CSS.
- So we need to fetch the URL of the image from Sitecore and provide it to a class in MainBanner.cshtml. Code to fetch the image URL:
ImageField imgField = item.Fields[Templates.Banner.Fields.Image_FieldName];
string ImagPath = Sitecore.Resources.Media.MediaManager.GetMediaUrl(imgField.MediaItem);
And we need to add the CSS class:
<style>
.slider_bg_1 {
background-image: url(@ImagPath);
}
</style>
- Publish the MainBanner.cshtml and reload the Front end site.
This completes our development of Banner component. Hope this detailed blog will help you develop such models. Please get in touch with me if there are any issues while following the above steps.
To summarize, we create a template with the required fields, created the content out of it, created a view rendering and assigned the rendering to the Home item and passed the content as a data source to it.
In the next blog, we will create the Featured Services component.
Thank you.. Keep Learning.. Keep Sitecoring.. 🙂
3 thoughts on “Building Home page Components – Main Banner”
Pingback: Creating a Layout – Part IV | Sitecore Dair.
LikeLiked by 1 person
I would recommend you to use Google re-captcha V3. It is really helpful as well as it allows your users to have friction less interaction with your site.
LikeLiked by 1 person | https://sitecorediaries.org/2020/01/30/building-home-page-components-main-banner/ | CC-MAIN-2021-25 | refinedweb | 971 | 61.33 |
Foundations of Python Network Programming 144
First of all, 'Network' means 'Internet.' Everything in the book concerns protocols running over IP, which is almost anything useful these days. That said, this is a lot of ground to cover -- there's FTP, HTTP, POP3, IMAP, DNS, a veritable explosion of acronyms, and this book does a great job of hitting all the ones you're likely to need.
Foundations assumes you already know Python, but nothing about network programming. The first 100 pages covers the basics of IP, TCP, UDP, sockets and ports, server vs. daemon, clients, DNS, and more advanced topics like broadcast and IPv6. And in case you already know all that, how Python deals with them. This is the only part of the book you will probably read in order. After that you pick what you need.
Find a topic you need to know how to deal with, such as using XML-RPC, and locate the appropriate section of the book. There he'll cover the basics of the topic, show you how to use the correct Python module(s) to implement it, explain any gotchas (this is key!), and write a short but functional application or two that uses it. I'm not sure why this book isn't called 'Practical Python Network Programming.' It's eminently Practical. It won't make your heart race, but it tells you exactly what you need to get the job done.
All this information is out there to find for free, but having it all collected and summarized is worth every penny. And the real value is having the edge conditions and not-so-obvious practical details explained by someone who's obviously used this stuff in the field. Python and its excellent libraries make Internet tasks relatively easy, but it's even easier with some expert help, and the libraries assume you already know what you're trying to do. For example, if you're doing a DNS.Request() record query and using a DNS.Type.ANY, it (for good reason) returns information cached by your local servers, which may be incomplete. If you really need all the records you need to skip your local servers and issue a query to the name server for the domain. This is isn't hard; you just have to know what's going on. Or do you know which exceptions can get raised if you're using urllib to fetch web pages? It's here. Exception handling is not neglected.
So you know what you're getting, here's a laundry list of topics: IP, TCP, UDP, sockets, timeouts, network data formats, inetd/xinetd, syslog, DNS, IPv6, broadcast, binding to specific addresses, poll and select, writing a web client, SSL, parsing HTML and XHTML, XML and XML-RPC, email composition and decoding, MIME, SMTP, POP, IMAP, FTP, MySQL/PostgreSQL/zxJDBC (though you won't learn SQL), HTTP and XML-RPC servers, CGI, and mod_python. As a bonus you get some chapters on forking and threading (for writing servers) and handling asynchronous communication in general.
Just to find something to complain about churlishly, I wish Goerzen had managed to do all this and make it scintillatingly brilliant and witty from cover to cover (all 500 pages); perhaps dropping juicy bon mots of gossip from the Debian project. And while I'm at it I'd like a pony. No, seriously. If you program in Python, intend to do anything Internet related, and aren't already a Python networking god, you need Foundations of Python Network Programming. In terms of 'hours I could have saved if only I had this book sooner' it would have paid for itself many times over.
You can purchase Foundations of Python Network Programming from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Amazon (Score:2, Informative)
Re:Amazon (Score:5, Informative)
-- Sex Toys... [secondnirvana.com]
Re:Amazon (Score:1)
Re:Amazon (Score:2)
Re:Amazon (Score:1)
-Leigh
Re:Amazon (Score:5, Insightful)
Re:Amazon (Score:2, Offtopic)
If he really wanted to be an ass, he could have hidden it behind a meta redirect. But he didn't. If you want to be all morally hoity-toity about it, that's your prerogative.
Oh and by the way, I can smell your patchouli.
Re:Amazon (Score:4, Insightful)
Re:Amazon (Score:1, Insightful)
Try starting a small company that makes Apple clone hardware and see if Apple keeps you from doing something that you want.
Re:Amazon (Score:1, Insightful)
I don't know of Apple or amazon.com trying to keep me from doing something I want or pay for something that should be free, so I'm not all that fussed.
Unless of course you want to use something like an affiliate program or 1-click ordering, which Amazon has patented.
Amazon is still on my shitlist, but the great thing about the internet is you can "shop" at amazon and then click-click, buy your book somewhere else, with a clear conscience.
Re:Amazon (Score:1)
And all along I that was a worm poking out of it. The good news is it must still be edible.
Re:Amazon (Score:1)
Re:Amazon (Score:1)
Welcome to slashdot, you must be new here.
Twisted Framework (Score:5, Interesting)
For those interested in starting in network programming in Python, I'd recommend checking it out.
Re:Twisted Framework (Score:2)
Re:Twisted Framework (Score:5, Informative)
The bonus to Mr. Goerzen's use of Twisted in IMAP is that I came away with a much better understanding of how to use Twisted generally -- I grokked Deferreds for the first time. And I'd read all (ALL) the Twisted documentation I could get my hands on prior to that. That probably gave me the proper background, but the book really kicked in to place those final pieces necessary to get what was going on in Twisted.
The book doesn't just cover "raw" network programming, but covers multiple domain-specific areas and points you to the best libraries and modules to use for the area.
Good stuff, I highly recommend the book.
Re:Twisted Framework (Score:2)
I've been hoping for someone to do this for a long time... Not very likely to happen though...
The documentation is huge and pretty good but it just doesn't cut it for me...
Typos (Score:5, Informative)
Re:Typos (Score:3, Informative)
On the other hand, the content is excellent, truly a good book. And so far, my binding hasn't broken, FWIW.
Re:Typos (Score:4, Interesting)
As practicalities go, the one thing I really liked about the last APress book I got ('Dive into Python') was that when I wanted to refer to it at work, I didn't have to carry the book in, I just read the section I wanted on their website.
It was one of those "never going to buy another book without this facility" moments... how could we have missed something so useful for so long?
So back on topic, this book only has one chapter available for download, and it's in PDF rather than anything useful, so I guess it's not general policy to make all APress books downloadable.
I did find it amusing how Visual Basic, C#, and
re: Downloadable Dive Into Python (Score:1)
Re: Downloadable Dive Into Python (Score:1)
Look, I program in Perl (Score:1, Interesting)
Re:Look, I program in Perl (Score:5, Interesting)
Re:Look, I program in Perl (Score:5, Interesting)
And classes too. What's more, a class is just a callable object that is called exactly like a function - "C()" - and it returns an instance of that class. If a function wants a class for instantiation, you can just pass a factory function that returns some instance of any class.
Re:Look, I program in Perl (Score:5, Interesting)
And classes too.
Yes, absolutely. First-class functions and types is one of the most important hallmarks of a good, usable language. Not being able to pass around classes or functions would severely limit how most of my solution sets are defined. Personally, I won't consider languages that don't offer this.
Re:Look, I program in Perl (Score:2)
Re:Look, I program in Perl (Score:4, Informative)
import MyModule
MyModule.SomeFunction()
Or:
from MyModule import *
SomeFunction()
Re:Look, I program in Perl (Score:2)
For the former, first check out the python package index [python.org], which is the equivelent of CPAN to see if someone else has created a relevant package. If not, creating a python module from C code from python is easy [python.org]. As far as calling perl modules from python, that is one of the things Parrot is intended to do, so your savior will come with the apoclypse.
SWIG rocks for plugging into C/C++/Libraries. (Score:5, Informative)
I can't say enough good things about SWIG. It's an amazing piece of work that has saved me years of menial labor and enabled me to integrate all kinds of compex code into Python, from hairy C++ templates to third party Win32 libraries for which there is no source code. It works extremely well with Python, and many other languages too.
Here is the blurb from the web site [swig.org]: C#, Common Lisp (Allegro CL), Java, Modula-3 and OCAML. Also several interpreted and compiled Scheme implementations (Chicken, Guile, MzScheme).
-Don
Re:SWIG rocks for plugging into C/C++/Libraries. (Score:2)
Re:Look, I program in Perl (Score:2)
Re:Look, I program in Perl (Score:2)
I can definitely see a real use for this, as there aren't very many python packages around that do what DBI does. (at least not that I could find back when I was looking for them. I suspect this has not changed).
Look: Programming in Perl is Simply Irresponsible! (Score:2, Insightful)
If you choose to program in Perl, the poor suckers who are going to have to read, maintain, clean up and modify the code you wrote will hate your guts.
Programming languages should be designed primarily for PEOPLE to read, understand, write and maintain reliably, and only incidentally for computers to interpret and execute.
Perl goes against every rule in the
Re:Look: Programming in Perl is Simply Irresponsib (Score:2)
O.K. Why don't you start with listing a single specific?
Re:That's like eating just one potato chip! (Score:2)
Nothing. Unless you want to write like you do C/C++ or shell scripting in Perl(that is what may people do).
If you follow the style guide and conventions, you will be able to write concise, elegant code in Perl. Hmmm... It's like English. You can write poetry or you can spew out illiterate gibberish. One has to invest some time in studying Perl. It grows on you.
S
Re:That's like eating just one potato chip! (Score:2)
It's sad how many Perl programmers have invested so much prescious time learning their way around Perl's fractally complex syntactic surface area and nip picking legalistic style guides and conventions, that they're unwilling to consider learning other languages. Monolinguistic Perl programmers are afraid to learn other languages because they're under the mistaken belief that programming lang
Re:That's like eating just one potato chip! (Score:2)
What's Wrong with Perl (Score:2)
If you're a Perl programmer who doesn't know what Perl's weaknesses are yourself, and you have to ask me to spell them out for you, then you're an Incompetent Perl Programmer. You should have done that research yourself before deciding to use Perl. Shame on you! Put down the crack pipe and step away from the keyboard.
Incompetent Perl programmers who can't see or admit the fl
Re:What's Wrong with Perl (Score:2)
You can find detractors to any and all languages or systems (just noticing the source of the comments above).
I was not asking you to drink someone else's koolaid, but specifically list what you think is wrong with perl and see if it stands up to scrutiny. That is why your post was rated flamebait in the first degree.
Incidently if you read some of the quotes above and then read the discussions a little further and try to understand what the orignal compl
Perl is like DDT, Asbestos and Lead. (Score:2)
If you didn't already know those points I raised about Perl and the fundamental problems of "DWIM" programming languages, then you're simply not a competent programmer. The information is out there, go look it up and learn it yourself.
My point is that there are a lot of incompetent Perl programmers ou
Re:Look: Programming in Perl is Simply Irresponsib (Score:1)
Re:Look: Programming in Perl is Simply Irresponsib (Score:2)
-Don
Re:Look: Programming in Perl is Simply Irresponsib (Score:1, Funny)
It's like maturbating in public and not cleaning up after yourself.
Yes! whenever I masturbate in public I always wipe it up afterwards! The cashier at the supermarket really appreciated that!
Re:Look: Programming in Perl is Simply Irresponsib (Score:2)
Conjoined Fetus Lady [tvtome.com]
Every Perl programmer should switch to Python. (Score:5, Insightful)
At least, for a month or so.
Knowing multiple languages increases your value as a programmer quadratically. I like to think that languages follow a square law. By doubling the number of languages you know, you quadruple your total skill and marketability as a programmer.
I've done significant stuff in both languages and there are definitely tasks where Python is better -- for example, command-and-control, super-high-level types of apps, which coordinate large systems of smaller programs. And Perl is vastly superior in other situations, such as processing enormous wads of data and formatting output. I've even written hybrid programs where Python and Perl code intertwine.
Step outside your box. You don't have to love the language you're learning, but consider it an investment in yourself. Saving money sucks too, but it's still a good idea.
Re:Every Perl programmer should switch to Python. (Score:2, Informative)
Now I use Pythons re() module which takes a little getting used to. While I'm at it, for those of you that write complex or deep regular expressions where re() craps out, there is a much more stable legacy module called pre() that seems to be undocumented. It works exactly like re(). The only difference, it trades speed for stability.
Re:Every Perl programmer should switch to Python. (Score:3, Insightful)
Different syntax and different libraries will open your mind a little. But when a language encourages (or forces) you to think differently, that's where your "square law" starts to kick in.
Re:Every Perl programmer should switch to Python. (Score:2)
If you're a "$LANGUAGE Programmer"... (Score:4, Insightful)
As the parent post says, knowing multiple languages is good. One of my pet annoyances is hearing people describe themselves as a programmer for a specific language -- there are many more out there, and to say you only do one speaks volumes about the lack of breadth of experience you posess.
And don't just stick with imperative object-oriented languages. Try a few declarative languages, like Haskell (functional) or Prolog (logic). Yes, getting your head around them is hard. But you'll be glad you did.
Disclaimer: I'm a student doing an MSc in Computer Science, and by lines of code, most of what I wrote in the last twelve months was Perl, and was completely unrelated to my thesis
Re:If you're a "$LANGUAGE Programmer"... (Score:2, Interesting)
You're a little bit off there. Haskell is not declarative. It's just functional, as you said. Prolog is declarative though.
If you want to try a nice variety of languages here's what I suggest:
C - 'nuf said.
Java or C# - Good application programming languages. Similar enough for learning purposes that it doesn't matter which you use. If you want to get into the inner workings, both are very interesting systems to learn about.
Pe
Re:Every Perl programmer should switch to Python. (Score:1)
It doesn't take much googling to figure out a growing number of very experienced programmers are "discovering" python and the most common comment is something similiar to "...fun to program again..."
Python is a main stay at my work place. A language that doesn't get in your way, you can just solve problems and create solutions.
But, perl isn't going away. It's simply magic for one liners.
Have you tried Ruby? (Score:4, Insightful)
Ruby [ruby-lang.org] to Python. No indentation hassles with Ruby, for example. You'll also like the way Ruby does OO compared to Perl OO. More [rubyforge.org] Rubilicious [ruby-doc.org] links... [rubygarden.org]
Also, The Pragmatic Programmers [pragmaticprogrammer.com] have released a new edition of Programming Ruby that's a great intro and reference to the language - go buy it from their website.
Ruby: Because I can't wait around for Perl 6 to get finished
Re:Have you tried Ruby? (Score:5, Informative)
As a background to my choice, here's what I use it for:
I tend to write primarily for the Win32 platform and most of my applications have GUI front-ends they speak to MySQL databases, and often also control third party applications via COM. Aside from the COM stuff (the apps I'm controlling are only available for Win32 anyway) my software is fully cross-platform which is desireable. I love and use GNU/Linux extensively, and am starting to see an interest from the SME market which is encouraging.
I've used Python a lot and Perl a fair bit, plus I've looked at and thoroughly expected to fall in love with Ruby and Lua. I didn't.
I've realised that all four languages are so similar in many respects, that it's very difficult to convince a person using one to convert to another unless they have a very specific need. So it's just not worth trying.
If the language you are using does the job for you, then stick with it. Once you know the work-arounds for its deficiencies (and they all have them) then there is even less reason to change.
Trying to be objective, here's how I find each of the languages:
Python - Extremely easy to pick up, which is actually good for experienced programmers as well, but at the same time very flexible and powerful. Very readable and easily maintainable code. Good range of libraries (but nowhere near as many as Perl) which all stick closely to a well established "pythonic" way of doing things. You don't have to choose from a dozen different libraries that all claim to do the same job. The interactive shell is also remarkably useful for experimentation and debugging. Most good programmers indent their code anyway, and I don't know anyone that found the forced indentation a problem unless they were deliberately being arguamentative. The concept of packages is very simple and neat - you don't need to do anything special to allow importing of your code. Object orientation is very flexible, straight-forward and powerful. There are a large number of precompiled libraries with installers for Win32 platforms - don't ever underestimate how important this is in when using scripting languages in the current commercial environment. Extensive and uniform use of dot notation. Good range of freely available cross-platform IDEs. Like most Python bindings, those to GUI libraries are generally much easier to work with than the original C libraries.
Perl - Very powerful but extensive use of special characters rather than keywords can tend to result in code which needs reading several times to fully comprehend. Having built-in regular expressions is both useful and powerful, but only adds to the problem of making code less readable. Th eobject orientated aspects of the language are very much bolted on, and far from elegant. Functionaly they're quite capable, but certainly not pretty. It's very easy to code in your own style with several ways of doing the same thing, not necessarily a bad thing, but it does means there is more to learn of the core language if you want to be confident about being able to maintain code written by others. You do feel that you have flexibility in your choice of coding style which is always nice. Immense number of additional libraries, available from one source - the wonderful CPAN - but there is also a good deal of duplication, and you need to spend time evaluating the options to find one that has the features you need and works the way you'd like. Packages have to be written or at least bundled up as such. That said, it's available by default on *nix systems, it's also very closely tied into the operating system and shell which makes OS related stuff in Perl a breeze. Win32 support is available, but Perl is only truly at home in a *nix environment. The bindings to most cross platform GUIs are aften more complicated and difficult to use than the C equivalents.
Ruby -
Re:Have you tried Ruby? (Score:2)
The new second edition Programming Ruby by Dave Thomas & co. has an excellent section on built-in classes and modules that starts at page 427 and goes to page 777 - and even it is not exhaustive. I've done Ruby programming for pay and I've not found that Ruby was lacking any functionality that I've needed. Sure, Perl's CPAN is bigger than Ruby's RAA, but there's quite a bit of redundancy in the CPAN as well. I suspect that we're ga
Re:Have you tried Ruby? (Score:2)
A nice clean container for publishing objects and frameworks for logging, testing, cron, and remote objects.
Re:Have you tried Ruby? (Score:1)
Re:Have you tried Ruby? (Score:1)
Re:Have you tried Ruby? (Score:2)
Unlike perl they are scattered all over the web and you'll have to google for the module you think you need. There is no equivalent of perl -MCPAN so installation of the modules can be a pain if there there are dependencies.
"You don't have to choose from a dozen different libraries that all claim to do the same job."
This is flat out false. There are lots of SOAP libr
Re:Have you tried Ruby? (Score:1)
Re:Have you tried Ruby? (Score:1)
This is always brought up - oh no, Python enforces indentation. This is not a hassle - five minutes using Python and you won't even notice!
Re:Have you tried Ruby? (Score:2)
Indentation sometimes gets screwed up when you move a chunk of text around. Sometimes you 'fail to proceed' when you run tests and it's because of a screwed up indentation. It's easy enough to diagnose and pretty easy to fix, but it is a hassle.
Also, if you have a crappy text editor (or if you have crap skillz) you can get in trouble when you have to indent a chunk of text. Not a big hassle, just a little one.
Despite these two exception, python's me
Re:Look, I program in Perl (Score:3, Insightful)
Re:Look, I program in Perl (Score:2)
Re:Look, I program in Perl (Score:3, Insightful)
Re:Look, I program in Perl (Score:2)
For python code: insanely easy, modules and packages are just files and directories, and your own libraries snap into the place just like they were part of the stdlib.
For C/C++: still rather easy, even if you do it manually, with pyrex or swig, it's even better. The best part is, you call C extension just like you a python module, there's no difference in whatsoever, programmer using the
Re:Look, I program in Perl (Score:2)
Haskell is superb for mathematical problems, partly because the syntax is very mathematical, partly because the compiler implementation is well optimised for that kind of problem. (I often wonder why it doesn't get more use for stuff like encryption). Completely Open Source, of course.
Python is wondrous for network-related stuff - its a real strength of the language - and also seems to get alot of use as a language for installations and mods.
Perl i
Re:Look, I program in Perl (Score:3, Interesting)
print "Who are you?"
name = stdin.readline().strip()
print "I'm glad to meet you, %s." % name
Overly verbose and complicated, you can write this in two lines and save an import as well as the need to know stdin is a file object to boot. And string formatter operator is not instantly clear, especially to a person who hasn't used printf(), not everyone has C background.
How about
As the author says... (Score:5, Insightful)
Is it? If you are, as the author says, someone familiar with Python but you have no clue about network concepts or programming, perhaps this book isn't for you. The first 100 pages or so are all intro to networking; after that, you have specific Python networking programming topics. Perhaps you'd be better suited with a networking book and then this book (sans the first 100 pages).
I've read a few books on programming languages and when they decide that the reader needs an intro to something, they usually provide pretty poor coverage of that topic. You end up being lost after you get done with the intro section. I did this when I was learning some encryption programming... before I could start actually writing code that deals with encryption, I needed a solid base. Instead of trying to teach me all I needed to know, the reference I was using pointed me at the industry's best encryption and security books and authors (like Bruce Schneier).
Disclaimer: Not having read this particular book, maybe this one is different. I don't know.
Re:As the author says... (Score:2)
If you are doing your own encryption, it's going to be easily crackable. Encryption is definitely something that needs to be done in a peer-reviewed library.
What's wrong with, say, SSL?
Encryption Programming and Canned Libraries (Score:3, Insightful)
Re:As the author says... (Score:2)
Re:As the author says... (Score:3, Insightful)
Re:As the author says... (Score:5, Insightful)
Having read the book, I understand socket programming, general network programming, and could probably design and implement my own application protocol -- badly, of course, but still... Could I have done this prior to reading this book? No. Did this book make it easy to pick up the necessary background, as well as make it easy to pick up the specifics of network programming in Python? Yes.
This is a great book, and is a must-have for Python programmers.
Python makes Windows fun (Score:4, Interesting)
I used to be a huge Linux buff (and still am when it comes to servers), but intelligent tools like Python make using Windows XP Home a much more fruitful and fun experience as I can actually get stuff done programmatically. Go Python developers and keep up the good work!!!
Re:Python makes Windows fun (Score:2)
Python Web Programming (Score:2, Informative)
It also had a brief Python tutorial in it, but I kind of skipped over that, so I can't vouch for that part. The rest of the book will definitely teach you a bit about network programming, web/database programming, and things of that nature. For most of the
/. programmers it might be pretty old hat since they were doing this stuff in the womb, but for unexperience programmers such as myself, I found it helpful.
Civ IV moddable with python (Score:4, Interesting)
OT - sig (Score:1)
Re:OT - sig (Score:1)
Re:OT - sig (Score:1)
Re:OT - sig (Score:1)
I never really got into 'hjkl' as navigation keys, as even when playing the ports on my Amiga 500 I had a numberpad to use instead
Re:Civ IV moddable with python (Score:2)
A bunch of friends and I were talking about civ 3 the other day, and how the biggest feature it lacks is some player useable scripting engine. I hope that it will be flexible and allow things such as iterating through all cities and setting production to cavalry where net shields are greater than 15 per turn, except if the city has no barracks, in which case set production to that. It's a pain to do by hand.
Re:Civ IV moddable with python (Score:2)
Google groups psoting (Score:1)
/ groups?q=Foundations+of+Python+Network+Programmin g &hl=en&lr=&selm=mailman.3337.1095202643.5135.pyth o n-announce-list%40python.org&rnum=1
How many Foundations? (Score:5, Funny)
Re:How many Foundations? (Score:2)
Re:How many Foundations? (Score:1)
Re:How many Foundations? (Score:2)
For a minute there... (Score:2, Funny)
For a minute there I really did think the title "Foundations of Python Network Programming" indicated that a new Python Network was being created for television and that they were laying the foundations and discussing what the programming schedule would be.
Seriously, no joke. And by the way, a Python Network would be beat the Game Show Network [gsn.com] hands down !!!
The Agony of SOAP/WSDL (Score:2, Interesting)
The usual way involves a pageful of obscure code, and having to use obtuse WSDL descriptor files and code generators to give you classes.
But, hey, python can generate classes and methods on the fly. So getting the temperature at zip code 90210 becomes a one-liner after some standard imports:
I'm n
Python == good (Score:3, Interesting)
Python caused me to change my layout for code, almost instantly eliminating a big problem with c-like code: the missing brace.
Most code is structured like this: In this small segment, notice that there are to sets of braces - and they don't line up at all. You have to mentally follow the code after "fubar" and see that after the condition "if (c())" in order to mentally track the state of the braces.
Compare this to
(slashdot's ECODE filter sux0rs)
If you could see it, you'd notice that the braces line up. The opening and closing braces for the condition "if (c()))" are indented one more than the braces for function Fubar() which are indented more than the line "Function fubar()" itself.
Thus, you merely have to follow the indents to match the opening/closing braces. As a result of this change, I spend less than 5 minutes per week matching up braces without the need for an IDE to match them up for me.
Python seems to be a good language (I like that you can compiles sections of a Python program in c to improve performance without rewriting the whole program) but it's concepts of layout certainly carry beyond Python itself!
Re:Python == good (Score:2)
void function(args){
if (<cond>) {
<body>
}
}
See how the closing brace for the if lines up with the if itself?
Re:Python == good (Score:1)
FREE BOOK HERE! (Score:2)
Re:Easiest review to skip (Score:5, Insightful)
Re:Easiest review to skip (Score:2)
Re:Easiest review to skip (Score:2)
Re:Easiest review to skip (Score:2)
Re:A better article... (Score:1) | http://news.slashdot.org/story/04/10/13/1815209/foundations-of-python-network-programming | CC-MAIN-2015-35 | refinedweb | 5,320 | 71.04 |
18 January 2011 12:58 [Source: ICIS news]
LONDON (ICIS)--A 70km stretch of the river Rhine in ?xml:namespace>
The closure was causing major disruption to river traffic, said Florian Krekel, a spokesman for shipping authority Wasser- und Schifffahrtsamt Bingen.
“We have received a lot of calls from different companies which are depending on deliveries,” he said.
The authority had planned to allow restricted passage of some vessels upstream of the accident site, near St Goarshausen, on Tuesday, but this had now been put back until Thursday.
“The sunken ship is moving and we can’t put it at risk by the passage of other ships,” Krekel said over the telephone.
The river is closed between Bingen, near Mainz, and Engers, just north of Koblenz.
Tests were now expected to be carried out on Thursday with the passage of a single ship followed by a survey, which could result in the stretch of river remaining completely closed.Tests were now expected to be carried out on Thursday with the passage of a single ship followed by a survey, which could result in the stretch of river remaining completely closed.
Salvage work cannot begin until four cranes, which are currently en route to the site, were in place. Two were expected on Thursday and a third on Sunday, but Krekel said he did not know when the fourth would arrive.
The river would then remain closed until the salvage operation – expected to last between two and three weeks – was completed, he said.
The Waldhof, which was carrying 2,378 tonnes of sulphuric acid, capsized on 13 January while en route from German chemical major BASF’s production hub in
The
Chemical industry sources were predicting problems if the river remained closed beyond 18 January, but so far they have reported no major | http://www.icis.com/Articles/2011/01/18/9427074/stretch-of-rhine-may-stay-shut-for-three-weeks-after-ship-capsize.html | CC-MAIN-2014-41 | refinedweb | 302 | 54.97 |
A.
Here’s a practical example where lambda functions are used to generate an incrementor function:
Exercise: Add another parameter to the lambda function!
Watch the video or read the article to learn about lambda functions in Python:
Puzzle. Here’s a small code puzzle to test your skills:
def make_incrementor(n): return lambda x: x + n f = make_incrementor(42) print(f(0)) print(f(1))
To test your understanding, you can solve this exact code puzzle with the topic “lambda functions in Python” at my Finxter code puzzle app.
When to use lambda functions?
“If you don’t mind, can you please explain, with examples, how we are supposed to use ‘lambda’ in our Python programming codes?” — Colen, Finxter user
Lambda functions are anonymous functions that are not defined in the namespace (they have no names). The syntax is:
lambda <argument name> : <return expression>.
First of all, don’t use lambda functions if it doesn’t feel natural. In contrast to many other Python coders, I’m no big fan of creating fancy Pythonic code that nobody understands.
Having said this, I must admit that I use lambda functions quite frequently. Here is how I use lambda functions in one of my puzzles (you may recognize it from the CBP book).))
Exercise: What’s the output of this code?
The encrypt function shifts the string by two Unicode positions to the right. The decrypt function does the exact opposite shifting the string s1 to the left. Hence, the output is “True”.
To answer the question, I use lambda functions only as an input argument for functions such as map() or filter(). For example, the map function applies the argument function (anonymous or not – doesn’t matter) to each element of a sequence. But it’s often cleaner to define the function first and giving it a human-readable name.
Let’s have a look at an interactive video course devoted only to the wonderful Python lambda function!
Lambda Functions Video Course
Overview
Applications min() and max()
Parameterless Lambdas
Map Function and Lambdas
Stacking Lambdas
The Filter Function
If-Else Loops
Customize Sort()! | https://blog.finxter.com/a-simple-introduction-of-the-lambda-function-in-python/ | CC-MAIN-2020-34 | refinedweb | 353 | 62.88 |
> Is it worth adding a check like other checks that are in lwip_init? Up to 1.3.0, it seems to have been working with WND == MSS. And although this is not a good idea regarding performance, there are applications where it makes sense, e.g. if your system is too slow to receive two segments in a row without a big gap in between. So adding a check including #error might be annoying in this case. However, I a) don't know if other TCP stacks handle this well (problems with remote peers may arise) and b) I guess WND < MSS might not work at all, so at least checking this in lwip_init might be a good idea. After all, I'm still not sure that the change in behaviour seen from 1.3.0 to 1.3.1 (can be seen quite good in Jan's wireshark dump) is OK (unless it can be turned off by setting the window update threshold to zero). Simon > > Bill > > >Alain Mouette wrote: > >> May I suggest that a comment about this be added in the config file. > >> > >> A special page on the wiki about configuring the many buffers in LWIP > >> would be awsome too... This is a very obscure area in lwip config :( > >> > > > >Please have a look at opt.h (and it's in there for a while now): > > > >/** > > * TCP_WND: The size of a TCP window. This must be at least > > * (2 * TCP_MSS) for things to work well > > */ > >#ifndef TCP_WND > >#define TCP_WND 2048 > >#endif > > > > > > _______________________________________________ > lwip-users mailing list > address@hidden > -- Jetzt kostenlos herunterladen: Internet Explorer 8 und Mozilla Firefox 3.5 - sicherer, schneller und einfacher! | https://lists.gnu.org/archive/html/lwip-users/2009-10/msg00134.html | CC-MAIN-2019-43 | refinedweb | 274 | 80.41 |
For no particularly good reason, I spent a year collecting them. Here, then, are the big heads of 2004.
( --More--( 2%) )
For no particularly good reason, I spent a year collecting them. Here, then, are the big heads of 2004.
( --More--( 2%) )
A policeman saw the couple hugging and kissing in a taxi [...] He took the unidentified man and his Egyptian acquaintance to a police station for questioning. The two "confessed to hugging and kissing inside the taxi," and the man also admitted to being drunk. He was fined $3,270.
The affluent emirate is in the midst of a drive to establish itself as the Gulf's business and leisure hub.
An underground group of fisherman says it has dealt with discrimination from Missouri lawmakers for years.
Noodling is an ancient fishing method in which the fisherman wiggles his or her hand underwater near a bed of fish in hopes that a catfish will bite or, even better, swallow his hand. The fisherman then pulls the catch, which can weigh up to 250 pounds, out of the water.
"I feel like we can win this battle one noodler at a time," the noodler wrote.
There is this sickening reddish spooge rolling down the outside of my windows, and sticking to them in big muddy clots.
"Rich.
I heard about this site last year when I read the Wired article about it, but I hadn't gotten around to reading it until yesterday. I got totally sucked in, and read the whole thing in two sittings. It's very entertaining. Short version: heiress runs to escape arranged marriage; launders money, flees private security forces, hangs out with smugglers, blogs.
I started with About the Author then went on to entry #1.
This Esquire article about her gets interesting on page 3 (after he stops cut-and-pasting chat logs and actually arranges an interview.) Warning, the article contains some spoilers, so read the blog first.
The webcast machine at the club loses its mind at least once a week: it appears to run out of memory and crash, but I can't figure out what the culprit is.
The machine is a dual CPU Athlon 2400+ with 1GB RAM and 500MB swap. It's running Fedora Core 3, but I was also experiencing this problem on FC2 and RH9. Memtest86 says the RAM is fine. It's got an Osprey 100 BT848 video capture card and an SB Live EMU10k1 audio card.
I set up a cron job that once a minute captures the output of "top -bn1" and "ps auxwwf" to a file. Here's are a pair of those files as it loses its mind. Note that the load goes from 3.44 to 22.73 in a minute and a half.
I've compared the two files character by character, and I don't see a smoking gun. The differences look quite trivial to me.
So while I was sitting there staring at this, I saw something very intersting happen: "top" was running on the machine's console, and showed 380MB swap available -- and the oom-killer woke up and shot down an xemacs and an httpd.
So, how's that even possible? Does this mean that some process has gone nuts and started leaking wired pages, so that it can't swap at all? Or what?
So, any ideas?
Update, Dec 29: It looks like something is leaking in the kernel; /proc/slabinfo shows the size-256 slab growing to 3,500,000 entries (over 800MB.) Current suspect is the bttv/v4l driver (since one of the things this machine does is run "streamer" to grab a video frame every few seconds.) That would be about 525 leaked allocations per minute, or around 26 leaks per frame.
kernel 2.6.9-1.681_FC3, xawtv-3.81-6.
Update, Jan 12: That was the culprit. This is the fix:
--- ./drivers/media/video/bttv-driver.c.orig 2005-01-11 14:54:15.477911088 -0800 +++ ./drivers/media/video/bttv-driver.c 2005-01-08 13:49:44.000000000 -0800 @@ -2992,6 +2992,9 @@ free_btres(btv,fh,RESOURCE_VBI); } + videobuf_mmap_free(file, &fh->cap); + videobuf_mmap_free(file, &fh->vbi); + #ifdef VIDIOC_G_PRIORITY v4l2_prio_close(&btv->prio,&fh->prio); #endif --- ./drivers/media/video/video-buf.c.orig 2004-10-18 14:54:08.000000000 -0700 +++ ./drivers/media/video/video-buf.c 2005-01-08 13:50:04.000000000 -0800 @@ -889,6 +889,7 @@ int i; videobuf_queue_cancel(file,q); + videobuf_mmap_free(file, q); INIT_LIST_HEAD(&q->stream); for (i = 0; i < VIDEO_MAX_FRAME; i++) { if (NULL == q->bufs[i]) | https://www.jwz.org/blog/2004/ | CC-MAIN-2018-43 | refinedweb | 760 | 82.75 |
In this problem, we are given string str by the user. And we have to print only those characters whose frequencies of occurrence in an odd number.
To solve this problem, we have to find the total frequency of occurrence of a character in a string. And print only those characters of the string that have odd frequencies of occurrence.
Let’s take an example to understand the topic better −
Input : adatesaas. Output : dte
Explanation −The characters with their frequency of occurrence are −
Characters with odd frequency are d, t, e.
Now let's try to create an algorithm to solve this problem −
Step 1 : Traverse the string and count the number of occurrences on characters of the string in an array. Step 2 : Traverse the frequency array and print only those characters whose frequency of occurrence is odd.
Let's create a program based on this algorithm −
#include <bits/stdc++.h> using namespace std; int main(){ string str = "asdhfjdedsa"; int n = str.length(); int frequency[26]; memset(frequency, 0, sizeof(frequency)); for (int i = 0; i < n; i++) frequency[str[i] - 'a']++; for (int i = 0; i < n; i++) { if (frequency[str[i] - 'a'] % 2 == 1) { cout << str[i]<<" , "; } } return 0; }
d , h , f , j , d , e , d | https://www.tutorialspoint.com/print-characters-having-odd-frequencies-in-order-of-occurrence-in-cplusplus | CC-MAIN-2021-25 | refinedweb | 209 | 61.97 |
Summary: Microsoft Scripting Guy Ed Wilson teaches how to use Windows PowerShell to troubleshoot Microsoft Outlook problems.
Hey, Scripting Guy! I have a problem with Microsoft Outlook on my laptop. It seems that it goes to sleep or something. I will be working, go to check my email, and nothing appears. But if I look at my Windows 7 mobile phone, it shows that new email is in my inbox. I am thinking that I have a problem with Outlook.
—BT
Hello BT,
Microsoft Scripting Guy Ed Wilson here. The Scripting Wife and I had a wonderful time at the Geek Ready conference. She knows many of the Microsoft premier field engineers (PFEs) because she has met them at various Windows PowerShell User group meetings, at conferences, or just from hanging around me. She had a chance to meet several PFEs who have written guest Hey, Scripting Guy! Blog articles, but that she had never met in person. It really was awesome. We were up late on several nights having “Windows PowerShell side meetings” with various PFEs, and on a couple occasions, we ended up having impromptu script club meetings. The results of all this engagement will be appearing for months to come on the Hey, Scripting Guy! Blog, so stay tuned.
Anyway, with all the geeks taking over the hotel where we were staying, poor Microsoft Outlook was struggling. In fact, Internet connectivity was spotty most of the time, just due to the sheer magnitude of the demands placed on the infrastructure by us. The nice thing is that, by using Windows PowerShell, I can do an awful lot of discovery.
When I am doing event log exploration, things go a whole lot faster if I store the particular lot in a variable. So that is what I am going to. Microsoft Outlook writes events to the application log using the source id of outlook. I therefore use the Get-EventLog cmdlet and gather up all the entries that have a source of outlook. I store these EventLogEntry objects in a variable I call $log. The EventLogEntry .NET Framework class appears in the System.Diagnostics namespace, and documentation appears on MSDN. This command is shown here:
$log = Get-EventLog application -Source outlook
Once I have all of the outlook entries in a variable, I can begin to examine them. I am curious about how many entries I have, so I use the count property to see. This command and output are shown here:
PS C:\> $log = Get-EventLog application -Source outlook
PS C:\> $log.count
280
Next, I want to see what type of structure I am dealing with, so I index into the array of records, and pick off one record to examine and pipe the results to the Format-List cmdlet (fl is an alias for the Format-List cmdlet). Here is the command I use:
$log[0] | fl * -Force
This command and the associated output appear in the following figure.
Wow, as it turns out, that was a bad example because it goes on and on and on. It can be useful, however, because this shows me all of the add-ins that Microsoft loads. Remember, seeing the Microsoft Outlook splash screen that shows how many add-ins it is loading? It looks like EventID 45 tells me about loading add-ins.
A better event log entry is the one that is shown in the following figure. The reason this event log entry is better is that it allows me to see representative data from all of the different properties in a single screen shot.
Event ID 26 looks like it tells me when Microsoft Outlook has lost connectivity to the Microsoft Exchange server. Hmmm, that might be useful. Let me look at all event ID 26s and see what they say. To do this, I pipe the collection of event log entries that are stored in the $log variable to the Where-Object cmdlet (? Is an alias for Where-Object). In the script block associated with the Where-Object cmdlet, I look for eventid that is equal (-eq) to 26. This command is shown here:
$log | ? {$_.eventid -eq 26}
My screen quickly floods with entries. Interestingly enough, it seems that Event ID 26 reports lost connectivity as well as restored connectivity. This can actually be a useful thing. What I can do first is look at how many disconnects and how many connection restored messages there are. This command is shown here:
PS C:\> $log | ? {$_.eventid -eq 26} | group message -NoElement | select name, count | ft -AutoSize
Name Count
---- -----
Connection to Microsoft Exchange has been lost. Outlook will restore the connection when possible. 36
Connection to Microsoft Exchange has been restored. 34
Now that I see that there are a significant number of times when the connection to the Microsoft Exchange server dropped, I would really like to see when this is happening. To do this, I want to focus on the timewritten property from the eventlog. The problem is that if I find my events, and group by the timewritten property, the result will be 70 separate lines and no real grouping because each timewritten record will be unique. Therefore, nothing exists to group on. The command is shown here:
$log | ? {$_.eventid -eq 26} | group timewritten
The command and output are shown in the following figure.
The trick is to realize that the timewritten property contains a DateTime object. This is important because I know that an instance of the DateTime object exposes a day property. I can then use the Group-Object cmdlet to organize the eventlog records by day. I used the Get-Member cmdlet (gm is an alias) to discover that the timegenerated property contains a DateTime object. This command and output are shown here:
PS C:\> $log[0] | gm timegenerated
TypeName: System.Diagnostics.EventLogEntry#application/Outlook/1073741869
Name MemberType Definition
---- ---------- ----------
TimeGenerated Property System.DateTime TimeGenerated {get;}
The problem is exposing that DateTime object to the Group-Object cmdlet. For example, the following command attempts to use dotted notation to directly access the day property of the DateTime object:
$log | ? {$_.eventid -eq 26} | group timewritten.day
The command and output are shown here (they are not impressive):
PS C:\> $log | ? {$_.eventid -eq 26} | group timewritten.day
Count Name Group
----- ---- -----
70 {System.Diagnostics.EventLogEntry, System.Diagnostics.EventLogEntry,
The following commands also do not work. In fact, some generate errors:
$log | ? {$_.eventid -eq 26} | group (timewritten).day
$log | ? {$_.eventid -eq 26} | group $(timewritten).day
$log | ? {$_.eventid -eq 26} | group $_.timewritten.day
The trick is to use the Select-Object cmdlet with the expandproperty parameter to expand the timewritten property from the Get-EventLog. In this way, I can then use the Group-Object cmdlet to group the eventlog records by day. I decided to leave the details because they let me see which days I am having problems. The command is shown here:
$log | ? {$_.eventid -eq 26} | select -expandproperty timewritten | group day
The command and associated output are shown in the following figure.
It is obvious from the preceding figure, that there were problems on November 4 and October 16. This is great information because I could work with a customer, or someone who says, “I had a problem with Outlook last week sometime. I don’t really remember when, but it seemed like it kept dropping off, and not working.” And with Windows PowerShell and the Get-EventLog cmdlet, I can actually connect remotely to their computer and retrieve the information I need. And then I can say, “Yes, I see you had a problem on November 4, but we were applying patches to the Exchange server that day, and it was up and down all day. So, no, there is no problem with Outlook.”
But what if we look back, and we were not performing maintenance? Maybe, instead I want to see if there is a pattern by hour. I also know that the DateTime object contains an hour property. Therefore, using my trick from earlier, I come up with the following command:
$log | ? {$_.eventid -eq 26} | select -expandproperty timewritten | group hour
The command and associated output follow. It is very revealing. Thirty-six of the disconnects occurred between the hours of 20:00 (8:00 P.M.) and 22:00 (10:00 P.M.).
BT, that is all there is to using Windows PowerShell to assist in troubleshooting Microsoft Outlook problems. Join me tomorrow for more cool Windows PowerShell things. these trick !
this*
Sorry for the typo.
thanks
Dear Ed!
Your last attempt to use grouping directly on log entries…:
$log | ? {$_.eventid -eq 26} | group $_.timewritten.day
… was pretty close, it missed only something around the code on which data should be grouped:
$log | ? {$_.eventid -eq 26} | group { $_.timewritten.day }
But I would probably choose different grouping code:
$log | ? { $_.eventId -eq 26 } | group { $_.TimeWritten.ToShortDateString() }
Done. 😉
Hi Ed,
thank you for reminding me of the Get-Eventlog Cmdlet!
I nearly forgot about it since the last scripting games …
Analyzing the event logs of our client PCs is really easy with powershell!
That's something where having PS at our fingertips rocks!
The one liner "$log = Get-EventLog application -Source outlook -computername RemoteComputer"
is a highlight on its own!
Up to the advent of powershell I would have (if you don't have high level management infrastructure systems … or you are no admin, like me) opened a remote desktop connection to "RemoteComputer", clicked through the start menu entries until we see the event log icon.
Double clicking it might take a while until the list of events expands and even then we have to filter or at least sort the events and scroll to the potentially interesting part of the list.
But you still will have to copy and paste some of the entries from the list if you have to analyze problems further. At this point we have TEXT … NO OBJECTS! So the nice things presented here like grouping, sorting, filtering … on special properties of these log entries has been impossible or at least much more difficult.
The biggest deal might be (Having enabled remoting on our remote machines) that we can collect data from each of our departments workstations in a powershell one liner! We can save the results to a file and analyze them offline at any time to produce reports of the most common problems related to all or selected machines ( and maybe we like to produce beautiful excel reports directly from our powershell console … 🙂
That's good news … isn't it?
Klaus.
@Klaus Schulte don't forget too much from last years Scripting Games, because the 2012 Scripting Games are coming in April … only a few months away. Yes, I always like using the Get-EventLog cmdlet. It is very easy to use, and very powerful. I am amazed by the things I find out in the log files. I was really happy when I figured out I could group the entries by the time they were written.
@Bartek Bielawski Hello my friend! You are right about using a script block to force evaluation of the DateTime object to permit grouping by day and by hour. The difference, is that in my method of using Select -ExpandProperty is that in the element information, I now have the actual timestamps. Also when grouping by timewritten to string, unelss there are a whole bunch of date / times that are exactly the same time, then the grouping will only be groups of 1. In addition, once you call tostring, you no longer have a datetime object … you have a string. All this aside, you are absolutely correct, if I had of added curly brackets around the timewritten property it would have forced it would have opened it up, and allowed me to meet my original goal. Thank you very much for sharing this … it is a great trick to remember! | https://blogs.technet.microsoft.com/heyscriptingguy/2011/11/09/troubleshoot-outlook-problems-with-powershell/ | CC-MAIN-2018-22 | refinedweb | 1,997 | 64.61 |
JustLinux Forums
>
Community Help: Check the Help Files, then come here to ask!
>
Software
> gdesklets and problems with bonobo.ui
PDA
Click to See Complete Forum and Search -->
:
gdesklets and problems with bonobo.ui
danimal1009
05-22-2004, 04:04 AM
I had gdesklets installed before, until I had a problem with with one of my sources for apt. I got that fixed and then tried to reinstall gdesklets, but now when I start it, it exits with this error: Traceback (most recent call last):
File "/usr/bin/gdesklets", line 12, in ?
from main.Starter import Starter
File "/usr/share/gdesklets/main/Starter.py", line 2, in ?
from factory.DisplayFactory import DisplayFactory
File "/usr/share/gdesklets/factory/DisplayFactory.py", line 3, in ?
from sensor.DefaultSensor import DefaultSensor
File "/usr/share/gdesklets/sensor/DefaultSensor.py", line 1, in ?
from Sensor import Sensor
File "/usr/share/gdesklets/sensor/Sensor.py", line 15, in ?
from SensorConfigurator import SensorConfigurator
File "/usr/share/gdesklets/sensor/SensorConfigurator.py", line 1, in ?
import gnome.ui
ImportError: could not import bonobo.ui
Since I installed it with apt-get, I wouldve thought it take care of the stuff gdesklets needs... Any ideas on how to fix this? :confused:
mdwatts
05-22-2004, 12:07 PM
Originally posted by danimal1009
ImportError: could not import bonobo.ui
Sorry as I don't use gdesklets or bonobo, so of course I know very little or nothing at all about either.
Does bonobo.ui exist on your system?
Searched G4L for "could not import bonobo.ui" to see if others have run into the same problem with apt-get?
Anyone else have any ideas?
danimal1009
05-22-2004, 03:20 PM
According to locate (and of cousre updatedb), the file bonobo.ui doesn't exist on my system. I'm under the assumption that it's some kind of python extension. I looked up all the dependencies according to "apt-cache show" and tried "apt-get install"(ing) all of them. It updated a few, but that didn't work either... :(
Searched G4L for "could not import bonobo.ui" to see if others have run into the same problem with apt-get?
Yes... some of the results mention I have to have python-gnome or something like that installed... I checked and its installed... Still not working, though
danimal1009
05-22-2004, 03:54 PM
Wouldn't you know... a few minutes later, I solved my problem... apparently the problems from the bad apt source I mentioned earlier haven't completely gone away yet... During my continued searchng, I got the idea to try and load the bonobo.ui module through the interective python shell thing... here was the result Python 2.3.3 (#2, May 1 2004, 06:12:12)
[GCC 3.3.3 (Debian 20040401)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygtk
>>> import gtk
>>> import bonobo
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib/python2.3/site-packages/gtk-2.0/bonobo/__init__.py", line 6, in ?
import ORBit
ImportError: /usr/lib/libIDL-2.so.0: undefined symbol: __guard
>>> import bonobo.ui
Fatal Python error: could not import ORBit module
Aborted
Now that "undefined symbol: __guard" was showing up a lot and wreaking havoc with apt-get and not letting me install packages and breaking things as a result. So now that I got rid of the bad source, I did am "apt-get --purge remove libIDL0" then "apt-get install"(ed) all the stuff that the purging removed. Problem solved...
mdwatts
05-22-2004, 04:31 PM
Originally posted by danimal1009
Problem solved...
:)
Glad I could at least try to help. Thanks
justlinux.com | http://justlinux.com/forum/archive/index.php/t-127979.html | crawl-003 | refinedweb | 616 | 69.18 |
signalr_flutter 0.1.0
signalr_flutter: ^0.1.0 copied to clipboard
A flutter plugin for .net SignalR client. This client is for ASP.Net SignalR, not for .Net Core SignalR.
Use this package as a library
Depend on it
Run this command:
With Flutter:
$ flutter pub pub add signalr_flutter
This will add a line like this to your package's pubspec.yaml (and run an implicit
dart pub get):
dependencies: signalr_flutter: ^0.1.0
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:signalr_flutter/signalr_flutter.dart'; | https://pub.dev/packages/signalr_flutter/install | CC-MAIN-2021-17 | refinedweb | 106 | 62.85 |
Hello,
I am writing an application which has data cached in ObjectGrid. I also have a Loader to query data from a backend system, in case of cache miss. I would like to get data from the Grid using a list of key, so I decide to use ObjectMap.getAll(), as I don’t want to have a loop calling get() for each key. I found that the getAll() is a lot faster than get(), and my application is performance concerned.
The situation is, I have key1, key2, key3 cached in ObjectGrid. I call getAll() for list of “key1, key2, key3, key4”. The Loader will be trigged for key4. At the time of request, assume that the backend is temporary down, so there will be a LoaderException for key4.
From my testing, such a situation will result in Exception. I don’t get any data back, even though the key1, key2, key3 are in cached. Is it possible to get data back for key1, key2, key3 and an Exception for key4? Or if there is any alternative way to do please suggest.
P.S. I use WXS6.1 (and will migrate to WXS7.1 next year)
Thanks,
Jirassaya
Topic
SystemAdmin 110000D4XK
1485 Posts
Pinned topic ObjectMap.getAll() - possible to get data for some keys in the key list?
2012-12-26T10:15:41Z |
Updated on 2013-01-25T13:29:51Z at 2013-01-25T13:29:51Z by jhanders
Re: ObjectMap.getAll() - possible to get data for some keys in the key list?2013-01-14T17:17:23Z
This is the accepted answer. This is the accepted answer.I am assuming here you wrote the Loader yourself and you are not using one of the built-in JPA loaders that come with WXS. Your own loader code, in the "get(...)" method, throws the exception to WXS. Therefore, you can choose to not throw an exception but instead return a special value object when the backend is down. Naturally, ALL your client code must check for this special value object but it does allow you to do what you want to do in a well-designed way. There is no special value built-into WXS for this purpose (like the special KEY_NOT_FOUND value object WXS has). You will need to define one yourself. Since WXS does not enforce strict types in a BackingMap for keys or values (even tho it is nearly always far more sensible and less error-prone to always put one key type and one value type in any given map), you can do the following:
public class MyValueClass {
public static final String GRID_DOWN = "GRID IS DOWN";
private int someInteger;
private Float someFloat;
...
}
The real data in your BackingMap (named MyValueClassMap, for example) will be of type MyValueClass, but if you do a getAll(keylist) and the fourth key results in a Loader call while your backend is down, your Loader would not throw an exception but instead return the GRID_DOWN constant for each key in the Loader.get(...) call (because you don't know the order of the cache hit keys vs the cache miss keys). In your example, the return from getAll(keylist) will be:
{myValueClass1, myValueClass2, myValueClass3, GRID_DOWN}
When you iterate, you need to test each value in the returned List to see if it is GRID_DOWN before you use the object. This will of course slightly slow down things but hopefully not much in your particular case. Note that you can't do
if (!value == MyValueClass.GRID_DOWN)...
because the constant instance of GRID_DOWN is coming from the container JVM and won't have the same memory address as the constant instance of GRID_DOWN in your client JVM. You should do this or similar
if (!GRID_DOWN.equals(value))...
or, more efficient still if your value class is not itself a String,
if (!(value instanceof String)) ...
"instanceof" in Java is far more efficient in Java 1.6 and later than it was in earlier versions.
Re: ObjectMap.getAll() - possible to get data for some keys in the key list?2013-01-25T13:29:51Z
This is the accepted answer. This is the accepted answer.
In the code example the Loader throws an exception for a negative Integer. As such for those keys an Exception was returned instead of the value that is expected. Instead of an Exception another type could be used to indicate that the entry is not available.
List<Integer> keyList = new ArrayList<Integer>(40); for ( int i = -20; i < 20; ++i) { keyList.add(Integer.valueOf(i)); } Map<Integer, Object> getAllValues = agentManager.callMapAgent( new GetAllAgent(), keyList); public class GetAllAgent implements MapGridAgent { private static final long serialVersionUID = 1L; public Object process(Session s, ObjectMap map, Object key) { try { return map.get(key); } catch (ObjectGridException e) { return e; } } @SuppressWarnings( "rawtypes") public Map processAllEntries(Session s, ObjectMap map) { return null; } } | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014923250 | CC-MAIN-2017-43 | refinedweb | 802 | 64.81 |
severity 295134 important retitle 295134 USB FDU in same namespace as harddisk breaks bootloaders thanks On Sunday 13 February 2005 23:03, Michael Forbes wrote: > because there is a "mystery device" at sda: > Feb 9 14:05:21 kernel: Attached scsi removable disk sda at scsi0, > channel 0, id 0, lun 0 > > info: /bin/report-hw: discover: ;;;Y-E DATA;USB-FDU;/dev/sda ^^^^^^^ Well, this looks to me like it _is_ your USB Floppy Disk Unit. > info: /bin/report-hw: discover: ;;;ATA;ST3160023AS;/dev/sdb > > However, when rebooting, the device is mounted as /dev/sda: That is because a floppy driver is loaded much earlier in d-i than the hard-disk drivers. During the boot of the installed system, the order is reversed. I guess we'll have to think of something very smart to find a solution for this situation where a removable device is loaded in the same namespace as the boot device. > I also had this *same* problem on an i686 machine (though at > the time I though the mystery device was an external usb floppy, so I > didn't file a bug). Does your i686 machine also have the harddisk on /dev/sda? If yes, it is most likely the same problem. Cheers, FJP | https://lists.debian.org/debian-boot/2005/02/msg00477.html | CC-MAIN-2019-26 | refinedweb | 210 | 52.53 |
i have searched alot but could not find any refernce across internet. my case is gameobjects are spawning from the top of screen and translating downwords. each gameObject has trigger and a script that translating it. player have to tap on each object to destroy that specific gameobject. i am using raycasting to detect touch on each object. everything working if speed is normal. when object starts to come fast . the gameobject start ignoring touch. any solution for this? i have to scripts one attach to camera that manage raycasting and other translating tile
using UnityEngine;
using System.Collections;
public class RaycastManager: MonoBehaviour {
Touch touch;
public bool stopTime=true;
public float delay,timer;
// Use this for initialization
void Start () {
timer = delay;
}
void Awake()
{
MoveTile.speed = 7f;
}
// Update is called once per frame
void FixedUpdate ()
{
if (Input.touchCount >0) {
//MoveTile.speed = 0f;
touch = Input.GetTouch (0);
RaycastHit2D hit = Physics2D.Raycast (Camera.main.ScreenToWorldPoint ((touch.position)), Vector2.zero);
if (hit.collider.tag == "Pink") {
Destroy (hit.transform.gameObject);
} else if (hit.collider.tag == "Green") {
Destroy (hit.transform.gameObject);
} else if (hit.collider.tag == "Blue") {
Destroy (hit.transform.gameObject);
} else if (hit.collider.tag == "SeaGreen") {
Destroy (hit.transform.gameObject);
} else if (hit.collider.tag == "Purple") {
Destroy (hit.transform.gameObject);
} else if (hit.collider.tag == "Orange") {
Destroy (hit.transform.gameObject);
}
}
}
}
using UnityEngine;
using System.Collections;
public class MoveTile : MonoBehaviour {
public float speed;
// Use this for initialization
public void pointerDown ()
{
speed = 0;
gameObject.SetActive (false);
}
// Update is called once per frame
void Update () {
transform.Translate (speed * Vector2.down*Time.deltaTime);
}
}
I need to know the solution pls!?
Answer by giorashc
·
Jun 07, 2016 at 08:39 AM
Since you are translating your objects in Update() (instead of FixedUpdate()) then in high speeds the raycast check point might miss the actual object since Update() might be called several times between two sequential calls to FixedUpdate(). So while you check the input and raycast in FixedUpdate the object might already been translated.
Update()
FixedUpdate()
FixedUpdate
Check this link about the differences regarding being in sync with the physics engine.
Try translating your objects in a FixedUpdate() method instead of in Update()
Also do raycasting checks in Update() method as FixedUpdate() is executed in fixed intervals while Update() is called per frame (which on higher fps is more frequent the FixedUpdate()).
giorashc Thanx Alot for your reply. I tried to shift translation into fixed upadate. but face same issue.. i tried another technique my making all gamobject ui image and added event trigger on them and using "on pointer down" i achieve the same target but issues is still same. hight speed cant be handle with both techniques
is unity not capable of handling high speed gameobjects?
It sure does... it just needs to be handled right. Did you try putting logs (Debug.Log(..)) around your updates to understand what exactly happens in which order?
i checked using logs "'RaycastHit2D hit = Physics2D.Raycast (Camera.main.ScreenToWorldPoint ((touch.position)), Vector2.zero)""; the hit is racast is not colliding with the collider of gamobject. can say ignoring collision.
When you say 'everything working if speed is normal' you mean that the raycast works (or the event trigger works)?
both things work in normal speed. event triggers and raycast but when speed of gameobject increase the problem arise.
its piano tiles like game and speed of tiles increase with span of time.
Answer by davidjohn123
·
Jun 13, 2016 at 06:08 AM
I find solution of this problem after a week of struggle. if someone need solution can contact me.
Hey David. I am stuck in same situation ,so where to contact you? can you post your email here .
yes why not @ammar
I need to know the solution plz?!, Thanks!
@ahmedhassan my skype husnain.yousaf88
Hey, can you post the solution?
Answer by Brijs
·
Jun 13, 2016 at 06:37 AM
Use OnMouseDown()
OnMouseDown.
Restricting OnGUI to test for touches inside of a RECT???(Unity iPhone)
2
Answers
One more question about touches
1
Answer
Reacting to game being minimised/rotated?
0
Answers
Input.touches vs Input.GetTouch
1
Answer
This code only works on 3D Colliders and I want it to work on 2D. What do I need to change?
1
Answer | https://answers.unity.com/questions/1198965/detecting-raycast-on-moving-object.html | CC-MAIN-2020-29 | refinedweb | 702 | 61.53 |
/* version_VERSION_H #define CSOUND_VERSION_H /* Define to the full name of this package. */ #define CS_PACKAGE_NAME "Csound" /* Define to the full name and version of this package. */ #define CS_PACKAGE_STRING "Csound 5.08" /* Define to the one symbol short name of this package. */ #define CS_PACKAGE_TARNAME "csound" /* Define to the version of this package. */ #define CS_PACKAGE_VERSION "5.08" #define CS_VERSION (5) #define CS_SUBVER (8) #define CS_PATCHLEVEL (2) #define CS_APIVERSION 1 /* should be increased anytime a new version contains changes that an older host will not be able to handle -- most likely this will be a change to an API function or the CSOUND struct */ #define CS_APISUBVER 6 /* for minor changes that will still allow compatiblity with older hosts */ #endif /* CSOUND_VERSION_H */ | http://csound.sourcearchive.com/documentation/5.08.0.dfsg2/version_8h-source.html | CC-MAIN-2018-09 | refinedweb | 115 | 63.59 |
Regex.Matches Method (String, String)
Searches the specified input string for all occurrences of a specified regular expression., String) method to identify any word in a sentence that ends in "es".
using System; using System.Text.RegularExpressions; public class Example { public static void Main() { string pattern = @"\b\w+es\b"; string sentence = "Who writes these notes?"; foreach (Match match in Regex.Matches(sentence, pattern)) Console.WriteLine("Found '{0}' at position {1}", match.Value, match.Index); } } // The example displays the following output: // Found 'writes' at position 4 // Found 'notes' at position 17
The regular expression pattern \b\w+es\b is defined as shown in the following. | http://msdn.microsoft.com/en-us/library/vstudio/b9712a7w(v=vs.100) | CC-MAIN-2014-23 | refinedweb | 106 | 51.44 |
The History of Programming Languages 684
Dozix007 writes "For 50 years, computer programmers have been writing code. New technologies continue to emerge, develop, and mature at a rapid pace. Now there are more than 2,500 documented programming languages and O'Reilly has produced a poster called History of Programming Languages, which plots over 50 programming languages on a multi-layered, color-coded timeline."
Great! (Score:5, Funny)
Re:Great! (Score:5, Informative)
Unless you need an excuse to buy a 40" monitor, in which case, just forget I said anything.
Re:Great! (Score:5, Funny)
One of the quotes direct from that little presentation: "Using the Altair 8800, Bill Gates and Paul Allen develop the first programming language, and begin an extraordinary, history-making journey."
Good to know where it REALLY all started
Re:Great! (Score:4, Interesting)
"...develop a BASIC computer language for the Altair 8800."
So at least in one place they were a little more humble....
Re:Great! (Score:5, Informative)
Re:Great! (Score:5, Funny)
It's just an abridged version of "world's first programming language only i've heard of."
Re:Great! (Score:5, Funny)
I never thought I'd say it, but "first post!"*
Re:Great! (Score:4, Informative)
Abridged means shortened without changing the meaning. Clearly, inventing the first programming language is different than inventing the first commercial programming language for a personal microcomputer. So the statement is wrong, and in no way, even very small, is it correct.
Of course, Gates and Allen did not invent BASIC either, so to even claim that they " develop[ed] the first commercial programming language for a personal microcomputer." is a stretch
Re:Great! (Score:3, Funny)
Let me know when Microsoft actually gets serious about security.
Now seriously. I am cavalier about security. I run as root (same login Linux or domain admin). I leave my computers up and running and logged in.
My security is rotten and it's better than Microsoft's?
Re:Great! (Score:3, Funny)
"With the creation of Windows 3.0, an easy-to-use and highly functional operating system, millions of people discover the empowering abilities of personal computers."
I'm just... wow.
Re:Great! (Score:3, Insightful)
Ummm... Just how is that?
Re:Great! (Score:4, Informative)
That's still a far cry from the first programming language, which is what the quote actually says. Some of us were happily banging away in languages like Fortran and PL/1 well before then.
Still, their place in history can't be denied. They were at the forefront of an industry in its infancy and did perhaps more than anyone to make it a great one.
Re:Great! (Score:3, Funny)
Re:Great! (Score:3, Funny)
Re:Great! (Score:4, Insightful)
Well, maybe the first high-level programming language or the first language interpretter, but I'm willing to bet that the machine didn't run basic natively, so the native language must have been earlier.
Wait a minute! (Score:5, Funny)
Re:Wait a minute! (What about Atlas Autocode?) (Score:3, Interesting)
-wb-
Delphi from VBasic?? (Score:5, Insightful)
Re:Delphi from VBasic?? (Score:4, Insightful)
Re:Delphi from VBasic?? (Score:5, Insightful)
Putting items in categories is notoriously difficult though. It's not as if one language leads to another. Language writers are influenced by all their external stimuli which will no doubt include many other programming languages not too mention beer, pizza and their families. If you accounted for all influences then the number of arrows would increase by orders of magnitude.
There is a difference between language and IDE though. Delphi is not actually a language, rather it is a product including an IDE and a compiler for the Object Pascal language. And Borland's version of the Object Pascal language is, I believe, based on an Apple version.
On a language basis, rather than an IDE basis, the heritage is clearly Pascal -> ObjectPascal -> Delphi. No doubt there are extra bits in between the arrows that I don't know about.
I mean, you can take a basic Pascal program and compile it in Delphi. Pascal is a subset of Delphi in the same way as C is to C++.
Re:Delphi from VBasic?? (Score:5, Funny)
Re:VisualBasic from Pascal (Score:4, Funny)
BZZZZZZZZZZZZT. Wrong. Now take 3 seconds out of your life... wouldn't you think... LOGICALLY... that line numbers would be supported as backward compatibility to GWBasic? I think so. As a matter of fact, I have seen several schools teach Qbasic classes, using GWBasic text books... see where this goes, and your bullshit doesnt?
When I first encountered QuickBasic, I already knew BASIC, Pascal, and C. I could replace the keywords in my Pascal programs and then do minor debugging to have functional QuickBasic programs. The API for VB has grown since then, but it still looks like Pascal with different keywords.
You mean like StrLen? C had to have RIPPED THAT OFF TOO! Oh, let me illustrate how much they are the same:
If You = Retarded Then
Print "Retard"
Else
Print "Still Retarded"
End If
vs.
If You = Retarded Then
Begin
writeln('Retard');
End Else Begin
writeln('Still Retarded');
End
Yep.... identical all right *ugh*.
It was a smart move for MS. Most college grads were learning Pascal, and the transition to developing in the MS-proprietary language was easy since few of the rules were different
Do you even live in the same universe as the rest of us?
Today I work with Java and LotusScript with Domino because it is allows much faster business application development than any other platform. LotusScript was based on VB, and still looks similar: same keywords, different API, and it has the List variable type. I also use Java for applets, servlets, and server applications; and one of my Java projects will require a GUI-based client, although I have not written it yet. (I prefer Java over C/C++ because the native memory management saves development and QA time.) I have never developed with Delphi, although several friends are good with it, and their code looks like super-charged Pascal.
Ok, thanks for clearing up why you are completely unqualified to even exist.
Also, it looks like Super Charged pascal, because its simple Object Pascal.
Re:Delphi from VBasic?? (Score:5, Insightful)
It seems to me that the arrows mean "inspired by"/"taking features from". You should have noticed that Delphi has two arrows pointing to its inception: one from VB, and one from Object Pascal. That seems reasonable. I don't think that's intended to mean that Delphi was created by MS or anything like that, just that it got inspiration from VB.
Re:Delphi from VBasic?? (Score:5, Interesting)
Coincidentally, the guy who (for the most part) made Delphi actually left Borland and went to Microsoft, and he's now the lead architect of... C#.
:o
Perl is just as wrong (Score:5, Informative)
A more accurate history would be:
Perl 1.0: awk, sh, C, BASIC
Perl 5.0: C++, LISP
Listed as a seperate line:
Perl6 A1-12: Perl 5.0, LISP, C#, C++, Ruby, Java, Python, SNOBOL
To be more specific, Perl 1.0 had heavy influences from C. The most obvious influecnes were in the operator precedence, ternary operator and behavior of parens.
In 5.0, the influence of C++ was felt strongly on the establishment of Perl 5's non-object-model object model (AKA the object model construction kit) and from LISP can the idea of closures.
Come Perl6, of course, it's a different language which borrows most of all from Perl 5, but also heavily from the other languages listed. Adding LISP currying, Ruby mix-ins, a Java and/or C#-like VM, python-like exceptions and a number of features from C++ including templated proto-classes and iterators as well as dozens of unique features. But, ultimately I think the most world-view altering change will be the SNOBOL-like inline grammar construction.
Meta Programming Language (Score:3, Funny)
----
Your Boss Might Be A Muppet [blogspot.com]
Re:Meta Programming Language (Score:5, Interesting)
Re:Meta Programming Language (Score:5, Funny)
It's a pretty good language, really. Sort of esoteric and the syntax can be inscruitable at times, but you can really get some shit accomplished with it.
VMs will solve this issue (Score:5, Interesting)
One thing that has always bothered me is the lack of standards for basic syntax. Why not just have a standard for basic operators? For example does anyone really lose flexibility if we say statements are delimited by ';'? Or a standard syntax for if-then-else? e.g. perl's syntax is a pointless departure that adds no value.
Re:VMs will solve this issue (Score:4, Interesting)
This is how we end up with completely inscruitable languages like Visual Basic. BASIC was designed to not look like a programming language and instead look like an English sentance. When we removed the line numbers, the end of a line signified the end of a statement. Of course, adding object oriented syntax to Basic made it even more apocryphal, and we ended up with the mess we have today. We can't change it, because if we did than older stuff wouldn't work. Besides, if you want to use VB with a better syntax, you can just use C# (which is either C++ without the hassles of memory management, pointers or header files or Java without the hassles of explicit declaration of exception handling, separate Get and Set methods for properties, or cross platform execution).
I actually think it makes sense. (Score:3, Interesting)
The semicolon is often used at the end of a clause or list, therefore it does not defer to the right and thus is a suitable indicator for a logical break.
I would think language programmers at some point flirted with the idea of using the period for an end-of-statement marker, but perhaps because it is als
Re:VMs will solve this issue (Score:3, Funny)
Re:VMs will solve this issue (Score:5, Funny)
More likely, we'll see Stupid Language Wars replaced by Stupid VM wars.
One thing that has always bothered me is the lack of standards for basic syntax.
You can have my parentheses when you pry them from my cold, dead hands.
For example does anyone really lose flexibility if we say statements are delimited by ';'?
Fuck you.
Re:VMs will solve this issue (Score:4, Informative)
Re:Meta Programming Language (Score:5, Interesting)
Its interesting to note that most people don't see history repeating itself with Java and C# (the fourth level of abstraction). The story goes as follows: in the late 60s, almost all systems programming was done in assembler(1st level), just for speed. In fact, no operating system was ever written in anything else than assembler, so there was no portable OS. People scoffed when UNIX was implemented in C (second or third level depending upon who you ask) in the mid 70s because it would be too slow. Of course computers get faster and a portable easy to edit OS took off.
Its really funny to hear people give the same arguments against Java and C# that are word-for-word the same as what was said about C.
Re:Meta Programming Language (Score:5, Insightful)
"Its really funny to hear people give the same arguments against Java and C# that are word-for-word the same as what was said about C." is so true as to be almost scary. I recently was looking at the huge arguments against C++ (vs C) and just about died laughing.
To me, it all comes down to two things:
1) Can I do (x) with (new language)?
2) Will it take me longer to do (x) with (old language) than it does with (new language)?
The whole concept of programming can be summed up that way. I have reached the sad state of no longer caring at all about language performance. I have such incredibly tight deadlines to meet now, with so few people, I have to say that programming time is worth dollars while execution time is only worth cents. Especially since about 75% of the work we do is for "one-off" or "disposable" projects. It sucks, but it puts food on the table.
-WS
Re:Meta Programming Language (Score:5, Interesting)
>arguments against Java and C# that are
>word-for-word the same as what was said about C.
Not really: if java is going to replace c/c++ the way c/c++ replaced assembly for systems programming, then everyone would already be using lisp.
If java and the like are going to replace anything, its going to be vb/pascal and friends.
Lisp (Score:3, Interesting)
Lisp is it.
Other "modern" (higher level than C) languages are special cases of primordial Lisp, optimized for various niches and programmer mentalities.
This does not imply that Lisp is the best programming language (Python is
Re:Lisp (Score:5, Interesting)
Where is a language with the power of Lisp and the ease of Python? Python has some wonderful features in terms of speed and readability, but it is too tied to its primitives. After reading on Lisp, then going back to coding Python, I was really frustrated that the language wasn't better generalized - that all statements (if, import, etc) are hard coded - what if I want to make a custom block statement (like if or while) or something similar? Can't do that in Python, because you don't really have access to parsed code objects the way you do in Lisp.
I've looked at the modern Lisp languages (Common Lisp and Scheme) and I can't figure out which ones are worthy of my attention. Scheme seems like it has lost the intelligent simplicity of Python in favour of clumsy "special character" based syntax, while Common Lisp has many detractors that don't complain much of details. Is your complaint about Common Lisp based on all Lisp variants? Or is CL especially bad?
I know Lisp is not the ideal language - its ugly, illegible, and slower than compiled languages - but the fact is it existed at a time so far before many languages that pathetically failed to implement its features, so I'm a little confused at the way the computing world has ignored it, instead of trying to work its principles into modern languages (Python does a little, but ends up feeling cobbled together and inconsistent).
Lisp++? Try Unicon instead! (Score:3, Informative)
Have you looked into Icon [arizona.edu], or its extension Unicon [sourceforge.net]? You can make custom control structures (using what are called co-expressions). It also has goal-directed evaluation (backtracking, think continuations of the LISP world) built into its expression evaluation. Of cour
Re:Lisp (Score:5, Interesting)
No, I just don't think that a proper Lisp has been implemented yet - I'm thinking of a language with all of the semantics of Lisp *plus* easily readable syntactic sugar. I'd like to see a standardized lisp that I can write and read as quickly and Python.
In Python we have a very succesful programming model, in Lisp we have potential for every conceivable programming model. Specializing the Lisp a little bit to optimize for pythonic programming would do a world of good.
I want do do
o = SomeClass()
instead of
(setq o (make-instance SomeClass) )
The latter might be semantically more elegant, but boy, it doesn't flow like the python variant.
As far as non-language-feature issue goes, Lisp does need a better (quantitavely and qualitatively - no "Functional Programming" people but people who can recognize the realities of programming today) community and one standard open source implementation. Availability of commercial implementations just doesn't cut it. And the one open source implementation should run on Windows too (no, Debian doesn't cut it).
Re:Lisp (Score:5, Interesting)
IMHO, a real, true, ultimate pure _language_ (not standard library) needs to be polished up for an opensource successor. Something with the power of Lisp and the legibility of Python. I'm thinking of something very similar to python except that code-based blocks should be handled as custom objects like everything else in Python.
In Python, the statement
class foo(bar):
def __init__(self):
self.baz = "foo bar baz"
is using the interpreter to auto-insance a bunch of standard Python objects (a class and a method, which is than in turn wrapped with an instancemethod) based around code objects. I can subclass the interpreter "method" object or create new substitute ones in its place, but if I want to use them in the interpreter, then I have to instance them the normal way, using
mycustommethodinstance = mycustommethodclass(constructorarguments)
whereas the main method object gets the nice
def funcname(args):
statement. This is the biggest failure of Pythons generalism - its inorexicably tied to its core objects, so that if you are using it like Lisp as a fully custom-made lexicon, you still have to either a) tear the contents out of the engine objects and relocate them into your own objects or b) use stupid constructors like
myfunc = myfuncclass("myname", "my massivestring of text that is actually the whole code block that this contructor will compile into code but I have to enter it as a string like this its kinda stupid eh?")
Not very nice. I want to make custom if and class statements, replace the implicit behaviour that typing i=1 creates an int object instead of some other custom object I want it to make. Likewise, I want to use other datatypes otehr than a true python Dictionary object as the local namespace or the global namespace (well, the globals can be any kind of mapping, actually).
A generalized Python would be my dream language - Python, but where all the core objects and statements (like "if" or "class") were part of the standard library.
Re:Lisp (Score:3, Insightful)
A generalized Python would be my dream language - Python, but where all the core objects and statements (like "if" or "class") were part of the standard library.
If you went that route, you would eventually realize that python-like syntax is unecessarily complicated, so you would simplify it down.. and you would have Lisp.
I hate to sound like a Lisp weenie, but if you take all these programming ideas and take them "to the logical conclusion", you *have* have a language with a light, uniform syntax, li
Lisp bad, python good? (Score:3, Insightful)
It's been tried - see Dylan [double.co.nz]. As near as I can tell, Dylan didn't take off because:
The Lisp people saw no major advantages to it other than the syntax, and they'd already gotten past that barrier
The non-Lisp people apparently didn't understand that it really was better than C++/Java
Like the one growing here [common-lisp.net]?
Now I'm really confu
Re:Lisp bad, python good? (Score:3, Insightful)
CPython is still the "standard" implementation of Python. Stackless is just some patches on CPython, and Jython is of interest mostly to Java community. There is no CPython equivalent for Lisp. Jython basically extends the scope of CPython, it doesn't compete with it. The multitude of Lisp implementations don't really have that excuse.
Also, all the Python
Re:Lisp (Score:4, Insightful)
Ugly and illegible are matters of opinion - most Lisp people will gladly trade a certain amount of syntactic suger for extensibily. `Slower than compiled languages' is just silly: modern (say, in the last 30 years) Lisp implementations are (a) compiled, and (b) not generally slow.
Re:Lisp (Score:3, Informative)
As for the peculiar syntax, you get used to it rather quickly. Just like with other languages, there are editor tools to help you be productive "in spite of the parenthesis."
Re:Lisp (Score:5, Insightful)
Wait a minute. I have serious complaints about Lisp, but those are not among them. Let me take those in reverse order:
SLOWER THAN COMPILED LANGUAGES: No, there are compilers for both Lisp and Scheme that generate VERY fast code. There are interpreters that are as fast as Python that are nice to use during development, then you run it thru a compiler and the speed is on a par with C++, when performing similar operations. Of course a small amount of C++ code will often run much faster than a small amount of Lisp code, but that's because a small amount of Lisp code can say so much more than a small amount of C++ source can say. That shouldn't be counted against Lisp.
ILLEGIBLE: Not really, in my experience. I know what you mean, though. After a lot of use, it's still not quite as easy to read (for me) as something like Ruby or Python, which were already pretty clear even before I had written a single line of either. But it's nowhere near as hard to read as Perl still sometimes is for me, and I've been coding in Perl (occasionally) for a decade. Even C++, which I've done a lot of over the last decade, still gets pretty darned hard to read sometimes, such as when using templates to call old-style C APIs.
Lisp is a lot better than that. I've certainly grown to appreciate the way you can build abstractions out of abstractions and the top level is still called the same way as the bottom level. Self similarity at every level of abstraction, so you just have to think about the algorithms and not the syntax.
Trust me that, with practice, Lisp gets much easier to read, though it never seems to get quite as easy as something like Ruby or Python.
UGLY -- of course this is related to legibility. Again, I know what you mean, and I agree in some ways. A simple mathematical expression or loop is quite ugly, I think, compared to the same thing in Ruby or Python.
However, as soon as you leave the simple, built-in stuff and start building your own more complex functionality, you discover that Ruby and Python get uglier and uglier but Lisp still looks the same. As soon as you start trying to express really interesting algorithms (fancy searches, AI stuff, etc.), you'll see the beauty in the simple consistency of the syntax. (Much more true of Scheme than Common Lisp, BTW.)
So, no, I don't have any serious complaints in that regard. There's no speed problem at all and where it is harder to read, it's a small price to pay for the significant power boost that style of syntax gives you when working with really interesting problems.
So, what don't I like? The online community of users, for one. The misanthropes that took over comp.lang.lisp are pathetic. I've never seen a techical discussion group that hostile and defensive. Don't even think of asking them questions that might clear up some of your skepticism about Lisp. The fact that you have any doubts makes you unworthy of being treated with anything other than utter hostility.
They love Common Lisp like a religion, and hate everything, and everyONE else, even natural allies like Scheme. Common Lisp fossilized sometime back in the Reagan Administration and has since lost almost all ability to improve. As a result, the vast majority of former users have abandoned it and those who remain almost have to take a position that there is no further NEED for improvement except in trivial ways (more libraries, more "complete" implementations, etc.) that, if you think about it, are merely restatements of the "nothing needs to be improved" notion.
And that brings me to what I like least: it seems that the fundamental ideas underlying all forms of Lisp (incl. Scheme) are fascinating, and if redone in a way such as Paul Graham's Arc () could turn out to be a terrific language. Unfortunately, I don't see it happening. Arc is announcementware. It has shown no signs of life since its first few weeks. Com
Re:Lisp (Score:4, Insightful)
Just in case you don't know, Lisp is a compiled language and not slow, especially when compiled with appropriate type declarations. Only the very most early dialects (forty or so years ago) were interpreted-only. Some interpreted implementations exist now, but that's a choice of the implementor, not a requirement of the language.
I disagree with your remarks about ugly and illegible, too, but that's personal taste, I guess. My views on all this are copiously documented in my Slashdot interview, Part I [slashdot.org] and Part II [slashdot.org].
However, what really disappointed me in this chart was its unscientific and subjective decision about what to include and how to present things.
Some of the arrows stop mysteriously so far leftward (as if to hint "this language is no longer used). That's apparently a subjective assessment on their part offered with no foundation, and irresponsibly inappropriate in a document intended to fairly describe history. Common Lisp's arrow stops short for reasons I don't understand since it continues in commmercial use today.
I didn't check the table thoroughtly, but the absence of mention of the fact that Scheme influences Common Lisp seems odd since it's a well-advertised truth.
The omissions of ISLISP, an ISO standard (ISO/IEC 13816:1997) [dkuug.dk] is also surprising and shows poor researching. The absence of Interlisp, Portable Standard Lisp (PSL), Eulisp, Gnu Emacs-Lisp (in spite of huge distribution world-wide as customization substrate for Emacs), and Xlisp (hugely distributed as part of Autocad) as important dialects is similarly sad.
O'Reilly sells books and has for a long time requested outright [oreilly.com] that no Lisp authors approach them. I and others have long noted that it has an apparent chip on its shoulder about Lisp, and little surprise they couldn't help exposing that bias in their chart. They want you to think the books they sell define the market. But that's just not so, especially when they voluntarily close their eyes to what's going on around them.
People should look skeptically at a company that wants a reputation as a "documentation" company yet so easily falls victim to its own commercial decision to close its eyes to this language family's achievements (such as an international standard).
A quick glance at other parts of the table leave out many other important languages and dialects, with no explanation of their rationale. Just for example: Teco, which strongly influenced Emacs-Lisp. I don't see HyperTalk there, either, even though I thought it influential. And there were many dialects of BASIC and LISP that are too small to mention, yet variations on the Unix shell language like bash are apparently worth mention. I guess that more reflects O'Reilly's sales than an attempt to explain history.
As a consequence, I have to regard this chart of theirs as commercial eye candy and not a properly scholarly work. I think it's a shame that Slashdot has chosen to give it all this free press. I'm sure that's just what they were hoping. And I'm sure they just don't care about their errors, omissions, and biases. I imagine they just want to sell books, and that all this free press will do just that.
Me, I buy my books from other sources. And I recommend you do, too.
Link is a 39x17 PDF (Score:5, Informative)
You may want to "right-click, Save As" that puppy . . .
A program written in many of them (Score:5, Interesting)
Re:A program written in many of them (Score:5, Interesting)
Re:A program written in many of them (Score:5, Informative)
Starts with 3GLs. (Score:5, Interesting)
He was referring to Assembler.
Autocode (Score:3, Informative)
Re:Starts with 3GLs. (Score:3, Informative)
Assembley language is simply assigning pneumonics to those binary
Re:Starts with 3GLs. (Score:4, Insightful)
There are macro assemblers which do preprocessing, which ranges from simple to sophisticated, and some which generate different opcodes for the same mnemonic based on what operands are present. Most assemblers also support the evaluation of expressions.
In some cases, the very same assembler language can produce binary for different machines, so there is not necessarily a one to one mapping between assembler and processor.
Re:Starts with 3GLs. (Score:3, Insightful)
Logo! (Score:3, Interesting)
Ok, so maybe LegoLogo is a little iffy, but LogoWriter included some pretty significant changes to Logo as a whole.
From "The Tao of Programming" (Score:5, Funny)
The assembler gave birth to the compiler. Now their are ten thousand languages.
Each language has its purpose, however humble. Each language expresses the Yin and Yang of software. Each language has its place within the Tao.
But do not program in COBOL if you can avoid it.
Do we blame the acid? (Score:5, Funny)
Re:Do we blame the acid? (Score:5, Funny)
TMI (Score:5, Interesting)
That being said, the lighter connecting arrows between languages (Lisp to Logo, Algol to almost everything else) makes the chart easy to follow and interesting to look at.
Interesting read (Score:4, Insightful)
So which couple dozen will we continue to use?
"And no one will be definitely right" (Score:5, Insightful)
Re:"And no one will be definitely right" (Score:4, Interesting)
Not to mention a fair understanding of algorithms, data structires and computation theroy not because you will remember the exact things you learned a year out of school but so you know they exist, why they exist and to prevent you from coming up with crazy ideas that are not computationaly feasable. I would love to see how many programs use a bubble sort to sort a large set of data, ya know.
As for high level languages lowering the bar, it's marketing telling people these things that are simply not true. Marketers will tell clients "this language, anyone can write in it..". It's not that anyone can write in it, sure anyone can get a book and program some codes, but will they understad the consequence of what they are doing? It's hard for people that haven't at least had a math background or engineering (which includes much math) to really understand what they are doing unless they have done a bunch of research themselves, been trained by someone else over a period of time or gone to school for CS.
On another rant, I really hate how the study of Computer Science has been basterdized by vocational programs and get rich quick schemes (tech skills "universities" or "schools".."start your new carrer in IT in 3 months!") such that most people think they teach you how to fix computers, install software, how to program a bit and how to become an "IT Professional" who makes sure all the computers are networked in the office. When I try and explain my fascination of algorithms, languages and computational theroy (Turing-Church Thesis)
Incase of Slashdotting... (Score:5, Funny)
Enjoy!
/ob
ActionScript?!? (Score:5, Funny)
After all, to managers, "newer, and therefore better." *sigh*
Re:ActionScript?!? (Score:5, Funny)
And if my manager gets ahold of this, I'll end up having to program in it by the month's end!
Don't worry. He'll hand it to Human Resources, and ask that they be on the lookout for candidates with six years of ActionScript 2.0 experience.
And then you'll lose your job to some twit who claims seven years experience in ActionScript 1.0, 2.0 *and* 3.0.
Re:ActionScript?!? (Score:3, Funny)
Plankalkül? (Score:4, Insightful)
There is another programming language family tree on that page aswell. This was mentioned in a previous story.
It seems that Lisp holds the record for
"Longest Lived Language That Is Still Relevant Yet Underappreciated"
It just amazes me that something concieved that long ago is still going strong. I guess it makes sense, as it was concieved initially as a language for describing algorithms, then later implemented. With abstraction on the rise as it seems to be, this quality of being much closer to theory than practice is quite a useful one.
Re:Check out Lisp (Score:5, Interesting)
From the classic essay by Richard Gabriel, Worse is better [jwz.org]: "Unix and C are the ultimate computer viruses." (Follow the link to see why he's saying this.)
Re:Check out Lisp (Score:3, Interesting)
a few months ago. You can get a copy in PDF format online for free, and it's a great book to help you start thinking in Lisp.
I've been through a lot of languages, and I've found some favorites, but Lisp is the only one that I can't directly fault the language for anything (C++ is a close second). I haven't found anything in any other language that Common Lisp doesn't have, and a lot of things ar
A half centry of coding! (Score:5, Funny)
For 49.5 years, computer programmers have been saying "but it worked on *my* computer"!
Functional programming languages dying? F# XSLT? (Score:5, Interesting)
Anyway, I didn't see any programming language versions for functional languages (the ones I recognize are Haskell, ML and Miranda) after some time in -99.
Does that mean that they are dying out?
I've heard rumors of F# from Microsoft but I don't know if that is true.
It would be a pity if functional languages would die at this point in time since proponents of functional languages always used the argument that "they may be slow now but they scale really well on massively parallell computer systems" (because of no side effects) and we are at the brink of seeing multi-processor systems starting to go mainstream.
On a separate note, XSLT, which isn't a programming language in the traditional sense, is functional in its design. I think the designers of XSLT really put some thought into it. In any event, XSLT doesn't have any side effects, making it a functional language in a sense, and this means that it also should scale really well on massively parallell systems.
So, I guess the theory behind functional languages live on in one of the hottest technologies around today.
Also, the last version of Prolog was in -97. Pity, you can really do some magic in that language.
Re:Functional programming languages dying? F# XSLT (Score:3, Informative)
Lahey [lahey.com] has a Fortan for
.NET Compiler [lahey.com]
I think this is what you meant by F#, right? Fortrant.NET wasn't written by microsoft, they just used the specs to write to IL (or so I think).
Re:Functional programming languages dying? F# XSLT (Score:3, Informative)
I think this is what you meant by F#, right?
Certainly not. It's a Caml for
.NET thing. Here's [microsoft.com] a link.
Re:Functional programming languages dying? F# XSLT (Score:4, Informative)
I bet that's not the only example. They list Java 1.4.1_2002, but don't list minor releases of more obscure languages.
Circular... (Score:5, Funny)
Fortran 2060!
Related (Score:5, Informative)
Beating the averages [paulgraham.com]
Both are amazingly good.
O'Reilly's favorites go furthest right (Score:3, Insightful)
How could they leave off Brainf*** (Score:3, Insightful)
Its also the smallest compiler ever written.
Ordering (Score:3, Insightful)
Well, duh! (Score:3, Interesting)
Because they want you to by more books. They are not in the poster business, they are in the book business.
Re:Ordering (Score:3, Informative)
$5.95 Java vs
$5.95 PHP Security Collection (PDF)
$5.95 Web Services Collection (PDF)
$7.95 Smileys
$8.95 Oracle PL/SQL Built-ins Pocket Reference
Did they give credit to the original? (Score:3, Informative)
Is it just me... (Score:4, Funny)
They left out a couple (Score:3, Informative)
only 50 years? Ada Lovelace? (Score:3, Interesting)
Re:only 50 years? Ada Lovelace? (Score:3, Interesting)
Earlier than that: she translated an article on the Analytical Engine (written by L. F. Menebrea) with several added notes of her own, including a sample program to compute Bernoulli numbers. This was published in the October 1843 issue of Scientific Memoirs.
See
t m#G [yorku.ca]
They should tree it out. (Score:3, Interesting)
What do they mean by programming language? (Score:4, Interesting)
They didn't define what they consider a programming language (Turing complete? General purpose?). Powerbuilder and m4 are general purpose languages but I didn't see them on the diagram.
If domain-specific languages are allowed, I think these were overlooked:
BTW, you can download a more printer-friendly version here: Eric Levenez's Computer Languages History [levenez.com]
Also, a German version is available here: German PDF [oreilly.de]
Re:I don't see Ruby on there (Score:4, Interesting)
cool programming challenge: figure out the optimal vertical order for the languages so as to minimize the length of relationship indicators
Re:Reminds me of (Score:4, Funny)
FORTRAN I begot ALGOL 58 begot ALGOL 60 begot CPL begot BCPL begot B begot C begot C++
And it was good.
Re:SmallTalk (Score:5, Informative)
It's quite impressive how it has evolved, and is still one of the most entertaining software environments around.
Re:Crappy fonts (Score:3, Funny)
If only they would make it poster sized and ready for print!
Re:Actionscript (Score:3, Informative)
Re:Where's MUMPS? (Score:3, Informative)
Re:No love? (Score:3, Funny)
It looks like assembler but works like COBOL. The red headed step child of programming languages.
With each subsequent release it becomes easier and harder at the same time. Dysfunctional personified.
I use it on a daily basis and not once have I found anything to like about it.
And yet...there's something so right about being so wrong. It's survived for decades on a single platform. It does the job. It's easy to learn. It pays the bill | http://developers.slashdot.org/story/04/06/17/1529256/the-history-of-programming-languages | CC-MAIN-2015-06 | refinedweb | 6,421 | 63.49 |
A python library for interacting with Pokémon Showdown
Project description
pslib
A python library for interacting with Pokémon Showdown.
🚧 Work in progress 🚧
import asyncio import pslib async def join_battles(client): while True: for battle in await client.query_battles(): try: await battle.join() except pslib.JoiningRoomFailed: pass async def display_logs(client): async for message in client.listen(pslib.WinMessage, all_rooms=True): print(message.room.logs) await message.room.leave() async def main(): async with pslib.connect() as client: await asyncio.gather(join_battles(client), display_logs(client)) asyncio.run(main())
Installation
The package can be installed with
pip.
$ pip install pslib
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pslib/ | CC-MAIN-2020-34 | refinedweb | 121 | 55.3 |
In this article you will learn how to export content which is inside in div to pdf using iTextSharp.
This article explains how to export content in a div to a PDF using iTextSharp. What is ITextSharp iTextSharp is a free and open source assembly that helps to convert page output or HTML content in A PDF file. You can download IT from here, Now add that DLL to the application. Getting Started Start Visual Studio and create a new website in ASP.Net and add these 2 DLLs to the solution.
Now add these namespaces:
This is my div content,
Code behind
Image 1. Image 2. For more information, download the attached sample application.
View All | https://www.c-sharpcorner.com/uploadfile/raj1979/export-div-content-to-pdf-using-itextsharp/ | CC-MAIN-2019-51 | refinedweb | 117 | 73.68 |
47568/convert-csv-file-to-numpy-array.
Here you go:
import glob
path_to_excel_files = glob.glob('path/to/excel/files/*.xlsx')
for ...READ MORE
Some services require table data in CSV ...READ MORE
XLSX tables are usually created in MS ...READ MORE
Irrespective of whether the dataframe has similar ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
You can simply the built-in function in ...READ MORE
Hi @Sumanth, try something like this:
import csv
with ...READ MORE
Suppose you have the series stored in ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/47568/convert-csv-file-to-numpy-array | CC-MAIN-2021-10 | refinedweb | 116 | 71.41 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.